2306.15458
Bo Shan Deval, Xabier García-Martínez, Tim Van der Linden
2023-06-27T13:22:25Z
http://arxiv.org/abs/2306.15458v2
# A universal Kaluzhnin-Krasner embedding theorem

###### Abstract.

Given two groups \(A\) and \(B\), the _Kaluzhnin-Krasner universal embedding theorem_ states that the wreath product \(A\wr B\) acts as a universal receptacle for extensions from \(A\) to \(B\). For a split extension, this embedding is compatible with the canonical splitting of the wreath product, which is further universal in a precise sense. This result was recently extended to Lie algebras and to cocommutative Hopf algebras. The aim of the present article is to explore the feasibility of adapting the theorem to other types of algebraic structures. By explaining the underlying unity of the three known cases, our analysis gives necessary and sufficient conditions for this to happen. From those we may for instance conclude that a version for crossed modules can indeed be attained, while the theorem cannot be adapted to, say, associative algebras, Jordan algebras or Leibniz algebras, when working over an infinite field: we prove that then, amongst non-associative algebras, only Lie algebras admit a universal Kaluzhnin-Krasner embedding theorem.

Key words and phrases: wreath product; (split) extension; locally algebraically cartesian closed category. 2020 Mathematics Subject Classification: 16B50, 16W25, 17A36, 18C05, 18E13, 20E22. The first author's research is supported by a grant of the Fund for Research Training in Industry and Agriculture (FRIA). The second author is supported by Ministerio de Ciencia e Innovación (Spain), with grant number PID2021-127075NA-I00. The third author is a Senior Research Associate of the Fonds de la Recherche Scientifique-FNRS.

The theorem says that any given extension \(E\) from \(A\) to \(B\) embeds into it, via a monomorphism of group extensions \(\phi\). Here we use that the square on the kernel side of the corresponding diagram is a pullback, that pullbacks preserve monomorphisms, and that monomorphisms of extensions are component-wise. As we explained above, the morphism \(\phi\) depends on the choice of a set-theoretical section \(s\colon B\to G\) of the surjection \(f\). Whenever this \(s\) is a group homomorphism, the morphism \(\phi\) is equivariant with respect to the induced action: indeed, for each \(a\in A\) and \(b\in B\) we have that \(\phi_{A}(a)^{b}=\phi_{A}(a^{b})\), because
\[\phi_{A}(a)^{b}(b^{\prime})=h_{a}(b^{\prime}b)=s(b^{\prime}b)as(b^{\prime}b)^{-1}=s(b^{\prime})s(b)as(b)^{-1}s(b^{\prime})^{-1}=h_{s(b)as(b)^{-1}}(b^{\prime})=\phi_{A}(a^{b})(b^{\prime})\]
for all \(b^{\prime}\in B\). Thus \(\phi\) becomes a morphism of _split_ extensions (\(\phi_{G}\circ s=\sigma\)). The wreath product \(A\wr B\) further satisfies a strong type of universality which we will study in detail in this article.

Recently there has been some effort towards extending this result to other algebraic settings: [22] considers a Kaluzhnin-Krasner embedding theorem for Lie algebras, while [2] obtains a theorem in the context of cocommutative Hopf algebras. In both cases, the key problem is of course to determine how the wreath product \(A\wr B\) should be defined in the given context. The aim of the present article is to expose the underlying unity of these different results, and to explain that there is a general recipe for the wreath product--a _universal Kaluzhnin-Krasner embedding theorem_--solving this problem once and for all, for any type of algebraic structure whatsoever, under some precise conditions. This allows us to make predictions about the feasibility of further extending the result to other contexts.
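Before the general theory, it may help to see the classical group-theoretical embedding at work on the smallest non-split example. The following Python sketch checks it for the extension \(0\to 2\mathbb{Z}/4\to\mathbb{Z}/4\to\mathbb{Z}/2\to 0\), written additively, using the formula \(h_{g}(b)=s(b)\cdot g\cdot s(b\cdot f(g))^{-1}\) derived in Section 4 below; the multiplication on \(\mathsf{Set}(B,A)\rtimes B\) used here, \((h_{1},b_{1})(h_{2},b_{2})=(b\mapsto h_{1}(b)h_{2}(bb_{1}),\,b_{1}b_{2})\), is an assumption consistent with the action \(h^{b}(b^{\prime})=h(b^{\prime}b)\) above.

```python
from itertools import product

# Non-split extension 0 -> A -> Z/4 -> Z/2 -> 0 with A = {0, 2} inside Z/4,
# written additively; s is a set-theoretical section of f, not a homomorphism.
B = [0, 1]                      # Z/2
def f(g): return g % 2          # projection Z/4 -> Z/2
def s(b): return b              # section of f, with s(1) + s(1) != s(1 + 1)

def h(g):
    # h_g(b) = s(b) + g - s(b + f(g)), computed in Z/4; the values land in A
    return tuple((s(b) + g - s((b + f(g)) % 2)) % 4 for b in B)

def phi(g):
    # the Kaluzhnin-Krasner embedding of Z/4 into A wr B = Set(B, A) x B
    return (h(g), f(g))

def wr_mul(p, q):
    # (h1, b1)(h2, b2) = (b |-> h1(b) + h2(b + b1), b1 + b2), A abelian
    (h1, b1), (h2, b2) = p, q
    return (tuple((h1[b] + h2[(b + b1) % 2]) % 4 for b in B), (b1 + b2) % 2)

# phi is a homomorphism and is injective:
assert all(phi((g1 + g2) % 4) == wr_mul(phi(g1), phi(g2))
           for g1, g2 in product(range(4), repeat=2))
assert len({phi(g) for g in range(4)}) == 4
print("Z/4 embeds into (Z/2) wr (Z/2)")
```

The four images are pairwise distinct, so the non-split group \(\mathbb{Z}/4\) indeed embeds into \((\mathbb{Z}/2)\wr(\mathbb{Z}/2)\), a group of order \(2^{2}\cdot 2=8\).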
## 2. The case of split extensions

We start by considering a special case of the embedding theorem: its restriction to split extensions, which turns out to be at the heart of the problem, because it gives us a formula for \(A\wr B\).

### Universality of the wreath product

Fix \(B\). Given a split extension over \(B\)
\[S=\big(0\to A\to G\leftrightarrows B\to 0\big)\] (\(*\))
the theorem provides a wreath product split extension \(R(A)\) over \(B\) together with a canonical morphism of split extensions \(\eta_{S}\colon S\to R(A)\), which is universal in the following sense (\(\dagger\)): for every object \(C\) and every morphism of split extensions \(\alpha\colon S\to R(C)\) over \(B\), there exists a unique morphism \(\overline{\alpha}\colon A\to C\) such that \(R(\overline{\alpha})\circ\eta_{S}=\alpha\), where we write \(R(\overline{\alpha})\) for the morphism of split extensions induced by the map
\[A\wr B\to C\wr B\colon(h\colon B\to A,\,b)\mapsto(\overline{\alpha}\circ h,\,b)\] (\(\ddagger\))
the morphism \(\overline{\alpha}\) corresponding to the component \(K(\alpha)\) between the kernel objects of the morphism \(\alpha\colon S\to R(C)\). Note that we make this explicit for groups, but it is not hard to see that the same holds for Lie algebras or cocommutative Hopf algebras, each with their respective universal embeddings.

This "universality of the wreath product" just means that \(\eta_{S}\) is the \(S\)-component of the unit of an adjunction \(K\dashv R\), where \(K\) is the forgetful functor sending a split extension \(S\) over \(B\) as in (\(*\)) to the kernel object \(A\), and \(R\) is the functor sending an object \(A\) to the wreath product split extension over \(B\) determined by \(A\wr B\). (In the case of groups, \(R\) is defined on morphisms by the formula (\(\ddagger\)).) Now the only thing missing for a general result unifying the cases of groups, Lie algebras and cocommutative Hopf algebras is a precise description of a context where such an adjunction (1) makes sense and (2) exists.

### A context for the general analysis

An appropriate context is the setting of _semi-abelian categories_ in the sense of Janelidze-Marki-Tholen [19]: it includes the three types of algebraic structures as examples (see [14] for the Hopf algebra case), next to any type of (universal) algebras containing a group operation and a single constant in their signature (the _varieties of \(\Omega\)-groups_ of Higgins [18]), as well as some "strange" categories such as loops, Heyting semilattices and the dual of the category of pointed sets (see [4, 9] for an overview). By definition, a category \(\mathbb{X}\) is _semi-abelian_ if and only if it is pointed, Barr exact, and Bourn protomodular with binary coproducts. Here _pointed_ means that there is a _zero object_: an initial object \(0\) which is also terminal. Recall that for a category to be _abelian_, one needs to add _additivity_ to Barr exactness: the existence of a natural abelian group structure on the hom-sets, which is not available for groups, Lie algebras or Hopf algebras.
The protomodularity condition which replaces it says that the _Split Short Five Lemma_ holds in \(\mathbb{X}\), or equivalently, that the middle object \(G\) in a split extension such as (\(*\)) is "covered" or "generated" by the outer objects \(A\) and \(B\), in the sense that they do not both factor through the same proper subobject of \(G\).

### Local algebraic cartesian closedness

We fix a semi-abelian category \(\mathbb{X}\), and write \(\mathsf{ExtS}(B)\) for the category of split extensions over \(B\) in \(\mathbb{X}\): a morphism \(\phi=(\phi_{A},\phi_{G})\) from \(S=(k,f,s)\) to \(T=(l,g,t)\) satisfies \(\phi_{G}\circ k=l\circ\phi_{A}\), \(g\circ\phi_{G}=f\) and \(\phi_{G}\circ s=t\). We write \(K\colon\mathsf{ExtS}(B)\to\mathbb{X}\) for the forgetful functor which sends a split extension \(S=(k,f,s)\) such as (\(*\)) to the kernel \(A\), so that \(A=K(S)\). The existence of a right adjoint \(R\colon\mathbb{X}\to\mathsf{ExtS}(B)\) to \(K\) was first considered by James R. A. Gray in his Ph.D. thesis [15] and further studied in the articles [6, 16, 17], amongst others. The description of \(R(A)\) for groups in [17] is precisely the classical wreath product, while the \(R(A)\) for Lie algebras described in [10, 16] coincides with the wreath product of [22]. We do not know of an explicit description in the literature of \(R(A)\) in the case of Hopf algebras, but the existence of the functor \(R\) was clear from the fact that the category of cocommutative Hopf algebras over a field \(\mathcal{K}\) is the category of internal groups in the category of cocommutative coalgebras over \(\mathcal{K}\), which is known to be cartesian closed [1, 11]. Of course the wreath product of [2] now provides such an explicit description.

In fact, the existence of a right adjoint \(R\colon\mathbb{X}\to\mathsf{ExtS}(B)\) does not come for free, and few semi-abelian categories are known where it does exist. If so, then \(\mathbb{X}\) is said to be _locally algebraically cartesian closed (LACC)_. The relative strength of the condition is witnessed by the fact that the only examples of (LACC) categories currently known in the literature are: essentially affine categories (which include all additive categories) [5]; internal groups in a cartesian closed category with pullbacks (which include the examples of classical groups, crossed modules and cocommutative Hopf algebras) [6]; and internal Lie algebras in an additive cocomplete symmetric closed monoidal category [10]. On the other hand, almost any other known semi-abelian category is known _not_ to be (LACC)--see below (Theorem 2.7) for a concrete result in the context of algebras over a field.

### Towards an embedding theorem for split extensions

Let us now reason within a chosen semi-abelian category \(\mathbb{X}\). Fix an object \(B\). The existence of a right adjoint \(R\) for the forgetful functor \(K\colon\mathsf{ExtS}(B)\to\mathbb{X}\) implies that each split extension \(S\) as in (\(*\)) comes equipped with a morphism \(\eta_{S}\colon S\to RK(S)=R(A)\), the \(S\)-component of the unit \(\eta\) of the adjunction. As in the case of groups, we call the middle object of the split extension \(R(A)\) the _wreath product_ of \(A\) and \(B\), and denote it by \(A\wr B\).
It turns out that this morphism of extensions is always a monomorphism:

**Lemma 2.5**.: _The unit \(\eta\) of the adjunction \(K\dashv R\) is a monomorphism._

Proof.: It is a well-known categorical fact that the components of the unit of an adjunction are monomorphisms if and only if the left adjoint is a faithful functor. In the case of \(K\dashv R\), faithfulness amounts to the condition that whenever two morphisms \(\alpha\) and \(\beta\) of split extensions over \(B\) satisfy \(\alpha_{A}=\beta_{A}\), then also \(\alpha_{G}=\beta_{G}\). In any semi-abelian category this is indeed the case, because protomodularity implies that the morphisms \(k\) and \(s\) are jointly epimorphic.

Thus we proved, essentially without any effort:

**Theorem 2.6**.: _In a semi-abelian category \(\mathbb{X}\), for every object \(A\) there exists a universal split extension_
\[R(A)=\big(0\to KR(A)\to A\wr B\leftrightarrows B\to 0\big)\]
_over \(B\) into which each split extension_
\[S=\big(0\to A\to G\leftrightarrows B\to 0\big)\]
_embeds, if and only if the category \(\mathbb{X}\) is locally algebraically cartesian closed, in which case \(K\dashv R\) and the embedding is given by the \(S\)-component \(\eta_{S}\colon S\to RK(S)\) of the unit of this adjunction. (Universality here means that the property depicted in (\(\dagger\)) holds.)_

In other words, a _universal Kaluzhnin-Krasner embedding theorem for split extensions_ exists for locally algebraically cartesian closed semi-abelian categories, and only for those. This means that within the semi-abelian context, we can only hope for the validity of a Kaluzhnin-Krasner embedding theorem (for all extensions) when the category is (LACC). Now, as already mentioned above, such categories are scarce. This becomes especially concrete in the setting of non-associative algebras over a field, by which we mean any type of algebras over a field in the ordinary sense, where we have a vector space equipped with a bilinear multiplication satisfying certain identities which need not include associativity. Such a category is called a _variety of non-associative algebras over a field_. It is indeed known that over an infinite field no such variety can be (LACC), unless it is the variety of Lie algebras [12, 13], from which we deduce an important consequence:

**Theorem 2.7**.: _A variety of non-associative algebras over an infinite field admits a Kaluzhnin-Krasner embedding theorem for split extensions if and only if it is the variety of Lie algebras._

That is to say, there is no hope of ever extending the result of [22] to other types of algebras over a field, such as associative algebras, Jordan algebras or Leibniz algebras. Other semi-abelian categories which are excluded because they are known not to be (LACC) are the categories of (commutative or non-commutative) loops, Heyting semilattices, and digroups [9, Examples 4.10]. Leaving this potential obstacle aside, we now focus on the positive side of Theorem 2.6, extending the general Kaluzhnin-Krasner embedding theorem from split extensions to arbitrary extensions.

## 3. Arbitrary extensions: crude embedding

Again fixing a semi-abelian category \(\mathbb{X}\), we write \(\mathsf{Ext}(B)\) for the category of extensions over \(B\) in \(\mathbb{X}\): a morphism \(\phi=(\phi_{A},\phi_{G})\) from \(E=(k,f)\) to \(F=(l,g)\) satisfies \(\phi_{G}\circ k=l\circ\phi_{A}\) and \(g\circ\phi_{G}=f\). We write \(U\colon\mathsf{Ext}(B)\to\mathbb{X}\) for the forgetful functor which sends an extension \(E=(k,f)\) such as (§) to the kernel \(A\), so that \(A=U(E)\).
_Remark 3.1_.: Note that, unlike the forgetful functor \(K\), the functor \(U\) is not faithful. A counterexample in the category of abelian groups is given by two morphisms of extensions from \(0\to\mathbb{Z}\to\mathbb{Z}\oplus\mathbb{Z}\to\mathbb{Z}\to 0\) to itself: the identity, and the morphism whose middle component is \(\beta(m,n)=(m+n,n)\). Both restrict to the identity on the kernel, yet they differ. We may instead use that \(U\) preserves and reflects monomorphisms: this follows from the fact that in any semi-abelian category, pullbacks reflect monomorphisms--see for instance [4, Lemma 3.1.20]--while the square on the kernel side of a morphism in \(\mathsf{Ext}(B)\) is always a pullback; and it is easily checked that monomorphisms in this category are component-wise.

We first work towards a crude version of the Kaluzhnin-Krasner embedding theorem for arbitrary extensions, based on a simple reduction from the non-split to the split case. This depends on the adjunction which exists between extensions and split extensions over \(B\).

### The split extension universally induced by an extension

The forgetful functor \(P\colon\mathsf{ExtS}(B)\to\mathsf{Ext}(B)\) which sends a split extension \((k,f,s)\) to the extension \((k,f)\) has a left adjoint \(L\colon\mathsf{Ext}(B)\to\mathsf{ExtS}(B)\). It suffices to see that there is a natural bijection between the two types of situations in Figure 1. The \(E\)-component \(\lambda_{E}\colon E\to PL(E)\) of the adjunction unit \(\lambda\) is induced by the inclusion \(\iota_{G}\colon G\to G+B\). Note that a morphism of extensions \(\phi\colon PL(E)\to E\) such that \(\phi\circ\lambda_{E}=1_{E}\) is completely determined by the choice of a splitting \(s\colon B\to G\) of the morphism \(f\): the condition \(\phi\circ\lambda_{E}=1_{E}\) forces the middle component \(G+B\to G\) of \(\phi\) to restrict to \(1_{G}\) on \(G\), so that \(\phi\) is determined by its restriction to \(B\), which is such a splitting \(s\). This means that a split extension is the same thing as an algebra for the pointed endofunctor
\[(PL\colon\mathsf{Ext}(B)\to\mathsf{Ext}(B),\ \lambda\colon 1_{\mathsf{Ext}(B)}\Rightarrow PL)\]
and the category of such algebras is isomorphic to \(\mathsf{ExtS}(B)\).

### Towards a crude embedding theorem for extensions

Adjunctions compose, and this provides us with a crude embedding theorem for extensions. We do indeed have that the composite \(UPL\colon\mathsf{Ext}(B)\to\mathbb{X}\) has a right adjoint \(W\coloneqq PR\). For any object \(A\) of \(\mathbb{X}\), the extension \(W(A)\) is just the wreath product again, but now with the section \(\sigma\) forgotten. Note that the left adjoint \(KL=UPL\) does _not_ coincide with the forgetful functor \(U\), which might be unexpected in view of the original Kaluzhnin-Krasner embedding theorem. Here the \(E\)-component \(\upsilon_{E}\) of the unit \(\upsilon\) of the adjunction takes the form \(\upsilon_{E}\colon E\to WUPL(E)=WKL(E)\). Just as in the case of split extensions (Lemma 2.5), we may prove that this natural transformation is always a monomorphism.

**Lemma 3.4**.: _The unit \(\upsilon\) of the adjunction \(KL\dashv W\) is a monomorphism._

Proof.: For each \(E\) we may write \(\upsilon_{E}\colon E\to PRKL(E)\) as the composite of two monomorphisms: the \(E\)-component \(\lambda_{E}\colon E\to PL(E)\) of the unit of the adjunction \(L\dashv P\), which is a monomorphism by its construction (as a coproduct inclusion), followed by
\[P(\eta_{L(E)})\colon PL(E)\to PRKL(E),\]
which is a monomorphism because \(\eta_{L(E)}\) is one by Lemma 2.5, while the right adjoint \(P\) preserves monomorphisms.

_Remark 3.5_.: From this it follows that the left adjoint \(KL=UPL\colon\mathsf{Ext}(B)\to\mathbb{X}\) is a faithful functor.
We find:

**Theorem 3.6**.: _In a locally algebraically cartesian closed semi-abelian category \(\mathbb{X}\), for every object \(X\) there exists a universal extension_
\[W(X)=\big(0\to KR(X)\to X\wr B\to B\to 0\big)\]
_over \(B\) into which each extension_
\[E=\big(0\to A\to G\to B\to 0\big)\]
_such that \(X=KL(E)\) embeds. For a given extension \(E\), the embedding is given by the \(E\)-component \(\upsilon_{E}\colon E\to WKL(E)\) of the unit of the adjunction \(KL\dashv W\)._

Our aim is now to deduce from this result an embedding theorem which is closer to the original one for groups: it will, for instance, take into account the set-theoretical splittings an extension may have. For each extension this involves the construction of a non-canonical map, one which cannot be deduced from the adjointness coming from local algebraic cartesian closedness.

## 4. Embedding into the wreath product \(A\wr B\)

We would like to be able to embed an extension \(E\) from \(A=U(E)\) to \(B\) into the wreath product \(W(A)\) rather than into \(WUPL(E)\). In other words, we require the existence of a monomorphism \(\phi\colon E\to WU(E)\) for each \(E\). Let us analyze this situation in detail.

### On the existence of \(\phi\colon E\to W(A)\)

Assuming that a morphism of extensions \(\phi\colon E\to W(A)\) does indeed exist, by the universal property of \(WUPL(E)\) we obtain a unique morphism \(\overline{\phi}\colon UPL(E)\to U(E)\) in \(\mathbb{X}\) such that \(W(\overline{\phi})\circ\upsilon_{E}=\phi\). Theorem 2.6 implies that such a \(\phi\) exists when \(E=P(S)\) is a split extension: then we may take \(\phi=P(\eta_{S})\). In this case, the induced morphism \(\overline{\phi}\) is \(U\) applied to the \((PL,\lambda)\)-algebra structure of \(P(S)\)--see 3.2--which is \(P\) of the counit \(\epsilon_{S}\colon LP(S)\to S\) of \(L\dashv P\) at \(S\). Indeed, the composite \(WUP(\epsilon_{S})\circ\upsilon_{E}\) is equal to \(P(\eta_{S})\), since \(\upsilon_{E}=P\big(\eta_{L(E)}\big)\circ\lambda_{E}\), by naturality of \(\eta\) and by the triangular identity for \(L\dashv P\).

Conversely, if \(\overline{\phi}\colon UPL(E)\to U(E)\) happens to be induced by a morphism of extensions \(PL(E)\to E\), then \(E\) carries a \(PL\)-algebra structure--just an arrow \(PL(E)\to E\), which _a priori_ need not be compatible with the unit \(\lambda\), but which is enough to imply that \(E\) was a split extension in the first place. This means that, for a general extension \(E\), we cannot hope that \(\overline{\phi}\) is \(U(\underline{\phi})\) for some morphism of extensions \(\underline{\phi}\colon PL(E)\to E\). As a consequence, its existence (as a morphism of \(\mathbb{X}\) which does not underlie a morphism of extensions) must follow from a construction outside of the realm of the adjoint functors which we have been considering so far. The non-canonicity of these maps forces us to work on a case-by-case basis. This means understanding the structure of the kernel \(KL(E)\) in
\[PL(E)=\big(0\to KL(E)\to G+B\leftrightarrows B\to 0\big)\]
for any given extension
\[E=\big(0\to A\to G\to B\to 0\big)\] (§)
in some concrete (LACC) semi-abelian category. This becomes feasible when the objects in the category have underlying sets--for instance, when we work in a semi-abelian variety of algebras. In our examples, the morphism \(\phi\) will then be induced by a set-theoretical splitting \(s\) of \(f\).
Note that when \(\phi\colon E\to WU(E)\) is a monomorphism as desired, such a section \(s\) of \(f\) can only be compatible with the section \(\sigma\) of the wreath product split extension (i.e., we can only have \(\phi_{G}\circ s=\sigma\)) if \(s\) is a morphism, so that \(E\) is a split extension.

### The case of groups

In the category of groups, let us consider an extension \(E\) as in (§) above, in order to describe the structure of the induced group \(KL(E)\), which will provide us with a group monomorphism \(\phi\colon E\to WU(E)\).

**Lemma 4.3**.: _The kernel \(\mathrm{Ker}(\langle f,1_{B}\rangle)\) of the induced arrow \(\langle f,1_{B}\rangle\colon G+B\to B\) admits the presentation \(P=\langle S\mid R\rangle\) with \(S=G\times B\) and_
\[R=\{(1,b)=1\mid b\in B\}\cup\{(g,b)(g^{\prime},bf(g))=(gg^{\prime},b)\mid g,g^{\prime}\in G,\ b\in B\}.\]
_The idea is to see an element \((g,b)\) of \(S\) as the word \(bg(bf(g))^{-1}\) in \(G+B\)._

Proof.: Let \(P\) denote a group admitting the presentation of the statement. We will construct an isomorphism between \(P\) and \(\mathrm{Ker}(\langle f,1_{B}\rangle)\). First, according to the idea given in the statement of the lemma, let us define \(\widetilde{\phi}\colon P\to G+B\) by sending a generator \((g,b)\) of \(P\), where \(g\in G\) and \(b\in B\), to the element \(bg(bf(g))^{-1}\) of \(G+B\). Since these elements verify the relations of \(R\), this assignment forms a well-defined group homomorphism from \(P\) to \(G+B\). Moreover, for all \(g\in G\) and all \(b\in B\),
\[\langle f,1_{B}\rangle\big(\widetilde{\phi}(g,b)\big)=\langle f,1_{B}\rangle\big(bg(bf(g))^{-1}\big)=bf(g)(bf(g))^{-1}=1,\]
so \(\widetilde{\phi}\) corestricts to the kernel of \(\langle f,1_{B}\rangle\) to give us a morphism \(\phi\colon P\to\mathrm{Ker}(\langle f,1_{B}\rangle)\).

Conversely, any \(h\in\mathrm{Ker}(\langle f,1_{B}\rangle)\leqslant G+B\) can be uniquely written in reduced form \(h=b_{1}g_{1}\cdots b_{n}g_{n}\), i.e. with \(b_{i}\in B\backslash\{1\}\) for \(i\in\{2,\ldots,n\}\), \(g_{i}\in G\backslash\{1\}\) for \(i\in\{1,\ldots,n-1\}\), \(b_{1}\in B\) and \(g_{n}\in G\). Then we define \(\psi\colon\mathrm{Ker}(\langle f,1_{B}\rangle)\to P\) by setting
\[\psi(h)\coloneqq(g_{1},b_{1})\big(g_{2},b_{1}f(g_{1})b_{2}\big)\cdots\big(g_{n-1},b_{1}f(g_{1})\cdots b_{n-1}\big)\big(g_{n},f(g_{n})^{-1}\big)\]
for \(h=b_{1}g_{1}\cdots b_{n}g_{n}\in\mathrm{Ker}(\langle f,1_{B}\rangle)\) in reduced form. This map is well defined since the reduced form is unique. We actually do not know whether it is a group homomorphism, but this is not needed for our purposes.

Finally, let us check that \(\phi\) and \(\psi\) are each other's inverse. For \(h=b_{1}g_{1}\cdots b_{n}g_{n}\in\mathrm{Ker}(\langle f,1_{B}\rangle)\) in reduced form, we compute
\[\phi\big(\psi(h)\big)=\phi\big((g_{1},b_{1})\big(g_{2},b_{1}f(g_{1})b_{2}\big)\cdots\big(g_{n-1},b_{1}f(g_{1})\cdots b_{n-1}\big)\big(g_{n},f(g_{n})^{-1}\big)\big)=\big(b_{1}g_{1}(b_{1}f(g_{1}))^{-1}\big)\big(b_{1}f(g_{1})b_{2}g_{2}(b_{1}f(g_{1})b_{2}f(g_{2}))^{-1}\big)\cdots\big(b_{1}f(g_{1})\cdots b_{n-1}g_{n-1}(b_{1}f(g_{1})\cdots b_{n-1}f(g_{n-1}))^{-1}\big)\big(f(g_{n})^{-1}g_{n}(f(g_{n})^{-1}f(g_{n}))^{-1}\big)=b_{1}g_{1}\cdots b_{n}g_{n}=h,\]
as wanted.
For the other direction, take \(p=(g_{1},b_{1})\cdots(g_{n},b_{n})\in P\), written in a minimal way (which is possible since the relations of \(R\) reduce the length of the words). We compute
\[\phi(p)=\big(b_{1}g_{1}(b_{1}f(g_{1}))^{-1}\big)\cdots\big(b_{n}g_{n}(b_{n}f(g_{n}))^{-1}\big)=b_{1}g_{1}\big((b_{1}f(g_{1}))^{-1}b_{2}\big)g_{2}\cdots g_{n-1}\big((b_{n-1}f(g_{n-1}))^{-1}b_{n}\big)g_{n}(b_{n}f(g_{n}))^{-1}.\]
Then, since we chose a minimal way of writing \(p\), there are two possibilities for the reduced form of \(\phi(p)\). If \(b_{n}f(g_{n})\neq 1\), the rewriting of \(\phi(p)\) above is its reduced form and, by the definition of \(\psi\), we have
\[\psi\big(\phi(p)\big)=(g_{1},b_{1})\big(g_{2},b_{1}f(g_{1})((b_{1}f(g_{1}))^{-1}b_{2})\big)\cdots\big(g_{n},b_{1}f(g_{1})\cdots((b_{n-1}f(g_{n-1}))^{-1}b_{n})\big)\big(1,f(1)^{-1}\big)=(g_{1},b_{1})(g_{2},b_{2})\cdots(g_{n},b_{n})=p.\]
In the other case, the reduced form of \(\phi(p)\) is given by
\[b_{1}g_{1}\big((b_{1}f(g_{1}))^{-1}b_{2}\big)g_{2}\cdots\big((b_{n-2}f(g_{n-2}))^{-1}b_{n-1}\big)g_{n-1}\big((b_{n-1}f(g_{n-1}))^{-1}b_{n}\big)g_{n}\]
and we also have
\[\psi\big(\phi(p)\big)=(g_{1},b_{1})\big(g_{2},b_{1}f(g_{1})((b_{1}f(g_{1}))^{-1}b_{2})\big)\cdots\big(g_{n-1},b_{1}f(g_{1})\cdots((b_{n-2}f(g_{n-2}))^{-1}b_{n-1})\big)\big(g_{n},f(g_{n})^{-1}\big)=(g_{1},b_{1})(g_{2},b_{2})\cdots(g_{n-1},b_{n-1})(g_{n},b_{n})=p,\]
finishing the proof.

If now \(s\colon B\to G\) is a set-theoretical splitting of \(f\), then we may consider the group homomorphism
\[\overline{\phi}\colon KL(E)\to U(E)\colon(g,b)\mapsto s(b)\cdot g\cdot s(b\cdot f(g))^{-1},\]
which is well defined because \((1,b)\) is sent to \(s(b)\cdot 1\cdot s(b\cdot 1)^{-1}=1\) and, for \((g,b)(g^{\prime},bf(g))\), we have
\[\overline{\phi}(g,b)\cdot\overline{\phi}(g^{\prime},bf(g))=\big(s(b)\cdot g\cdot s(b\cdot f(g))^{-1}\big)\cdot\big(s(bf(g))\cdot g^{\prime}\cdot s(bf(g)\cdot f(g^{\prime}))^{-1}\big)=s(b)\cdot gg^{\prime}\cdot s(b\cdot f(gg^{\prime}))^{-1},\]
which is the image of \((gg^{\prime},b)=(g,b)(g^{\prime},bf(g))\). The corresponding morphism \(\phi\colon E\to WU(E)\) is the composite \(W(\overline{\phi})\circ\upsilon_{E}=W(\overline{\phi})\circ P(\eta_{L(E)})\circ\lambda_{E}\): its \(G\)-component \(\phi_{G}\) sends \(g\in G\) first to \(g\in G+B\), and then to
\[(h_{g},f(g))\in KL(E)\wr B=\mathsf{Set}(B,KL(E))\rtimes B,\]
where \(h_{g}\colon B\to KL(E)\colon b\mapsto(g,b)\), as explained in the paragraph immediately below Theorem 1.1. In accordance with (\(\ddagger\)), this couple \((h_{g},f(g))\) in \(KL(E)\wr B\) is in turn sent to \((\overline{\phi}\circ h_{g},f(g))\in A\wr B\). Note that \((\overline{\phi}\circ h_{g})(b)=s(b)\cdot g\cdot s(b\cdot f(g))^{-1}\), so that we regain the classical formula [20] for the Kaluzhnin-Krasner embedding.

### The case of Lie algebras

Given a field \(\mathbb{K}\) and a split extension of \(\mathbb{K}\)-Lie algebras \(S\) as in (\(*\)), it is known [16] that \(KR(A)\) is isomorphic to \(\mathsf{Vect}_{\mathbb{K}}(\overline{B},A)\), where \(\overline{B}\) denotes the universal enveloping algebra of \(B\). Given an extension \(E\) as in (§), the canonical embedding \(E\to RKL(E)\), restricting to \(A\to KRKL(E)\), sends any \(a\in A\) to
\[h_{a}\colon\overline{B}\to KL(E)\colon b_{1}\cdots b_{r}\mapsto b_{1}(b_{2}(\cdots(b_{r}a)\cdots)).\]
Let \(s\) be a linear splitting of \(f\).
Then \(h_{a}\) induces
\[h^{\prime}_{a}\colon\overline{B}\to A\colon b_{1}\cdots b_{r}\mapsto s(b_{1})(s(b_{2})(\cdots(s(b_{r})a)\cdots)).\]
To check that this morphism is well defined, some computations need to be made; here we may mimic [22, Section 3]. Note that choosing a linear section is the same as choosing a vector-space complement of \(A\) in \(G\). Hence we obtain a morphism \(A\to UWU(E)=KRU(E)=\mathsf{Vect}_{\mathbb{K}}(\overline{B},A)\), which in turn induces the morphism of extensions \(\phi\colon E\to WU(E)\), whose \(G\)-component sends \(g\in G\) to
\[(h^{\prime}_{g-sf(g)},f(g))\in A\wr B=\mathsf{Vect}_{\mathbb{K}}(\overline{B},A)\rtimes B.\]
Thus we recover the Kaluzhnin-Krasner embedding from [22].

### Further examples

From the above it is clear that, even though a Kaluzhnin-Krasner embedding in its standard form does not follow right away from Theorem 3.6, there is very little hope of establishing such an embedding in contexts where that theorem is not also valid. As a consequence, the category of crossed modules, as well as the examples worked out in [10], being (LACC) semi-abelian categories, all satisfy Theorem 3.6, hence are good candidates for a "classical" Kaluzhnin-Krasner embedding. We will, however, end the article with a slightly different result: an embedding theorem for _abelian_ split extensions which holds in _any_ semi-abelian variety of algebras.

## 5. The case of abelian actions

We pick a semi-abelian variety of algebras \(\mathbb{X}\). Note that these are exactly the pointed protomodular varieties, which were characterized in [4, 7]. We are now going to explain that the condition (LACC) is not necessary in such a category, if we want to embed just the _abelian_ actions, which correspond to split extensions equipped with a Beck module structure [3]. Given an object \(B\), recall that a **Beck module over** \(B\) is an extension (§) that carries an internal abelian group structure in \(\mathsf{Ext}(B)\)--determined, in particular, by a unit \(s\colon 1_{B}\to f\) and a multiplication \(m\colon f\times_{B}f\to f\) in the category \(\mathbb{X}/B\) of objects over \(B\)--which automatically makes this extension (§) a split extension. Let us write \(\mathsf{Ab}(\mathsf{Ext}(B))\) for the category of (split) extensions over \(B\), equipped with an abelian group structure. This category is abelian, and since \(\mathbb{X}\) is a variety of algebras, [17, Theorem 2.9] implies that the lifting \(\underline{K}\colon\mathsf{Ab}(\mathsf{Ext}(B))\to\mathsf{Ab}(\mathbb{X})\) of the functor \(K\colon\mathsf{Ext}(B)\to\mathbb{X}\) to abelian group objects has a right adjoint \(\underline{R}\). From this we deduce:

**Theorem 5.1**.: _In a semi-abelian variety \(\mathbb{X}\), for every abelian group object \(A\) there exists a universal Beck module_
\[\underline{R}(A)=\big(0\to\underline{KR}(A)\to A\,\underline{\wr}\,B\leftrightarrows B\to 0\big)\]
_over \(B\) into which each abelian split extension of the form_
\[S=\big(0\to A\to G\leftrightarrows B\to 0\big)\]
_embeds. This embedding is given by the \(S\)-component \(\underline{\eta}_{S}\colon S\to\underline{RK}(S)\) of the unit of the adjunction \(\underline{K}\dashv\underline{R}\)._

Note that we underlined the wreath product symbol to distinguish \(A\,\underline{\wr}\,B\) from the ordinary wreath product \(A\wr B\).

_Remark 5.2_.: To show that the unit \(\underline{\eta}\) of the adjunction \(\underline{K}\dashv\underline{R}\) is a monomorphism, the reasoning of Lemma 2.5 applies.
_Remark 5.3_.: It is well known that the category \(\mathsf{Ab}(\mathsf{Ext}(B))\) is again a variety of algebras, and since it is an abelian category, it is a category of modules over a ring. The ring \(\Lambda\) in question is the endomorphism ring of the free Beck module (over \(B\)) with a single generator. In particular, write \(\mathsf{Mod}_{\Pi}\simeq\mathsf{Ab}(\mathsf{Ext}(0))\); the morphism \(0\to B\) induces a ring map \(\Pi\to\Lambda\). The functor \(\underline{K}\colon\mathsf{Mod}_{\Lambda}\to\mathsf{Mod}_{\Pi}\) becomes restriction of scalars, so that its right adjoint is \(A\mapsto\mathsf{Mod}_{\Pi}(\Lambda,A)\). By the analysis made in [8], the situation simplifies when the category \(\mathbb{X}\) satisfies a mild additional condition, called the **Smith is Huq condition** in [21]: then a Beck \(B\)-module structure on an internal abelian group object \(A\) is completely determined by a split extension from \(A\) to \(B\). So the abelian split extensions are precisely the split extensions with an abelian kernel. Many examples of categories satisfying this condition (which is much weaker than (LACC)) are given in [9]. For instance, for a field \(\mathbb{K}\), any variety of \(\mathbb{K}\)-algebras is such. Note that in this setting, an abelian object is a \(\mathbb{K}\)-vector space equipped with the trivial multiplication, so that a Beck module over an algebra \(B\) is a split extension such as \(S\) above where the result of multiplying two elements of \(A\) is always zero. For instance, in the case of \(\mathbb{K}\)-Lie algebras, we have \(\Pi=\mathbb{K}\) and \(\Lambda=\overline{B}\), the universal enveloping algebra of the Lie algebra \(B\). Hence \(\underline{K}\colon\mathsf{Mod}_{\overline{B}}\to\mathsf{Vect}_{\mathbb{K}}\) is the forgetful functor, and its right adjoint takes a \(\mathbb{K}\)-vector space \(A\) to the space \(\mathsf{Vect}_{\mathbb{K}}(\overline{B},A)\) with its canonical \(\overline{B}\)-module structure as in 4.4. ## Acknowledgements The authors would like to express their gratitude to the organisers of the Group Theory Seminar at ICMAT and, in particular, to Henrique A. Mendes da Silva e Souza, for asking a question that led to this research.
2310.08489
Julian Schulz, Georg von Freymann
2023-10-12T16:49:09Z
http://arxiv.org/abs/2310.08489v1
# Broadband mode division multiplexing of OAM-modes by a micro printed waveguide structure

###### Abstract

A light beam carrying orbital angular momentum (OAM) is characterized by a helical phase-front that winds around the center of the beam. These beams have unique properties that have found numerous applications. In the field of data transmission, they represent a degree of freedom that could potentially increase capacity by a factor of several distinct OAM modes. While an efficient method for (de)composing beams based on their OAM exists for free-space optics, a device capable of performing this (de)composition in an integrated, compact fiber application without the use of external active optical elements and for multiple OAM modes simultaneously has not been reported. In this study, a waveguide structure is presented that can serve as a broadband OAM (de)multiplexer. The structure design is based on the adiabatic principle used in photonic lanterns for highly efficient conversion of spatially separated single modes into eigenmodes of a few-mode fiber. In addition, an artificial magnetic field is introduced by twisting the structure during the adiabatic evolution, which removes the degeneracy between modes having the same absolute OAM. This structure can simplify, stabilize, and miniaturize the creation or decomposition of OAM beams, making them useful for various applications.

Keywords: waveguide; fiber; OAM; photonic lantern; adiabatic

## 1 Introduction

The special properties of light beams with orbital angular momentum (OAM) have enabled significant advances in astrophysics [1], high resolution microscopy [2], remote sensing [3], optical tweezers [4] and many more. In particular, the field of mode division multiplexing has sparked interest in the mode space of OAM modes to address the exponentially increasing demand for data transmission capacity [5, 6]. A light beam carrying orbital angular momentum \(\ell\in\mathbb{Z}\) has a cross section in which one orbit around the beam center acquires a phase of \(2\pi\ell\). For \(\ell\neq 0\), the phase singularity in the beam center causes the intensity to drop to zero. OAM beams with different \(\ell\) are orthogonal and thus allow the OAM to be used as an identifier of different channels, to transport an increased amount of information through a single fiber. Other independent properties of light, such as wavelength and polarization, have already been used to increase transmission capacity by multiplexing.

Multiple optical components can generate OAM beams. The phase of an expanded beam can be altered using spiral phase plates, spatial light modulators, or diffractive phase holograms. A highly effective technique for efficiently generating and decomposing OAM beams was introduced in Ref. [7], which uses a log-polar transformation performed by two fixed optical elements. Metasurfaces and gratings can be designed to carry OAM in the scattered light, but with a limited conversion ratio relative to the input beam. These components are designed specifically for the wavelength used, in order to imprint the desired phase shift. In addition to techniques based on a fixed spatial phase relationship, q-plates can generate OAM beams from defined spin angular momentum (SAM) beams, based on a medium with strong OAM-SAM coupling. These spatial methods are well established for generating OAM beams in free space. However, for transmitting data in fibers, they are not without drawbacks: reductions in device volume can lead to decreased mode purity, making miniaturization challenging.
Due to the high refractive index difference between air and fiber, fiber coupling results in losses [8]. Additionally, even slight lateral misalignment can lead to crosstalk between modes in the fiber [9]. To solve these issues in fiber communication, techniques for generating and decomposing OAM within waveguides and fibers have been proposed and implemented. Twisted spiral waveguides [10, 11] or helical waveguides [12, 13] can be used to transform the ground mode into an OAM mode. If the twist of these structures is chosen at the right frequency, the ground mode and one OAM mode have the same propagation constant and can couple effectively. Coupled waveguide structures can be designed to clone, invert [14] or couple different [15] OAM modes. By lifting the degeneracy of the LP-modes for a fixed propagation length, those modes can be converted to OAM-modes by acquiring a phase-shift of \(\pi/2\) [16, 17, 18, 19]. Multiple coherent input beams, controlled by external active optical elements for phase and amplitude, can also create OAM modes [20, 21, 22]. In general, waveguide structures can achieve high efficiency, high mode purity, and a wide bandwidth, but only for one or a few OAM modes at a time or in a structure, and only for low values of \(\ell\) [23].

## 2 Principle of the structure

In this study, we demonstrate a static waveguide structure capable of (de)multiplexing numerous OAM-modes simultaneously by adiabatically connecting five single mode waveguides to a ring core waveguide, all without the use of active optical elements. Thus, the structure can be used for the superposition not just of different OAM modes but also at different wavelengths simultaneously. Our findings indicate the effectiveness of this waveguide structure in achieving (de)multiplexing of OAM-modes and suggest possible avenues for future research. As a proof of principle, our structure successfully multiplexed OAM-modes with an absolute value of \(\ell\leq 2\), but it has the potential to scale up and multiplex even higher modes.

The main principle of this structure is the adiabatic transformation of eigenmodes from spatially separated single modes into modes in a ring waveguide carrying OAM. In an adiabatic evolution, the population of the eigenmodes remains constant while the eigenmodes change according to the system. Therefore, the change of the system should be significantly slower than the dynamics of the eigenmodes, as determined by the difference in propagation constants between the modes. Two mechanisms are utilized to keep the propagation constants of the individual modes consistently spaced: individual waveguide detuning and an artificial magnetic field. Figure 1 depicts the resulting structure and the progression from localized eigenmodes to OAM-modes.

The propagation constant of a single waveguide can be detuned individually either by changing the refractive index of the core or, as in our case, by changing the diameter of the waveguide. When small changes are made, the propagation constant shows an almost linear dependence on the waveguide's diameter. The distinct propagation constants from differently sized fiber cores in photonic lanterns have already shown this selectivity feature with a high mode purity [24, 18, 19, 21, 22].

Figure 1: (a) 3D model of the waveguide structure. Depending on which single-mode waveguide at the input facet the light is coupled into, it will be transformed into an OAM state with a different \(\ell\) as it reaches the output facet. (b) Schematic steps of the structure during the adiabatic evolution.
The paraxial Helmholtz equation can describe the evolution of the scalar transverse field amplitude \(\psi\) along the propagation direction \(z\) in these waveguide structures,
\[\mathrm{i}\lambdabar\partial_{z}\psi=\frac{\lambdabar^{2}}{2n_{c}}\nabla_{\perp}^{2}\psi-\Delta n\left(\mathbf{r}\right)\psi\tag{1}\]
where \(\lambdabar\) is the wavelength divided by \(2\pi\), \(n_{c}\) is the refractive index of the cladding material and \(\Delta n\) is the local change of the refractive index. Although the eigenvalues and eigenmodes of Equation 1 can be adjusted by modifying the diameters of the waveguides individually, the eigenmodes of straight waveguide structures are always real-valued functions because of the system's time-reversal symmetry. To obtain eigenmodes that can only be expressed by complex-valued functions, a constant artificial magnetic field is introduced. This field distinguishes between modes with \(+\ell\) and \(-\ell\) and lifts their degeneracy. To achieve this, the system must rotate as the light travels through the structure [25, 26]. This is described by the coordinate transformation
\[x^{\prime}=x\cos\left(\Omega z\right)-y\sin\left(\Omega z\right)\tag{2}\]
\[y^{\prime}=x\sin\left(\Omega z\right)+y\cos\left(\Omega z\right)\tag{3}\]
\[z^{\prime}=z\tag{4}\]
Here \(\Omega\) is the angular velocity; in a geometric interpretation, \(2\pi/\Omega\) is the pitch of the helical trajectory of a waveguide. The effects on the paraxial Helmholtz equation due to this coordinate transformation in the rotating frame of reference can be summarized in a constant vector potential \(\mathbf{A}=n_{c}\Omega/\lambdabar\left[-y,x,0\right]^{T}\) and an additional harmonic potential [27]:
\[\mathrm{i}\lambdabar\partial_{z}\psi=\frac{\lambdabar}{2n_{c}}\left[\mathrm{i}\nabla_{\perp}-\mathbf{A}\right]^{2}\psi-\Delta n\left(\mathbf{r}\right)\psi-\frac{n_{c}\Omega^{2}}{2}\left[\hat{x}^{2}+\hat{y}^{2}\right]\psi\tag{5}\]
\[=\frac{\lambdabar^{2}}{2n_{c}}\nabla_{\perp}^{2}\psi+\frac{\lambdabar}{2n_{c}}\mathbf{B}\cdot\hat{L}\psi-\Delta n\left(\mathbf{r}\right)\psi\tag{6}\]
It can also be expressed via the dot product of a constant magnetic field \(\mathbf{B}=2n_{c}\Omega\lambdabar\,\mathbf{e}_{z}\) and the angular momentum operator \(\hat{L}=-\mathrm{i}\lambdabar\,\hat{r}\times\nabla_{\perp}\) acting on the state. The synthetic magnetic field links the orbital angular momentum of the state with its propagation constant, and thus resolves the degeneracy of modes with equal absolute OAM. The distance between modes correlates with both the angular velocity \(\Omega\) and the \(\ell\) of the mode [28]. It is important to note that the angular velocity cannot be increased without limit, as the bound modes will scatter out of the structure due to the harmonic potential, which increases proportionally with \(\Omega^{2}\).

The resulting structure and the steps starting from localized eigenmodes and ending at OAM-modes are sketched in Figure 1:

1. The difference in diameter \(\Delta d_{\mathrm{wg}}\) of the waveguides is increased. The separated single-mode waveguides are detuned against each other, the eigenmodes are localized, and the eigenenergies are gapped.
2. The distance of the waveguides to the center is decreased. The light can couple between the waveguides and the eigenmodes spread over multiple waveguides.
3. The angular velocity is increased. The degeneracy of modes with equal absolute \(\ell\) is lifted.
4. The difference in diameter \(\Delta d_{\mathrm{wg}}\) of the waveguides is decreased again. For all eigenmodes, the intensity in the waveguides is evenly distributed, because the waveguides are no longer detuned. The energy gaps are now only caused by the non-zero angular velocity.
5. The waveguides split up to approach a ring waveguide.
6. The diameter of the waveguides \(d_{\mathrm{wg}}\) is decreased. The thickness of the ring waveguide is reduced to filter out unwanted higher OAM-modes [20].

A perfectly round ring waveguide should be avoided, because its structure is invariant under rotation, which restores the degeneracy of the \(+\ell\) and \(-\ell\) OAM-modes and can result in crosstalk. Additionally, the evolution process does not require the completion of one step before starting the next, allowing for partial overlap in time and reducing the total device length. To execute an adiabatic step, we employ a function resembling the Fermi-Dirac distribution for a parameter \(s\) to gradually transition from an initial value \(s_{\mathrm{i}}\) to a final value \(s_{\mathrm{f}}\) over the propagation distance \(L\). The coefficients \(\mu_{s}\) and \(T_{s}\) are utilized to arrange the steps chronologically and to establish their relative speed, respectively:
\[s\left(z\right)\colon\left[0,L\right]\to\mathbb{R},\quad z\mapsto s_{\mathrm{i}}+\frac{s_{\mathrm{f}}-s_{\mathrm{i}}}{\exp\left(\frac{\mu_{s}-z/L}{T_{s}}\right)+1}\tag{7}\]
(A short numerical sketch of this schedule appears in the Materials and Methods section.)

## 3 Numerical calculation

To demonstrate the general function and capabilities of the structure, such as mode purity and wavelength independence, scalar split-step BPM simulations were performed. The relative intensity \(I_{\ell^{\prime},\ell}\) of the mode with OAM \(\ell^{\prime}\) in the field \(\psi_{\ell}\) at the output facet is calculated from the simulation data by an overlap integral [20, 16, 21]; the index \(\ell\) here denotes the OAM into which the light from this input waveguide is supposed to be mainly converted:
\[I_{\ell^{\prime},\ell}=\left|\frac{1}{\sqrt{\iint\psi_{\ell}\cdot\psi_{\ell}^{*}\,\mathrm{d}x\,\mathrm{d}y}}\iint\psi_{\ell}\cdot\exp\left(-\mathrm{i}\ell^{\prime}\arg\left(x+\mathrm{i}y\right)\right)\mathrm{d}x\,\mathrm{d}y\right|^{2}\tag{8}\]
The functions \(\exp\left(-\mathrm{i}\ell^{\prime}\arg\left(x+\mathrm{i}y\right)\right)\) with \(\ell^{\prime}\in\mathbb{Z}\) form an orthogonal basis that only acts on the phase winding of the field and ignores the radial distribution. Since the modes have only trivial radial structure, this basis suffices for determining the relative intensity of the modes and, with that, the mode purity \(I_{\ell,\ell}\) in our simulations (a numerical sketch of this decomposition is given below). The mode crosstalk \(\sigma_{\ell}\), the ratio of the intensity which is not converted into the desired mode with OAM \(\ell\) to the total intensity, is then given by \(\sigma_{\ell}=1-I_{\ell,\ell}\).

In general, for adiabatic evolution, numerous parameters such as coupling strength, potential depth, and magnetic field strength do not need to be set precisely. If the system evolves slowly and the propagation constants are sufficiently gapped, the final state changes only slightly. This inherent tolerance allows our device to operate effectively across a wide range of wavelengths. According to the adiabatic principle, increasing the length of the device slows down the evolution and results in improved performance and a wider range of usable wavelengths. This can be observed in Figure 2a.
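To illustrate Eq. (8) concretely, here is a minimal Python sketch that decomposes a synthetic ring-shaped field into its phase-winding components; the grid, the radial profile, the mode admixture and the normalization over a finite set of \(\ell^{\prime}\) values are assumptions made for the example, not the actual BPM output.

```python
import numpy as np

# Synthetic "output facet" field on a Cartesian grid: a ring-shaped radial
# profile carrying mostly l = +2 with a small l = -2 admixture (10% amplitude).
N = 512
x = np.linspace(-20e-6, 20e-6, N)
X, Y = np.meshgrid(x, x)
r, theta = np.hypot(X, Y), np.arctan2(Y, X)
ring = np.exp(-((r - 8e-6) / 2e-6) ** 2)
psi = ring * (np.exp(2j * theta) + 0.1 * np.exp(-2j * theta))

# Overlap with the phase-winding basis exp(-i l' theta), cf. Eq. (8); the
# intensities are normalized over the finite set of l' values considered here.
l_values = np.arange(-5, 6)
overlaps = np.array([np.abs(np.sum(psi * np.exp(-1j * l * theta))) ** 2
                     for l in l_values])
I_rel = overlaps / overlaps.sum()

purity = I_rel[l_values == 2][0]
print(f"mode purity    I_2,2   = {purity:.3f}")   # ~0.99 for this test field
print(f"mode crosstalk sigma_2 = {1 - purity:.3f}")
```

For this test field the computed purity is \(I_{2,2}\approx 0.99\), i.e. \(\sigma_{2}\approx 0.01\), reflecting the \(1:0.01\) intensity ratio of the two phase windings.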
Figure 2: Simulation results. (a) Efficiency of the structure for different lengths. The mode crosstalk \(\sigma\) is the ratio of the intensity which is not converted into the desired mode to the total intensity. The line is the average crosstalk of the five modes, while the borders of the transparent areas show the best and the worst mode conversion. With the structure length, the adiabaticity increases, and with it the mode purity. (b) Effective refractive index for each mode along the propagation.

Besides these quantities, the effective refractive index \(n_{\mathrm{eff}}\) over the propagation length can be extracted from the simulations for each eigenmode. Since the phase acquired over one simulation step \(\Delta z\) is proportional to the effective refractive index, the phase of the overlap integral of the fields of two consecutive steps is proportional to the effective refractive index:
\[n_{\mathrm{eff}}=\frac{\lambda}{\Delta z}\arg\left(\iint\psi\left(z\right)\cdot\psi^{*}\left(z+\Delta z\right)\mathrm{d}x\,\mathrm{d}y\right)\tag{9}\]
As shown in Figure 2b, the effective refractive index is kept gapped most of the time, until the end, where a more rotationally symmetric structure is approached.
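A minimal synthetic check of Eq. (9) can be sketched as follows; the wavelength, step size and index value are assumed numbers for the example, and the sign of the imposed phase advance is chosen to match the conjugation order written in Eq. (9).

```python
import numpy as np

lam = 0.75          # assumed wavelength in um (750 nm)
dz = 0.1            # assumed BPM step size in um
n_eff_true = 0.004  # hypothetical index, on the order of Delta n, as for the
                    # slowly varying envelope in a scalar BPM

# A random complex field stands in for the BPM field at step z; the field at
# z + dz differs from it only by the phase factor exp(-1j*n_eff*dz/lam).
rng = np.random.default_rng(0)
psi_z = rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))
psi_z_dz = psi_z * np.exp(-1j * n_eff_true * dz / lam)

# Eq. (9): the phase of the overlap of two consecutive fields, times lam / dz,
# recovers the effective index that was put in.
n_eff = lam / dz * np.angle(np.sum(psi_z * np.conj(psi_z_dz)))
assert np.isclose(n_eff, n_eff_true)
print(f"recovered n_eff = {n_eff:.6f}")
```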
## 4 Comparison with the experiment

To experimentally demonstrate the device, we fabricated corresponding waveguide structures, following the principle used in [29, 30]. The sample is fabricated out of IP-Dip (Nanoscribe GmbH) using the commercial direct laser writing (DLW) system Photonic Professional GT from Nanoscribe. For our initial comparison, a structure measuring \(4\,\mathrm{mm}\) in length was fabricated, and single mode waveguides were utilized to couple light into the input side. The resulting spatial intensity and phase distribution on the output side was recorded. Upon direct comparison with the simulation in Figure 3a, numerous similarities are apparent. Notably, multiple modes are mixed in both the simulation and the measurement. This is most evident by the four spots resulting from the combination of the \(\ell=-2\) and \(\ell=+2\) modes. Additionally, the phase distribution displays vortices both in simulation and in measurement. Based on the simulations and the measured intensity distribution, it is evident that only the \(\ell=0\) mode has intensity in the center, leading us to assume that this mode has a predominantly smooth or constant phase front without any phase vortices. Based on this assumption, we use the \(\ell=0\) mode as a reference to measure the relative phase of the other modes. For a structure without the effective magnetic field, the modes at the output facet of the structure are LP-modes. For these modes there are no clear vortices; instead, neighboring regions with nonzero intensity have a phase shift of about \(\pi\) to each other. Again, for the case without an effective magnetic field, the simulation data qualitatively match the measured data, as can be seen in Figure 3b.

Figure 3: Comparison of the output field of the BPM-simulations to the measured field from the printed structure, for light coupled into different input waveguides at \(\lambda=750\,\mathrm{nm}\). The intensity distribution is captured by a CMOS camera, while the phase distribution is measured relative to the \(\ell=0\) mode by interference, coupling light into the two waveguides with different phase shifts. For a structure (a) with and (b) without an effective magnetic field.

To experimentally determine the mode selectivity of our device, a \(4\,\mathrm{mm}\) long multiplexing structure followed by an equivalent \(4\,\mathrm{mm}\) long demultiplexing structure is used (MUX/DEMUX) [22, 18]. In this way, errors from misalignment of the OAM-mode can be excluded [20]. To show that, due to the effective magnetic field, we are indeed multiplexing OAM-modes and not LP-modes, the demultiplexer is rotated by \(2\pi/5\) against the multiplexer. Since OAM modes are rotationally invariant except for phase, they can be accurately demultiplexed, while significant crosstalk is anticipated for LP-modes. For this comparison, two structures following this MUX/DEMUX setup are considered, one with and one without an effective magnetic field, as sketched in Figure 4(a,d), respectively. The relative intensities at the output per input waveguide are plotted in the crosstalk matrices of Figure 4(b,c,e,f). The y-axis shows the input waveguide where the light was coupled into the multiplexer and the x-axis shows the relative intensities at the output waveguides after the demultiplexer. Due to this normalization, the sum of the intensities in each row is 1. In a perfect MUX/DEMUX setup, the crosstalk matrix would be the identity matrix. Since the conversion to OAM-modes depends on the use of the effective magnetic field, Figure 4(b,c) shows that the largest elements are on the diagonal of the crosstalk matrix. Without the effective magnetic field, only LP-modes are generated and demultiplexed, leading to high crosstalk entries outside the diagonal in Figure 4(e,f). Again, simulation and measurement show qualitatively very similar results.

## 5 Conclusions

We have presented a waveguide structure that can be used to multiplex or demultiplex multiple OAM modes at different wavelengths simultaneously. By twisting the structure, the light experiences an effective magnetic field, so that the modes with equal absolute value of OAM split energetically. Because the structure is based on the adiabatic principle, it can be used intrinsically over a wide wavelength range, which we have demonstrated in simulations. The experiments on the fabricated structures are in good qualitative agreement with the simulations and thus demonstrate, as a proof of principle, the applicability for the generation or decomposition of OAM-modes.

Figure 4: Characterisation of the mode conversion by a MUX/DEMUX setup. (a) and (d) Sketch of the structure. The demultiplexer is rotated against the multiplexer by \(2\pi/5\), which causes crosstalk for LP-modes while OAM-modes are unaffected. (b) and (e) simulated, and (c) and (f) measured, crosstalk matrices of the structures for the cases with and without an effective magnetic field at \(\lambda=710\,\mathrm{nm}\), respectively.

## 6 Materials and Methods

The trajectories of the waveguides are parameterized in polar coordinates, where the radial component for all waveguides is given by \(|\mathbf{r}|\). To split a waveguide into two waveguides in step 5, the angle \(\alpha\) (see Figure 1(b)) is slowly increased. For this purpose, instead of defining one waveguide, two equivalent waveguides are defined, which differ from the original one only in that \(\pm\alpha\) is added to the polar coordinate, respectively. At the end facet, the 10 waveguides have a polar spacing of about \(2\pi/10\) from each other.
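To complement this parameterization, the following short sketch evaluates the schedules of Eq. (7) with the values of Table 1 below; the 4 mm length is that of the fabricated sample, and merging steps 1 and 4 into a single rise-and-fall curve is a convenience of the sketch.

```python
import numpy as np

def step(z, L, s_i, s_f, mu, T):
    """Fermi-Dirac-like schedule of Eq. (7) for one structure parameter."""
    return s_i + (s_f - s_i) / (np.exp((mu - z / L) / T) + 1)

L = 4.0                      # structure length in mm, as in the experiment
z = np.linspace(0.0, L, 9)   # a few sample points along the propagation

# Parameters from Table 1; mu orders the steps, T sets their relative speed.
delta_d = (step(z, L, 0.0, 0.3204, 0.0541, 0.0716)              # step 1
           + step(z, L, 0.3204, 0.0, 0.6683, 0.0983) - 0.3204)  # step 4
radius = step(z, L, 10.0, 2.6, 0.1219, 0.1604)                  # step 2
omega = step(z, L, 0.0, 2 * np.pi / 1748.7, 0.6119, 0.2158)     # step 3
alpha = step(z, L, 0.0, 2 * np.pi / 20, 0.6794, 0.0736)         # step 5
d_wg = step(z, L, 2.6, 1.7812, 0.8857, 0.0858)                  # step 6

for name, s in [("delta_d_wg / um", delta_d), ("|r| / um", radius),
                ("Omega / mm^-1", omega), ("alpha / rad", alpha),
                ("d_wg / um", d_wg)]:
    print(f"{name:16s}", np.round(s, 4))
```

Sampling the schedules confirms the intended chronology: the detuning rises and falls around \(\mu\approx 0.05\) and \(\mu\approx 0.67\), the waveguides approach the center early (\(\mu\approx 0.12\)), the twist switches on around the middle of the structure, and the splitting and thinning of the ring happen last.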
The area counted as the waveguide core is the area in which at least one of the parameterized waveguide trajectories lies. The area where multiple waveguides overlap is not exposed multiple times. The parameter values of the steps used for all simulations and for the structure in the experiment are given in Table 1. The sample is fabricated in the negative photoresin IP-Dip (Nanoscribe) using a commercial direct laser writing (DLW) system (Nanoscribe Photonic Professional GT). The structure is written layer by layer, where each layer is stacked onto the previous one in the \(z\)-direction with a spacing of 250 nm. Each layer is written line by line with a line spacing of 100 nm. The refractive index contrast of approximately \(\Delta n\approx 0.008\) is achieved by choosing a high laser intensity for the area of the waveguide core (LaserPower 60%) and a low intensity for the surrounding area (LaserPower 24%) close to the polymerization threshold (LaserPower 22%), at a writing speed of 20 mm/s. Thereby, a LaserPower of 100% refers to a laser intensity of 68 mW before the 63\(\times\) focusing objective of the DLW system. For each layer, the position and diameter of each waveguide are calculated from the \(z\)-position according to equation (7) with the parameters in Table 1. After the writing process, excess photoresist on the tip of the sample, which would cause distortions in the measurement, is removed by dipping the tip into PGMEA for a minute [30]. For the measurement, laser light from a white light laser (NKT Photonics) with a VARIA filter box is used. The light is linearly polarized, expanded and sent onto a spatial light modulator (SLM). Afterwards, all light besides the first diffraction order of the blazed grating on the SLM is blocked. With a 20\(\times\) objective (NA=0.4), the Fourier-transformed hologram on the SLM is imaged onto the input facet of the waveguide sample. With the hologram, we can choose the intensity and phase profile on the input facet. The intensity of the light from the output facet of the waveguide structure is imaged onto a CMOS camera. Using the SLM, we couple into one of the \(\ell\,=\,\{\pm 1,\pm 2\}\) waveguides and, simultaneously with different phase shifts, into the \(\ell=0\) waveguide. Using the interference of the two modes, the relative phase for each pixel on the camera can be determined as the phase of a sinusoidal fit [31].

### Acknowledgements

G.v.F. and J.S. acknowledge funding by the Deutsche Forschungsgemeinschaft through CRC/Transregio 185 OSCAR (project No. 277625399).

## 7 Conflict of Interest

The authors declare no conflict of interest.

\begin{table} \begin{tabular}{l|l|c|c|c|c} \hline \hline & Description of the step & \(s_{\mathrm{i}}\) & \(s_{\mathrm{f}}\) & \(\mu_{s}\) & \(T_{s}\) \\ \hline 1 & \(\Delta d_{\mathrm{wg}}\) increase & 0 μm & 0.3204 μm & 0.0541 & 0.0716 \\ 2 & \(|\mathbf{r}|\) decrease & 10 μm & 2.6 μm & 0.1219 & 0.1604 \\ 3 & \(\Omega\) increase & 0 mm\({}^{-1}\) & 2\(\pi/(1748.7\,\mathrm{mm})\) & 0.6119 & 0.2158 \\ 4 & \(\Delta d_{\mathrm{wg}}\) decrease & 0.3204 μm & 0 μm & 0.6683 & 0.0983 \\ 5 & waveguides split up \(\alpha\) & 0 & 2\(\pi/20\) & 0.6794 & 0.0736 \\ 6 & \(d_{\mathrm{wg}}\) decrease & 2.6 μm & 1.7812 μm & 0.8857 & 0.0858 \\ \hline \hline \end{tabular} \end{table} Table 1: Parameters of the structure used in the simulations and in the experiment

## 8 Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.
2301.12390
Enhancing Efficiency in Parallel Louvain Algorithm for Community Detection
Community detection is a key aspect of network analysis, as it allows for the identification of groups and patterns within a network. With the ever-increasing size of networks, it is crucial to have fast algorithms to analyze them efficiently. The Louvain method is a modularity-based greedy algorithm that iteratively improves the division of a network into disjoint communities. It is renowned for establishing high-quality communities, even in big, dense networks. However, it can be at least a factor of ten slower than community discovery techniques that rely on label propagation, which are generally extremely fast but obtain communities of lower quality. Researchers have suggested a number of methods for parallelizing and improving the Louvain algorithm. To decide which strategy is generally the best fit and which parameter values produce the highest performance without compromising community quality, it is critical to assess the performance and accuracy of these existing approaches. As we implement the single-threaded and multi-threaded versions of the static Louvain algorithm in this report, we carefully examine the method's details, make the required tweaks and optimizations, and determine the right parameter values. The tolerance between each pass can be changed to adjust the method's performance. With an initial tolerance of 0.01 and a tolerance decline factor of 10, an asynchronous version of the algorithm produced the best results. Generally speaking, according to our findings, the approach is not well suited for shared-memory parallelism; however, one potential workaround is to break the graph into manageable chunks that can be processed independently and then merged back together.
Subhajit Sahu
2023-01-29T08:31:44Z
http://arxiv.org/abs/2301.12390v1
# Enhancing Efficiency in Parallel Louvain Algorithm for Community Detection ###### Abstract Community detection is a key aspect of network analysis, as it allows for the identification of groups and patterns within a network. With the ever-increasing size of networks, it is crucial to have fast algorithms to analyze them efficiently. The Louvain method is a modularity-based greedy algorithm that iteratively improves the division of a network into disjoint communities. It is renowned for establishing high-quality communities, even in big, dense networks. However, it can be at least a factor of ten slower than community discovery techniques that rely on label propagation, which are generally extremely fast but obtain communities of lower quality. Researchers have suggested a number of methods for parallelizing and improving the Louvain algorithm. To decide which strategy is generally the best fit and which parameter values produce the highest performance without compromising community quality, it is critical to assess the performance and accuracy of these existing approaches. As we implement the single-threaded and multi-threaded versions of the static Louvain algorithm in this report, we carefully examine the method's details, make the required tweaks and optimizations, and determine the right parameter values. The tolerance between each pass can be changed to adjust the method's performance. With an initial tolerance of 0.01 and a tolerance decline factor of 10, an asynchronous version of the algorithm produced the best results. Generally speaking, according to our findings, the approach is not well suited for shared-memory parallelism; however, one potential workaround is to break the graph into manageable chunks that can be processed independently and then merged back together.

## 1 Introduction

The proliferation of interconnected data from real-world sources, such as social and biological networks, has led to an increase in the use of graphs as a means of representation [1]. These graphs, however, are often massive in size, necessitating the use of parallelism to handle the scale of the data. Additionally, many real-world graphs are dynamic, with edges being constantly added and removed [2]. As a result, research into parallel algorithms for analyzing and updating graph analytics on dynamic graphs has gained significant attention in recent years. Examples of this research include the dynamic calculation of centrality scores [3, 4, 5, 6, 7], maintenance of biconnected components [8, 9, 10], and computation of shortest paths [11, 12, 13]. Community detection is a widely studied problem in graph analysis, with practical applications in fields such as e-commerce, communication networks, and healthcare. It involves identifying groups of vertices in a graph, known as communities, that are densely connected within the group but sparsely connected to the rest of the network. When these structures can be identified based solely on the topology of the network, they are referred to as intrinsic communities. On the other hand, extrinsic communities are defined based on external information or attributes of the nodes, such as membership in a particular organization or geographic location. There are various types of communities that can be identified in a graph. Disjoint communities are those in which each vertex belongs to exactly one community (the type studied here).
Alternatively, overlapping communities [14] allow each vertex to belong to more than one community, while hierarchical communities have a multi-level membership structure. Community detection is a challenging problem, as it is NP-hard and the number of communities and their size distribution are not known in advance. There are various techniques for addressing this problem, such as label propagation [15, 14, 16], random walk [17], diffusion [18], spin dynamics [19], fitness metric optimization [20, 21], statistical inference [22, 23], simulated annealing [24, 19], clique percolation [25, 26, 27], and more. These methods can be grouped into two main categories: divisive and agglomerative. Divisive methods, also called top-down methods, start by assuming all vertices in a graph belong to one community and iteratively identify and remove bridges to split it into more well-connected communities [28, 29]. Agglomerative methods, or bottom-up methods, merge two or more communities together such that a certain score is maximized [30, 2]. Another approach is seed set expansion, which begins with a set of relevant seed vertices of interest and expands them to form communities surrounding them [31]. When using different methods for community detection, the communities that are returned can be evaluated using a quality function. One popular measure is modularity, introduced by Newman and Girvan, which compares the number of edges within a community to the expected number in a random null model. It ranges from -0.5 (non-modular clustering) to 1.0 (fully modular clustering), and optimizing this function theoretically results in the best possible grouping. Another fitness score is conductance, which measures the community cut, or inter-community edges. Another popular quality function is the Constant Potts Model (CPM), which aims to overcome some limitations of modularity [32].

## 2 Literature survey

There are several techniques that have been developed for detecting communities in networks. A number of them are based on modularity optimization, hierarchical clustering, label propagation, region density, core clustering [33], game theory, information theory (Infomap) [34, 35, 36, 37, 38], and biological evolution (genetics) [39, 40, 41]. Metrics such as the modularity score [20, 42, 40], the Normalized Mutual Information index (NMI) [43, 44], and the Jaccard index [43] are used to compare the quality of communities obtained with different approaches. The Louvain method is a greedy modularity-based optimization algorithm that hierarchically agglomerates the vertices of a graph to obtain communities. It was created by Blondel et al. [42] from the University of Louvain and is one of the most popular heuristics in this field. It has an average time complexity of \(\Theta(n\log n)\), with \(n\) being the total number of nodes in the network [45]. Approaches to performing the Louvain algorithm can be either **coarse-grained**, where a set of vertices is processed in parallel, or **fine-grained**, where all the vertices are processed in parallel. Several parallelization heuristics for the Louvain algorithm have been implemented in the Grappolo software library [46]. It should be noted, though, that community detection methods such as Louvain that rely on modularity maximization are known to suffer from the **resolution limit problem**, which prevents the identification of communities of certain sizes [47].
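For reference, the modularity \(Q\) discussed above can be written, in the standard Newman-Girvan form (a textbook statement added here for completeness), as \[Q=\frac{1}{2m}\sum_{i,j}\left[A_{ij}-\frac{k_{i}k_{j}}{2m}\right]\delta(c_{i},c_{j}),\] where \(A_{ij}\) is the weight of the edge between vertices \(i\) and \(j\), \(k_{i}\) is the (weighted) degree of vertex \(i\), \(m\) is the total edge weight of the graph, \(c_{i}\) is the community of vertex \(i\), and \(\delta\) is the Kronecker delta.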
Some **improvements on the Louvain algorithm** include using a suitable heuristic-based partitioning [48], dealing with ghost vertices between graph partitions [48], restricting the internal search rules [49], and early pruning of non-promising candidates [49]. Other interesting approaches include the use of MapReduce in a BigData batch processing framework [50]. One of the main advantages of the Louvain algorithm is its ability to find communities with high modularity, which is a measure of the density of connections within communities and the sparsity of connections between communities. However, the Louvain algorithm does have some limitations. It can be sensitive to the initial order of the nodes, which can lead to non-reproducible results. It can also have difficulty detecting communities that are very dense or tightly interconnected, as it tends to break up dense communities in favor of finding larger, more cohesive communities.

## 3 Evaluation

In this section, we first describe our experimental setup, including the system we use and our dataset. We then investigate the static Louvain algorithm, taking note of the subtle details, and explore various optimizations, while implementing their single-threaded and multi-threaded OpenMP-based versions.

### Experimental setup

#### 3.1.1 System used

In our experiments, we employed a system comprised of two Intel Xeon Silver 4116 64-bit processors running at 2.10 GHz, and 128 GB of DDR4 Synchronous Registered DRAM operating at 2666 MHz. Each processor featured 12 x86 cores with 2 hyper-threads per core and 16.5 MB of L3 cache. Our server was running CentOS version 7.9, with GCC version 9.3 and OpenMP version 5.0 used for compilation at optimization level 3 (-O3); simultaneous multi-threading (SMT) was enabled for all experiments.

#### 3.1.2 Dataset

The graphs used in our experiments are detailed in Table 1. These graphs were obtained from the SuiteSparse Matrix Collection [51]. The total number of vertices in the graphs varies from \(74.9\) thousand to 12 million, and the total number of edges varies from \(811\) thousand to \(304\) million. All edges are considered to be undirected and weighted with a default weight of one, and self-loops were added to each vertex in all the graphs.

### Louvain algorithm

The **Louvain algorithm**, as mentioned before, is an agglomerative hierarchical community detection method that iteratively and greedily optimizes for modularity. Given an undirected weighted graph, all vertices are first considered to be their own communities. In the first phase, also known as the **local-moving phase**, each vertex greedily decides to move to the community of one of its neighbors which gives the greatest increase in modularity. If moving to no neighbor's community leads to an increase in modularity, the vertex chooses to stay in its own community. This is done sequentially for all the vertices. If the total change in modularity is more than a certain threshold (the tolerance parameter), this phase is repeated. Once this phase is complete, all vertices have formed their first hierarchy of communities. The next phase is called the **aggregation phase**, where all the vertices belonging to a community are collapsed into a single super-vertex, such that edges between communities are represented as edges between the respective super-vertices (edge weights are combined), and edges within each community are represented as self-loops in the respective super-vertices (again, edge weights are combined).
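As a concrete illustration of the local-moving phase just described, the following compact sequential sketch uses the standard simplified delta-modularity expression. It is a generic textbook rendering, not the implementation evaluated in this report, and the name `local_moving` is our own.

```python
from collections import defaultdict

def local_moving(adj, community, m, tolerance):
    """One local-moving phase of Louvain (asynchronous, Gauss-Seidel style).

    adj[u] maps each neighbor of u to the edge weight, community[u] is the
    current label of u, and m is the total edge weight of the graph. Each
    vertex greedily moves to the neighboring community with the largest
    delta-modularity; sweeps repeat until the gain drops below `tolerance`.
    """
    degree = {u: sum(adj[u].values()) for u in adj}
    comm_deg = defaultdict(float)                  # total degree per community
    for u in adj:
        comm_deg[community[u]] += degree[u]

    while True:
        total_gain = 0.0
        for u in adj:
            comm_deg[community[u]] -= degree[u]    # detach u from its community
            w_to = defaultdict(float)              # edge weight from u to each community
            for v, w in adj[u].items():
                if v != u:
                    w_to[community[v]] += w
            w_to.setdefault(community[u], 0.0)     # staying put is always an option

            def gain(c):                           # simplified delta-modularity
                return w_to[c] / m - comm_deg[c] * degree[u] / (2.0 * m * m)

            best = max(w_to, key=gain)
            total_gain += max(0.0, gain(best) - gain(community[u]))
            community[u] = best
            comm_deg[best] += degree[u]
        if total_gain < tolerance:
            return community

# Tiny example: two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
adj = defaultdict(dict)
for u, v in edges:
    adj[u][v] = adj[v][u] = 1.0
print(local_moving(adj, {u: u for u in adj}, m=float(len(edges)), tolerance=0.01))
```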
Together, the _local-moving_ and the _aggregation phases_ constitute a **pass**. This _super-vertex graph_ is then used as input for the next pass. This process continues until the increase in modularity falls below a certain threshold (the pass_tolerance parameter, which is generally \(0\), as we want to maximize our modularity gain). As a result of the passes, we obtain a hierarchy of community memberships for each vertex in the form of a _dendrogram_ [52]. We generally consider the _top-level hierarchy_ as the final result of the community detection process.

\begin{table} \begin{tabular}{||c||c|c|c|} \hline \hline **Graph** & \(|V|\) & \(|E|\) & \(D_{avg}\) \\ \hline \multicolumn{4}{|c|}{Web Graphs} \\ \hline web-Stanford & 282K & 3.99M & 14.1 \\ \hline web-BerkStan & 685K & 13.3M & 19.4 \\ \hline web-Google & 916K & 8.64M & 9.43 \\ \hline web-NotreDame & 326K & 2.21M & 6.78 \\ \hline indochina-2004 & 7.41M & 304M & 41.0 \\ \hline \multicolumn{4}{|c|}{Social Networks} \\ \hline soc-Slashdot0811 & 77.4K & 1.02M & 13.2 \\ \hline soc-Slashdot0902 & 82.2K & 1.09M & 13.3 \\ \hline soc-Epinions1 & 75.9K & 811K & 10.7 \\ \hline soc-LiveJournal1 & 4.85M & 86.2M & 17.8 \\ \hline \multicolumn{4}{|c|}{Collaboration Networks} \\ \hline coAuthorsDBLP & 299K & 1.96M & 6.56 \\ \hline coAuthorsCiteseer & 227K & 1.63M & 7.18 \\ \hline coPapersCiteseer & 434K & 32.1M & 74.0 \\ \hline coPapersDBLP & 540K & 30.5M & 56.5 \\ \hline \multicolumn{4}{|c|}{Road Networks} \\ \hline italy\_osm & 6.69M & 14.0M & 2.09 \\ \hline great-britain\_osm & 7.73M & 16.3M & 2.11 \\ \hline germany\_osm & 11.5M & 24.7M & 2.15 \\ \hline asia\_osm & 12.0M & 25.4M & 2.12 \\ \hline \hline \end{tabular} \end{table} Table 1: In our experiments, we use a list of 17 graphs. Each graph has its edges duplicated in the reverse direction to make them undirected, and a weight of 1 is assigned to each edge. The table lists the total number of vertices (\(|V|\)), the total number of edges (\(|E|\)) after making the graph undirected, and the average degree of the vertices (\(D_{avg}\)) for each graph. The numbers of vertices and edges are rounded to the nearest thousand or million, as appropriate.

Adjusting the tolerance between each pass (known as _threshold scaling_) has been observed to impact the runtime of the algorithm without significantly affecting the modularity of the obtained communities. We conduct experiments to obtain a suitable rate at which the _tolerance_ can be decreased between each pass (the tolerance_decline_factor parameter), in addition to the initial tolerance parameter value that would be suitable on average for most graphs. In our first experiment, we implement a **single-threaded CPU-based version** of the Louvain algorithm. We adjust the initial value of tolerance from \(1\) to \(10^{-12}\) in steps of \(10\), and adjust the tolerance_decline_factor from \(10\) to \(10^{4}\). From the results, we observe that an initial tolerance of \(0.01\) yields communities with the best possible modularity while requiring the least computation time. In addition, increasing the tolerance_decline_factor increases the computation time (as one might expect) but does not seem to impact the resulting modularity. Thus, a tolerance_decline_factor of \(10\) is a good choice. The authors of the original paper use an **asynchronous** version of the algorithm, where the community membership of each vertex can depend on the community membership of its neighbors in the current iteration (similar to the _Gauss-Seidel_ method) [42].
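In code, the asynchronous variant and the synchronous one discussed next differ only in which label array the neighbor scan reads. The self-contained toy sweep below (with a majority-label rule standing in for the delta-modularity argmax, purely for illustration) makes this explicit:

```python
def sweep(adj, labels, synchronous):
    """One vertex sweep; `synchronous` freezes labels at the start of the
    sweep (Jacobi style), otherwise updates made earlier in the same sweep
    are immediately visible (Gauss-Seidel style)."""
    read = dict(labels) if synchronous else labels
    for u in adj:
        counts = {}
        for v in adj[u]:
            counts[read[v]] = counts.get(read[v], 0) + 1
        labels[u] = max(counts, key=counts.get)  # toy stand-in for the argmax
    return labels
```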
However, the asynchronous version suffers from reads and writes to the same memory area, which can be _detrimental_ to performance in a _multi-threaded implementation_ due to cache coherence overhead. We therefore consider experimenting with a **synchronous** version of the algorithm, where the community membership of each vertex can only depend on the community membership of its neighbors in the previous iteration (similar to the _Jacobi_ method). We perform this comparison on the _single-threaded CPU-based implementation_ of the algorithm. From the results, we observe that both the synchronous and the asynchronous version of the algorithm are able to provide communities of _equivalent quality_ in terms of _modularity_, with the asynchronous version providing slightly higher quality communities for certain graphs. However, the _synchronous version_ is quite a bit slower than the asynchronous one in terms of the total _time_ taken, as well as the total number of _iterations_ of the local-moving phase (which is the most expensive part of the algorithm). We therefore conclude that _asynchronous or partially asynchronous approaches for vertex processing_ are likely to provide _good performance_ over fully synchronous versions for parallel implementations of the Louvain algorithm. Partially asynchronous vertex ordering via graph coloring has been explored by Halappanavar et al. [46]. For the second experiment, we implement a **multi-threaded OpenMP-based version** of the Louvain algorithm. Similar to earlier algorithms, each thread is given a _separate hashtable_, which it uses to choose the community with the highest delta-modularity. If multiple communities yield the highest delta-modularity, we pick only the first one in the hashtable. As before, hashtables are _allocated separately_ for better performance (instead of being stored contiguously in a vector). We initially use an OpenMP schedule of "auto" and a total of \(12\) threads (not an optimal choice). We then adjust the number of threads from \(2\) to \(48\) on a _dual-socket system_, with each CPU having \(12\) cores and \(2\) hyperthreads per core. From the results, we observe that increasing the number of threads only decreases the runtime of the Louvain algorithm by a small amount. This indicates that when multiple reader threads and a writer thread access common memory locations (here, the community membership of each vertex), performance is degraded (likely due to higher pressure on cache coherence) and tends to approach the performance of a sequential algorithm if there is too much overlap. Utilizing _all_ \(48\) threads for community detection significantly increases the time required for obtaining the results, which is likely due to thread switching by the operating system. The number of iterations required to converge also increases with the number of threads, indicating that the behavior of the asynchronous multi-threaded implementation starts to approach the behavior of a synchronous version of the algorithm, which converges much more slowly than the asynchronous version. One approach to resolve these issues could be to _partition_ the graph in such a way that each partition can be run _independently_, and then combined back together.

## 4 Conclusion

The Louvain algorithm is well-liked among researchers because of its ability to locate high-quality communities in networks. Our research focuses on the static Louvain algorithm.
We pay close attention to the algorithm's minute details when implementing its single-threaded and multi-threaded OpenMP-based variants, making the necessary adjustments and optimizations, and obtaining appropriate parameter values. By altering the tolerance between each pass, also known as _threshold scaling_, the method's performance can be tuned. According to our findings, communities with high modularity are produced in the least amount of computing time when an _asynchronous_ version of the algorithm is used, with an initial tolerance of \(0.01\) and a tolerance_decline_factor of \(10\). A parallel OpenMP-based implementation of the algorithm suggests that the algorithm is generally _not_ well suited for shared-memory _parallelism_ (unless the input graph has a large number of vertices). A possible solution is to divide the graph into manageable _partitions_ that can each be processed independently and then reassembled.
2307.16278
A Model of the Black Hole Interior
A model is proposed for the interior of a neutral non-rotating black hole. It consists of an ideal fluid with density $\rho$ and a negative pressure $p$, obeying an equation of state $p=-\xi\rho$. In order to have a solution, $\xi$ must lie in the narrow range between 0.1429 and 0.1716.
C. S. Lam
2023-07-30T17:16:45Z
http://arxiv.org/abs/2307.16278v2
# A Model of the Black Hole Interior ###### Abstract A model is proposed for the interior of a neutral non-rotating black hole. It consists of an ideal fluid with density \(\rho\) and a negative pressure \(p\), obeying an equation of state \(p=-\xi\rho\). In order to have a solution, \(\xi\) must lie in the narrow range between 0.1429 and 0.1716. Pressure, density, mass, temperature, and entropy distributions in the interior are computed. Due to the presence of pressure, the surface temperature \(T_{H}\) and the total entropy \(S_{H}\) do not obey the Bekenstein relation. In this model, \(S_{H}\) is proportional to the surface area as usual, but entropy is concentrated near the center of the black hole rather than on its surface, making the model non-holographic.

## I Introduction

The interior of a black hole is a mystery. When a star collapses into a black hole, its matter loses baryonic number, fermionic number, isotopic spin, hypercharge etc., to become a different kind of matter that shall be referred to as black-hole matter. The nature of this new kind of matter is largely unknown, and its distribution inside the black hole is also unknown, except that there is a singularity at the center [1; 2; 3; 4; 5], which can likely be washed out when quantum effects are taken into account [6; 7; 8; 9]. In this article, we propose a model of the black-hole matter, described by an ideal fluid with a positive mass density \(\rho\), and an equation of state \(p=-\xi\rho\) with a negative pressure. To prevent matter from sinking to the center, a repulsive force produced by a negative pressure is required. In order to have a solution, it turns out that only a very narrow range of \(\xi\), between 0.1429 and 0.1716, is allowed. In this way, the present model differs from those [10; 11; 12] where black-hole matter is made up of dark energy with \(\xi=1\). The density and the metric of this model have a singularity at the origin, as per the singularity theorem, but the internal mass in a finite volume is finite and vanishes as the volume goes to zero. We will compute the density, mass, temperature and entropy distributions in the interior of the black hole, from which the surface temperature \(T_{H}\) and the total entropy \(S_{H}\) can be extracted. Due to the presence of pressure, Bekenstein's relation [13] is not quite satisfied in this model. Moreover, the black hole entropy turns out to be concentrated near the center and not on the surface, making this model non-holographic, although \(S_{H}\) is still proportional to the surface area of the black hole as usual.

## II The model

Consider a spherical black hole given by the line element [14] \[ds^{2} = -e^{2\alpha(r)}dt^{2}+e^{2\beta(r)}dr^{2}+r^{2}d\Omega^{2}. \tag{1}\] Vacuum is assumed outside the black hole, so the exterior line element is given by the Schwarzschild metric, \[e^{2\alpha(r)} = \left(1-\frac{2GM}{r}\right)\ \text{and}\ \ \ e^{2\beta(r)}=\left(1-\frac{2GM}{r}\right)^{-1}\ \ \ \text{for}\ r>R, \tag{2}\] where \(M\) is the mass of the black hole, \(G\) is the gravitational constant, and \(R=2GM\) is the radius of its horizon. Matter is assumed to have a volume distribution inside the black hole, with its energy-momentum tensor given by that of an ideal fluid, \[T_{\mu\nu}=(p+\rho)U_{\mu}U_{\nu}+pg_{\mu\nu}. \tag{3}\] The four-velocity is as usual normalized to \(U^{\mu}U_{\mu}=-1\).
The energy density \(\rho\) is assumed to be positive, but the pressure determined by the equation of state \(p=-\xi\rho\) can be either positive (\(\xi<0\)) or negative (\(\xi>0\)) at this point. With this energy-momentum tensor, the Einstein equation leads to the Tolman-Oppenheimer-Volkoff equation for the pressure gradient [14; 15] \[\frac{dp}{dr} = -\frac{(\rho+p)[Gm(r)+4\pi Gr^{3}p]}{r[r-2Gm(r)]},\qquad\mbox{where} \tag{4}\] \[m(r) = 4\pi\int_{0}^{r}\rho(r^{\prime}){r^{\prime}}^{2}dr^{\prime} \tag{5}\] is the mass of a black-hole matter ball of radius \(r\leq R\). The metric functions \(\alpha\) and \(\beta\) inside the black hole are given by the equations \[\frac{d\alpha}{dr} = -\frac{1}{(\rho+p)}\frac{dp}{dr}, \tag{6}\] \[e^{2\beta} = \left[1-\frac{2Gm(r)}{r}\right]^{-1}. \tag{7}\] In particular, \(e^{2\beta}\) has the same form across the horizon. Since there is no pressure outside the black hole, we require \(p(R)=0\) for continuity. Hence \(\rho(R)=0\) and \((dm/dr)(R)=0\). For \(R-r\) small and positive, Eq.(4) can be approximated by \[\frac{dp}{dr}=-\frac{(\rho+p)GM}{R[r-2GM]}=\frac{\xi-1}{2\xi}\frac{p}{R-r}, \quad(R-r\ll R), \tag{8}\] whose solution for small and positive \(R-r\) is \[p(r)\simeq-\tilde{c}(R-r)^{(1-\xi)/2\xi} \tag{9}\] for some \(\tilde{c}\). In order to have \(p(R)=0\), it is necessary that \(0<\xi<1\), so the black-hole matter necessarily carries a negative pressure. For small \(R-r\), Eq.(6) and Eq.(4) imply \[\frac{d\alpha}{dr}\simeq\frac{GM}{R}\frac{1}{r-2GM}=\frac{1}{2(r-R)}, \tag{10}\] so it admits the solution \(e^{2\alpha}\simeq(r-R)/R\simeq(1-2GM/r)\), matching the value in Eq.(2) across the horizon. It is simpler to write the Tolman-Oppenheimer-Volkoff equation in a dimensionless form. To that end, let \(x=r/R,\ \bar{m}(x)=m(r)/M\), and \(\bar{\rho}(x)=\rho(r)(R^{3}/M)\). Then Eq.(4) can be written as \[\frac{d\bar{\rho}(x)}{dx} = -\frac{\xi-1}{2\xi}\frac{\bar{\rho}(x)}{x}\frac{\bar{m}(x)-4\pi \xi x^{3}\bar{\rho}(x)}{x-\bar{m}(x)}, \tag{11}\] \[\frac{d\bar{m}(x)}{dx} = 4\pi x^{2}\bar{\rho}(x), \tag{12}\] with the boundary conditions \(\bar{\rho}(1)=0\) and \(\bar{m}(1)=1\). The interior solution equivalent to Eq.(9) near \(x=1\) is then \[\bar{\rho}(x) \simeq c(1-x)^{(1-\xi)/2\xi}, \tag{13}\] \[\bar{m}(x) \simeq 1-\frac{8\pi\xi c}{\xi+1}(1-x)^{(\xi+1)/2\xi},\qquad(1-x\ll 1) \tag{14}\] for some \(c>0\). Let us turn to the behavior near \(x=0\). Assume the mass function \(\bar{m}(x)\) increases steadily from \(0\) to \(1\) on the interval \(x\in(0,1)\); then for small \(x\), \(\bar{m}(x)=\mu_{0}x^{\beta}\) for some \(\beta>0\) and \(\mu_{0}>0\). It follows from Eq.(12) that \(\bar{\rho}(x)=(\mu_{0}\beta/4\pi)x^{\beta-3}\), so \(x^{3}\bar{\rho}(x)\) is proportional to \(\bar{m}(x)\), and \(\bar{m}(x)-4\pi\xi x^{3}\bar{\rho}(x)=\bar{m}(x)(1-\beta\xi)\). If \(\beta>1\), then \(x-\bar{m}(x)\simeq x\) for sufficiently small \(x\), in which case the \(x\)-behavior on the two sides of Eq.(11) can never match. This shows that \(\beta>1\) is not allowed. As a result, \(\bar{\rho}(x)\sim x^{\beta-3}\) must diverge near \(x=0\). Eq.(6) and Eq.(7) show that the metric functions also misbehave. Specifically, from Eq.(6), one gets \(e^{2\alpha}\simeq(\bar{\rho}/\bar{\rho}_{0})^{2\xi/(1-\xi)}\), which also diverges near \(x=0\), but it remains positive throughout the interior of the black hole.
Hence the signature of the metric changes from \((-,+,+,+)\) outside the black hole to \((-,-,+,+)\) inside, whereas the signature \((+,-,+,+)\) of the Schwarzschild metric inside is that of a Kantowski-Sachs metric. The presence of a singularity is hardly surprising [1; 2; 3; 4; 5]. Using the Raychaudhuri equation, it can be shown that a singularity is present at \(r=0\) if the convergence condition \(R_{\mu\nu}U^{\mu}U^{\nu}\geq 0\) is satisfied. In the present case, the Einstein equation \(R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}=8\pi GT_{\mu\nu}\) implies \(R_{\mu\nu}=8\pi G(T_{\mu\nu}-\frac{1}{2}T^{\alpha}_{\ \alpha}g_{\mu\nu})=8\pi G\left[(p+\rho)U_{\mu}U_{\nu}+\frac{1}{2}(\rho-p)g_{\mu\nu}\right]\), hence \(R_{\mu\nu}U^{\mu}U^{\nu}=4\pi G\rho(1-3\xi)\). With \(\xi<1/3\), as shall presently be shown, the convergence condition is met, so a singularity is present. If \(\beta<1\), then \(x-\bar{m}(x)\simeq-\bar{m}(x)\) for sufficiently small \(x\), so Eq.(11) is satisfied when \[\beta=\frac{7\xi-1}{\xi(1+\xi)}. \tag{15}\] In order for \(\beta\) to be between \(0\) and \(1\), \(\xi\) is allowed only a narrow range of values, between \(\xi=1/7\simeq 0.1429\) (when \(\beta=0\)) and \(\xi=3-2\sqrt{2}\simeq 0.1716\) (when \(\beta=1\)). For larger values of \(\xi\), this formula yields \(\beta>1\), which is not allowed. The endpoint value \(\xi=3-2\sqrt{2}\), at which \(\beta=1\), cannot actually be reached. With \(\beta=1\), both terms in \(x-\bar{m}(x)\) in Eq.(11) are of the same order, so both must be retained. In that case, Eq.(11) can be satisfied only if \[\mu_{0}=\frac{4\xi}{6\xi-1-\xi^{2}},\qquad(\beta=1), \tag{16}\] but at the value \(\xi=3-2\sqrt{2}\) this gives \(\mu_{0}=\infty\), confirming that the endpoint cannot be reached. A numerical solution for \(\xi=0.16\), somewhere in the middle of the allowed range, is shown in Fig. 1 as an illustration. This solution is obtained numerically by integrating Eq.(11) and Eq.(12), starting at an initial \(x\) so small that the approximations \(\bar{m}(x)=\mu_{0}x^{\beta}\) and \(\bar{\rho}(x)=(\mu_{0}\beta/4\pi)x^{\beta-3}\) are accurate. The value of \(\beta\) is given by Eq.(15), and the constant \(\mu_{0}\) is adjusted to yield \(\bar{m}(1)=1\) and \(\bar{\rho}(1)=0\) at the other boundary.

Figure 1: Scaled mass and density distribution in the black-hole interior for \(\xi=0.16\)

## III Temperature and entropy

It is well known that a neutral and non-rotating black hole with mass \(M\) satisfies the first law of black-hole dynamics: \(dM=(\kappa/8\pi G)dA\), where \(\kappa\) is its surface gravity and \(A\) is the area of its event horizon. Moreover, it is known that \(\kappa\) is constant throughout the surface of the black hole and \(A\) cannot decrease [13; 16]. These properties can easily be verified for a Schwarzschild black hole, where \(\kappa=GM/R^{2}\), \(R=2GM\), and \(dA=8\pi RdR\). Their similarity with the first and second laws of thermodynamics led Bekenstein [13] to propose that the black-hole entropy \(S_{H}\) is proportional to \(A\), that the black-hole temperature \(T_{H}\) is proportional to \(\kappa\), and that the first law of black-hole dynamics \(dM=(\kappa/8\pi G)dA\) can be interpreted as the first law of thermodynamics \(dM=T_{H}dS_{H}\). A more involved analysis can be found in [17]. Since temperature does not enter the usual treatment of classical black holes, this identification with the first law of thermodynamics remains a conjecture within classical relativity.
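As an aside on the numerics of Section II, the shooting procedure behind Fig. 1 can be sketched as follows. This is a rough illustration rather than the authors' code: the starting point \(x_{0}\), the event offset, the tolerances, and the bisection bracket are all ad-hoc assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

XI = 0.16
BETA = (7 * XI - 1) / (XI * (1 + XI))          # Eq. (15)

def rhs(x, y):
    """Dimensionless TOV system, Eqs. (11)-(12)."""
    rho, m = y
    drho = (-(XI - 1) / (2 * XI)) * (rho / x) \
           * (m - 4 * np.pi * XI * x**3 * rho) / (x - m)
    return [drho, 4 * np.pi * x**2 * rho]

def horizon_x(mu0, x0=1e-3):
    """Integrate outward from the small-x power laws; return the x at
    which m(x) meets x, i.e. where the horizon is reached."""
    y0 = [mu0 * BETA / (4 * np.pi) * x0**(BETA - 3), mu0 * x0**BETA]
    hit = lambda x, y: x - y[1] + 1e-9
    hit.terminal = True
    sol = solve_ivp(rhs, (x0, 2.0), y0, events=hit, rtol=1e-9, atol=1e-12)
    return sol.t[-1]

# Bisect on mu0 so that the horizon lands at x = 1, i.e. m(1) = 1.
lo, hi = 0.5, 5.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if horizon_x(mid) < 1.0 else (lo, mid)
print("mu0 =", 0.5 * (lo + hi))
```

With this interior solution in hand, we can return to the thermodynamical interpretation.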
However, if the interior structure of the black hole is known and constitutes a thermodynamical system, then temperature and entropy, as well as pressure and density, can be computed, and Bekenstein's conjecture can be verified. We shall do so below, assuming the interior of the black hole to consist of the black-hole matter discussed in the last section. Let \(S(r)\) be the entropy of a ball of black-hole matter with radius \(r\leq R\), and \(T(r)\) be its surface temperature. Since \(T\) has the dimension of \(1/r\) (in \(\hbar=c=1\) units) and \(S\), being of the form \(r^{2}/G\), is dimensionless, it follows that \(T^{2}(r)S(r)G:=\eta\) is dimensionless. We shall assume \(\eta\) to be a universal constant, independent of the black-hole size \(R\), or \(r\). The amount of heat flowing into the black-hole matter ball can be computed from the first law of thermodynamics to be \[T(r)dS(r)=dm(r)+p(r)(4\pi r^{2}dr)=4\pi r^{2}\left[\rho(r)+p(r)\right]dr=(1- \xi)4\pi r^{2}\rho(r)dr. \tag{17}\] Using \(T(r)=\sqrt{\eta/GS(r)}\), so that \(T(r)dS(r)=2\sqrt{\eta/G}\ d\left(\sqrt{S(r)}\right)\), we can integrate the first law from \(r=0\) to \(r=R\). Assuming \(S(0)=0\), one gets \(\sqrt{4\eta/G}\ \sqrt{S(R)}=(1-\xi)M\), which implies \[S(R) = \frac{(1-\xi)^{2}}{4\eta}GM^{2}=\frac{(1-\xi)^{2}}{16\pi\eta} \frac{A}{4G}, \tag{18}\] \[T(R) = \sqrt{\frac{\eta}{GS(R)}}=\frac{2\eta}{1-\xi}\frac{1}{GM}=\frac{ 16\pi\eta}{1-\xi}\frac{1}{4\pi R}. \tag{19}\] If we make the identification \(T_{H}=T(R)\) and \(S_{H}=S(R)\), then \[T_{H}dS_{H}=(1-\xi)dM, \tag{20}\] which differs from the Bekenstein conjecture by an extra term proportional to \(\xi\). This comes about because part of the heat flowing into the system is used to do work, in addition to increasing the mass \(M\). With \(\sqrt{S(r)}=(1-\xi)(G/4\eta)^{\frac{1}{2}}\,m(r)\), and \(\beta<1\), \(\bar{m}(r)\) and \(S(r)\) rise fairly quickly with \(r\), as illustrated in Fig. 1. Although \(S_{H}\) is proportional to the surface area \(A\), this model is not holographic, because the entropy as well as the dynamical degrees of freedom are spread throughout the volume, rather than being concentrated on the surface of the black hole. I am grateful to Bei-Lok Hu for discussions and suggestions.
2307.09358
A Dual Band Printed F Antenna using a Trap with small band separation
The trap antenna is a well-known method and has many applications. With this method, trap(s) are used on the antenna to block currents at certain frequencies, electrically dividing the antenna into multiple segments so that one antenna can work on multiple frequencies. In this paper, the trap antenna method is used to design a dual band Sub-GHz printed F-antenna. The antenna is printed on an FR4 board to achieve a low cost solution. The two bands are 865-870 MHz and 902-928 MHz. The challenge of this design is that the frequency separation of the two bands is very small, and as a consequence the extra section for the low frequency band is also very small. The influence of the trap LC component variation, due to tolerance, on the two resonant frequencies is then large, and so it is difficult to achieve good in-band return loss within the LC tolerance. This is the main difficulty of this design. The problem is solved by placing the low band section away from the end of the antenna.
Prasad Samudrala, Justin Jose, Amit Kulkarni
2023-07-18T15:44:31Z
http://arxiv.org/abs/2307.09358v1
# A Dual Band Printed F-Antenna Using a Trap With Small Band Separation ###### Abstract The trap antenna is a well-known method and has many applications. With this method, trap(s) are used on the antenna to block currents at certain frequencies, electrically dividing the antenna into multiple segments so that one antenna can work on multiple frequencies. In this paper, the trap antenna method is used to design a dual band Sub-GHz printed F-antenna. The antenna is printed on an FR4 board to achieve a low cost solution. The two bands are 865-870 MHz and 902-928 MHz. The challenge of this design is that the frequency separation of the two bands is very small, and as a consequence the extra section for the low frequency band is also very small. The influence of the trap LC component variation, due to tolerance, on the two resonant frequencies is then large, and so it is difficult to achieve good in-band return loss within the LC tolerance. This is the main difficulty of this design. The problem is solved by placing the low band section away from the end of the antenna. Trap, SubGHz, printed F antenna, FR4 board, LC Component

## 1 Introduction

Adding traps to an antenna is a commonly used method to make multiband antennas. In [1], this method was used to design a dual-band PIFA. In [2], traps were added to wires to make a dual-band quadrifilar helix antenna. In [3], the trap method was used not just to design a multiband antenna, but to design a broadband antenna. In this paper, the trap method is used to design a dual band printed F-antenna. The design consists of an inverted F antenna with a trap for creating the dual bands. The two bands are 865-870 MHz and 902-928 MHz. The design goal is that the return loss for the two bands is under -10 dB within the trap LC component tolerance. For a trap F-antenna, a straightforward configuration of the antenna is shown in Fig. 1(A), where the low band section is at the end of the antenna. But this configuration will not work for this antenna design: because the separation of the two bands is small, that section is small as well. And because this section is small, the influence of the trap L and C component variation, due to tolerance, on the two resonant frequencies is too large, and it is not possible to achieve the design goal, which is a return loss for the two bands under -10 dB within the trap LC component tolerance. To solve this problem, the low band section is placed away from the end of the antenna, as shown in Fig. 1(B), so it becomes bigger. When it is bigger, the influence of the LC tolerance is smaller, and the design goal is achieved. The first part of the paper describes the design of the trap; the second part presents the inverted F-antenna design. The antenna is designed to be printed on FR-4 material to minimize the space and cost of the board. For all simulations, the software used in the designs is CST (Computer Simulation Technology).

## 2 The Trap design

The trap used in the design is as follows: it consists of one capacitor and two inductors connected in parallel, which acts as a band-stop filter for the high band. The reason for using two inductors in parallel is to obtain the required inductance value. LC selection: from a popular source, the readily available tolerance for C is ±0.05 pF for values up to 9.1 pF. To minimize the relative (percentage) tolerance, the highest value, 9.1 pF, is selected. The L value is selected to make the LC trap resonate at 915 MHz. The readily available tolerance for L is ±2%.
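A quick sanity check of this LC selection is sketched below. It is not from the paper; it reads the 6.8 nH stated in the next paragraph as the per-inductor value, so that the parallel pair gives 3.4 nH, an assumption consistent with the 915 MHz target.

```python
import numpy as np

C = 9.1e-12        # trap capacitance [F]
L = 6.8e-9 / 2.0   # two 6.8 nH inductors in parallel [H] (per-inductor value assumed)

def f_res(L, C):
    """Resonant frequency of a parallel LC trap."""
    return 1.0 / (2.0 * np.pi * np.sqrt(L * C))

print(f"nominal: {f_res(L, C) / 1e6:.0f} MHz")   # ~905 MHz, close to the 915 MHz target

# Worst-case corners from the stated tolerances (C: +-0.05 pF, L: +-2 %).
for dC in (-0.05e-12, 0.05e-12):
    for dL in (-0.02, 0.02):
        f = f_res(L * (1.0 + dL), C + dC)
        print(f"C {dC * 1e12:+.2f} pF, L {dL:+.0%}: {f / 1e6:.0f} MHz")
```

Even these few lines make the design difficulty visible: the tolerance corners shift the trap resonance by roughly one percent, on the order of 10 MHz, which is significant given how close the two bands are.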
The selected capacitor and inductor nominal values and tolerances are C = 9.1 pF ± 0.05 pF and L = 6.8 nH ± 2%. The trap S21 with the nominal values and at the two ends of the tolerances is shown in Fig. 2. The variation of the trap S21 due to tolerance is clearly seen.

## 3 The inverted F-antenna design

The inverted F antenna design, as shown in Fig. 3, combines a single inverted F antenna with a trap. The ground plane is modeled as 1.6 mm thick, 100x100 mm PEC. The antenna area is 13.5 mm wide FR4 (Er 4.3, LossTan 0.025). The antenna dimensions are 59.4x11 mm, the antenna trace is 2.7 mm wide, the feed width is 0.65 mm, and the antenna PEC thickness is 0.035 mm. Fig. 5 shows the antenna return loss with the trap LC nominal values and at the two ends of the tolerances. As can be seen, the design goal is achieved: the return loss is under -10 dB in both bands for all cases. Fig. 6 shows the antenna 3D far field pattern, gain and efficiencies at the two frequencies with the LC nominal values.

Figure 1: Two configurations of trap antenna

Figure 2: Trap used in the design

## 4 Results & Analysis

This section deals with the results of the proposed dual band Sub-GHz antenna. It mainly includes the surface current distribution, return loss (dB), 3D radiation pattern and antenna efficiency (dB). As shown in Figure 4, the surface current distribution shows strong currents along the antenna traces at both Sub-GHz bands, 866 MHz and 915 MHz, respectively.

Figure 4: Surface current distribution at 866 MHz & 915 MHz

Figure 3: The design. (a) whole PCB (b) Antenna details

The antenna return loss plot in Figure 5 shows the bandwidths: more than 20 MHz for the 868 MHz band (Europe) and approximately 50 MHz for the 915 MHz band, which is well beyond the bandwidth requirements. The radiation patterns shown in Figure 6 indicate that the antenna radiates a quasi-omnidirectional pattern with an efficiency above 87% in both Sub-GHz bands.

## 5 Conclusion

In this paper, an inverted F-antenna with a trap is used to design a dual band (865-870 MHz & 902-928 MHz) F-antenna. The antenna is printed on an FR4 board for low cost. Normally, the low band section of a trap antenna is placed at the end of the antenna. But this does not work when the separation of the two bands is small, due to the LC value tolerance. The problem is solved by placing the low band section away from the end of the antenna. A design is completed with good results.
2304.10359
Secondary Controller Design for the Safety of Nonlinear Systems via Sum-of-Squares Programming
We consider the problem of ensuring the safety of nonlinear control systems under adversarial signals. Using Lyapunov based reachability analysis, we first give sufficient conditions to assess safety, i.e., to guarantee that the states of the control system, when starting from a given initial set, always remain in a prescribed safe set. We consider polynomial systems with semi-algebraic safe sets. Using the S-procedure for polynomial functions, safety conditions can be formulated as a Sum-Of-Squares (SOS) programme, which can be solved efficiently. When safety cannot be guaranteed, we provide tools via SOS to synthesize polynomial controllers that enforce safety of the closed loop system. The theoretical results are illustrated through numerical simulations.
Yankai Lin, Michelle S. Chong, Carlos Murguia
2023-04-20T14:55:49Z
http://arxiv.org/abs/2304.10359v1
# Secondary Controller Design for the Safety of Nonlinear Systems ###### Abstract We consider the problem of ensuring the safety of nonlinear control systems under adversarial signals. Using Lyapunov-based reachability analysis, we first give sufficient conditions to assess safety, i.e., to guarantee that the states of the control system, when starting from a given initial set, always remain in a prescribed safe set. We consider polynomial systems with semi-algebraic safe sets. Using the S-procedure for polynomial functions, safety conditions can be formulated as a Sum-Of-Squares (SOS) programme, which can be solved efficiently. When safety cannot be guaranteed, we provide tools via SOS to synthesize polynomial controllers that enforce safety of the closed loop system. The theoretical results are illustrated through numerical simulations.

## I Introduction

In recent years, cyber-physical systems have gained increasing attention from researchers due to their wide applications in modern industrial systems. In these systems, the computation of control laws and the physical behavior are coupled via networked communications. Though more efficient operation of the systems is enabled, more technical challenges arise at the same time. One of the pertinent vulnerabilities is when the network is compromised and malicious data is injected into the system. In [1], many examples, including the well-known StuxNet malware incident, were reported. Therefore, the investigation of controller design methods that ensure safety is of significant importance. We aim to address one particular instance of the safety-ensuring control problem, which is to keep the states of the control system within a prescribed set, called a _safe set_, for an infinite time horizon. In an earlier work [2], we consider adding an output feedback dynamic secondary controller to a linear system that has already been stabilized by a pre-designed primary controller. In this work, we extend the result to polynomial nonlinear systems with semi-algebraic safe sets. There are many approaches to the safe stabilization problem in the existing literature. Among them, reachability analysis is one natural way of ensuring that states from a given initial set are steered into the desired location without entering an unsafe region. However, it is in general computationally expensive to solve these problems exactly, due to the associated partial differential equations (PDEs) that need to be dealt with [3, 4]. Another approach that bypasses the difficulty of dealing with PDEs is via tools of set invariance [5]. If there exists a subset of the safe set that is forward invariant, then it is guaranteed that the state of the system always remains within the safe set. Sufficient conditions for the forward invariance of autonomous systems can be given by Lyapunov-like sufficient conditions on functions called barrier certificates [6]. These conditions are later extended in [7], where sufficient conditions on the control barrier function (CBF) are given to guarantee robust forward invariance of safe sets for nonlinear systems driven by control inputs. Based on these conditions, a control law that guarantees safety can be synthesized by solving a quadratic program online. Tools from dissipativity theory can also be used to formulate a similar condition that verifies the safety of interconnected systems [8].
In this work, we consider systems with polynomial dynamics and take the approach of ensuring forward invariance of a given set using tools from Sum-Of-Squares (SOS) programming to address the safe control problem. A motivation for using SOS programming is that, though some progress has been made recently [9, 10], the synthesis of CBFs for general nonlinear systems is still a challenging problem. In the seminal work [11], it is shown that an SOS program is equivalent to a semidefinite program, which can be solved efficiently. Hence, by restricting the class of Lyapunov-like functions to polynomial functions, we can efficiently translate the complicated synthesis problem into a convex optimization program. Similar ideas have been successfully applied to various nonlinear control problems, such as the search for polynomial Lyapunov functions to check stability [12]. Our contributions are summarized below. **1)** We consider the setup of using limited resources (in terms of limited access to system outputs) to design a secondary controller to ensure safety of a nonlinear controlled system under resource-limited adversaries. Sufficient conditions in terms of SOS programmes are given to synthesise polynomial state feedback controllers. This generalizes our prior work [2] on linear systems to nonlinear systems with polynomial dynamics. **2)** We consider the case where control inputs and external signals appear in the system dynamics. Unlike the previous work [13], where it is assumed that the disturbance has finite energy, we consider sensor and actuator attack signals that are constrained by state-dependent upper bounds. This is to capture the fact that intelligent cyber attackers, depending on their available resources [14], are constrained in the class of signals they can inject to remain stealthy, so that they can continue affecting the system without being detected. The rest of this paper is organized as follows. We first present the preliminaries in Section II. The problem formulation is given in Section III. Section IV presents the sufficient conditions to verify the safety of nonlinear systems based on SOS programming and the S-procedure for polynomial functions. Section V proposes SOS-based synthesis tools for a polynomial output feedback secondary controller to guarantee the safety of the overall system. Section VI illustrates the main results via numerical simulations on a polynomial system. Lastly, conclusions and future research directions are given in Section VII.

## II Preliminaries

### _Notations_

Let \(\mathbb{R}=(-\infty,\infty)\), \(\mathbb{R}_{\geq 0}=[0,\infty)\), \(\mathbb{R}_{>0}=(0,\infty)\), and let \(\mathbb{R}^{n}\) denote the \(n\)-dimensional Euclidean space. We use **0** to denote the zero matrix with appropriate dimensions. For a given square matrix \(R\), \(\text{Tr}[R]\) denotes the trace of \(R\). We use \(A\succ 0\) (\(A\prec 0\)) and \(A\succeq 0\) (\(A\preceq 0\)) to denote that the matrix \(A\) is positive (negative) definite and positive (negative) semidefinite, respectively. Given a polynomial function \(p(x):\mathbb{R}^{n}\rightarrow\mathbb{R}\), \(p\) is called SOS if there exist polynomials \(p_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) such that \(p(x)=\sum_{i=1}^{k}(p_{i}(x))^{2}\). The set of SOS polynomials and the set of polynomials with real coefficients in \(x\) are denoted by \(\Sigma[x]\) and \(\mathbb{R}[x]\), respectively. A vector of dimension \(n\) composed of SOS (real) polynomial functions of \(x\) is denoted by \(\Sigma^{n}[x]\) (\(\mathbb{R}^{n}[x]\)).
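As a simple illustration of this notation (a standard example, not taken from the paper): the polynomial \(x^{4}+4x^{3}+6x^{2}+4x+2=(x^{2}+2x+1)^{2}+1^{2}\) belongs to \(\Sigma[x]\). On the other hand, nonnegativity does not imply membership of \(\Sigma[x]\) in general: the Motzkin polynomial \(x^{4}y^{2}+x^{2}y^{4}-3x^{2}y^{2}+1\) is a classical example of a nonnegative polynomial that is not a sum of squares.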
### _Preliminaries on Polynomial Functions_

A standard SOS program is a convex optimization problem of the following form [15] \[\begin{split}\min_{m}&\;b^{\top}m\\ \text{s.t.}&\;p_{i}(x,m)\in\Sigma[x],\;i=1,2,\ldots,n, \end{split} \tag{1}\] where \(p_{i}(x,m)=c_{i0}(x)+\sum_{j=1}^{k}c_{ij}(x)m_{j}\), \(c_{ij}(x)\in\mathbb{R}[x]\), and \(b\) is a given vector. It is shown in [15, p. 74] that (1) is equivalent to a semidefinite program. A useful tool that will be used extensively in this paper is the generalization of the S-procedure [16] to polynomial functions. This can be done via the Positivstellensatz certificates of set containment [17].

**Lemma 1**: _Given \(p_{0}\), \(p_{1}\), \(\cdots\), \(p_{m}\in\mathbb{R}[x]\), if there exist \(\lambda_{1}\), \(\lambda_{2}\), \(\cdots\), \(\lambda_{m}\in\Sigma[x]\) such that_ \[p_{0}-\sum_{i=1}^{m}\lambda_{i}p_{i}\in\Sigma[x],\] _then we have_ \[\bigcap_{i=1}^{m}\{x|p_{i}(x)\geq 0\}\subseteq\{x|p_{0}(x)\geq 0\}.\] See [18, Chapter 2.2].

## III Problem Formulation

We consider the setup where the plant is modelled as a nonlinear system taking the following form \[\begin{split}\dot{x}_{p}&=f_{p}(x_{p})+g_{p}(x_{p})u \\ y&=h_{p}(x_{p}),\end{split} \tag{2}\] where \(x_{p}\in\mathbb{R}^{n_{p}}\) is the state vector of the plant, \(y\in\mathbb{R}^{n_{y}}\) is the measured output, \(u\in\mathbb{R}^{n_{u}}\) is the input of the plant, the function \(f_{p}:\mathbb{R}^{n_{p}}\rightarrow\mathbb{R}^{n_{p}}\) is continuous with \(f_{p}(0)=0\), and the function \(g_{p}:\mathbb{R}^{n_{p}}\rightarrow\mathbb{R}^{n_{u}}\) is continuous. We assume that system (2) is stabilizable, i.e., there exists a control law \(u_{p}\) generated by the _primary controller_, which has already been designed to stabilize the plant (2) and takes the following form \[\begin{split}\dot{x}_{c}&=f_{c}(x_{c},y+a_{y})\\ u_{p}&=h_{c}(x_{c})+a_{u},\end{split} \tag{3}\] with controller state \(x_{c}\in\mathbb{R}^{n_{c}}\). The functions \(f_{c}\) and \(h_{c}\) are assumed to satisfy the regularity conditions such that the primary controller (3) exists. Controller (3) is pre-designed to stabilize (2) with the input signal \(u_{p}:\mathbb{R}^{n_{c}}\rightarrow\mathbb{R}^{n_{u}}\) and is subject to potential adversarial attacks denoted by the attack vector \(a=[a_{u}^{\top},a_{y}^{\top}]^{\top}\in\mathbb{R}^{n_{u}+n_{y}}\), where \(a_{u}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{n_{u}}\) and \(a_{y}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{n_{y}}\) denote actuator and sensor attacks, respectively. Since the primary controller (3) is pre-designed without being aware of the attacks, the safety of the closed loop may be compromised (a precise definition is given later in Definition 1). Therefore, we propose introducing a _secondary controller_ that runs in conjunction with the primary controller (3). The secondary controller takes the form of a static output feedback controller that uses measurements of a subset of the plant outputs \(y\) which are either available locally or known to be safeguarded against malicious manipulation (e.g., via encryption or watermarking): \[u_{s}=h_{s}(x_{s}), \tag{4}\] where \(x_{s}=C_{s}y=C_{s}h_{p}(x_{p})\) is the subset of the plant measurements \(y\) that is available to the secondary controller (4), and \(u_{s}\) is the secondary control law.
The overall controller for our _safe_ primary-secondary control scheme is given by \[u=u_{p}+E_{u}u_{s}, \tag{5}\] where \(E_{u}\) is a selection matrix we use to denote which entries of the primary controller are affected by the secondary one. In this work, we assume \(C_{s}\) and \(E_{u}\) are given, to model the case where a fixed set of resources is locally available. How to optimally choose \(C_{s}\) and \(E_{u}\) is left for future work. Note that the secondary controller (4) uses attack-free measurements only, and it generates an input that will be fed back to the plant reliably. Consequently, no attack signal \(a\) appears in (4). The setup is illustrated in Fig. 1. It can be verified that the closed loop system (2)-(5) can be written in the following form \[\dot{x}=f(x,a)+g(x)u_{s}, \tag{6}\] where \(x=[x_{p}^{\top},x_{c}^{\top}]^{\top}\), \[f(x,a) =\left[\begin{array}{c}f_{p}(x_{p})+g_{p}(x_{p})(h_{c}(x_{c})+a_{u })\\ f_{c}(x_{c},h_{p}(x_{p})+a_{y})\end{array}\right] \tag{7}\] \[g(x) =\left[\begin{array}{c}g_{p}(x_{p})E_{u}\\ \textbf{0}\end{array}\right].\] It is worth noting that the expression of the closed loop system (6) is also able to capture the case where state feedback is used for the primary controller. Suppose we have \(u_{p}=h_{c}(x_{p}+a_{y})+a_{u}\); then \[f(x,a) =f_{p}(x_{p})+g_{p}(x_{p})(h_{c}(x_{p}+a_{y})+a_{u}) \tag{8}\] \[g(x) =g_{p}(x_{p})E_{u}.\] Throughout the paper, we will make the following assumption about the vector field of the closed loop system (6).

**Assumption 1**: _The closed loop system (2)-(5), written compactly in (6), is such that \(f(x,a)\in\mathbb{R}^{n_{p}+n_{c}}[x,a]\) and \(g(x)\in\mathbb{R}^{(n_{p}+n_{c})\times n_{x}}[x]\)._

**Remark 1**: _By imposing Assumption 1 on the closed-loop system (6), we make the analysis more computationally tractable via SOS tools. This assumption is satisfied if the functions \(f_{p},g_{p},h_{p},f_{c},h_{c}\) are all polynomials of their respective arguments, which may come from least squares regression or from a polynomial approximation of another nonlinear function._

The goal of the secondary controller (4) is to guarantee that, when the overall closed loop system (2)-(5) is subject to cyber attacks \(a\), the closed loop system (6) is safe in the following sense.

**Definition 1**: _The closed loop system (6) is safe if its state \(x\) remains within a given safe set \(\mathcal{S}\) for all \(t\geq 0\)._

We describe the safe set by \(\mathcal{S}:=\{x|s(x)\geq 0\}\), where \(s(x)\in\mathbb{R}[x]\). We solve the following problems in this paper. 1. Give sufficient conditions to check if the primary controller (3) alone can render the plant (2) safe in the presence of attacks. 2. Enhance the safety of the closed loop system (6) by synthesizing the secondary controller (4), such that the attacker needs to invest more resources to violate the safety condition.

## IV Invariant Set Based Analysis

The first step of our analysis is to assess whether the primary controller can ensure the safety of the closed loop system in the presence of the attack signal \(a\). In [2], it is assumed that the attack signal \(a\) is norm bounded. This takes into account that intelligent adversaries often seek to remain stealthy and undetected, to be able to continuously send malicious signals to the system. In this work, we impose a more general condition on the attack signal: \[a\in\mathcal{A}:=\{a|A(x,a)\geq 0\}, \tag{9}\] where \(A(x,a)\in\mathbb{R}[x,a]\).
The condition (9) covers the situation where adversaries that have access to the states of the system inject attack signals \(a\) based on their measurements of \(x\). Although the requirement that \(A\) be a polynomial in \(x\) and \(a\) might be restrictive in some cases, it generalizes the condition used in [2], which is a special case of (9), and can serve as an outer-approximation of the real constraints on the attack signal. Since we first aim to determine whether the worst-case attack signals keep the system trajectories inside the safe set under the primary controller (3) alone, we set \(E_{u}=\textbf{0}\) and \(g(x)=\textbf{0}\) for all \(x\in\mathbb{R}^{n_{p}+n_{c}}\). Note that when \(g(x)=\textbf{0}\), the closed-loop system (6) is a nonlinear system driven by the attack signal \(a\), \[\dot{x}=f(x,a). \tag{10}\] We now define the reachable set of nonlinear systems driven by external signals. **Definition 2**: _The (forward) reachable set \(\mathcal{R}_{a}\) of the nonlinear system \(\dot{x}=f(x,a)\) from the initial set \(\mathcal{T}\) driven by \(a\in\mathcal{A}\) is defined as the set of all trajectories \(\phi(t,x(0),a)\) for all \(t\geq 0\) and \(x(0)\in\mathcal{T}\), where \(\phi(t,x(0),a)\) is a solution to \(\dot{x}=f(x,a)\) at time \(t\) with the initial condition \(x(0)\)._ If we can guarantee that the reachable set of (10) is a subset of the safe set \(\mathcal{S}\), then safety of the system is ensured, since the system states can only reach a set that is fully contained in the prescribed safe set. Exact computation of the reachable set of a nonlinear system can be difficult in general. In this work, we construct a set \(\mathcal{E}_{a}\) as an outer approximation of the reachable set, i.e., \[\mathcal{R}_{a}\subseteq\mathcal{E}_{a}, \tag{11}\] where \(\mathcal{R}_{a}\) is the reachable set of (10) from the initial set \(\mathcal{T}\). If we can find such an \(\mathcal{E}_{a}\) with \(\mathcal{E}_{a}\subseteq\mathcal{S}\), then we may conclude that \(\mathcal{R}_{a}\subseteq\mathcal{S}\). We will make use of the following result, which comes from Nagumo's theorem for autonomous systems and its extension to systems with exogenous inputs by Aubin [19]; see [5, Section 3.1]. Fig. 1: Ensuring safety with a secondary controller. **Proposition 1**: _Given system (10), if there exists a continuously differentiable function \(V:\mathbb{R}^{n_{p}+n_{c}}\to\mathbb{R}\) such that \(\mathcal{T}\subseteq\mathcal{E}_{a}:=\{x\in\mathbb{R}^{n_{p}+n_{c}}|V(x)\leq 1\}\) and_ \[\frac{\partial V}{\partial x}f(x,a)\leq 0,\ \forall\ V(x)=1,x\in\mathbb{R}^{n_{p}+n_{c}},a\in\mathcal{A}; \tag{12}\] _then, we have \(\mathcal{R}_{a}\subseteq\mathcal{E}_{a}\)._ We will use the S-procedure for polynomial functions given by Lemma 1 to certify the set containment conditions in Proposition 1. We assume that the initial set takes the form \(\mathcal{T}:=\{x\in\mathbb{R}^{n_{p}+n_{c}}|T(x)\geq 0\}\), where \(T(x)\in\mathbb{R}[x]\). Our first main result is stated below. **Theorem 1**: _Consider the closed loop system (6) with only the primary controller in feedback, i.e., \(E_{u}=0\) and \(C_{s}=0\)._
_Given \((s(x),T(x))\in\mathbb{R}[x]\times\mathbb{R}[x]\) and \(A(x,a)\in\mathbb{R}[x,a]\), if there exist \(V(x)\in\mathbb{R}[x]\), \((\lambda_{1},\lambda_{2})\in\Sigma[x]\times\Sigma[x]\), \(\lambda_{3}\in\mathbb{R}[x,a]\), and \(\lambda_{4}\in\Sigma[x,a]\) such that_ \[s(x)-\lambda_{1}(1-V(x))\in\Sigma[x],\] \[1-V(x)-\lambda_{2}T(x)\in\Sigma[x],\] \[-\frac{\partial V}{\partial x}f(x,a)-\lambda_{3}(V(x)-1)-\lambda_{4}A(x,a)\in\Sigma[x,a], \tag{13}\] _then we have \(\mathcal{R}_{a}\subseteq\mathcal{S}\)._ Proof: Applying the S-procedure for polynomial functions, the first condition in (13) guarantees that if \(1-V(x)\geq 0\), then \(s(x)\geq 0\), which means that \(\mathcal{E}_{a}\subseteq\mathcal{S}\). Similarly, the second condition in (13) implies \(\mathcal{T}\subseteq\mathcal{E}_{a}\). Lastly, the third condition guarantees (12). By Proposition 1, we have \(\mathcal{R}_{a}\subseteq\mathcal{E}_{a}\), which together with \(\mathcal{E}_{a}\subseteq\mathcal{S}\) implies \(\mathcal{R}_{a}\subseteq\mathcal{S}\). Note that condition (13) is a sufficient condition to guarantee that, when the initial state is in \(\mathcal{T}\), the state of the closed loop system never enters the unsafe region of the state space in the presence of attack signals. Any forward invariant set \(\mathcal{E}_{a}\) that verifies (13) suffices to guarantee safety of the system. It is also possible to optimize the safety performance of the system by minimizing an appropriately chosen objective function. One example is to minimize the volume of the forward invariant set \(\mathcal{E}_{a}\). However, since \(V\) is a polynomial function, finding the exact expression of the volume of \(\mathcal{E}_{a}\) might be intractable. Instead, we can find an ellipsoid \(\mathcal{E}_{P}:=\{x\in\mathbb{R}^{n_{p}+n_{c}}|x^{\top}Px\leq 1\}\), \(P\succeq 0\), that fully contains \(\mathcal{E}_{a}\) and minimize the volume of the ellipsoid by minimizing the convex function \(-\log\det(P)\), where \(\det(P)\) is the determinant of the matrix \(P\); see [16]. **Corollary 1**: _Consider the closed loop system (6) with only the primary controller in feedback, i.e., \(E_{u}=0\) and \(C_{s}=0\). Given \(s(x),T(x)\in\mathbb{R}[x]\) and \(A(x,a)\in\mathbb{R}[x,a]\), if there exist \(P\succ 0\), \(V(x)\in\mathbb{R}[x]\), \(\lambda_{1},\lambda_{2},\lambda_{5}\in\Sigma[x]\), \(\lambda_{3}\in\mathbb{R}[x,a]\), and \(\lambda_{4}\in\Sigma[x,a]\) solving_ \[\min_{P,V,\lambda_{1-5}} -\log\det(P)\] _s.t._ \[s(x)-\lambda_{1}(1-V(x))\in\Sigma[x],\] \[1-V(x)-\lambda_{2}T(x)\in\Sigma[x],\] \[-\frac{\partial V}{\partial x}f(x,a)-\lambda_{3}(V(x)-1)-\lambda_{4}A(x,a)\in\Sigma[x,a],\] \[x^{\top}Px-\lambda_{5}(1-V(x))\in\Sigma[x], \tag{14}\] _where \(\lambda_{1-5}\) denotes the set \(\{\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4},\lambda_{5}\}\), then we have \(\mathcal{R}_{a}\subseteq\mathcal{S}\). Moreover, \(\mathcal{E}_{P}\) is the ellipsoid with minimal volume such that \(\mathcal{E}_{a}\subseteq\mathcal{E}_{P}\)._ Proof: By the S-procedure for polynomial functions, the last condition in (14) guarantees that \(\mathcal{E}_{a}\subseteq\mathcal{E}_{P}\); the rest of the proof follows from the proof of Theorem 1. **Remark 2**: _The volume of \(\mathcal{E}_{P}\) (or \(\mathcal{E}_{a}\)) is just one possible metric of safety. Indeed, even for an \(\mathcal{E}_{a}\) with very small volume, an element of \(\mathcal{E}_{a}\) might still be very close to the boundary of the safe set \(\mathcal{S}\).
Interested readers are referred to [20] for other possible choices of objective functions._ It can be seen that condition (13) contains bilinear SOS constraints involving the decision variables \((\lambda_{1},V)\) and \((\lambda_{3},V)\), rendering the optimization problem non-convex. However, the constraints are linear in \(\lambda_{1}\) and \(\lambda_{3}\) when \(V\) is fixed, and linear in \(V\) if \(\lambda_{1}\) and \(\lambda_{3}\) are fixed. As a result, in practice we can solve (13), and similarly (14), in an alternating fashion between the variables \((\lambda_{1},\lambda_{3})\) and \(V\), as done in many existing results; see [12, 21, 22] for example. In this work, we adopt an approach similar to [22, Algorithm 2] by introducing a positive slack variable \(\epsilon\) into the last condition in (13). The modified conditions take the form: \[s(x)-\lambda_{1}(1-V(x))\in\Sigma[x],\] \[1-V(x)-\lambda_{2}T(x)\in\Sigma[x],\] \[-\frac{\partial V}{\partial x}f(x,a)+\epsilon-\lambda_{3}(V(x)-1)-\lambda_{4}A(x,a)\in\Sigma[x,a]. \tag{15}\] The role of \(\epsilon\) is to relax the decrease condition on \(V\) and allow \(\dot{V}\) to be positive by the margin characterized by \(\epsilon\). We then alternately minimize \(\epsilon\) over the two bilinear groups of decision variables and repeat until \(\epsilon\leq 0\) is satisfied, which can be done by the following steps. 1. Specify the orders of the polynomials \(V\) and \(\lambda_{1-4}\) to be found. 2. Start with an initial guess \(\bar{V}\) and minimize \(\epsilon\) over \(\lambda_{1}\) and \(\lambda_{3}\) subject to (15). 3. Set \(\lambda_{1}\) and \(\lambda_{3}\) to the values found in the previous step and minimize \(\epsilon\) over \(V\) subject to (15). 4. Repeat the previous two steps until an \(\epsilon\leq 0\) is found. We will refer to Steps \(1)-4)\) as the alternating search algorithm. **Remark 3**: _Given the specified orders of the polynomials, the alternating search algorithm guarantees that after each step \(\epsilon\) is non-increasing. However, there is no guarantee that \(\epsilon\) will decrease to a non-positive value. It is worth noting that finding a \(V\) and \(\lambda_{1-4}\) that satisfy (13) is only a sufficient condition to guarantee that the reachable set \(\mathcal{R}_{a}\) is contained within the safe set \(\mathcal{S}\). If the alternating algorithm fails to give a feasible solution to (13), then one can increase the order of the polynomials and start the algorithm again until a maximum allowable order is reached, in which case (13) is declared infeasible (though it may in fact be feasible)._ **Remark 4**: _Once a valid \(V\) is found that verifies the safety of (10), this \(V\) can be used as the initial value when solving (14). A similar alternating algorithm can be constructed to minimize the volume of \(\mathcal{E}_{P}\). First, given \(V\), \(-\log\det(P)\) is minimized with respect to \(\lambda_{1}\), \(\lambda_{3}\), and \(\lambda_{5}\). Then, the obtained \(\lambda_{1}\), \(\lambda_{3}\), and \(\lambda_{5}\) are used to minimize \(-\log\det(P)\) over \(V\). The process is repeated until the decrease in \(-\log\det(P)\) is within a specified tolerance._ ## V Secondary Control Synthesis When the primary controller alone is insufficient to guarantee the safety of the overall closed loop system (6), introducing a secondary controller may render the closed loop safe. To this end, we aim to systematically design the secondary controller in this section.
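Both the alternating search just described and its synthesis variant developed below share the same bilinear alternation structure. The following is a structural sketch only: `solve_for_multipliers` and `solve_for_V` are hypothetical placeholders of ours standing in for calls to an SOS toolchain (e.g., SOSTOOLS with SeDuMi, as in the paper's experiments); they are not real library APIs.

```python
def alternating_search(V0, solve_for_multipliers, solve_for_V,
                       max_iters=50, tol=0.0):
    """Alternating minimization of the slack epsilon in (15) (or (19)).

    solve_for_multipliers(V) -> (aux, eps): with V fixed, minimize eps over
        the multipliers (and h_s in the synthesis case); a convex SOS program.
    solve_for_V(aux) -> (V, eps): with the multipliers fixed, minimize eps
        over V; again a convex SOS program.
    """
    V, aux, eps = V0, None, float("inf")
    for _ in range(max_iters):
        aux, eps = solve_for_multipliers(V)   # Step 2)
        if eps <= tol:
            break                             # certificate found
        V, eps = solve_for_V(aux)             # Step 3)
        if eps <= tol:
            break
    return V, aux, eps                        # success iff eps <= tol
```

As Remark 3 notes, the loop only guarantees a non-increasing \(\epsilon\); failure to reach \(\epsilon\leq 0\) is inconclusive.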
To be able to employ SOS programming tools to computationally solve the synthesis problem, we restrict our class of secondary controllers (4) to _polynomial_ static feedback, i.e., \(h_{s}(x_{s})\in\mathbb{R}[x_{s}]\). With the secondary controller (4) included, the closed loop system takes the form \[\dot{x}=f(x,a)+g(x)h_{s}(x_{s}):=\tilde{f}(x,a), \tag{16}\] where the expressions of \(f(x,a)\) and \(g(x)\) are given in (7). Note that the new closed loop system (16) with the secondary controller included takes a form similar to (10). Therefore, we can employ Proposition 1 again to conclude the following result. **Theorem 2**: _Consider the closed loop system (16). Given \((s(x),T(x))\in\mathbb{R}[x]\times\mathbb{R}[x]\) and \(A(x,a)\in\mathbb{R}[x,a]\), if there exist \(h_{s}(x_{s})\in\mathbb{R}[x_{s}]\), \(V(x)\in\mathbb{R}[x]\), \((\lambda_{1},\lambda_{2})\in\Sigma[x]\times\Sigma[x]\), \(\lambda_{3}\in\mathbb{R}[x,a]\), and \(\lambda_{4}\in\Sigma[x,a]\) such that_ \[\begin{split}& s(x)-\lambda_{1}(1-V(x))\in\Sigma[x],\\ & 1-V(x)-\lambda_{2}T(x)\in\Sigma[x],\\ &-\frac{\partial V}{\partial x}\tilde{f}(x,a)-\lambda_{3}(V(x)-1)-\lambda_{4}A(x,a)\in\Sigma[x,a],\end{split} \tag{17}\] _where \(\tilde{f}(x,a)=f(x,a)+g(x)h_{s}(x_{s})\) depends on \(h_{s}(x_{s})\), then we have \(\tilde{\mathcal{R}}_{a}\subseteq\mathcal{S}\), where \(\tilde{\mathcal{R}}_{a}\) is the reachable set of (16) from the initial set \(\mathcal{T}\)._ The proof follows from the proof of Theorem 1 by replacing \(f\) with \(\tilde{f}\). If condition (17) is satisfied by a set of decision variables \(\{h_{s},V,\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\}\), then we can conclude that the state of the closed loop system (16) never leaves the safe set \(\mathcal{S}\) when initialized in \(\mathcal{T}\). In the synthesis of the secondary controller, we can also find an ellipsoidal outer-approximation of \(\mathcal{E}_{a}\), namely \(\mathcal{E}_{P}:=\{x\in\mathbb{R}^{n_{p}+n_{c}}|x^{\top}Px\leq 1\}\) for some \(P\succeq 0\), and then minimize the volume of the ellipsoid by minimizing the convex function \(-\log\det(P)\). **Corollary 2**: _Consider the closed loop system (16). Given \((s(x),T(x))\in\mathbb{R}[x]\times\mathbb{R}[x]\) and \(A(x,a)\in\mathbb{R}[x,a]\), if there exist \(P\succ 0\), \(h_{s}(x_{s})\in\mathbb{R}[x_{s}]\), \(V(x)\in\mathbb{R}[x]\), \((\lambda_{1},\lambda_{2},\lambda_{5})\in\Sigma[x]\times\Sigma[x]\times\Sigma[x]\), \(\lambda_{3}\in\mathbb{R}[x,a]\), and \(\lambda_{4}\in\Sigma[x,a]\) solving_ \[\min_{P,h_{s},V,\lambda_{1-5}} -\log\det(P)\] _s.t._ \[\begin{split}& s(x)-\lambda_{1}(1-V(x))\in\Sigma[x],\\ & 1-V(x)-\lambda_{2}T(x)\in\Sigma[x],\\ &-\frac{\partial V}{\partial x}\tilde{f}(x,a)-\lambda_{3}(V(x)-1)-\lambda_{4}A(x,a)\in\Sigma[x,a],\\ & x^{\top}Px-\lambda_{5}(1-V(x))\in\Sigma[x],\end{split} \tag{18}\] _where \(\tilde{f}(x,a)=f(x,a)+g(x)h_{s}(x_{s})\) depends on \(h_{s}(x_{s})\), then we have \(\tilde{\mathcal{R}}_{a}\subseteq\mathcal{S}\), where \(\tilde{\mathcal{R}}_{a}\) is the reachable set of (16) from the initial set \(\mathcal{T}\). Moreover, \(\mathcal{E}_{P}\) is the ellipsoid with minimal volume such that \(\tilde{\mathcal{R}}_{a}\subseteq\mathcal{E}_{P}\)._ By introducing the secondary control term \(h_{s}(x_{s})\), a new bilinear term \(\frac{\partial V}{\partial x}g(x)h_{s}(x_{s})\) appears in (17) and (18). This, together with the other bilinear terms already present in (13) and (14), makes (17) and (18) non-convex optimization problems, respectively.
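The \(-\log\det(P)\) ingredient of Corollaries 1 and 2 can be exercised in isolation. In the simplified sketch of ours below, the SOS containment constraint \(x^{\top}Px-\lambda_{5}(1-V(x))\in\Sigma[x]\) is replaced by the requirement that the ellipsoid contain finitely many sample points (illustrative data); this already yields a convex log-det program of the same flavor:

```python
import cvxpy as cp
import numpy as np

# Sample points that the ellipsoid E_P = {x : x^T P x <= 1} must contain
# (illustrative data standing in for points of the invariant set E_a).
pts = np.array([[1.0, 0.0], [-0.8, 0.3], [0.2, -0.9], [0.5, 0.5]])

P = cp.Variable((2, 2), PSD=True)
constraints = [cp.quad_form(p, P) <= 1 for p in pts]
# Minimizing -log det(P) = maximizing log det(P) shrinks the ellipsoid volume.
prob = cp.Problem(cp.Maximize(cp.log_det(P)), constraints)
prob.solve()   # needs a conic solver with log-det support, e.g., SCS
print("P =\n", P.value)
```

The full corollaries couple this objective with the SOS safety certificates, which is what makes the joint problem bilinear.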
Nevertheless, the constraints are linear in \(\lambda_{1-5}\) and \(h_{s}\) when \(V\) is fixed, and linear in \(V\) if \(\lambda_{1-5}\) and \(h_{s}\) are fixed. There are no bilinear terms in (17) and (18) involving products of \(\lambda_{1-5}\) and \(h_{s}\). Thus, there is no need to perform an additional round of alternation: the variables \(\lambda_{1-5}\) and \(h_{s}\) can be solved for simultaneously when \(V\) is given. We again introduce the slack variable \(\epsilon\) into (17), so that the modified conditions take the form \[\begin{split}& s(x)-\lambda_{1}(1-V(x))\in\Sigma[x],\\ & 1-V(x)-\lambda_{2}T(x)\in\Sigma[x],\\ &-\frac{\partial V}{\partial x}\tilde{f}(x,a)+\epsilon-\lambda_{3}(V(x)-1)-\lambda_{4}A(x,a)\in\Sigma[x,a].\end{split} \tag{19}\] We alternately minimize \(\epsilon\) over \((\lambda_{1},\lambda_{3},h_{s})\) given \(V\), and over \(V\) given \((\lambda_{1},\lambda_{3},h_{s})\), and repeat until \(\epsilon\leq 0\) is satisfied, which can be done by the following steps. 1. Specify the orders of the polynomials \(V,h_{s},\lambda_{1-4}\). 2. Start with an initial guess \(\bar{V}\) and minimize \(\epsilon\) over \(h_{s}\), \(\lambda_{1}\) and \(\lambda_{3}\) subject to (19). 3. Set \(h_{s}\), \(\lambda_{1}\) and \(\lambda_{3}\) to the values found in the previous step and minimize \(\epsilon\) over \(V\) subject to (19). 4. Repeat the previous two steps until an \(\epsilon\leq 0\) is found. **Remark 5**: _The initial guess of \(V\) can be taken from the result of checking conditions (14) and (15). To be more specific, if there does not exist a \(V\) that satisfies (15) for a non-positive \(\epsilon\), then the initial value of \(V\) can be set to the one that minimizes \(\epsilon\) subject to (15); with the additional degrees of freedom provided by \(h_{s}\), \(\epsilon\) may then be made negative after several iterations. If there does exist a \(V\) that verifies the safety of the closed loop system (10), then the solution to (14) can be used. In such cases, (19) will always be feasible, since \(h_{s}=0\) is a trivial secondary controller that ensures the safety of the closed loop system._ **Remark 6**: _When the alternating algorithm does not find a non-positive \(\epsilon\) that satisfies (19), one can increase the order of the variables, including the newly added term \(h_{s}\). Moreover, changing the values of \(C_{s}\) and \(E_{u}\) (asking for more locally available resources) might also help in synthesizing a secondary controller that enforces the safety of the closed loop system._ Once valid \(V\) and \(h_{s}\) are found that satisfy (17), they can be used as initial values when solving (18), following the discussion in Remark 4. However, it should be noted that the conditions (17) in Theorem 2, though sufficient to guarantee the safety of the closed loop system, are not sufficient to guarantee that the origin is asymptotically stable in the absence of the attack signal \(a\). This is in contrast to the linear counterpart shown in [2], where the linear secondary controller automatically guarantees asymptotic stability of the origin if it guarantees safety of the closed loop system for bounded attack signals. The design of a secondary controller that recovers the performance of the primary controlled system while ensuring safety is left for future work. ## VI Numerical Simulation In this section, we illustrate our main results via numerical simulations of a second-order nonlinear system.
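As a preview of how such a simulation can be run, the sketch below integrates the example system studied in this section (system (20), introduced next), with and without the secondary controller \(h_{s}(x_{2})=-0.31761x_{2}^{3}-1.2534x_{2}\) reported later. The particular attack signal, taken on the boundary of the admissible set, is an illustrative choice of ours.

```python
import numpy as np
from scipy.integrate import solve_ivp

def h_s(x2):
    # Secondary controller found in this section.
    return -0.31761 * x2**3 - 1.2534 * x2

def closed_loop(t, x, use_secondary):
    x1, x2 = x
    a1, a2 = np.sin(t), np.cos(t)   # illustrative attack with a1^2 + a2^2 <= 1
    dx1 = -x1 + x2 + a1
    dx2 = -x2 - x1**2 * x2 + a2 + (h_s(x2) if use_secondary else 0.0)
    return [dx1, dx2]

for use_secondary in (False, True):
    sol = solve_ivp(closed_loop, (0.0, 30.0), [0.0, 0.0],
                    args=(use_secondary,), max_step=0.01)
    worst = (sol.y[0]**2 + sol.y[1]**2).max()
    print(f"secondary={use_secondary}: max x1^2+x2^2 = {worst:.3f} "
          "(safe set: <= 1.3)")
```

Again, one trajectory cannot certify safety; the SOS conditions below certify it for every admissible attack.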
Suppose a primary controller has been designed such that the closed loop system (16) takes the following form \[\begin{split}\dot{x}_{1}&=-x_{1}+x_{2}+a_{1}\\ \dot{x}_{2}&=-x_{2}-x_{1}^{2}x_{2}+a_{2}+h_{s}(x_{2}),\end{split} \tag{20}\] where \(a=[a_{1},a_{2}]^{\top}\) is the attack vector and the measurement \(x_{s}=x_{2}\) is used to design the secondary controller. For simplicity, the attack signals are assumed to satisfy \(A(x,a)=A(a)=1-a_{1}^{2}-a_{2}^{2}\geq 0\). Moreover, we assume that the initial set \(\mathcal{T}\) is the singleton containing the origin, i.e., \(T(x)=-x_{1}^{2}-x_{2}^{2}\geq 0\). This captures the steady state of a globally asymptotically stable system when there are no attack signals. Under these conditions, we aim to keep the state \(x=[x_{1},x_{2}]^{\top}\) within the safe set \(\mathcal{S}\) characterized by \(s(x)=1.3-x_{1}^{2}-x_{2}^{2}\geq 0\). First, we set \(h_{s}(x_{s})=0\) and test whether the primary controller alone can keep the state \(x\) inside the safe set \(\mathcal{S}\). We alternately solve condition (13) in Theorem 1 using SOSTOOLS [23] with SeDuMi [24] as the solver. In this example, we restrict the search for all polynomial variables to polynomials of order no higher than 4. It turns out that, under these conditions, condition (13) is infeasible. To explore the limitations of the primary controller, instead of insisting that the state stay in the safe set, we impose the condition that the state \(x\) remain within the set \(\{x\in\mathbb{R}^{2}|s(x)+\gamma\geq 0\}\) for some \(\gamma>0\). We then minimize \(\gamma\) subject to condition (13), alternately over \(V\) and \(\lambda_{1},\lambda_{3}\). After 8 alternating iterations, \(\gamma\) reaches the value \(0.19\) with \(V=V_{1}(x)=0.67315x_{1}^{2}+0.70356x_{2}^{2}\). Thus, we have failed to find a \(V\) which certifies the safety of the closed loop system via Theorem 1. However, as discussed before, this does not mean the closed loop under the primary controller alone is unsafe, since there might exist polynomials of higher orders that satisfy (13). In the simulation, we attempted to increase the order of the function \(V(x)\) to 6, which, however, did not result in a significant decrease of the value of \(\gamma\). Next we check whether a polynomial secondary controller \(h_{s}(x_{2})\) of order no higher than 4 can be found to keep the state within the safe set. We again apply the alternating algorithm to check if condition (17) is feasible, with the initial guess \(V=V_{1}(x)\). It turns out that \(h_{s}(x_{2})=-0.31761x_{2}^{3}-1.2534x_{2}\) can ensure safety of the closed loop system (20) with \(V=V_{2}(x)=0.8881x_{1}^{2}+2.669x_{2}^{2}\). The plots of the ellipsoidal over-approximations \(\mathcal{E}_{a}\) and \(\tilde{\mathcal{E}}_{a}\) of the respective reachable sets \(\mathcal{R}_{a}\) and \(\tilde{\mathcal{R}}_{a}\), found via the corresponding alternating algorithms, together with the safe set, are given in Fig. 2. It can be seen that, after introducing the secondary controller \(h_{s}(x_{2})=-0.31761x_{2}^{3}-1.2534x_{2}\) into the closed loop system, its state \(x\) always remains in a subset of the safe set when initialized at the origin. ## VII Conclusions In this work, based on invariant set analysis and SOS programming, we provide sufficient conditions for safety verification and control design of a class of polynomial nonlinear systems in the presence of adversarial signals. The conditions can be checked in a computationally tractable way via an alternating algorithm.
We show that it is possible to improve the safety performance of a nonlinear system by using a subset of sensors that are attack-free. A numerical simulation of a second-order nonlinear system verifies the theoretical result. There are several possible future research directions to be explored. First, it would be interesting to investigate a secondary controller design approach that recovers the performance achieved by the primary controller, at least locally, while ensuring safety. Another interesting topic is the analysis of how the choice of sensors, characterized by the matrix \(C_{s}\), impacts the performance of the secondary controller. Fig. 2: Plots of the safe set \(\mathcal{S}\) and the ellipsoidal over-approximation \(\mathcal{E}_{a}\) of the reachable set \(\mathcal{R}_{a}\) with the primary control only (left), and the ellipsoidal over-approximation \(\tilde{\mathcal{E}}_{a}\) of the reachable set \(\tilde{\mathcal{R}}_{a}\) with primary and secondary controls (right).
2308.01181
On Collatz Conjecture for binary polynomials
We build a variant of the Collatz Conjecture for polynomials over $\mathbb{F}_2$ and we prove that it holds. Along the way, we give several examples.
Luis H. Gallardo, Olivier Rahavandrainy
2023-08-02T14:43:48Z
http://arxiv.org/abs/2308.01181v2
# On Collatz Conjecture for binary polynomials ###### Abstract We build a variant of the Collatz Conjecture for polynomials over \(\mathbb{F}_{2}\) and we prove that it holds. Along the way, we give several examples. 1. Running head: Collatz Conjecture 2. Keywords: finite fields, characteristic 2, odd (even) polynomials 3. Mathematics Subject Classification (2010): 11T55, 11T06. 4. Corresponding author: ## 1 Introduction The Collatz conjecture is one of the most famous unsolved problems in arithmetic. As written in [1], "it concerns sequences of integers in which each term is obtained from the previous term as follows: if the previous term is even, the next term is one half of the previous term. If the previous term is odd, the next term is 3 times the previous term plus 1. The conjecture says that these sequences always reach 1, no matter which positive integer is chosen to start the sequence". We may reformulate this construction. For a given positive integer \(n\), consider the 2-adic valuation \(a_{0}\) of \(n\): \(n=2^{a_{0}}n_{1}\), where \(n_{1}\) is odd. Put \(n_{2}=1+3n_{1}\) and, again, consider the 2-adic valuation \(a_{2}\) of \(n_{2}\): \(n_{2}=2^{a_{2}}n_{3}\), with \(n_{3}\) odd, and so on. We get two sequences of odd and even integers: \([n_{1},n_{3},\ldots]\) and \([n_{2},n_{4},\ldots]\). The conjecture states that for any integer \(n\), there exists a finite integer \(m\) such that for all \(t\geq m\), \(n_{2t}=2\) and \(n_{2t+1}=1\). So, the above two sequences \((n_{2k})_{k}\) and \((n_{2k+1})_{k}\) are both eventually constant. Many studies have been carried out in an attempt to prove that conjecture. See for example [2] and [4]. Now, we consider a variant of this problem for binary polynomials. Let \(A\in\mathbb{F}_{2}[x]\) be a nonzero polynomial. We may think of \(x(x+1)\in\mathbb{F}_{2}[x]\) as being the analogue of \(2\in\mathbb{Z}\). So, we say (as in [3]) that \(A\) is _odd_ if \(\gcd(A,x(x+1))=1\), i.e., if it has no linear factor; \(A\) is _even_ otherwise. The first odd polynomial after 1 is \(M:=x^{2}+x+1\). So, the variant of \(1+3n\) (for integers) is \(1+MA\) (for polynomials). We denote by \(val_{x}(S)\) (resp. \(val_{x+1}(S)\)) the valuation at \(x\) (resp. at \(x+1\)) of a polynomial \(S\): \[S=x^{val_{x}(S)}(x+1)^{val_{x+1}(S)}S_{1},\text{ where }S_{1}\text{ is odd.}\] For a fixed nonzero binary polynomial \(A\), we define the "Collatz transformations" by giving the following sequences of integers \((a_{2k})_{k}\), \((b_{2k})_{k}\) and of polynomials \((A_{2k})_{k}\), \((A_{2k+1})_{k}\): \[A_{0}=A,\ a_{0}=val_{x}(A_{0}),\,b_{0}=val_{x+1}(A_{0}),\] \[A_{1}\text{ the odd polynomial such that }A_{0}=x^{a_{0}}(x+1)^{b_{0}}A_{1},\] \[A_{2}=1+MA_{1},\,a_{2}=val_{x}(A_{2}),\ b_{2}=val_{x+1}(A_{2}),\,A_{3}=\frac{A_{2}}{x^{a_{2}}(x+1)^{b_{2}}}.\] \[\vdots\] \[A_{2k}=1+MA_{2k-1},\ a_{2k}=val_{x}(A_{2k}),\,b_{2k}=val_{x+1}(A_{2k}),\ A_{2k+1}=\frac{A_{2k}}{x^{a_{2k}}(x+1)^{b_{2k}}}.\] \[\vdots\] Note that \(A_{0}\) may be odd, and \(A_{2k}\) (resp. \(A_{2k-1}\)) is even (resp. odd) if \(k\geq 1\). We may formulate a Collatz Conjecture for binary polynomials as follows. **Conjecture 1.1**: _For a given \(A\in\mathbb{F}_{2}[x]\setminus\{0\}\), there exists \(m\in\mathbb{N}^{*}\) such that for all \(k\geq m\), \(A_{2k}=x(x+1)\) and \(A_{2k+1}=1\)._ We prove it via Theorem 1.2. **Theorem 1.2**: _Let \(A\) be a nonzero binary polynomial; then the sequences of polynomials obtained from the Collatz transformations are of finite length \(\ell_{A}\)._
_More precisely, these sequences are_ \[[A_{0},\ldots,A_{2m-2},x^{2}+x]\text{ and }\ [A_{1},\ldots,A_{2m-1},1],\] _where \(m\in\mathbb{N}^{*}\) and \(\ell_{A}=m+1\leq 2^{\deg(A)-1}\)._ The upper bound for the length \(\ell_{A}\) seems far from sharp, but we are unable to improve it. Many computations suggest that \(\ell_{A}\leq\dfrac{\deg(A)}{2}\) in general; however, it can happen that \(\ell_{A}\simeq\deg(A)\). For several families of polynomials, we obtain some regularity on the degrees involved in the sequences. We may also have \(\ell_{A}\simeq\dfrac{\deg(A)}{2}\) (see Section 3). ## 2 Proof of Theorem 1.2 First, we shall need general results about the numbers of odd and even polynomials of a given degree \(d\geq 2\). Denote by \(\mathcal{P}_{d}\) the set of all polynomials of degree \(d\) and consider: \[\mathcal{P}_{d}^{0,0}:=\{S\in\mathcal{P}_{d}:S(0)=0\},\ \mathcal{P}_{d}^{0,1}:=\{S\in\mathcal{P}_{d}:S(0)=1\}\] \[\mathcal{P}_{d}^{1,0}:=\{S\in\mathcal{P}_{d}:S(1)=0\},\ \mathcal{P}_{d}^{1,1}:=\{S\in\mathcal{P}_{d}:S(1)=1\}\] \[\mathcal{O}_{d}:=\{S\in\mathcal{P}_{d}:S\text{ is odd}\}=\mathcal{P}_{d}^{0,1}\cap\mathcal{P}_{d}^{1,1}.\] One has \(\mathcal{P}_{0}=\mathcal{O}_{0}=\{1\}\) and \(\mathcal{O}_{1}=\emptyset\). **Lemma 2.1**: _i) The four sets \(\mathcal{P}_{d}^{0,0},\mathcal{P}_{d}^{0,1},\mathcal{P}_{d}^{1,0}\) and \(\mathcal{P}_{d}^{1,1}\) all have the same cardinality \(2^{d-1}\). ii) The set \(\mathcal{O}_{d}\) contains exactly \(2^{d-2}\) polynomials if \(d\geq 2\)._ Proof: i): By the bijective map \(S\mapsto S+1\), one has \(\#\mathcal{P}_{d}^{0,0}=\#\mathcal{P}_{d}^{0,1}\) and \(\#\mathcal{P}_{d}^{1,0}=\#\mathcal{P}_{d}^{1,1}\). Analogously, the bijection \(S(x)\mapsto S(x+1)\) gives \(\#\mathcal{P}_{d}^{0,0}=\#\mathcal{P}_{d}^{1,0}\). It then remains to see that \(\mathcal{P}_{d}\) is the disjoint union of \(\mathcal{P}_{d}^{0,0}\) and \(\mathcal{P}_{d}^{0,1}\), and that \(\#\mathcal{P}_{d}=2^{d}\). ii): By induction on \(d\). The case \(d=2\) is trivial since \(\mathcal{O}_{2}=\{x^{2}+x+1\}\). Now, suppose that \(\#\mathcal{O}_{s}=2^{s-2}\) for \(2\leq s\leq d-1\). We remark that \(\mathcal{P}_{d}^{0,1}=\{(x+1)^{s}S_{1}:0\leq s\leq d,s\neq d-1,S_{1}\in\mathcal{O}_{d-s}\}\). Hence, \[\#\mathcal{P}_{d}^{0,1}=\#\mathcal{O}_{d}+\#\mathcal{O}_{d-1}+\cdots+\#\mathcal{O}_{2}+\#\mathcal{O}_{0}.\] Therefore, \(2^{d-1}=\#\mathcal{O}_{d}+2^{d-3}+\cdots+1+1=\#\mathcal{O}_{d}+2^{d-2}\) and \(\#\mathcal{O}_{d}=2^{d-2}\). For \(k\geq 0\), we put \(d_{2k}:=\deg(A_{2k})\) and \(d_{2k+1}:=\deg(A_{2k+1})\). We immediately get the following lemmas. **Lemma 2.2**: _One has \(a_{0},b_{0}\geq 0\) and \(a_{2k},b_{2k}\geq 1\), for any \(k\geq 1\)._ Proof: If \(k\geq 1\), then \(x\) and \(x+1\) both divide \(A_{2k}\), because for \(t\in\{0,1\}\), \(A_{2k}(t)=1+M(t)A_{2k-1}(t)=1+1=0\). **Lemma 2.3**: _If \(k\geq 1\), then_ \[d_{2k+1}\leq d_{2k-1}\leq\deg(A),\ d_{2k}=d_{2k-1}+2,\ d_{2k}=d_{2k+1}+a_{2k}+b_{2k}.\] Since \((d_{2k+1})_{k}\) is a non-negative and non-increasing sequence, we obtain the following. **Corollary 2.4**: _The sequences \((d_{2k+1})_{k}\) and \((d_{2k})_{k}\) are both convergent. One has:_ \[\lim_{k}d_{2k+1}=p_{1},\ \lim_{k}d_{2k}=p_{2}\ \text{where}\ p_{2}=p_{1}+2.\] **Corollary 2.5**: _There exists \(m\geq 1\) such that for any \(k\geq m\):_ \[d_{2k+1}=p_{1},\ d_{2k}=p_{2},\ a_{2k}=b_{2k}=1.\] Proof: The convergent sequence \((d_{2k+1})_{k}\) takes its values in the finite set \(\{0,1,\ldots,\deg(A)\}\). So, it is eventually constant. Moreover, once \(d_{2k+1}=p_{1}\) and \(d_{2k}=p_{2}\), Lemma 2.3 gives \(a_{2k}+b_{2k}=d_{2k}-d_{2k+1}=2\), with \(a_{2k},b_{2k}\geq 1\); hence \(a_{2k}=b_{2k}=1\).
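The transformations of Section 1 are easy to experiment with on a computer. In the illustrative sketch below (our own code, not from the paper), a binary polynomial is stored as an integer bitmask, bit \(i\) holding the coefficient of \(x^{i}\), so that addition is XOR and multiplication is a carry-less product; \(val_{x}\) counts trailing zero bits and \(val_{x+1}\) counts how often synthetic division by \(x+1\) succeeds.

```python
M = 0b111  # M = x^2 + x + 1, the first odd polynomial after 1

def clmul(a, b):
    """Product in F_2[x]: carry-less multiplication of bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def div_x_plus_1(p):
    """Divide p by x + 1 in F_2[x]; assumes p(1) = 0, i.e. even popcount."""
    q, r = 0, 0
    for i in range(p.bit_length() - 1, 0, -1):
        r ^= (p >> i) & 1      # synthetic division at the root 1
        q |= r << (i - 1)
    return q

def odd_part(p):
    """Strip all factors x and x + 1 (the valuations a_k and b_k)."""
    while p and not (p & 1):                      # divisible by x
        p >>= 1
    while p > 1 and bin(p).count('1') % 2 == 0:   # p(1) = 0: divisible by x + 1
        p = div_x_plus_1(p)
    return p

def odd_sequence(a):
    """The sequence [A_1, A_3, ...] ending at 1, for nonzero a; len = ell_A."""
    seq = [odd_part(a)]
    while seq[-1] != 1:
        seq.append(odd_part(1 ^ clmul(M, seq[-1])))   # A_{2k} = 1 + M*A_{2k-1}
    return seq

# Example (Lemma 3.1 with r = 1): A = M^2 + M + 1 = x^4 + x + 1 has ell_A = 2.
assert odd_sequence(0b10011) == [0b10011, 1]

# Sanity check of Theorem 1.2's bound ell_A <= 2^(deg A - 1) for deg A >= 2:
for a in range(4, 1 << 9):
    assert len(odd_sequence(a)) <= 2 ** (a.bit_length() - 2)
```

Termination of the loop in `odd_sequence` is exactly the content of Theorem 1.2.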
**Corollary 2.6**: _For any \(k\geq m\), the polynomials \(A_{2k}\) and \(A_{2k+1}\) are respectively of degrees \(p_{2}\) and \(p_{1}\)._ **Corollary 2.7**: _There exists a positive integer \(t\leq\deg(A)\) such that \(A_{2(m+t)}=A_{2m}\) and \(A_{2(m+t)+1}=A_{2m+1}\)._ Proof: For any \(k\geq m\), the polynomial \(A_{2k}\) (resp. \(A_{2k+1}\)) lies in the finite set of polynomials of degree \(p_{2}\) (resp. \(p_{1}\)). **Proposition 2.8**: _For any \(k\geq m\), \(A_{2k+1}=1\), so that \(p_{1}=0\) and \(t=1\)._ Proof: For \(k\geq m\), \(a_{2k}=b_{2k}=1\), so the Collatz transformations give: \[\left\{\begin{array}{l}MA_{2m+1}+(1+M)A_{2m+3}=1\\ MA_{2m+3}+(1+M)A_{2m+5}=1\\ \vdots\\ MA_{2m+2t-3}+(1+M)A_{2m+2t-1}=1\\ MA_{2m+2t-1}+(1+M)A_{2m+2t+1}=1\end{array}\right.\] Since \(A_{2m+2t+1}=A_{2m+1}\), we get a linear system of \(t\) equations with coefficients in \(\mathbb{F}_{2}[x]\) and \(t\) unknowns: \(A_{2m+1},\ldots,A_{2m+2t-1}\). Its matrix \(C\) is circulant, with first row \([M,1+M,0,\ldots,0]\); the right-hand side is the transpose of \([1\ \ldots\ 1]\). By expanding along the first column of \(C\), we see that \[\det(C)=M^{t}+(1+M)^{t},\] which is nonzero. Thus, this system admits a unique solution, which is \((1,\ldots,1)\). **Corollary 2.9**: _The even and odd sequences are respectively:_ \[[A_{2},\ldots,A_{2m-2},x^{2}+x],\ [A_{1},\ldots,A_{2m-1},1].\] _Moreover, they are of length \(m+1\leq 2^{\deg(A)-1}\)._ Proof: We have just seen that \(p_{1}=0\) and \(t=1\). So, \(p_{2}=2\), \(A_{2m+1}=1\) and \(A_{2m}=x^{2}+x\). The odd sequence contains at most: all odd polynomials of degree \(\deg(A)\), all odd polynomials of degree \(\deg(A)-1\), ..., and the polynomials \(x^{2}+x+1\) and \(1\). Thus, by Lemma 2.1-ii), one has \[m+1\leq 2^{\deg(A)-2}+2^{\deg(A)-3}+\cdots+2+1+1=2^{\deg(A)-1}.\] ## 3 Examples and "Conceivable" facts In this section, we determine the lengths of the Collatz (odd polynomial) sequences for several families. In each example, we only give the sequence of their degrees. We recall that \(M:=x^{2}+x+1\) (the first odd and non-constant polynomial). ### Family \(\{M^{2^{r}}+\cdots+M+1:r\geq 1\}\) **Lemma 3.1**: _If \(A=M^{2^{r}}+\cdots+M+1\) with \(r\geq 1\), then for any \(0\leq k\leq 2^{r}-2\), \(A_{2k+1}=1+M^{k+1}(M+1)^{2^{r}-k-1}\), \(\deg(A_{2k+1})=\deg(A)\) and \(A_{2(2^{r}-1)+1}=1\). The length \(\ell_{A}\) equals \(2^{r}\)._ Proof: First, for \(0\leq k\leq 2^{r}-2\), \(A_{2k+1}\) is odd and \(\deg(A_{2k+1})=\deg(A)\). We proceed by induction on \(k\). If \(k=0\), then \(A_{1}=A\) because \(A\) is odd, and we easily see that \(A=1+M(M+1)^{2^{r}-1}\). Suppose that \(A_{2k+1}=1+M^{k+1}(M+1)^{2^{r}-k-1}\); we prove that \(A_{2k+3}=1+M^{k+2}(M+1)^{2^{r}-k-2}\), for \(k+1\leq 2^{r}-2\). We get \(A_{2k+2}=1+MA_{2k+1}=\cdots=(1+M)[1+M^{k+2}(M+1)^{2^{r}-k-2}]\) with \(2^{r}-k-2\geq 1\). So, \(1+M^{k+2}(M+1)^{2^{r}-k-2}\) is odd and \(A_{2k+3}=1+M^{k+2}(M+1)^{2^{r}-k-2}\). Now, for \(k=2^{r}-2=m\), one has \(A_{2m+2}=1+MA_{2m+1}=\cdots=(1+M)^{2^{r}}\) and \(A_{2m+3}=1\). So, the length \(\ell_{A}\) of \([A_{1},\ldots,A_{2(2^{r}-2)+1},1]\) equals \(m+1+1=(2^{r}-1)+1=2^{r}\). ### Family \(\{(M^{2^{r}}+\cdots+M+1)^{2^{u}}:r\geq 1,u\geq 1\}\) **Lemma 3.2**: _If \(A=(M^{2^{r}}+\cdots+M+1)^{2^{u}}\), then \(\ell_{A}=2^{u}\cdot(2^{r}-1)+1\)._ Proof: As above, \(A_{1}=A\).
One has \[A_{2}=1+M+M^{2^{u}+1}(1+M+\cdots+M^{2^{r}-1})^{2^{u}}=(1+M)+M^{2^{u}+1}\cdot(1+M)^{2^{u}(2^{r}-1)}.\] Thus, \(A_{3}=1+M^{2^{u}+1}\cdot(1+M)^{2^{u}(2^{r}-1)-1}\) and \(\deg(A_{3})=\deg(A_{1})=\deg(A)\). We see (by induction on \(k\)) that \[A_{2k+1}=1+M^{2^{u}+k}\cdot(1+M)^{2^{u}(2^{r}-1)-k}\text{ and }\deg(A_{2k+1})=\deg(A).\] For \(k=2^{u}(2^{r}-1)-1=m\), we get \(A_{2m+1}=1+M^{2^{u}+m}\cdot(1+M)\), \[A_{2m+2}=(1+M)+M^{2^{u}+m+1}\cdot(1+M)=(1+M)(1+M^{2^{u}+m+1})=(1+M)\cdot(1+M)^{2^{u+r}}.\] So, \(A_{2m+3}=1\) and \(\ell_{A}=m+1+\ell_{A_{2m+3}}=m+1+1=2^{u}\cdot(2^{r}-1)+1\). ### Family \(\{(M^{2^{r}-2v}+\cdots+M+1)^{2^{u}}:r,v\geq 1,u\geq 1\}\) We assume that \(2^{r}-2v\) is not a power of \(2\) (that case is already treated in Section 3.2). **Lemma 3.3**: _For \(r\geq 2\), \(u\geq 1\) and \(A=(M^{2^{r}-2}+\cdots+M+1)^{2^{u}}\), the length \(\ell_{A}\) equals \(2^{u}+1\)._ Proof: \(A_{1}=A\) since \(A\) is odd, and \(A_{2}=1+M+M^{2^{u}+1}\cdot(1+M+\cdots+M^{2^{r}-3})^{2^{u}}\). Therefore, \[A_{2} = 1+M+M^{2^{u}+1}\cdot(1+M)^{2^{u}}(1+M+\cdots+M^{2^{r-1}-2})^{2^{u+1}}\] \[= (1+M)\cdot[1+M^{2^{u}+1}\cdot(1+M)^{2^{u}-1}(1+M+\cdots+M^{2^{r-1}-2})^{2^{u+1}}].\] Thus, \(A_{3}=1+M^{2^{u}+1}\cdot(1+M)^{2^{u}-1}(1+M+\cdots+M^{2^{r-1}-2})^{2^{u+1}}\). We see (by induction on \(k\)) that \[A_{2k+1}=1+M^{2^{u}+k}\cdot(1+M)^{2^{u}-k}(1+M+\cdots+M^{2^{r-1}-2})^{2^{u+1}}.\] In particular, for \(k=2^{u}-1=m\), one has: \[A_{2m+1}=1+M^{2^{u+1}-1}\cdot(1+M)(1+M+\cdots+M^{2^{r-1}-2})^{2^{u+1}}.\] So, \[A_{2m+2} = 1+MA_{2m+1}=(1+M)\big(1+M^{2^{u+1}}(1+M+\cdots+M^{2^{r-1}-2})^{2^{u+1}}\big)\] \[= (1+M)(1+M+\cdots+M^{2^{r-1}-1})^{2^{u+1}}\] \[= (1+M)((1+M)^{2^{r-1}-1})^{2^{u+1}}.\] We deduce that \(A_{2m+3}=1\) and \(\ell_{A}=m+1+\ell_{A_{2m+3}}=m+1+1=2^{u}+1\). **Proposition 3.4**: _If \(u,v\geq 1,r\geq 2\) and \(A=(M^{2^{r}-2v}+\cdots+M+1)^{2^{u}}\), then \(\ell_{A}=2^{u}(2v-1)+1\)._ Proof: The case \(v=1\) is already treated above. Suppose that \(v\geq 2\). Put \(2v=2^{t_{1}}s_{1}\) with \(s_{1}\) odd. \(A_{1}=A\) because \(A\) is odd. \[A_{2} = 1+M+M^{2^{u}+1}\cdot(1+M)^{2^{u}(2^{t_{1}}-1)}(1+M+\cdots+M^{2^{r-t_{1}}-s_{1}+1})^{2^{u+t_{1}}}\] \[= (1+M)\cdot[1+M^{2^{u}+1}\cdot(1+M)^{2^{u}(2^{t_{1}}-1)-1}(1+M+\cdots+M^{2^{r-t_{1}}-s_{1}+1})^{2^{u+t_{1}}}].\] So, \(A_{3}=1+M^{2^{u}+1}\cdot(1+M)^{2^{u}(2^{t_{1}}-1)-1}(1+M+\cdots+M^{2^{r-t_{1}}-s_{1}+1})^{2^{u+t_{1}}}\). We see (by induction on \(k\)) that \[A_{2k+1}=1+M^{2^{u}+k}\cdot(1+M)^{2^{u}(2^{t_{1}}-1)-k}(1+M+\cdots+M^{2^{r-t_{1}}-s_{1}+1})^{2^{u+t_{1}}}.\] In particular, for \(k=2^{u}(2^{t_{1}}-1)-1=m_{1}\), one has: \[A_{2m_{1}+1}=1+M^{2^{u}+m_{1}}\cdot(1+M)(1+M+\cdots+M^{2^{r-t_{1}}-s_{1}+1})^{2^{u+t_{1}}}.\] \(\bullet\) If \(s_{1}=1\), then \[A_{2m_{1}+2}=\cdots=(1+M)\cdot(1+M+\cdots+M^{2^{r-t_{1}}-1})^{2^{u+t_{1}}}=(1+M)^{2^{u+t_{1}}(2^{r-t_{1}}-1)+1}.\] Thus, \(A_{2m_{1}+3}=1\) and \(\ell_{A}=m_{1}+1+1=2^{u}(2^{t_{1}}-1)+1\). \(\bullet\) If \(s_{1}\geq 3\), then set \(s_{1}-1=2^{t_{2}}s_{2}\), \(s_{2}\) odd, \(s_{2}<s_{1}\). In this case, \[A_{2m_{1}+2}=\cdots=(1+M)^{2^{t_{2}}}\cdot(1+M+\cdots+M^{2^{r-t_{1}-t_{2}}-s_{2}-1})^{2^{u+t_{1}+t_{2}}}.\] Hence, \(A_{2m_{1}+3}=(1+M+\cdots+M^{2^{r-t_{1}-t_{2}}-s_{2}-1})^{2^{u+t_{1}+t_{2}}}\) and \[\ell_{A}=m_{1}+1+\ell_{A_{2m_{1}+3}}=2^{u}(2^{t_{1}}-1)+\ell_{A_{2m_{1}+3}}.\] We remark that \(A_{2m_{1}+3}\) has the same form as \(A_{1}\), with \(r-t_{1}-t_{2}\) instead of \(r\), \(s_{2}+1\) instead of \(2v\) and \(u+t_{1}+t_{2}\) instead of \(u\).
- If \(s_{2}=1\), then \(s_{1}-1=2^{t_{2}}\) and, by Lemma 3.3, \(\ell_{A_{2m_{1}+3}}=2^{u+t_{1}+t_{2}}+1\), so \[\ell_{A}=m_{1}+1+\ell_{A_{2m_{1}+3}}=2^{u}(2^{t_{1}}-1)+2^{u+t_{1}+t_{2}}+1=\cdots=2^{u}(2v-1)+1.\] - If \(s_{2}\geq 3\), then, putting \(s_{2}+1=2^{t_{3}}s_{3}\) with \(s_{3}\) odd and \(m_{2}=2^{u+t_{1}+t_{2}}(2^{t_{3}}-1)-1\), one has \(\ell_{A_{2m_{1}+3}}=m_{2}+1+\ell_{A_{2m_{2}+3}}\), with \(s_{3}<s_{2}<s_{1}\). And so on. We obtain the following sequences of natural numbers: \[\left\{\begin{array}{l}\cdots<s_{3}<s_{2}<s_{1}\ \mbox{and}\ t_{1},t_{2},t_{3},\ldots\ \mbox{where}\ s_{1},s_{2},s_{3},\ldots\ \mbox{are all odd},\\ 2v=2^{t_{1}}s_{1},\ s_{1}-1=2^{t_{2}}s_{2}\ \mbox{if}\ s_{1}\geq 3,\\ s_{2}+1=2^{t_{3}}s_{3}\ \mbox{if}\ s_{2}\geq 3\\ \vdots\end{array}\right.\] Therefore, there exists \(c\in\mathbb{N}^{*}\) such that \(s_{2c-1}=1\) or \(s_{2c}=1\). \(\star\) If \(s_{2c-1}=1\), then \[\left\{\begin{array}{l}s_{1}-1=2^{t_{2}}s_{2},\\ s_{2}+1=2^{t_{3}}s_{3},\\ \vdots\\ s_{2c-3}-1=2^{t_{2c-2}}s_{2c-2},\\ s_{2c-2}+1=2^{t_{2c-1}}.\end{array}\right. \tag{1}\] We need the following notations: \[\begin{array}{l}B_{1}^{1}=A_{1},\ldots,B_{2m_{1}+1}^{1}=A_{2m_{1}+1},\ m_{1}=2^{u}(2^{t_{1}}-1)-1,\\ B_{1}^{3}=A_{2m_{1}+3},\ldots,B_{2m_{3}+1}^{3}=(B_{1}^{3})_{2m_{3}+1},\ m_{3}=2^{u+t_{1}+t_{2}}(2^{t_{3}}-1)-1,\\ \vdots\\ B_{1}^{2c-1}=(B_{1}^{2c-3})_{2m_{2c-3}+3},\ m_{2c-1}=2^{u+t_{1}+t_{2}+\cdots+t_{2c-3}+t_{2c-2}}(2^{t_{2c-1}}-1)-1.\end{array}\] The odd Collatz polynomial sequence for \(A\) is the union of \([B_{1}^{1},...,B_{2m_{1}+1}^{1}]\), \([B_{1}^{3},...,B_{2m_{3}+1}^{3}]\), ..., \([B_{1}^{2c-1},...,B_{2m_{2c-1}+1}^{2c-1}]\), \([B_{2m_{2c-1}+3}^{2c-1}=1]\), which are respectively of lengths \(m_{1}+1,m_{3}+1,\ldots,m_{2c-1}+1\) and \(1\). So, we get \[\ell_{A}=2^{u}(2^{t_{1}}-1)+2^{u+t_{1}+t_{2}}(2^{t_{3}}-1)+\cdots+2^{u+t_{1}+\cdots+t_{2c-3}+t_{2c-2}}(2^{t_{2c-1}}-1)+1.\] By means of the relations in (1), we see that \(\ell_{A}=2^{u}(2v-1)+1\). \(\star\) If \(s_{2c}=1\), then \[\left\{\begin{array}{l}s_{1}-1=2^{t_{2}}s_{2},\\ s_{2}+1=2^{t_{3}}s_{3},\\ \vdots\\ s_{2c-3}-1=2^{t_{2c-2}}s_{2c-2},\ s_{2c-2}+1=2^{t_{2c-1}}s_{2c-1},\\ s_{2c-1}-1=2^{t_{2c}}.\end{array}\right. \tag{2}\] The odd Collatz polynomial sequence for \(A\) is the union of \[[B_{1}^{1},\ldots,B_{2m_{1}+1}^{1}],\ [B_{1}^{3},\ldots,B_{2m_{3}+1}^{3}],\ldots,[B_{1}^{2c-1},\ldots,B_{2m_{2c-1}+1}^{2c-1}]\] with the sequence for \(B:=(1+M+\cdots+M^{2^{a}-2})^{2^{b}}\), where \[a=r-t_{1}-t_{2}-\cdots-t_{2c-1}-t_{2c}\ \mbox{and}\ b=u+t_{1}+t_{2}+\cdots+t_{2c-1}+t_{2c}.\] Thus, the length \(\ell_{B}\) of \(B\) equals \(2^{b}+1\) (Lemma 3.3) and \[\ell_{A}=2^{u}(2^{t_{1}}-1)+2^{u+t_{1}+t_{2}}(2^{t_{3}}-1)+\cdots+2^{u+t_{1}+\cdots+t_{2c-3}+t_{2c-2}}(2^{t_{2c-1}}-1)+2^{b}+1.\] By means of the relations in (2), we obtain \(\ell_{A}=2^{u}(2v-1)+1\). **Corollary 3.5**: _For \(A=(M^{2v}+\cdots+M+1)^{2^{u}}\), \(\ell_{A}\) equals \(2^{u}(2^{r}-2v-1)+1\), where \(r\) is the least integer such that \(2v<2^{r}\)._ Proof: Apply the above proposition by writing \(2v=2^{r}-(2^{r}-2v)\). ### Family \(\{M^{2^{r}-j}+\cdots+M+1:r\geq 1,\ 1\leq j\leq 2^{r-1}-1\}\) We suppose that \(2^{r}-j\) is not a power of \(2\) (see Section 3.1 for this case). **Lemma 3.6**: _i) If \(A=M^{2^{r}-1}+\cdots+M+1\), then \(A_{1}=1\) and \(\ell_{A}=1\). ii) If \(A=M^{2^{r}-2k}+\cdots+M+1\) with \(r>k\geq 1\), then \(\ell_{A}=2k\).
iii) If \(A=M^{2^{r}-2k-1}+\cdots+M+1\) with \(r>2k+1\geq 3\), then \(\ell_{A}=2k+1\)._ Proof: i): For \(j=1\), \(A_{0}=A=(1+M)^{2^{r}-1}=x^{2^{r}-1}(x+1)^{2^{r}-1}\cdot 1\). ii): For \(j=2\), one has \(A_{1}=A\) because \(A\) is odd, and \[A_{2}=1+MA_{1}=1+M^{2^{r}-1}+\cdots+M^{2}+M=(1+M)^{2^{r}-1}.\] So, \(A_{3}=1\) and \(\ell_{A}=2\). If \(j=2k\geq 4\), then \(A_{1}=A\) as above. One has: \[A_{2}=1+MA_{1}=M^{2^{r}-2k+1}+\cdots+M+1=\frac{M^{2^{r}-(2k-2)}+1}{M+1}.\] Put \(2k-2=2^{u}w\) where \(u\geq 1\) and \(w=2t-1\) is odd. We get \[A_{2}=(M+1)^{2^{u}-1}\cdot(M^{2^{r-u}-2t}+\cdots+M+1)^{2^{u}},\] and thus \(A_{3}=(M^{2^{r-u}-2t}+\cdots+M+1)^{2^{u}}\). Proposition 3.4 implies that \(\ell_{A_{3}}=2^{u}\cdot(2t-1)+1=2k-1\). Hence, \(\ell_{A}=\ell_{A_{3}}+1=2k\). iii): \(A\) is even. Put \(2k=2^{u}w\) with \(u\geq 1\) and \(w=2t-1\) odd. One has: \[A=(M+1)^{2^{u}-1}\cdot(M^{2^{r-u}-2t}+\cdots+M+1)^{2^{u}}.\] So, \(A_{1}=(M^{2^{r-u}-2t}+\cdots+M+1)^{2^{u}}\) and \(\ell_{A}=\ell_{A_{1}}=2^{u}\cdot(2t-1)+1=2k+1\). ### Family \(\{M^{n}+1:n\geq 2\}\) In this section, we take \(A:=M^{n}+1=(x^{2}+x+1)^{n}+1\), for \(n\geq 2\), so that \(A\) is even. Put \(n=2^{r}u\), where \(r\geq 0\) and \(u\) is odd. One has \(A=(M+1)^{2^{r}}\cdot(M^{u-1}+\cdots+M+1)^{2^{r}}\). Hence, the first polynomial in the odd sequence is \(A_{1}=(M^{u-1}+\cdots+M+1)^{2^{r}}\). On the other hand, if \(n\geq 2\), then there exists a unique positive integer \(r\) such that \(2^{r-1}<n\leq 2^{r}\). Thus, we may write \(n=2^{r}-j\), with \(0\leq j\leq 2^{r-1}-1\). **Proposition 3.7**: _Let \(A=M^{2^{r}-j}+1\) where \(r\geq 1\) and \(0\leq j\leq 2^{r-1}-1\). Then, the odd sequence of \(A\) is of length \(j+1\) (which is relatively small)._ Proof: - If \(j=0\), then \(A=M^{2^{r}}+1=(M+1)^{2^{r}}\). So, \(A_{1}=1\) and \(\ell_{A}=1\). - If \(j=1\), then \(A=M^{2^{r}-1}+1=(M+1)(M^{2^{r}-2}+\cdots+M+1)\). Therefore, \(A_{1}=M^{2^{r}-2}+\cdots+M+1\) and \(\ell_{A}=\ell_{A_{1}}=2\), by Lemma 3.6. - If \(j=2k\) with \(k=2t-1\) odd, then \[A=\cdots=(M+1)^{2}\cdot(M^{2^{r-1}-2t}+\cdots+M+1)^{2}.\] Thus, \(A_{1}=(M^{2^{r-1}-2t}+\cdots+M+1)^{2}\) and, from Proposition 3.4, \[\ell_{A}=\ell_{A_{1}}=2(2t-1)+1=2k+1.\] - If \(j=2k\) with \(k=2^{s}w\) even, \(s\geq 1\) and \(w=2t-1\) odd, then \[A=\cdots=(M+1)^{2^{s+1}-2}\cdot(M^{2^{r-s-1}-2t}+\cdots+M+1)^{2^{s+1}}.\] So, \(A_{1}=(M^{2^{r-s-1}-2t}+\cdots+M+1)^{2^{s+1}}\) and, from Proposition 3.4, \(\ell_{A}=\ell_{A_{1}}=2^{s+1}\cdot(2t-1)+1=2k+1\). - If \(j=2k-1\) is odd, then \(A=(M+1)\cdot(M^{2^{r}-2k}+\cdots+M+1)\) and \(A_{1}=M^{2^{r}-2k}+\cdots+M+1\). By Lemma 3.6, \(\ell_{A}=\ell_{A_{1}}=2k\). For illustration, we give below the odd degree sequences for \(T_{n}=M^{n}+1\), \(n\in\{9,\ldots,16\}\), so that \(n=2^{4}-j\), \(0\leq j\leq 7\). Here, the lengths are all smaller than \(n=\deg(T_{n})/2\). \begin{tabular}{|c|c|c|} \hline \(n\) & Degree sequence & Length \\ \hline 9 & \([16,16,16,16,16,16,16,0]\) & 8 \\ 10 & \([16,16,16,16,16,16,0]\) & 7 \\ 11 & \([20,16,16,16,16,0]\) & 6 \\ 12 & \([16,16,16,16,0]\) & 5 \\ 13 & \([24,24,24,0]\) & 4 \\ 14 & \([24,24,0]\) & 3 \\ 15 & \([28,0]\) & 2 \\ 16 & \([0]\) & 1 \\ \hline \end{tabular} ### Family \(\{M^{n}:n\geq 2\}\) We may write \(n=2^{r}-j\), where \(r\) is the least positive integer such that \(n\leq 2^{r}\) and \(0\leq j\leq 2^{r-1}-1\). We prove **Proposition 3.8**: _If \(A=M^{2^{r}-j}\) with \(r\geq 1\) and \(0\leq j\leq 2^{r-1}-1\), then the length \(\ell_{A}\) equals \(j+1\) (resp. \(2^{r}+1\)) if \(j\neq 0\) (resp.
if \(j=0\))._ Proof: First, \(A\) is odd, so that \(A_{1}=A\). \(\bullet\) If \(j=1\), then \(A_{2}=1+M^{2^{r}}=(1+M)^{2^{r}}\). So, \(A_{3}=1\) and \(\ell_{A}=\ell_{A_{1}}=\ell_{A_{3}}+1=2\). \(\bullet\) If \(j=2\), then \(A_{2}=1+M^{2^{r}-1}=(1+M)(M^{2^{r}-2}+\cdots+M+1)\) and \(A_{3}=M^{2^{r}-2}+\cdots+M+1\) with \(\ell_{A_{3}}=2\) (Lemma 3.6). Thus, \(\ell_{A}=3\). \(\bullet\) If \(j=2k\geq 4\), then \(A_{2}=(1+M)A_{3}\), where \(A_{3}=M^{2^{r}-2k}+\cdots+M+1\), \(\ell_{A_{3}}=2k\) (Lemma 3.6), and \(\ell_{A}=2k+1\). \(\bullet\) If \(j=2k+1=2^{u}w+1\) where \(u\geq 1\) and \(w\) is odd, then \(A_{2}=(1+M)^{2^{u}}A_{3}\) with \(A_{3}=(M^{2^{r-u}-w-1}+\cdots+M+1)^{2^{u}}\), \(\ell_{A_{3}}=2^{u}w+1\) (Proposition 3.4). Hence, \(\ell_{A}=2^{u}w+1+1=2k+2\). \(\bullet\) Finally, if \(j=0\), then \(A_{2}=1+M^{2^{r}+1}=(1+M)A_{3}\) with \(A_{3}=M^{2^{r}}+\cdots+M+1\), \(\ell_{A_{3}}=2^{r}\) (Lemma 3.1), so that \(\ell_{A}=2^{r}+1\). Example for \(n=2^{r}-j\in\{9,\ldots,16\}\) (\(r=4\) and \(0\leq j\leq 7\)): \begin{tabular}{|l|l|c|} \hline \(n\) & Degree sequence & Length \\ \hline 9 & \([18,16,16,16,16,16,16,0]\) & 8 \\ 10 & \([20,20,16,16,16,16,0]\) & 7 \\ 11 & \([22,16,16,16,16,0]\) & 6 \\ 12 & \([24,24,24,24,0]\) & 5 \\ 13 & \([26,24,24,0]\) & 4 \\ 14 & \([28,28,0]\) & 3 \\ 15 & \([30,0]\) & 2 \\ 16 & \([32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,0]\) & 17 \\ \hline \end{tabular} ### Family \(\{(1+M)^{n}+1:n\geq 2\}\) We suppose that \(n\) is not a power of \(2\) (see Section 3.6 for this case). We may write \(n=2^{r}-j\) with \(r\geq 2\) and \(1\leq j\leq 2^{r-1}-1\). **Proposition 3.9**: _If \(A=(1+M)^{2^{r}-j}+1\) where \(r\geq 2\) and \(1\leq j\leq 2^{r-1}-1\), then the length \(\ell_{A}\) equals \(2^{r}+1\)._ Proof: For a fixed \(j\geq 1\), we prove (by induction on \(k\)) that \[A_{2k+1}=1+M^{k}(1+M)^{2^{r}-j-k},\,0\leq k\leq 2^{r}-j-1.\] If \(k=0\), then \(A_{1}=A=1+(1+M)^{2^{r}-j}\) since \(A\) is odd. Suppose that \(A_{2k+1}=1+M^{k}(1+M)^{2^{r}-j-k}\); we claim that \(A_{2k+3}=1+M^{k+1}(1+M)^{2^{r}-j-k-1}\). One has \(A_{2k+2}=1+MA_{2k+1}=(1+M)(1+M^{k+1}(1+M)^{2^{r}-j-k-1})\). So, \(A_{2k+3}=1+M^{k+1}(1+M)^{2^{r}-j-k-1}\). Now, for \(k=2^{r}-j-1=m\), we get \(A_{2m+1}=1+M^{2^{r}-j-1}(1+M)\) and \(A_{2m+2}=1+MA_{2m+1}=(1+M)(1+M^{2^{r}-j})\). \(\bullet\) If \(j=2v-1\), then \[A_{2m+2}=(1+M)(1+M^{2^{r}-j})=(1+M)^{2}(M^{2^{r}-2v}+\cdots+M+1).\] Hence, \(A_{2m+3}=M^{2^{r}-2v}+\cdots+M+1\) and \(\ell_{A_{2m+3}}=2v\) (Lemma 3.6). The odd sequence for \(A\) is the union of \([A_{1},A_{3},\ldots,A_{2m+1}]\) with the odd sequence for \(A_{2m+3}\). Therefore, \(\ell_{A}=(m+1)+2v=(2^{r}-2v+1)+2v=2^{r}+1\). \(\bullet\) If \(j=2^{u}w\) with \(u\geq 1\) and \(w\) odd, then \[A_{2m+2}=(1+M)(1+M^{2^{r}-j})=(1+M)^{2^{u}+1}(M^{2^{r-u}-w-1}+\cdots+M+1)^{2^{u}}.\] So, \(A_{2m+3}=(M^{2^{r-u}-w-1}+\cdots+M+1)^{2^{u}}\) and \(\ell_{A_{2m+3}}=2^{u}\cdot w+1=j+1\) (Proposition 3.4). The odd sequence for \(A\) is the union of \([A_{1},A_{3},\ldots,A_{2m+1}]\) with the odd sequence for \(A_{2m+3}\). So, \(\ell_{A}=(m+1)+j+1=2^{r}+1\).
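The degree tables of this section can be reproduced mechanically with the bitmask helpers sketched earlier (after Corollary 2.5); the snippet below is again an illustration of ours, assuming the functions `clmul` and `odd_sequence` defined there. The degree of a nonzero bitmask `p` is `p.bit_length() - 1`, and powers of \(M\) are repeated carry-less products.

```python
def clpow(p, n):
    """p^n in F_2[x] by repeated squaring with carry-less products."""
    r = 1
    while n:
        if n & 1:
            r = clmul(r, p)
        p = clmul(p, p)
        n >>= 1
    return r

def degree_sequence(a):
    """Degrees of the odd sequence [A_1, A_3, ..., 1], as tabulated."""
    return [p.bit_length() - 1 for p in odd_sequence(a)]

# Families of Sections 3.5 (T_n = M^n + 1) and 3.6 (M^n), for n = 9, ..., 16;
# this should reproduce the two tables above.
for n in range(9, 17):
    print(n, degree_sequence(clpow(M, n) ^ 1), degree_sequence(clpow(M, n)))
```

Note that `clpow(M, n) ^ 1` represents \(M^{n}+1\), since \(M^{n}\) has constant term 1.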
Example for \(n=2^{r}-j\in\{9,\ldots,15\}\) (\(r=4\) and \(1\leq j\leq 7\)): \begin{tabular}{|l|l|c|} \hline \(n\) & Degree sequence & Length \\ \hline 9 & \([18,18,18,18,18,18,18,18,18,16,16,16,16,16,16,16,0]\) & 17 \\ 10 & \([20,20,20,20,20,20,20,20,20,20,20,16,16,16,16,16,0]\) & 17 \\ 11 & \([22,22,22,22,22,22,22,22,22,22,22,22,20,16,16,16,16,0]\) & 17 \\ 12 & \([24,24,24,24,24,24,24,24,24,24,24,24,24,16,16,16,16,0]\) & 17 \\ 13 & \([26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,26,24,24,24,0]\) & 17 \\ 14 & \([28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,28,24,24,0]\) & 17 \\ 15 & \([30,30,30,30,30,30,30,30,30,30,30,30,30,30,30,28,0]\) & 17 \\ \hline \end{tabular} ### Family \(\{1+M^{a}(M+1)^{b}:a,b\geq 2\}\) **Proposition 3.10**: _For \(A=1+M^{a}(M+1)^{b}\) where \(a,b\geq 2\), we get_ \[\ell_{A}=\left\{\begin{array}{ll}b+1&\mbox{if }a+b=2^{r}\mbox{ with }r\geq 1;\\ b+2^{r}(2^{w}-u)+1&\mbox{if }a+b=2^{r}u\mbox{ with }u\geq 3\mbox{ odd, }r\geq 1,\mbox{ and }w\mbox{ the least positive integer such that }u-1<2^{w};\\ a+2b-1&\mbox{if }a+b=2^{t}+1\mbox{ with }t\geq 1;\\ b+2^{r}-2v&\mbox{if }a+b=2v+1,\ v\mbox{ not a power of }2,\mbox{ and }r\mbox{ the least positive integer such that }2v<2^{r}.\end{array}\right.\] Proof: One gets \(A_{1}=A\) since \(A\) is odd. We easily see that \[\begin{array}{l}A_{2k+1}=1+M^{a+k}(1+M)^{b-k},\ \mbox{for }1\leq k\leq b-1=m,\\ A_{2m+1}=1+M^{a+b-1}(1+M),\\ A_{2m+2}=1+MA_{2m+1}=(1+M)(1+M^{a+b}).\end{array}\] \(\bullet\) If \(a+b=2^{r}\), then \(A_{2m+2}=(1+M)^{2^{r}+1}\). So, \[A_{2m+3}=1\mbox{ and }\ell_{A}=m+1+1=b+1.\] \(\bullet\) If \(a+b=2^{r}u\) with \(u\geq 3\) odd, then \[A_{2m+2}=(1+M)(1+M^{u})^{2^{r}}=(1+M)^{2^{r}+1}(1+M+\cdots+M^{u-1})^{2^{r}}.\] Thus, \(A_{2m+3}=(1+M+\cdots+M^{u-1})^{2^{r}}\). Corollary 3.5 implies that \[\ell_{A}=m+1+\ell_{A_{2m+3}}=b+2^{r}(2^{w}-(u-1)-1)+1,\] \(w\) being the least positive integer such that \(u-1<2^{w}\). \(\bullet\) If \(a+b=2^{t}+1\), then \[A_{2m+2}=(1+M)^{2}(1+M+\cdots+M^{2^{t}}),\ \ A_{2m+3}=1+M+\cdots+M^{2^{t}},\] and by Lemma 3.1, \[\ell_{A}=m+1+\ell_{A_{2m+3}}=b+2^{t}=a+2b-1.\] \(\bullet\) If \(a+b=2v+1\) where \(2v\) is not a power of \(2\), then \[A_{2m+2}=(1+M)^{2}(1+M+\cdots+M^{2v}),\ \ A_{2m+3}=1+M+\cdots+M^{2v}.\] Corollary 3.5 implies that \(\ell_{A}=m+1+\ell_{A_{2m+3}}=b+2^{r}-2v\), \(r\) being the least positive integer such that \(2v<2^{r}\). Example for \(2\leq a\leq b\leq 5\), \(a+b\leq 10\): \begin{tabular}{|l|l|l|} \hline \((a,b)\) & Degree sequence & Length \\ \hline \((2,2)\) & \([8,8,0]\) & 3 \\ \((2,3)\) & \([10,10,10,8,8,8,0]\) & 7 \\ \((2,4)\) & \([12,12,12,12,8,8,0]\) & 7 \\ \((2,5)\) & \([14,14,14,14,14,12,0]\) & 7 \\ \((3,3)\) & \([12,12,12,8,8,0]\) & 6 \\ \((3,4)\) & \([14,14,14,14,12,0]\) & 6 \\ \((3,5)\) & \([16,16,16,16,16,0]\) & 6 \\ \((4,4)\) & \([16,16,16,16,0]\) & 5 \\ \((4,5)\) & \([18,18,18,18,18,16,16,16,16,16,16,16,0]\) & 13 \\ \((5,5)\) & \([20,20,20,20,20,16,16,16,16,16,16,0]\) & 12 \\ \hline \end{tabular} ### Family \(\{(M^{2}+M+1)^{n}:n\geq 2\}\) We state the following conjecture. Note that Lemma 3.3 treats the case where \(n=2^{u}\), \(u\geq 1\).
**Conjecture 3.11**.: _If \(A=(M^{2}+M+1)^{n}\) with \(n\geq 2\), then \(\ell_{A}\) equals \(n+1\)._ Example for \(n\in\{9,\ldots,16\}:\) \begin{tabular}{|l|c|c|} \hline \(n\) & Degree sequence & Length \\ \hline 9 & \([36,32,32,32,32,32,30,30,4,0]\) & 10 \\ 10 & \([40,40,32,32,28,28,28,28,8,8,0]\) & 11 \\ 11 & \([44,42,36,34,32,28,26,26,12,10,4,0]\) & 12 \\ 12 & \([48,48,48,48,40,40,40,40,16,16,16,16,0]\) & 13 \\ 13 & \([52,48,46,46,44,40,38,38,20,16,14,14,4,0]\) & 14 \\ 14 & \([56,56,52,52,48,48,44,44,24,24,20,20,8,8,0]\) & 15 \\ 15 & \([60,58,56,54,52,50,48,46,28,26,24,22,12,10,4,0]\) & 16 \\ 16 & \([64,64,64,64,64,64,64,64,64,64,64,64,64,64,64,64,64,0]\) & 17 \\ \hline \end{tabular} ### Family \(\{x^{n}+x+1:n\geq 2\}\) A priori, this family does not contain any polynomial in \(M\), except for \(n=2\) and \(n=4\). We state two conjectures. **Conjecture 3.12**.: _Let \(s\) be the greatest integer such that \(n-2^{s+1}\geq 1\). Then, for any positive integer \(t\leq s-1\), the odd sequence contains \(2^{t}\) polynomials which have the same degree \(d_{t}\). In particular, \(d_{1}=\deg(A_{5})=\deg(A_{7})\) and \(d_{2}=\deg(A_{9})=\deg(A_{11})=\deg(A_{13})=\deg(A_{15})\)._ **Conjecture 3.13**.: _Let \(s\) be the greatest integer such that \(n-2^{s+1}\geq 1\). Then, the length \(\ell_{A}\) equals \(2^{s}+1\)._ Conjecture 3.13 follows from Conjecture 3.12. Indeed, from Corollary 2.9, the sequence of odd polynomials is of length \(m+1\): \[[A_{1},A_{3},A_{5},A_{7},\ldots,A_{2m-1-2^{s}},\ldots,A_{2m-3},A_{2m-1},1].\] One has, by Conjecture 3.12, \[m+1=1+1+2+2^{2}+\cdots+2^{s-1}+1=1+(2^{s}-1)+1=2^{s}+1.\] Example for \(n\in\{7,8,14,15,16,17,18\}\) \begin{tabular}{|l|c|c|} \hline \(n\) & Degree sequence & Length \\ \hline 7 & \([7,5,0]\) & 3 \\ 8 & \([8,4,0]\) & 3 \\ 14 & \([14,11,8,8,0]\) & 5 \\ 15 & \([15,13,8,8,0]\) & 5 \\ 16 & \([16,12,8,8,0]\) & 5 \\ 17 & \([17,15,14,14,12,12,12,12,0]\) & 9 \\ 18 & \([18,15,14,14,12,12,12,12,0]\) & 9 \\ \hline \end{tabular} ### Remarks We denote by \(\overline{S}\) the polynomial obtained from \(S\in\mathbb{F}_{2}[x]\), by replacing \(x\) by \(x+1\). We also consider the reciprocal \(S^{*}\) of \(S\) as: \(S^{*}(x)=x^{\deg(S)}\cdot S(\dfrac{1}{x})\). It is easy to see that the Collatz sequences of \(\overline{A}\) are exactly obtained from those of \(A\) by applying the operation: \(S\mapsto\overline{S}\). But for \(A^{*}\), it is not true (in general). For example, if \(A=x^{8}+x^{3}+1\), then the odd degree sequence is \([8,7,5,5,4,3,0]\), whereas for \(A^{*}=x^{8}+x^{5}+1\), one gets \([8,6,6,0]\).
2307.11517
Further Remarks on the Sampled-Data Feedback Stabilization Problem
The paper deals with the problem of the sampled data feedback stabilization for autonomous nonlinear systems. The corresponding results extend those obtained in earlier works by the same authors. The sufficient conditions we establish are based on the existence of discontinuous control Lyapunov functions and the corresponding results are applicable to a class of nonlinear affine in the control systems.
John Tsinias, Dionysis Theodosis
2023-07-21T12:00:10Z
http://arxiv.org/abs/2307.11517v1
# Further Remarks on the Sampled-Data Feedback Stabilization Problem ###### Abstract The paper deals with the problem of sampled-data feedback stabilization for autonomous nonlinear systems. The corresponding results extend those obtained in [3] by the same authors. The sufficient conditions we establish are based on the existence of discontinuous control Lyapunov functions, and the corresponding results are applicable to a class of nonlinear affine in the control systems. Sampled-Data, Time-Varying Feedback, Discontinuous Lyapunov Functions. ## I Introduction We consider autonomous systems of the general form: \[\dot{x}=f(x,u),\ (x,u)\in\mathbb{R}^{n}\times\mathbb{R}^{m}, \tag{1}\] \[f(0,0)=0\] where \(f:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) is Lipschitz continuous. For every initial \(x_{0}\in\mathbb{R}^{n}\) and every measurable, locally essentially bounded control \(u:[s,t_{\max})\rightarrow\mathbb{R}^{m}\), we denote by \(\pi(\cdot)=\pi(\cdot,s,x_{0},u)\) the corresponding trajectory of (1) that satisfies \(\pi(s,s,x_{0},u)=x_{0}\), where \(t_{\max}=t_{\max}(s,x_{0},u)\) is the corresponding maximal existence time of the trajectory. We say that system (1) is Semi-Globally Asymptotically Stabilizable by Sampled-Data Feedback (SDF-SGAS), if for every constant \(R>0\) and for any given partition of times \(T_{1}:=0<T_{2}<T_{3}<\ldots<T_{\nu}<\ldots\), with \(T_{\nu}\rightarrow\infty\), there exist a neighborhood \(\Omega_{R}\) of zero with \[B[0,R;\mathbb{R}^{n}]:=\left\{x\in\mathbb{R}^{n}:\mid x\mid\leq R\right\}\subset\Omega_{R}\,,\] (where \(\left|x\right|\) denotes the Euclidean norm of the vector \(x\in\mathbb{R}^{n}\)) and a map \(k:\mathbb{R}^{+}\times\Omega_{R}\rightarrow\mathbb{R}^{m}\) such that for any \(x\in\Omega_{R}\) the map \(k(\cdot,x):\mathbb{R}^{+}\rightarrow\mathbb{R}^{m}\) is measurable and locally essentially bounded, zero is stable with respect to the sampled-data closed-loop system \(\dot{\pi}=f(\pi,k(t,\pi(T_{i})))\), \(t\in[T_{i},T_{i+1})\), \(i=1,2,\ldots\), and \(\lim_{t\rightarrow\infty}\pi(t)=0\), \(\forall\pi(0)\in\Omega_{R}\). The definition above is adopted in [3,5] and constitutes a time-varying version of both concepts of asymptotic controllability and sampled-data stabilization (see [1,2]). The following property modifies the usual concept of "control Lyapunov function" adopted in [1,2,3] and other related works. **Property 1:** For system (1) assume that there exist \(a_{1},a_{2},a\in K\) (\(a_{1},a_{2},a\) are continuous, strictly increasing with \(a_{1}(0)=a_{2}(0)=a(0)=0\)) and \(V:\mathbb{R}^{n}\rightarrow\mathbb{R}^{+}\) (in general discontinuous) such that \[a_{1}(\mid x\mid)\leq V(x)\leq a_{2}(\mid x\mid),\ \ x\in\mathbb{R}^{n} \tag{2}\] in such a way that for every \(R>0\) and \(x\in B[0,R;\mathbb{R}^{n}]\), \(x\neq 0\), there exist constants \(\sigma_{x},L_{x},M_{x}>0\) such that for every \(\varepsilon\in(0,\sigma_{x}]\) there exists an input \[u_{\varepsilon,x}:=u_{\varepsilon,x}(\cdot):[0,\varepsilon]\to B[0,M_{x};\mathbb{R}^{m}] \tag{3}\] with \[V(\pi(\varepsilon,0,\overline{x},u_{\varepsilon,x}))<V(\overline{x})-L_{x}; \tag{4a}\] \[V(\pi(t,0,\overline{x},u_{\varepsilon,x}))\leq a(V(\overline{x})),\] (4b) \[\forall t\in[0,\varepsilon],\ \overline{x}\ \text{ near }x\] The following lemma generalizes the result in [3, Proposition 2] establishing SDF-SGAS for (1) under the stronger hypothesis that \(V\) is continuous. **Lemma 1:** Property 1 implies SDF-SGAS for system (1).
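As an aside, the sampled-data closed loop appearing in the SDF-SGAS definition (the feedback evaluated at the sample \(\pi(T_{i})\) and held on \([T_{i},T_{i+1})\)) is easy to simulate for a concrete system. The scalar dynamics and feedback law below are hypothetical stand-ins of our own choosing, purely to illustrate the sample-and-hold mechanism:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(x, u):
    return x + u            # hypothetical open-loop unstable dynamics, f(0,0)=0

def k(t, x_sample):
    return -2.0 * x_sample  # hypothetical feedback k(t, pi(T_i)), constant in t

T = np.arange(0.0, 10.0, 0.25)   # sampling partition T_1 = 0 < T_2 < ...
x = 1.0                          # initial condition pi(0)
for Ti, Ti1 in zip(T[:-1], T[1:]):
    x_sample = x                 # sample frozen over [T_i, T_{i+1})
    sol = solve_ivp(lambda t, y: [f(y[0], k(t, x_sample))], (Ti, Ti1), [x])
    x = sol.y[0, -1]
print("pi(T_end) =", x)          # decays toward 0 for this sampling rate
```

For this particular choice, the state contracts by the factor \(2-e^{0.25}\approx 0.72\) on each interval, so the trajectory converges to zero; with too coarse a sampling it would not.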
**Remark 1:** The establishment of the result above is based on the same procedure applied in the proof of [3, Proposition 2], plus certain modifications; we note that the following facts play a central role in the proof of Lemma 1. **Fact 1:** Due to our assumption that, in addition to (4), condition (3) is fulfilled for certain \(M_{x}\in(0,+\infty)\), the corresponding trajectory involved in (4) also satisfies: \[\left|\pi(t,0,\overline{x},u_{\varepsilon,x})-\overline{x}\right|\leq\varepsilon C_{x} \tag{5}\] \[\forall t\in[0,\varepsilon],\ \overline{x}\ \text{ near }x,\varepsilon>0\ \text{ near zero}\] for certain \(C_{x}>0\). **Fact 2:** The pair of conditions (3) and (4) is equivalent to the following property: For every nonempty compact set \(S\subset\mathbb{R}^{n}\setminus\{0\}\) there exist constants \(\sigma,L>0\), such that for every \(R>0\), \(x\in S\cap B[0,R;\mathbb{R}^{n}]\), \(x\neq 0\) and \(\varepsilon\in(0,\sigma]\) there exist a constant \(M_{x}>0\) and an input \(u=u_{\varepsilon,x}\) satisfying (3) and such that \[V(\pi(\varepsilon,0,x,u_{\varepsilon,x}))<V(x)-L; \tag{6a}\] \[V(\pi(t,0,x,u_{\varepsilon,x}))\leq 2a(V(x));\ \ \forall t\in[0,\varepsilon] \tag{6b}\] **Remark 2:** If \(V\) is lower semicontinuous, then (6) is equivalent to the weaker assumption that for the specific \(x\) both (4a,b) hold with \(L_{x}=0\). In particular, instead of (4a,b), we may assume that for any sufficiently small \(\varepsilon\in(0,\sigma_{x}]\) there exists a control \(u_{\varepsilon,x}:[0,\varepsilon]\to B[0,M_{x};\mathbb{R}^{m}]\) with \[V(\pi(\varepsilon,0,x,u_{\varepsilon,x}))<V(x); \tag{7a}\] \[V(\pi(t,0,x,u_{\varepsilon,x}))\leq 2a(V(x)),\ \ \forall t\in[0,\varepsilon]. \tag{7b}\] It turns out that _continuity_ of \(V\) implies equivalence between (4a,b), (6a,b) and (7a,b). Lemma 2 of Section II is the main result of this article and establishes that, under the existence of an appropriate family of _continuous_ Lyapunov-like functions \(V_{i}\), defined on regions covering \(\mathbb{R}^{n}\setminus\{0\}\), a control Lyapunov function \(W\) can be found, being in general _discontinuous_ and satisfying all conditions of Property 1, including (4), with \(W\) in place of \(V\). The latter, according to Lemma 1, guarantees solvability of SDF-SGAS for (1). We use the result of Lemma 2 to derive sufficient conditions for the solvability of SDF-SGAS for a class of affine in the control systems (Section III: Proposition 1, Corollary 1 and Proposition 2). ## II The Main Result We present the main result of this article, based on the existence of a family of continuous control Lyapunov functions, establishing sufficient conditions for SDF-SGAS of system (1). Its proof is based on the result of Lemma 1. **Lemma 2 (Main Result):** For system (1) assume that there exist \(\omega_{1},\omega_{2}\in K\) and \(a\in K\), such that for every constant \(R>0\) there exist open regions \[A_{i}\subset\mathbb{R}^{n}\setminus\{0\},\ i=1,2,.... \tag{8}\] with \[A_{i}\cap A_{j}=\emptyset,\ \forall i\neq j \tag{9a}\] \[B[0,R;\mathbb{R}^{n}]\subset cl\left(\bigcup A_{i}\right) \tag{9b}\] and continuous mappings \[V_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{+},V_{i}(0)=0,i=1,2,...
\tag{10}\] with \[\omega_{1}(|x|)\leq V_{i}(x)\leq\omega_{2}(|x|),\ i=1,2,\ldots,\ \ x\in A_{i} \tag{11}\] and in such a way that for every \(i=1,2,\ldots\), every nonzero \(x\) belonging to \(cl\left(A_{i}\right)\), and every sufficiently small \(\varepsilon>0\), there exist a constant \(M_{i}>0\) and a control \[u^{i}_{\varepsilon,x}:[0,\varepsilon]\to B[0,M_{i};\mathbb{R}^{m}] \tag{12}\] with \[V_{i}(\pi(\varepsilon,0,x,u^{i}_{\varepsilon,x}))<V_{i}(x); \tag{13a}\] \[V_{i}(\pi(t,0,x,u^{i}_{\varepsilon,x}))\leq a(V_{i}(x)),\ \forall t\in[0,\varepsilon] \tag{13b}\] Then, system (1) satisfies all conditions of Lemma 1 for some appropriate Upper Semicontinuous (USC) Lyapunov function \(W:\mathbb{R}^{n}\rightarrow\mathbb{R}^{+}\) satisfying (2)-(4) with \(V:=W\); therefore, according to Lemma 1, system (1) is SDF-SGAS. **Remark 3:** For the case where there exists a finite number of open sets \(A_{i}\subset\mathbb{R}^{n}\setminus\{0\}\), \(i=1,2,\ldots,N\), \(N\in\mathbb{N}\), such that (9a,b) hold, associated with continuous functions \(V_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{+}\) as in the statement of Lemma 2, _positive definiteness_ of each \(V_{i}:A_{i}\rightarrow\mathbb{R}^{+}\), \(i=1,\ldots,N\), implies (11). **Proof of Lemma 2:** Without any loss of generality, we may assume that all regions \(A_{i}\) are bounded and, due to (9) and (10), for every \(x\neq 0\) there exist an integer \(k\) and a finite number of indices \(i_{1},i_{2},i_{3},\ldots,i_{k}\) for which \(x\in cl(A_{i_{1}})\cup cl(A_{i_{2}})\cup\ldots\cup cl(A_{i_{k}})\) and \(x\notin cl(A_{j})\), \(\forall j\neq i_{1},i_{2},\ldots,i_{k}\). Consider for each \(i=1,2,\ldots\) a constant \[c_{i}=c_{i}(A_{i})>0 \tag{14}\] and functions \(a_{1},a_{2}\in K\) such that \[a_{1}(|x|)\leq\omega_{1}(|x|)+\min\left\{c_{i}\ \text{for those}\ i\ \text{for which}\ x\in cl(A_{i})\right\} \tag{15a}\] \[\omega_{2}(|x|)+\max\left\{c_{i}\ \text{for those}\ i\ \text{for which}\ x\in cl(A_{i})\right\}\leq a_{2}(|x|)\] (15b) \[a(V_{i}(x))+c_{i}<2a(V_{i}(x)+c_{i});\ \forall x\in cl(A_{i}) \tag{15c}\] where \(a\) is defined in (13b), and in such a way that, if we define \[W_{i}:=V_{i}+c_{i},\ x\in A_{i} \tag{16}\] the following holds \[W_{i}(x)\neq W_{j}(x),\ \forall x\in cl(A_{i})\bigcap cl(A_{j}),\ x\neq 0 \tag{17}\] Define \[W(x):=\begin{cases}W_{i}(x),&x\in A_{i}\\ \max\left\{W_{j}(x);\ j=i_{1},i_{2},\ldots,i_{k}\in\mathbb{N}\right\},&(0\neq)x\in\partial A_{i_{1}}\cap\partial A_{i_{2}}\cap\ldots\cap\partial A_{i_{k}},\\ &x\notin\partial A_{j},\ j\neq i_{1},i_{2},\ldots,i_{k},\ \text{for certain}\ k=k(x)\in\mathbb{N}\\ 0,&x=0\end{cases} \tag{18}\] Let \(x\in A_{i}\) for some \(i\). By assumption, for every sufficiently small \(\varepsilon>0\) there exists an input \(u^{i}_{\varepsilon,x}:[0,\varepsilon]\to B[0,M_{i};\mathbb{R}^{m}]\) satisfying both (13a,b) and (12). The latter implies: \[\left|\pi(t,0,x,u^{i}_{\varepsilon,x})-x\right|\leq\varepsilon C_{i},\ \forall t\in[0,\varepsilon],\ \varepsilon>0 \tag{19}\] for certain \(C_{i}>0\). It follows that for sufficiently small \(\varepsilon>0\) and \(x\in A_{i}\) the trajectory \(\pi(t,0,x,u^{i}_{\varepsilon,x})\) remains inside \(A_{i}\) for \(t\) near zero. 
By recalling (12)-(18) and selecting sufficiently small \(\varepsilon>0\), we have: \[\begin{array}{l}x\in A_{i}\Rightarrow\\ W(\pi(\varepsilon,0,x,u^{i}_{\varepsilon,x}))=W_{i}(\pi(\varepsilon,0,x,u^{i}_{\varepsilon,x}))\\ =V_{i}(\pi(\varepsilon,0,x,u^{i}_{\varepsilon,x}))+c_{i}<V_{i}(x)+c_{i}=W(x)\end{array} \tag{20}\] and simultaneously: \[\begin{array}{l}x\in A_{i}\Rightarrow\\ W(\pi(t,0,x,u^{i}_{\varepsilon,x}))=W_{i}(\pi(t,0,x,u^{i}_{\varepsilon,x}))\\ =V_{i}(\pi(t,0,x,u^{i}_{\varepsilon,x}))+c_{i}\leq a(V_{i}(x))+c_{i}\\ \leq 2a(V_{i}(x)+c_{i})=2a(W(x)),\ t\in[0,\varepsilon]\end{array} \tag{21}\] for appropriate \(u^{i}_{\varepsilon,x}\) satisfying (12). Then, for the specific \(x\) above, the desired (4a,b) are a consequence of (19)-(21) and the continuity of \(V_{i}\). In particular, (4) is valid with \(V:=W\), with \(3a\) in place of \(a\), sufficiently small \(L_{x}>0\), and \(a_{i}\), \(i=1,2\), as above. **Case 2:** \[(x\neq 0),\ x\in\partial A_{i_{1}}\cap\partial A_{i_{2}}\cap\ldots\cap\partial A_{i_{k}},\ i_{1},i_{2},\ldots,i_{k}\in\mathbb{N};\quad x\notin\partial A_{j},\ j\neq i_{1},i_{2},\ldots,i_{k} \tag{22}\] For the specific \(x\) and \(i\) as above, and by taking into account (15)-(18) and (22), we may define the integer \[I_{x}:=\max\left\{p\in\{i_{1},\ldots,i_{k}\}:W_{p}(x)=W(x)\right\}\] Let \(Lie\{f,g\}\) denote the Lie algebra generated by \(\{f,g\}\). Let \(L_{1}=span\{f,g\}\) and \(L_{i+1}=span\{[X,Y]:X\in L_{i},\ Y\in L_{1}\}\), \(i=1,2,\ldots\), and for every nonzero \(\Delta\in Lie\{f,g\}\) define: \[\mathrm{order}_{Lie\{f,g\}}(\Delta)\begin{cases}:=1,&\text{if}\ \Delta\in L_{1}\setminus\{0\}\\ :=k>1,&\text{if}\ \Delta=\Delta_{1}+\Delta_{2},\ \text{with}\ \Delta_{1}\in L_{k}\setminus\{0\}\\ &\text{and}\ \Delta_{2}\in span\{L_{1}\cup\ldots\cup L_{k-1}\}\end{cases}\] \[(0,y)\in cl(D_{2});\ W(0,y)=0\Rightarrow y=0 \tag{37}\] \[0\neq(x,y)\in D_{2}\Rightarrow\frac{\partial W}{\partial y}(x,y)\neq 0 \tag{38}\] Then, under the previous properties, system (31) satisfies the assumptions of Proposition 1, and therefore is SDF-SGAS. **Proof**: Define: \[V_{1}:=V \tag{39}\] We have \[DV_{1}(x)\big{(}f+ug\big{)}\big{|}_{(x,y)\in cl(D_{1})\setminus\{0\}}=DV_{1}(x)F(x,y)\big{|}_{(x,y)\in cl(D_{1})\setminus\{0\}}\] which in conjunction with (36) and (38) implies that either \[DV_{1}(x)\big{(}f+ug\big{)}<0\,,\ \text{for all}\ u\] (40a) or \[\begin{cases}DV_{1}(x)\big{(}f+ug\big{)}=0\\ DV_{1}(x)[g,f]=DV_{1}(x)\frac{\partial}{\partial y}F(x,y)<0\,,\ \text{for all}\ u\end{cases} \tag{40b}\] From (32), (35) and (38), it follows that \(V_{1}\) satisfies (11) on the region \(D_{1}\setminus\{0\}\) for certain \(\omega_{1}\) and \(\omega_{2}\). From (40) it also follows that the pair \((A_{1},V_{1})\), \(A_{1}:=D_{1}\), satisfies (30) of Proposition 1 for the system (31). 
Define next: \[V_{2}(x,y):=V_{1}(x)+W(x,y) \tag{41}\] From (41) we have: \[DV_{2}(x,y)\big{(}f+ug\big{)}=\left(\frac{\partial V_{1}}{\partial x}+\frac{\partial W}{\partial x},\ \frac{\partial W}{\partial y}\right)\begin{pmatrix}F(x,y)\\ u\end{pmatrix}=\left(\frac{\partial V_{1}}{\partial x}+\frac{\partial W}{\partial x}\right)F(x,y)+\frac{\partial W}{\partial y}\,u \tag{42}\] which implies: \[\begin{array}{l}V_{\xi}(x(t))\leq V_{\xi}(x(0))-\frac{1}{2}\,k(\xi)|x(t)|^{2},\\ \forall t\geq 0\ \text{near zero, in such a way that}\ x(t)\ \text{lies}\\ \text{in a neighborhood}\ A_{\xi}\ \text{of}\ \xi\end{array} \tag{49}\] By exploiting (45c), (48) and (49), for every constant \(R\in\mathbb{R}^{+}\) we can determine \(\xi_{1},\xi_{2},\ldots\in\mathbb{R}^{n}\), open neighborhoods \(A_{i}=A_{\xi_{i}}\), \(i=1,2,\ldots\), of \(\xi_{i}\), and appropriate positive definite \(C^{1}\) functions \[V_{i}(x)=\frac{1}{2}\,x^{\prime}P(\xi_{i})x,\ \ x\in A_{i}\] such that all conditions (8)-(13) are satisfied with \(a=1\) and for certain \(\omega_{1},\omega_{2}\in K\). We conclude that (43) satisfies all conditions of Lemma 2, therefore is SDF-SGAS. \(\bullet\)
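Although the construction in the proof of Lemma 2 is analytic, the patching (16)-(18) is directly implementable: offset each local function by \(c_{i}\), use \(W_{i}\) on the open region \(A_{i}\), and take the maximum over the adjacent patches at shared boundary points. The sketch below is a toy illustration of that definition only; the regions, functions and offsets are hypothetical placeholders and are not checked against conditions (15a-c) and (17).

```python
import numpy as np

def make_patched_W(V_list, c_list, in_A_list, in_clA_list):
    """Patched Lyapunov candidate of (16)-(18): W_i = V_i + c_i on A_i,
    W(x) = max over the patches whose closures contain x on boundaries,
    and W(0) = 0."""
    def W(x):
        if np.allclose(x, 0.0):
            return 0.0
        for V, c, in_A in zip(V_list, c_list, in_A_list):
            if in_A(x):
                return V(x) + c      # x lies in the interior of some A_i
        vals = [V(x) + c             # boundary point: take the maximum
                for V, c, in_cl in zip(V_list, c_list, in_clA_list)
                if in_cl(x)]
        return max(vals)
    return W

# toy example on the line with A_1 = (-2, 0) and A_2 = (0, 2)
W = make_patched_W(
    V_list=[lambda x: abs(x), lambda x: 2.0 * abs(x)],
    c_list=[0.1, 0.2],
    in_A_list=[lambda x: -2 < x < 0, lambda x: 0 < x < 2],
    in_clA_list=[lambda x: -2 <= x <= 0, lambda x: 0 <= x <= 2])
print(W(-1.0), W(1.0), W(0.0))
```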
2301.02183
Cold plasma waves in the chiral Maxwell-Carroll-Field-Jackiw electrodynamics
In this work, we study the propagation and absorption of plasma waves in the chiral Maxwell-Carroll-Field-Jackiw (MCFJ) electrodynamics. The Maxwell equations are rewritten for a cold, uniform, and collisionless fluid plasma model, allowing us to determine the new refractive indices and propagating modes. The cases of propagation parallel and orthogonal to the magnetic field are examined considering a purely timelike CFJ background that plays the role of the magnetic conductivity chiral parameter. The collective electromagnetic modes are associated with four distinct refractive indices corresponding to right-circularly polarized and left-circularly polarized waves. For each index, the propagation and absorption zones are illustrated for some specific parameter values. In the low-frequency regime, we have obtained modified helicons with right- and left-circular polarizations. The optical behavior is investigated by means of the rotatory power (RP) and the dichroism coefficient. The existence of a negative refraction zone enhances the rotatory power. RP sign reversal, a feature of rotating plasmas, is also observed.
Filipe S. Ribeiro, Pedro D. S. Silva, Manoel M. Ferreira Jr
2023-01-05T17:51:02Z
http://arxiv.org/abs/2301.02183v2
# Cold plasma waves in the chiral Maxwell-Carroll-Field-Jackiw electrodynamics ###### Abstract In this work, we study the propagation and absorption of plasma waves in the chiral Maxwell-Carroll-Field-Jackiw (MCFJ) electrodynamics. The Maxwell equations are rewritten for a cold, uniform, and collisionless fluid plasma model, allowing us to determine the new refractive indices and propagating modes. The case of transversal propagation is examined considering a purely timelike CFJ background that plays the role of the magnetic conductivity chiral parameter. We find four distinct refractive indices associated with RCP and LCP waves. For each index, the propagation and absorption zones are illustrated for some specific parameter values. The optical behavior is investigated by means of the rotatory power (RP) and the dichroism coefficient. The existence of a negative refraction zone enhances the rotatory power. RP sign reversal, a feature of rotating plasmas, is also observed. pacs: 11.30.Cp, 41.20.Jb, 41.90.+e, 42.25.Lc ## I Introduction The study of electromagnetic (EM) wave propagation [1; 2] in cold magnetized plasmas is based on magneto-ionic theory [3; 4; 5; 6; 7; 8], developed by E. Appleton [9] and D. Hartree [10] between 1929 and 1932 to describe radio wave propagation in the ionosphere, in the context of the usual electrodynamics [11]. EM waves in plasmas have been studied in other scenarios recently, such as logarithmic nonlinear electrodynamics [12]. The chiral magnetic effect (CME) is the macroscopic generation of an electric current in the presence of a magnetic field, stemming from an asymmetry between the number density of left- and right-handed chiral fermions [13; 14; 15; 16; 17]. It has been extensively investigated in several distinct contexts, such as quark-gluon plasmas [18; 19; 20], cosmology [21], neutron stars [22; 23], and electroweak interactions [24]. The CME plays a very relevant role in Weyl semimetals, where it is usually connected to the chiral anomaly associated with Weyl nodal points [25], the absence of the Weyl nodes [26], anisotropic effects stemming from tilted Weyl cones [27], the CME and anomalous transport in Weyl semimetals [28], quantum oscillations arising from the CME [29], the computation of the electromagnetic fields produced by an electric charge near a topological Weyl semimetal with two Weyl nodes [30], renormalization evaluations for Weyl semimetals and Dirac materials [31], and solutions of axion electrodynamics [32]. The CME current can be classically described by the axion Lagrangian [33; 34; 35; 36; 37; 38], \[\mathcal{L}=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}+\theta(\mathbf{E}\cdot\mathbf{B}), \tag{1}\] where \(\theta\) is the axion field. In this context, the Maxwell equations are \[\nabla\cdot\mathbf{E} =\rho-\nabla\theta\cdot\mathbf{B}, \tag{2}\] \[\nabla\times\mathbf{B}-\partial_{t}\mathbf{E} =\mathbf{j}+(\partial_{t}\theta)\mathbf{B}+\nabla\theta\times \mathbf{E}, \tag{3}\] where the terms involving \(\theta\) derivatives are associated with condensed matter effects [38]. Indeed, \(\nabla\theta\cdot\mathbf{B}\) represents an anomalous charge density, \(\nabla\theta\times\mathbf{E}\) appears in the anomalous Hall effect, and \((\partial_{t}\theta)\mathbf{B}\) plays the role of the chiral magnetic current. 
In the case where the axion field does not depend on the space coordinates, \(\nabla\theta=\mathbf{0}\), the Maxwell equations (2) and (3) read \[\nabla\cdot\mathbf{E}=\rho,\quad\nabla\times\mathbf{B}-\partial_{t}\mathbf{E} =\mathbf{j}+(\partial_{t}\theta)\mathbf{B}, \tag{4}\] where \((\partial_{t}\theta)\mathbf{B}\), the chiral magnetic current, may also be addressed as a term of the Maxwell-Carroll-Field-Jackiw (MCFJ) theory. A classical electrodynamics scenario endowed with a chiral magnetic current has been investigated considering symmetric and antisymmetric conductivity [39]. The latter case has also been addressed in Ref. [40]. The MCFJ model [41] is the _CPT_-odd part of the _U_(1) gauge sector of the Standard Model Extension (SME) [42]. It is described by the Lagrangian density \[\mathcal{L}=-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}-\frac{1}{4}\epsilon^{\mu\nu \alpha\beta}\left(k_{AF}\right)_{\mu}A_{\nu}F_{\alpha\beta}-A_{\mu}J^{\mu}, \tag{5}\] with \(\left(k_{AF}\right)_{\mu}\) being the 4-vector background which controls the Lorentz violation. This theory has been investigated in multiple respects [43], encompassing radiative evaluations [44; 45], topological defect solutions [46], supersymmetric generalizations [47], classical solutions, quantum aspects and unitarity analysis [48]. It may also be connected with the CME in the sense that it provides a modified Ampere's law, \[\nabla\times\mathbf{B}-\frac{\partial\mathbf{E}}{\partial t}=\mathbf{J}+k_{AF }^{0}\mathbf{B}+\mathbf{k}_{AF}\times\mathbf{E}, \tag{6}\] containing the magnetic current, \(\mathbf{J}_{B}=k_{AF}^{0}\mathbf{B}\), with the component \(k_{AF}^{0}\) playing the role of the magnetic conductivity. The SME photon sector is also composed of a _CPT_-even term constituted of a rank-4 Lorentz-violating tensor [49], whose components may be properly parametrized in terms of dimensionless \(3\times 3\) matrices, \(\kappa_{DE}\), \(\kappa_{DB}\), \(\kappa_{HE}\), and \(\kappa_{HB}\), which allow one to write generalized constitutive relations between the fields \((\mathbf{D},\mathbf{E})\) and \((\mathbf{H},\mathbf{B})\), \[\begin{pmatrix}\mathbf{D}\\ \mathbf{H}\end{pmatrix}=\begin{pmatrix}\epsilon\mathbb{1}+\kappa_{DE}&\kappa_ {DB}\\ \kappa_{HE}&\mu^{-1}\mathbb{1}+\kappa_{HB}\end{pmatrix}\begin{pmatrix}\mathbf{E} \\ \mathbf{B}\end{pmatrix}\,, \tag{7}\] similar to the ones that hold in continuous-medium electrodynamics, see Eqs. (8a) and (8b). Here, \(\mathbf{D}\) is the electric displacement, while \(\mathbf{H}\) is the magnetic field. This _CPT_-even electrodynamics was investigated in several contexts, involving consistency aspects [50] and finite temperature and boundary effects [51]. Lorentz-violating electrodynamics in continuous matter [52; 53] has been a topic of interest in recent years due to its potential to describe interesting effects of the phenomenology of new materials, such as Weyl semimetals [54]. A classical field theory description of wave propagation, refractive indices, and optical effects in a continuous medium described by the MCFJ electrodynamics (with usual constitutive relations), including its Lorentz-violating higher-order derivative version [55], was discussed in Ref. [56]. 
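The block structure of Eq. (7) is easy to make concrete: the constitutive map is a \(6\times 6\) matrix acting on the stacked fields \((\mathbf{E},\mathbf{B})\). The sketch below only illustrates this bookkeeping; the numerical values of the \(\kappa\) blocks are arbitrary placeholders, not SME coefficients or experimental bounds.

```python
import numpy as np

# illustrative isotropic background and placeholder CPT-even blocks
eps, inv_mu = 1.0, 1.0
k_DE = 0.01 * np.eye(3)
k_DB = np.zeros((3, 3))
k_HE = np.zeros((3, 3))
k_HB = -0.01 * np.eye(3)

# 6x6 constitutive matrix of Eq. (7), mapping (E, B) to (D, H)
M = np.block([[eps * np.eye(3) + k_DE, k_DB],
              [k_HE, inv_mu * np.eye(3) + k_HB]])

E = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
D, H = np.split(M @ np.concatenate([E, B]), 2)
print(D, H)
```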
Chiral media are endowed with parity violation [57; 58; 59; 60], being described by parity-odd models, as bi-isotropic [61] and bi-anisotropic electrodynamics [62; 63; 64; 65; 66; 67], whose constitutive relations read \[\mathbf{D} =\hat{\epsilon}\,\mathbf{E}+\hat{\alpha}\,\mathbf{B}, \tag{8a}\] \[\mathbf{H} =\hat{\beta}\,\mathbf{E}+\hat{\zeta}\,\mathbf{B}, \tag{8b}\] and \(\hat{\epsilon}=[\epsilon_{ij}]\), \(\hat{\alpha}=[\alpha_{ij}]\), \(\hat{\beta}=[\beta_{ij}]\), and \(\hat{\zeta}=[\zeta_{ij}]\) represent, in principle, \(3\times 3\) complex matrices. The bi-isotropic relations involve the diagonal isotropic tensors, \(\epsilon_{ij}=\epsilon\delta_{ij}\), \(\alpha_{ij}=\alpha\delta_{ij}\), \(\beta_{ij}=\beta\delta_{ij}\). In chiral scenarios, LCP and RCP waves travel at distinct phase velocities, implying birefringence and optical rotation [68]. This phenomenon stems from the natural optical activity of the medium or can be induced by the action of external fields (e. g., Faraday effect [69; 70; 71]), and it is measured in terms of the rotation angle per unit length or rotatory power (RP) [72]. Magneto-optical effects are used to investigate features of new materials, such as topological insulators [73; 74; 75; 76; 77; 78; 79] and graphene compounds [80]. The RP is a probe to examine the optical behavior of several distinct systems, for instance, crystals [81; 82], organic compounds [57; 83], graphene phenomena at terahertz band [84], and gas of fast-spinning molecules [85]. The optical rotation may depend on the frequency (RP dispersion) and undergo reversion (anomalous RP dispersion) [86; 87; 88]. It also finds interesting applications in chiral metamaterials [89; 90; 91], chiral semimetals [92; 93], in the determination of the rotation direction of pulsars [94], and in rotating plasmas, which constitutes a scenario where RP sign reversal also takes place [95]. Recently, RP reversal was also reported in a bi-isotropic dielectric in the presence of chiral magnetic current [96]. Furthermore, in the presence of absorption, dichroism is another useful tool for the optical characterization of matter. It occurs when LCP and RCP light waves are absorbed by the medium at different degrees. It has been used to distinguish between Dirac and Weyl semimetals [97], perform enantiomeric discrimination [98; 99], and for developing graphene-based devices at terahertz frequencies [100]. Another feature of chiral systems is the possible occurrence of negative refraction and negative refractive index, which was first proposed by Veselago in 1968 [101] and experimentally observed in 2000 [102; 103]. Later, other experiments confirmed the negative refraction by using Snell's law [104; 105]. This unusual property was achieved in constructed metamaterials with both negative electric permittivity and magnetic permeability [106; 107]. The negative refractive index also appears in quark-gluon plasmas [108; 109], magnetoelectric materials [110], metasurfaces [111], chiral bi-anisotropic metamaterials [112; 113], and new materials, such as Dirac semimetals [114; 115]. 
In chiral plasmas described by generalized bi-isotropic constitutive relations [116; 117], the negative refractive index can occur within some frequency band and is not necessarily associated with simultaneously negative electric permittivity and negative magnetic permeability, being attributed to the chirality parameter introduced in the constitutive relations, \[D^{i}=\varepsilon_{ij}E^{j}+i\xi_{c}B^{i},\quad H^{i}=\mu^{-1}B^{i}+i\xi_{c}E^{ i}, \tag{9}\] where \(\varepsilon_{ij}\), \(\mu\), and \(\xi_{c}\) are the plasma electric permittivity tensor, the magnetic permeability, and the constant chirality parameter. Plasmas metamaterials have been investigated as new media endowed with interesting properties, such as negative refraction and nonlinearities [118; 119]. In this work, we are interested in examining the wave propagation in a magnetized cold plasma ruled by the MCFJ model, a chiral route distinct from the bi-isotropic/anisotropic electrodynamics of the relations (9). We carry out our analysis considering the timelike Lorentz-violating background component, which plays the role of chiral magnetic conductivity. The refractive indices are evaluated and optical effects, such as birefringence and dichroism, are examined, which could be useful to trace analogies with other material properties. We also find that the chiral conductivity yields negative refraction in specific frequency bands, enhancing the rotatory power and dichroism signals. This paper is outlined as follows. In Sec. II, we briefly review some aspects of the MCFJ model. In Sec. III, the main properties of propagation in usual cold magnetized plasmas are presented. The dispersion relations and refractive indices for cold plasmas in chiral electrodynamics are addressed in Sec. IV. The optical effects are examined in Sec. V. Finally, we summarize our results in Sec. VI. ## II Basics on McFj Electrodynamics The Carroll-Field-Jackiw model was proposed as a gauge invariant CPT-odd electrodynamics constrained by birefringence data of distant galaxies [41]. It was later incorporated as the CPT-odd sector of the SME [42], and it has been investigated in several respects [43; 44]. In matter, it is described by the following Lagrangian density1[56]: Footnote 1: We use natural units \(h=c=1\) and the Minkowski metric signature \(g_{\mu\nu}=\text{diag}\left(1,-1,-1,-1\right)\). \[\mathcal{L}=-\frac{1}{4}G^{\mu\nu}F_{\mu\nu}-\frac{1}{4}\epsilon^{\mu\nu \alpha\beta}\left(k_{AF}\right)_{\mu}A_{\nu}F_{\alpha\beta}-A_{\mu}J^{\mu}, \tag{10}\] yielding the MCFJ equation of motion, \[\partial_{\rho}G^{\rho\kappa}+\epsilon^{\beta\kappa\mu\nu}\left(k_{AF}\right)_ {\beta}F_{\mu\nu}=J^{\kappa}\,. \tag{11}\] Here, \(\left(k_{AF}\right)^{\mu}=\left(k_{AF}^{0},\mathbf{k}_{AF}\right)\) is a constant 4-vector background responsible for the Lorentz violation, and \[F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu},\quad G^{\mu\nu}=\frac {1}{2}\chi^{\mu\nu\alpha\beta}F_{\alpha\beta}, \tag{12}\] are the usual \(U(1)\) vacuum and continuous matter field strength, respectively. The 4-rank tensor, \(\chi^{\mu\nu\alpha\beta}\), describes the medium constitutive tensor [120], whose components provide the electric and magnetic responses of the medium. Indeed, the electric permittivity and magnetic permeability tensor components are written as \(\epsilon_{ij}\equiv\chi^{0ij0}\) and \(\mu_{lk}^{-1}\equiv\frac{1}{4}\epsilon_{ijl}\chi^{ijmn}\epsilon_{mnk}\), respectively. 
For isotropic polarization and magnetization, it holds \(\epsilon_{ij}=\epsilon\delta_{ij}\) and \(\mu_{ij}^{-1}=\mu\delta_{ij}\), providing the usual isotropic constitutive relations, \[\mathbf{D}=\epsilon\mathbf{E},\quad\mathbf{H}=\mu\mathbf{B}. \tag{13}\] A straightforward calculation from Eq. (11) yields \[\nabla\cdot\mathbf{D} =J^{0}-\mathbf{k}_{AF}\cdot\mathbf{B}, \tag{14}\] \[\nabla\times\mathbf{H}-\frac{\partial\mathbf{D}}{\partial t} =\mathbf{J}+k_{AF}^{0}\mathbf{B}+\mathbf{k}_{AF}\times\mathbf{E}, \tag{15}\] where \(G^{i0}=D^{i}\) and \(G^{ij}=-\epsilon_{ijk}H^{k}\). The homogeneous Maxwell equations are given by \[\nabla\cdot\mathbf{B}=0,\quad\nabla\times\mathbf{E}+\frac{\partial\mathbf{B} }{\partial t}=0. \tag{16}\] By using a plane-wave ansatz for the electromagnetic fields, the MCFJ equations (14)-(16) read: \[i\mathbf{k}\cdot\mathbf{D}+\mathbf{k}_{AF}\cdot\mathbf{B} =J^{0}, \tag{17a}\] \[i\mathbf{k}\times\mathbf{H}+i\omega\mathbf{D}-k_{AF}^{0}\mathbf{ B}-\mathbf{k}_{AF}\times\mathbf{E} =\mathbf{J},\] (17b) \[\mathbf{k}\cdot\mathbf{B}=0,\quad\mathbf{k}\times\mathbf{E}- \omega\mathbf{B} =0, \tag{17c}\] where \(\mathbf{k}\) is the wave vector and \(\omega\) is the (angular) wave frequency. In the presence of anisotropy, the permittivity and permeability are represented by rank-2 tensors, \(\varepsilon_{ij}\) and \(\mu_{ij}\), which may also depend on the frequency (for a dispersive medium). For an anisotropic medium, the constitutive relations (13) are replaced by [1; 2] \[D^{i}=\varepsilon_{ij}(\omega)E^{j},\quad B^{i}=\mu_{ij}(\omega)H^{j}. \tag{18}\] For non-magnetic media with isotropic magnetic permeability, it holds \(\mu_{ij}(\omega)=\mu_{0}\), where \(\mu_{0}\) is the vacuum permeability. Considering the constitutive relations (18), the modified Ampere-Maxwell law, Eq. (17b), and Faraday's law, Eq. (17c), in the absence of sources, we obtain a modified wave equation for the electric field, \[k^{i}\left(k^{j}E^{j}\right)-k^{2}E^{i}=-\omega^{2}\mu_{0}\bar{\varepsilon}_{ ij}\left(\omega\right)E^{j}, \tag{19}\] where we define the extended permittivity tensor, \[\bar{\varepsilon}_{ij}(\omega)=\varepsilon_{ij}(\omega)-i\frac{k_{AF}^{0}}{ \omega^{2}}\epsilon_{ikj}k^{k}-i\epsilon_{ikj}\frac{k_{AF}^{k}}{\omega}. \tag{20}\] Using the definition of the refractive index, \(\mathbf{n}=\mathbf{k}/\omega\), the modified wave equation becomes \[M_{ij}E^{j}=0, \tag{21}\] with \(M_{ij}\) given by \[M_{ij}=n^{2}\delta_{ij}-n_{i}n_{j}-\frac{\varepsilon_{ij}}{\varepsilon_{0}}- \frac{i}{\omega}\left(V_{0}\epsilon_{ikj}n^{k}+\epsilon_{ikj}V^{k}\right), \tag{22}\] in which \(\varepsilon_{0}\) is the vacuum electric permittivity, and \[V_{0}=k_{AF}^{0}/\varepsilon_{0},\quad V^{k}=k_{AF}^{k}/\varepsilon_{0} \tag{23}\] appear as the components of a redefined background, \(V^{\mu}=\left(V_{0},V^{i}\right)\). The nontrivial solutions for the electric field require a vanishing determinant of the matrix \(M_{ij}\), det \(M_{ij}=0\), which provides the dispersion relations that describe wave propagation in the medium. In this work, we will study plasma wave propagation in a chiral (parity-odd) medium, which means restraining our investigation to the case of a purely timelike Lorentz-violating background vector, \(\left(k_{AF}\right)^{\mu}=\left(k_{AF}^{0},\mathbf{0}\right)\). The latter also plays the role of the chiral magnetic conductivity. 
In this scenario, the wave equation (21) becomes \[\left[n^{2}\delta_{ij}-n^{i}n^{j}-\frac{\varepsilon_{ij}}{\varepsilon_{0}}-i \frac{V_{0}}{\omega}\epsilon_{ikj}n^{k}\right]E^{j}=0. \tag{24}\] ## III The usual magnetized cold plasma In this work, we adopt the fluid theory approach in the cold plasma limit [3; 4; 5; 6; 7]: \[\frac{\partial n}{\partial t}+\nabla\cdot\left(n\mathbf{u}\right)=0, \tag{25}\] \[\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\nabla \mathbf{u}=\frac{q}{m}\left(\mathbf{E}+\mathbf{u}\times\mathbf{B}_{0}\right), \tag{26}\] where \(n\) is the electron number density, \(\mathbf{u}\) is the electron fluid velocity field, \(q\) and \(m\) are the electron charge and mass, respectively, and \(\mathbf{B}_{0}\) is the equilibrium magnetic field. For simplicity, the ions are supposed to be infinitely massive, which is appropriate for high-frequency waves. Furthermore, thermal and collisional effects are also disregarded. The linearized version of magnetized cold plasma theory [7] considers fluctuations around average quantities, \(n_{0}\) and \(\mathbf{B}_{0}\), which are constant in time and space. Thus, the plasma quantities read \[n =n_{0}+\delta n, \tag{27a}\] \[\mathbf{u} =\delta\mathbf{u},\] (27b) \[\mathbf{E} =\delta\mathbf{E}\] (27c) \[\mathbf{B} =\mathbf{B}_{0}+\delta\mathbf{B}, \tag{27d}\] with \(\delta n\), \(\delta\mathbf{u}\), \(\delta\mathbf{E}\) and \(\delta\mathbf{B}\) being first-order plane-wave perturbations. Following the usual procedure [3; 4; 5; 6], assuming \(\mathbf{B}_{0}=B_{0}\hat{z}\), we write the corresponding dielectric tensor, \[\varepsilon_{ij}(\omega)=\varepsilon_{0}\left[\begin{array}{ccc}S&-iD&0\\ iD&S&0\\ 0&0&P\end{array}\right], \tag{28}\] where \[S=1-\frac{\omega_{p}^{2}}{\left(\omega^{2}-\omega_{c}^{2}\right)},\;D=\frac{ \omega_{c}\omega_{p}^{2}}{\omega\left(\omega^{2}-\omega_{c}^{2}\right)},\;P=1 -\frac{\omega_{p}^{2}}{\omega^{2}}, \tag{29}\] and \[\omega_{p}=\sqrt{\frac{n_{0}q^{2}}{m\epsilon_{0}}},\quad\omega_{c}=\frac{|q|B_{0}}{m}, \tag{30}\] are the plasma and cyclotron frequencies, respectively. From the Maxwell theory, two distinct refractive indices are obtained, \[n_{\pm}=\sqrt{1-\frac{\omega_{p}^{2}}{\omega\left(\omega\pm\omega_{c}\right)}}, \tag{31}\] which provide right-handed circularly polarized (RCP) and left-handed circularly polarized (LCP) modes, \[\mathbf{E}_{LCP}=\frac{i}{\sqrt{2}}\begin{bmatrix}1\\ i\end{bmatrix},\quad\mathbf{E}_{RCP}=\frac{i}{\sqrt{2}}\begin{bmatrix}1\\ -i\end{bmatrix}, \tag{32}\] for the propagating modes associated with \(n_{\pm}\), respectively. This is the standard result of wave propagation in the usual magnetized cold plasma. We recall that a cutoff happens whenever the refractive index, \(n\), goes to zero. On the other hand, a resonance occurs if \(n\) tends to infinity. From the indices (31), we obtain the following cutoff frequencies: \[\omega_{\pm}=\frac{1}{2}\left(\sqrt{\omega_{c}^{2}+4\omega_{p}^{2}}\mp\omega_ {c}\right), \tag{33}\] where \(\omega_{\pm}\) is related to \(n_{\pm}\), respectively. A very common effect in magnetized plasmas is circular birefringence\({}^{2}\), which causes the rotation of the plane of polarization of a linearly polarized wave that propagates within the medium. Thus the linearly polarized wave emerges from the medium with an electric field whose polarization is rotated relative to its initial linear configuration. 
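The standard quantities above are straightforward to evaluate; the following is a minimal numerical sketch of Eqs. (29), (31) and (33), in natural units and with illustrative parameter values only (\(\omega_{c}=\omega_{p}=1\) rad/s, as in the figures of this paper).

```python
import numpy as np

def cold_plasma_SDP(w, wp, wc):
    """S, D and P of Eq. (29) for angular frequency w."""
    S = 1.0 - wp**2 / (w**2 - wc**2)
    D = wc * wp**2 / (w * (w**2 - wc**2))
    P = 1.0 - wp**2 / w**2
    return S, D, P

def n_plus_minus(w, wp, wc):
    """Standard indices of Eq. (31); the complex square root keeps
    track of the evanescent (absorbing) zones."""
    n_p = np.sqrt(1.0 - wp**2 / (w * (w + wc)) + 0j)
    n_m = np.sqrt(1.0 - wp**2 / (w * (w - wc)) + 0j)
    return n_p, n_m

def cutoffs(wp, wc):
    """Cutoff frequencies of Eq. (33): (w_+, w_-), where n_+(w_+) = 0
    and n_-(w_-) = 0."""
    root = np.sqrt(wc**2 + 4.0 * wp**2)
    return 0.5 * (root - wc), 0.5 * (root + wc)

wp = wc = 1.0                      # illustrative values only
print(cold_plasma_SDP(0.5, wp, wc))
print(n_plus_minus(0.5, wp, wc))   # n_+ evanescent, n_- propagating
print(cutoffs(wp, wc))
```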
Such a phenomenon can be properly explained by decomposing the initial wave into two circularly polarized waves (RCP and LCP) that travel with different phase velocities. In this case, the rotation angle of the electric field can be expressed as the difference between the refractive indices associated with the RCP and LCP waves [72; 68]: \[\theta=\frac{\pi L}{\lambda_{0}}\left(\mathrm{Re}\left[n_{RCP}\right]- \mathrm{Re}\left[n_{LCP}\right]\right), \tag{34}\] where \(\lambda_{0}\) is the vacuum wavelength of the incident wave. Footnote 2: In plasmas, the birefringence is usually a consequence of the Faraday effect, occurring due to the presence of the external field \(\mathbf{B}_{0}\), which generates distinct phase velocities for the propagating modes [70]. The rotatory power \(\delta=\theta/L\) (phase difference per unit length) is given as \[\delta=-\frac{\omega}{2}\left(\mathrm{Re}\left[n_{LCP}\right]-\mathrm{Re} \left[n_{RCP}\right]\right). \tag{35}\] In a cold magnetized plasma, the refractive indices (31) provide the following rotatory power: \[\delta=-\frac{\omega}{2}\mathrm{Re}\left(\sqrt{1-\frac{\omega_{p}^{2}}{ \omega\left(\omega+\omega_{c}\right)}}-\sqrt{1-\frac{\omega_{p}^{2}}{\omega \left(\omega-\omega_{c}\right)}}\right). \tag{36}\] The behavior of the RP (36) in terms of the frequency \(\omega\) is depicted in Fig. 1. One notices that there is a divergence at \(\omega_{c}\), the RP being positive for \(\omega<\omega_{c}\) and negative for \(\omega>\omega_{c}\). It tends to zero in the high-frequency limit \(\omega>>(\omega_{p},\omega_{c})\), where it decays as \[\delta\approx-\frac{\omega_{p}^{2}\omega_{c}}{2\omega^{2}}. \tag{37}\] Associated with the imaginary part of the refractive index, one can also examine dichroism, an optical effect that occurs when circularly polarized waves are absorbed by the medium at different degrees [56; 58; 59]. Thus the dichroism coefficient refers to the difference in absorption of LCP and RCP waves, being given by \[\delta_{d}=-\frac{\omega}{2}\left(\mathrm{Im}[n_{LCP}]-\mathrm{Im}[n_{RCP}] \right), \tag{38}\] which, for the refractive indices (31), implies \[\delta_{d}=-\frac{\omega}{2}\mathrm{Im}\left(\sqrt{1-\frac{\omega_{p}^{2}}{ \omega\left(\omega+\omega_{c}\right)}}-\sqrt{1-\frac{\omega_{p}^{2}}{\omega \left(\omega-\omega_{c}\right)}}\right). \tag{39}\] Such a quantity is plotted in Fig. 2, which shows a singularity at the cyclotron frequency \(\omega_{c}\). For \(\omega_{c}=\omega_{p}\) (red curve), the dichroism coefficient (39) is negative for \(\omega<\omega_{+}^{red}\), positive for \(\omega_{c}<\omega<\omega_{-}^{red}\) and null for other frequencies. The case for \(\omega_{c}=\omega_{p}/2\) (blue curve) differs in the fact that \(\omega_{+}^{blue}\) is greater than \(\omega_{c}\), showing that (39) is now negative for \(\omega<\omega_{c}\). ## IV Wave propagation in chiral electrodynamics Starting from the wave equation (24) and using the expression of the cold plasma dielectric permittivity, given in Eq. (28), we obtain a linear homogeneous system, \[\begin{bmatrix}n^{2}-n_{x}^{2}-S&iD-n_{x}n_{y}+i\left(V_{0}/\omega\right)n_{z} &-n_{x}n_{z}-i\left(V_{0}/\omega\right)n_{y}\\ -iD-n_{x}n_{y}-i\left(V_{0}/\omega\right)n_{z}&n^{2}-n_{y}^{2}-S&-n_{y}n_{z}+i \left(V_{0}/\omega\right)n_{x}\\ -n_{x}n_{z}+i\left(V_{0}/\omega\right)n_{y}&-n_{y}n_{z}-i\left(V_{0}/\omega \right)n_{x}&n^{2}-n_{z}^{2}-P\end{bmatrix}\begin{bmatrix}\delta E_{x}\\ \delta E_{y}\\ \delta E_{z}\end{bmatrix}=0. 
\tag{40}\] Let us consider, for simplicity, the case where the refractive index is parallel to the magnetic field, \(\mathbf{n}=n\hat{z}\), such that one obtains \[\begin{bmatrix}n^{2}-S&iD+i\left(V_{0}/\omega\right)n&0\\ -iD-i\left(V_{0}/\omega\right)n&n^{2}-S&0\\ 0&0&-P\end{bmatrix}\begin{bmatrix}\delta E_{x}\\ \delta E_{y}\\ \delta E_{z}\end{bmatrix}=0, \tag{41}\] for which \(\det[M_{ij}]=0\) provides the dispersion relations \[P\left(\omega^{2}\left(n^{2}-S\right)^{2}-(\omega D+nV_{0})^{2}\right)=0. \tag{42}\] Longitudinal waves, \(\mathbf{n}\parallel\delta\mathbf{E}\) or \(\delta\mathbf{E}=(0,0,\delta E_{z})\), may emerge when \(P=0\), with a non-propagating vibration at the plasma frequency, \(\omega=\omega_{p}\). For transverse waves, \(\mathbf{n}\perp\delta\mathbf{E}\) or \(\delta\mathbf{E}=(\delta E_{x},\delta E_{y},0)\), the dispersion relation (42) simplifies to \[\left(n^{2}-S\right)^{2}-\left(D+n\left(V_{0}/\omega\right)\right)^{2}=0, \tag{43}\] also written as a fourth-order equation in \(n\), \[n^{4}-\left(2S+\left(V_{0}/\omega\right)^{2}\right)n^{2}-2D\left(V_{0}/\omega \right)n+\left(S^{2}-D^{2}\right)=0. \tag{44}\] Taking into account the relations (29), the dispersion relation (44) provides the following refractive indices: \[n_{R,M} =-\frac{V_{0}}{2\omega}\pm\sqrt{1+\left(\frac{V_{0}}{2\omega} \right)^{2}-\frac{\omega_{p}^{2}}{\omega(\omega-\omega_{c})}}, \tag{45}\] \[n_{L,E} =\frac{V_{0}}{2\omega}\pm\sqrt{1+\left(\frac{V_{0}}{2\omega} \right)^{2}-\frac{\omega_{p}^{2}}{\omega(\omega+\omega_{c})}}. \tag{46}\] In general, the indices \(n_{R},n_{L},n_{E},n_{M}\) may be real, imaginary, or complex (presenting both pieces) in some frequency ranges. As is well known, the real part is associated with propagation, while the imaginary piece is concerned with absorption. Furthermore, these indices may have positive or negative real pieces. The indices \(n_{L}\) and \(n_{M}\) are always positive and negative, respectively, the latter being a negative refractive index. On the other hand, the indices \(n_{R}\) and \(n_{E}\) can be positive or negative, depending on the frequency zone examined, in such a way that the associated modes can manifest negative refraction behavior (in a suitable frequency band). The propagating modes associated with the refractive indices in Eqs. (45) and (46) are obtained by inserting each one in Eq. (41) and computing the corresponding eigenvector (with a null eigenvalue). The emerging electric fields are \(\mathbf{E}_{LCP}\) and \(\mathbf{E}_{RCP}\), given in Eq. (32), where \(n_{R}\), \(n_{M}\) are associated with the RCP mode, and \(n_{L}\), \(n_{E}\) are related to the LCP mode, \[n_{L},n_{E}\ \mapsto\ \mathbf{E}_{LCP}=\frac{i}{\sqrt{2}}\begin{bmatrix}1\\ i\end{bmatrix}, \tag{47}\] \[n_{R},n_{M}\ \mapsto\ \mathbf{E}_{RCP}=\frac{i}{\sqrt{2}}\begin{bmatrix}1\\ -i\end{bmatrix}. \tag{48}\] From the indices \(n_{R}\), \(n_{E}\), given by Eqs. (45) and (46), we obtain the same cutoff frequencies (33) as in the standard case: in fact, \(\omega_{-}\) is related to the refractive index \(n_{R}\), and \(\omega_{+}\) is associated with the refractive index \(n_{E}\). In contrast, the refractive indices \(n_{L}\) and \(n_{M}\) have no real root. The behavior of the refractive indices in Eqs. (45) and (46) will be examined in the following. ### About the index \(n_{R}\) We begin by discussing some properties of the index \(n_{R}\). The behavior of \(n_{R}\) in terms of the dimensionless parameter \(\omega/\omega_{c}\) is illustrated in Fig. 
3, which displays the real and imaginary pieces of the refractive index \(n_{R}\). We point out: 1. It takes on a finite value when \(\omega\to 0\), given by \[n_{R}\left(0\right)=\frac{1}{V_{0}}\left(\frac{\omega_{p}^{2}}{\omega_{c}} \right),\] (49) differing from the behavior of the usual magnetized plasma index \(n_{-}\), which yields \(n\rightarrow\infty\) near the origin. 2. For \(0<\omega<\omega_{c}\), \(n_{R}\) is positive, since the square root in (45) is real, positive, and larger than the negative piece before it. Such positivity also holds for the usual index \(n_{-}\). See the black line in this frequency zone in Fig. 3. 3. For \(\omega\rightarrow\omega_{c}\), \(n_{R}\rightarrow\infty\), and a resonance occurs at the cyclotron frequency. 4. For \(\omega_{c}<\omega<\omega_{r}\), there appears a negative refractive index zone with absorption, where \(\mathrm{Re}[n_{R}]<0\) and \(\mathrm{Im}[n_{R}]\neq 0\), as shown in Fig. 3. The frequency \(\omega_{r}\) is the root of the radicand in Eq. (45), \[R_{-}\left(\omega\right)=1+\frac{V_{0}^{2}}{4\omega^{2}}-\frac{\omega_{p}^{2}} {\omega\left(\omega-\omega_{c}\right)},\] (50) which yields a cubic equation in \(\omega\). 5. For \(\omega_{r}<\omega<\omega_{-}\), one finds a negative refractive index zone without absorption, that is, \(\mathrm{Re}[n_{R}]<0\) and \(\mathrm{Im}[n_{R}]=0\). 6. For \(\omega>\omega_{-}\), the quantity \(n_{R}\) is always positive, corresponding to a propagating zone, with \(n_{R}\to 1\) in the high-frequency limit. The frequency zone in which \(\mathrm{Im}[n_{R}]\neq 0\), that is, \(\omega_{c}<\omega<\omega_{r}\), corresponds to the absorption zone for the metamaterial (negative refractive index) RCP wave, as already mentioned. The frequency ranges in which \(\mathrm{Im}[n_{R}]=0\) define the propagation zones for the RCP wave. ### About the index \(n_{L}\) The index \(n_{L}\), given in Eq. (46), has no real root, presenting the following features: 1. For \(\omega\to 0\), \(n_{L}\rightarrow+\infty\). The presence of the term \(V_{0}\) thus turns the refractive index real and positively divergent at the origin, differing from the behavior of the usual index \(n_{+}\), see Eq. (31), which is complex and divergent, \(\mathrm{Im}[n_{+}]\rightarrow\infty\), at the origin. 2. For \(\omega>0\), it is necessary to analyze the radicand in Eq. (46), \[R_{+}\left(\omega\right)=1+\frac{V_{0}^{2}}{4\omega^{2}}-\frac{\omega_{p}^{2}}{ \omega\left(\omega+\omega_{c}\right)},\] (51) since it can be positive or negative, which determines the absence or presence of an absorption zone, respectively. Note that for \(\omega>\omega_{+}\) the term \(1-\omega_{p}^{2}/\omega\left(\omega+\omega_{c}\right)\) is greater than zero (\(\omega_{+}\) is the root of such a term), such that \(R_{+}\) is positive. Therefore, the possibility of \(R_{+}\) being negative occurs only in the range \(0<\omega<\omega_{+}\), for which the term \(1-\omega_{p}^{2}/\omega\left(\omega+\omega_{c}\right)\) is less than zero. Hence, positivity of \(R_{+}\) is stated by the condition \[\frac{V_{0}^{2}}{4\omega^{2}}>\left|1-\frac{4\omega_{p}^{2}}{\omega(\omega+ \omega_{c})}\right|_{\omega<\omega_{+}},\] (52) for which \(R_{+}\) is always positive and the refractive index \(n_{L}\) is real for any \(\omega>0\). This corresponds to a propagating mode for the entire frequency domain. The behavior of \(n_{L}\) in terms of the dimensionless parameter \(\omega/\omega_{c}\), considering the condition (52), that is, \(R_{+}>0\), is shown in Fig. 4. 3. 
On the other hand, for \[\frac{V_{0}^{2}}{4\omega^{2}}<\left|1-\frac{4\omega_{p}^{2}}{\omega(\omega+ \omega_{c})}\right|_{\omega<\omega_{+}},\] (53) one has \(R_{+}<0\) and \(n_{L}\) becomes complex, \(\mathrm{Im}[n_{L}]\neq 0\), determining the opening of an absorption zone located within the interval \(\omega_{i}<\omega<\omega_{f}\), as shown in Fig. 5. The frequencies \(\omega_{i}\) and \(\omega_{f}\) are the positive and real roots of \(R_{+}\), a cubic equation in the frequency. ### About the index \(n_{E}\) The quantity \(n_{E}\) is a refractive index that only exists as a positive quantity due to the presence of the chiral Lorentz-violating term. If we set \(V_{0}=0\), the second relation in Eq. (46) yields \(\mathrm{Re}[n_{E}]<0\) (negative index of refraction). For \(V_{0}\neq 0\), the index \(n_{E}\) presents a small positivity range, \(\mathrm{Re}[n_{E}]>0\), which provides propagation for the associated LCP wave. We present below some aspects of \(n_{E}\): 1. For \(\omega\to 0\), the index \(n_{E}\) tends to a finite value at the origin, \[n_{E}\left(0\right)=\frac{1}{V_{0}}\left[\frac{\omega_{p}^{2}}{\omega_{c}} \right],\] (54) which is inversely proportional to the magnitude of the chiral factor, \(V_{0}\). 2. Since the radicand of \(n_{E}\) is the same as that of \(n_{L}\), see Eq. (46), the same analysis applied to \(n_{L}\) holds here. For values of \(V_{0}\) that satisfy the condition (52), \(R_{+}>0\), \(n_{E}\) is always real, \(\mathrm{Im}[n_{E}]=0\), being positive within the interval \(0<\omega<\omega_{+}\), and negative for \(\omega>\omega_{+}\), since \(\sqrt{R_{+}}>V_{0}/2\omega\) in this range. The real and imaginary parts of \(n_{E}\) are represented in Fig. 6. 3. Considering the condition (53), \(n_{E}\) becomes complex and exhibits an absorption zone, \(\mathrm{Im}[n_{E}]\neq 0\), in the interval \(\omega_{i}<\omega<\omega_{f}\), with \(\omega_{i},\omega_{f}<\omega_{+}\), as shown in Fig. 7. Such a figure depicts the real and imaginary pieces of \(n_{E}\) (under the condition (53)). ### About the index \(n_{M}\) The additional index \(n_{M}\), given in Eq. (45), is always negative (negative refraction) and has no real root. The behavior of \(n_{M}\) in terms of the dimensionless parameter \(\omega/\omega_{c}\) is shown in Fig. 8. We notice the following features: 1. For \(0<\omega<\omega_{c}\), \(n_{M}\) is real and negative, since the square root in (45) is real. This is the same behavior as that of the index \(-n_{-}\). See the black line in Fig. 8. 2. For \(\omega\to\omega_{c}\), \(n_{M}\to-\infty\), and a resonance occurs at the cyclotron frequency. 3. For \(\omega_{c}<\omega<\omega_{r}\), there appears an absorption zone for the metamaterial mode, \(\mathrm{Re}[n_{M}]<0\) and \(\mathrm{Im}[n_{M}]\neq 0\), while the index \(-n_{-}\) is purely imaginary, \(\mathrm{Re}[-n_{-}]=0\) and \(\mathrm{Im}[-n_{-}]\neq 0\), as shown in Fig. 8. The frequency \(\omega_{r}\) is the root of \(R_{-}\). 4. For \(\omega>\omega_{r}\), the quantity \(n_{M}\) is always negative, corresponding to a negative propagation zone, with \(n_{M}\to-1\) in the high-frequency limit. ### Dispersion relations behavior The wave dispersion associated with each refractive index is usually visualized in \(\omega\times k\) plots. In the following, we work with dimensionless plots, \((\omega/\omega_{c})\times(k/\omega_{c})\). The dispersion relations associated with \(n_{R}\) and \(n_{M}\) are depicted in Fig. 9 for \(\omega_{c}=\omega_{p}\). 
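The four branches plotted in this and the following figures come directly from Eqs. (45) and (46); a minimal numerical sketch, using the illustrative parameter values quoted in the captions:

```python
import numpy as np

def chiral_indices(w, wp, wc, V0):
    """The four refractive indices of Eqs. (45)-(46). A nonzero
    imaginary part signals absorption; a negative real part signals
    negative refraction."""
    Rm = 1.0 + (V0 / (2 * w))**2 - wp**2 / (w * (w - wc))  # radicand (50)
    Rp = 1.0 + (V0 / (2 * w))**2 - wp**2 / (w * (w + wc))  # radicand (51)
    sm, sp = np.sqrt(Rm + 0j), np.sqrt(Rp + 0j)
    n_R = -V0 / (2 * w) + sm
    n_M = -V0 / (2 * w) - sm
    n_L = +V0 / (2 * w) + sp
    n_E = +V0 / (2 * w) - sp
    return n_R, n_M, n_L, n_E

# parameters of Fig. 9: wc = wp = 1 rad/s and V0 = 2 wp
for w in (0.5, 1.5, 3.0):
    print(w, np.round(chiral_indices(w, wp=1.0, wc=1.0, V0=2.0), 3))
```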
The propagation occurs for \(0<\omega<\omega_{c}\) and \(\omega>\omega_{-}\), while absorption takes place in \(\omega_{c}<\omega<\omega_{r}\). The range \(\omega_{r}<\omega<\omega_{-}\) corresponds to a negative refraction propagation zone (\(k<0\)) for \(n_{R}\). The refractive index \(n_{M}\) is negative for \(k<0\) and \(\omega>0\).

Figure 9: Plot of the dispersion relations related to the refractive indices \(n_{R}\) (solid red line) and \(n_{M}\) (solid blue line). The dashed black line corresponds to the indices of the usual case (\(\pm n_{-}\)). The highlighted area in red (gray) indicates the absorption zone for \(n_{R,M}\) (\(\pm n_{-}\)). Here, we have used \(\omega_{c}=\omega_{p}\) and \(V_{0}=2\omega_{p}\), with \(\omega_{c}=1\) rad \(s^{-1}\).

Figure 10 depicts the dispersion relations related to \(n_{L}\) and \(n_{E}\). The wave associated with \(n_{L}\) propagates for all frequencies. For \(n_{E}\), the conventional propagation zone occurs in \(0<\omega<\omega_{+}\). For \(\omega>\omega_{+}\), there occurs a propagation zone with negative refraction. For the standard indices, \(\pm n_{+}\), the absorption zone is \(0<\omega<\omega_{+}\).

Figure 10: Plot of the dispersion relations related to the refractive indices \(n_{L}\) (solid red line) and \(n_{E}\) (solid blue line). The dashed line corresponds to the indices \(\pm n_{+}\) of the usual case. The highlighted gray area indicates the absorption zone for \(\pm n_{+}\), where propagation now also occurs. Here, we have used \(\omega_{c}=\omega_{p}\) and \(V_{0}=\omega_{p}\), with \(\omega_{c}=1\) rad \(s^{-1}\).

Furthermore, Fig. 11 shows the dispersion relations for \(n_{L}\) and \(n_{E}\) in the case where there is a modified absorption zone for \(\omega_{i}<\omega<\omega_{f}\), while free propagation occurs for \(0<\omega<\omega_{i}\) and \(\omega>\omega_{f}\). The frequencies \(\omega_{i}\), \(\omega_{f}\) and \(\omega_{r}\) define the limits of the unusual propagation zones. As already discussed, these frequencies are obtained from the radicands (50) and (51).

Figure 11: Plot of the dispersion relations related to the refractive indices \(n_{L}\) (solid red line) and \(n_{E}\) (solid blue line). The dashed line corresponds to the usual case with indices \(\pm n_{+}\). The highlighted areas in red (gray) indicate the absorption zone for \(n_{L,E}\) (\(\pm n_{+}\)). Here, we have used \(\omega_{c}=\omega_{p}\) and \(V_{0}=0.7\omega_{c}\), with \(\omega_{c}=1\) rad \(s^{-1}\).

## V Birefringence, Rotatory Power and Dichroism The phase velocity in terms of the refractive index \(n\) is defined (in natural units) as \(v_{phase}=1/n\). Hence, the corresponding phase velocities, \(v_{R}=1/n_{R}\), \(v_{L}=1/n_{L}\), \(v_{E}=1/n_{E}\), \(v_{M}=1/n_{M}\), can be defined with the indices \(n_{R}\), \(n_{L}\), \(n_{E}\), \(n_{M}\) of Eqs. (45) and (46). In accordance with the previous analysis of the refractive indices, in general, the RCP and LCP modes propagate at different phase velocities for each frequency value, generating circular birefringence in the propagation band, expressed in terms of the rotatory power (35). On the other hand, in the absorption zones, dichroism occurs, measured in terms of the coefficient of Eq. (38). ### Rotatory power In order to write the rotatory power (RP), we need to consider the refractive indices \(n_{L}\), \(n_{E}\), associated with the LCP wave, and the indices \(n_{R}\), \(n_{M}\), associated with the RCP wave. This allows, in principle, the determination of four distinct RPs in the propagation zones, some of which we examine in this section. We start by writing the rotatory power defined in terms of the real pieces of the refractive indices \(n_{L}\) and \(n_{R}\), \[\delta_{LR}=-\frac{\omega}{2}\left(\text{Re}[n_{L}]-\text{Re}[n_{R}]\right), \tag{55}\] or explicitly, \[\delta_{LR}=-\frac{\omega}{2}\text{Re}\left[V_{0}/\omega+\sqrt{R_{+}}-\sqrt{R_{-}} \right], \tag{56}\] where \(R_{+}\) and \(R_{-}\) are given in Eqs. (50) and (51). We find a positive frequency, \[\hat{\omega}=\sqrt{\omega_{c}^{2}+\omega_{p}^{2}/2-\frac{\omega_{p}^{2}\sqrt{4 \omega_{c}^{2}+V_{0}^{2}}}{2V_{0}}}, \tag{57}\] where the RP (56) undergoes a sign reversal. In Fig. 12, we illustrate the behavior of the RP for the condition (52). In the interval \(0<\omega<\hat{\omega}\), the RP is negative, and for \(\hat{\omega}<\omega<\omega_{c}\), it is positive. The RP reversion that occurs at \(\omega=\hat{\omega}\) is not usual in cold plasma theory. However, it is reported in graphene systems [84], rotating plasmas [95], and bi-isotropic dielectrics supporting a chiral magnetic current [96]. For \(\omega>\omega_{c}\), the RP is always negative. Nevertheless, it is necessary to pay attention to the interval \(\omega_{c}<\omega<\omega_{r}\), where the refractive index \(n_{R}\) has an imaginary piece and the RCP wave is absorbed. At \(\omega=\omega_{r}\), the real piece of \(n_{R}\) undergoes a sharp change (see Fig. 3), which also appears in the RP profile of Fig. 12. We can safely claim that both modes associated with \(n_{L}\) and \(n_{R}\) propagate for \(\omega>\omega_{-}\), a range in which the RP magnitude decreases monotonically with \(\omega\), approaching its asymptotic value, \(-V_{0}/2\) (see Fig. 12). Assuming the limit where \(\omega>>(\omega_{p},\omega_{c})\), we can write \[n_{L,R}\approx 1\pm\frac{V_{0}}{2\omega}+\frac{V_{0}^{2}}{8\omega^{2}}-\frac{ \omega_{p}^{2}}{2\omega\left(\omega\pm\omega_{c}\right)}, \tag{58}\] so that the rotatory power is \[\delta_{LR}\approx-\frac{V_{0}}{2}-\frac{\omega_{p}^{2}\omega_{c}}{2\omega^{ 2}}. \tag{59}\] Note that, taking the limit \(V_{0}\to 0\), the usual Faraday effect RP (37) is recovered in the high-frequency regime. It is also interesting to point out that the Faraday effect disappears for a null magnetic field, \(\omega_{c}=0\). However, the birefringence still remains, due to the presence of the chiral term, which yields the following RP: \[\delta\approx-V_{0}/2. \tag{60}\] For the condition (53), the RP (56) also exhibits a sign reversal and a profile very similar to the one of Fig. 12, in such a way that it will not be depicted here. Considering now the refractive indices \(n_{E}\) and \(n_{R}\), the rotatory power is: \[\delta_{ER}=-\frac{\omega}{2}\left(\text{Re}[n_{E}]-\text{Re}[n_{R}]\right), \tag{61}\] or, \[\delta_{ER}=-\frac{\omega}{2}\text{Re}\left[V_{0}/\omega-\sqrt{R_{+}}-\sqrt{R _{-}}\right]. \tag{62}\] Recalling that the LCP wave associated with \(n_{E}\) has conventional free propagation for \(\omega<\omega_{+}\) and propagation with negative refractive index (\(n_{E}<0\)) for \(\omega>\omega_{+}\) (with \(\omega_{+}<\omega_{c}\)), the RP magnitude is enhanced in the latter zone. This behavior is depicted in Fig. 13, which shows the RP (62) for \(n_{E}\) given by the condition (52), \(R_{+}>0\). 
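Both the rotatory power and the dichroism coefficient follow from the real and imaginary parts of the same complex indices, so Eqs. (55)-(57), and the dichroism definition of Eq. (64) below, can be evaluated in a few lines; the sketch uses the illustrative parameters of Fig. 12 (\(\omega_{c}=\omega_{p}\), \(V_{0}=\omega_{p}\)).

```python
import numpy as np

def rp_and_dichroism(w, wp, wc, V0):
    """Rotatory power, Eq. (55), and dichroism coefficient, Eq. (64),
    from the complex indices n_L and n_R of Eqs. (45)-(46)."""
    Rm = 1.0 + (V0 / (2 * w))**2 - wp**2 / (w * (w - wc))
    Rp = 1.0 + (V0 / (2 * w))**2 - wp**2 / (w * (w + wc))
    n_R = -V0 / (2 * w) + np.sqrt(Rm + 0j)
    n_L = +V0 / (2 * w) + np.sqrt(Rp + 0j)
    delta = -0.5 * w * (n_L.real - n_R.real)     # Eq. (55)
    delta_d = -0.5 * w * (n_L.imag - n_R.imag)   # Eq. (64)
    return delta, delta_d

def reversal_frequency(wp, wc, V0):
    """Sign-reversal frequency of Eq. (57)."""
    return np.sqrt(wc**2 + 0.5 * wp**2
                   - wp**2 * np.sqrt(4 * wc**2 + V0**2) / (2 * V0))

wp = wc = V0 = 1.0
w_hat = reversal_frequency(wp, wc, V0)
print(w_hat)                                      # about 0.618 rad/s here
print(rp_and_dichroism(0.9 * w_hat, wp, wc, V0))  # RP < 0 below w_hat
print(rp_and_dichroism(1.1 * w_hat, wp, wc, V0))  # RP > 0 above w_hat
```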
The RP is positive for \(\omega<\omega_{c}\) and negative for \(\omega_{c}<\omega<\omega^{\prime\prime}\), becoming positive again for \(\omega>\omega^{\prime\prime}\), where \(\omega^{\prime\prime}\) is the reversal frequency. For \(n_{E}\) given by the condition (53), the RP is depicted in Fig. 14, revealing a small reversion at \(\omega^{\prime\prime}<\omega_{c}\). Note that the increasing RP with \(\omega\), depicted in Figs. 13 and 14, is due to the negative behavior of the index \(n_{E}\) for \(\omega>\omega_{+}\). In the asymptotic limit, where \(\omega>>(\omega_{p},\omega_{c})\), the RP (62) goes as \[\delta_{ER}\approx\omega-\frac{V_{0}}{2}, \tag{63}\] presenting a predominantly linear behavior in \(\omega\), as appears in Figs. 13 and 14. It is also worth mentioning that the limit \(V_{0}\to 0\), implying \(\delta\approx\omega\), does not stand as a valid result for the usual magnetized plasma, since the RP (62) is not defined for achiral cold plasmas.

Figure 12: The solid blue line represents the rotatory power (56) defined by the refractive indices \(n_{L}\) and \(n_{R}\), for the condition (52). The dashed black line corresponds to the usual rotatory power (36). Here, we have used \(\omega_{c}=\omega_{p}\), \(V_{0}=\omega_{p}\), and \(\omega_{c}=1\) rad \(s^{-1}\).

Figure 13: Solid blue lines: plot of the rotatory power (62) associated with the refractive indices \(n_{E}\) and \(n_{R}\) for the condition (52). The dashed line represents the usual rotatory power (36). Here, we have used \(\omega_{c}=\omega_{p}\), \(V_{0}=\omega_{p}\), and \(\omega_{c}=1\) rad \(s^{-1}\). The inset plot highlights the behavior of \(\delta\) around \(\omega=\omega^{\prime\prime}\).

### Dichroism coefficients As is well known, absorption depends on the magnitude of the imaginary parts of the refractive indices. When one mode is more absorbed than the other, dichroism occurs. Considering the refractive indices \(n_{L}\) and \(n_{R}\), the circular dichroism coefficient is \[\delta_{dLR}=-\frac{\omega}{2}\left(\text{Im}[n_{L}]-\text{Im}[n_{R}]\right). \tag{64}\] Considering the condition (52), only \(n_{R}\) has an imaginary part (localized in the interval \(\omega_{c}<\omega<\omega_{-}\)), while \(n_{L}\) is real for \(\omega>0\). In this case, the dichroism coefficient is given by \[\delta_{dLR}=\begin{cases}0,&\text{ for }0<\omega<\omega_{c},\\ +\frac{\omega}{2}\sqrt{R_{-}},&\text{ for }\omega_{c}<\omega<\omega_{r},\\ 0,&\text{ for }\omega>\omega_{r},\end{cases} \tag{65}\] being non-null only in the range \(\omega_{c}<\omega<\omega_{r}\), as shown in Fig. 15.

Figure 15: Plot of the dichroism coefficient (65) (red solid lines) associated with the refractive indices \(n_{L}\) and \(n_{R}\), under the condition (52). The black dashed line represents the usual dichroism coefficient (39). Here \(\omega_{c}=\omega_{p}\), \(V_{0}=(3/2)\,\omega_{c}\), and \(\omega_{c}=1\) rad \(s^{-1}\).

Considering the condition (53), both \(n_{R}\) and \(n_{L}\) have non-null imaginary parts, in the intervals \(\omega_{c}<\omega<\omega_{r}\) and \(\omega_{i}<\omega<\omega_{f}\), respectively. The dichroism coefficient is null for \(0<\omega<\omega_{i}\), \(\omega_{f}<\omega<\omega_{c}\), and \(\omega>\omega_{r}\), being non-null only for \[\delta_{dLR}=\begin{cases}-\frac{\omega}{2}\sqrt{R_{+}},&\text{ for }\omega_{i}<\omega<\omega_{f},\\ +\frac{\omega}{2}\sqrt{R_{-}},&\text{ for }\omega_{c}<\omega<\omega_{r},\end{cases} \tag{66}\] whose general behavior is exhibited in Fig. 16.

Figure 16: Plot of the dichroism coefficient (66) (solid red lines) associated with the refractive indices \(n_{L}\) and \(n_{R}\), under the condition (53). The dashed line represents the usual dichroism coefficient (39). Here, we have set \(\omega_{c}=\omega_{p}\), \(V_{0}=0.7\omega_{p}\), and \(\omega_{c}=1\) rad \(s^{-1}\).

For the refractive indices \(n_{E}\) and \(n_{R}\), the circular dichroism coefficient is \[\delta_{dER}=-\frac{\omega}{2}\left(\text{Im}[n_{E}]-\text{Im}[n_{R}]\right). \tag{67}\] If we consider \(n_{E}\) under the condition (52), the same behavior as in Fig. 15 is obtained, since \(n_{E}\) is always real, not contributing to the dichroism. On the other hand, regarding now the condition (53), both \(n_{R}\) and \(n_{E}\) have non-zero imaginary parts (in the intervals \(\omega_{c}<\omega<\omega_{r}\) and \(\omega_{i}<\omega<\omega_{f}\), respectively). In this case, we have \[\delta_{dER}=\begin{cases}0,&\text{for }0<\omega<\omega_{i},\\ +\frac{\omega}{2}\sqrt{R_{+}},&\text{for }\omega_{i}<\omega<\omega_{f},\\ 0,&\text{for }\omega_{f}<\omega<\omega_{c},\\ +\frac{\omega}{2}\sqrt{R_{-}},&\text{for }\omega_{c}<\omega<\omega_{r},\\ 0,&\text{for }\omega>\omega_{r}.\end{cases} \tag{68}\] The general behavior of the dichroism coefficient (68) is illustrated in Fig. 17. ## VI Final remarks In this work, we have examined the propagation of electromagnetic waves in a cold magnetized plasma in the context of the chiral MCFJ electrodynamics, describing the implied optical effects as well. We have adopted an MCFJ timelike background vector in order to represent the chirality factor that breaks parity. Starting from the modified Maxwell equations and employing the usual methods, we obtained four modified refractive indices, given by Eqs. (45) and (46), associated with circularly polarized propagating modes. Such indices were analyzed in detail in Secs. IV.1-IV.4, where some of them exhibited significant modifications, such as the index \(n_{R}\), see Fig. 3. It presents negative refraction behavior in the range \(\omega_{c}<\omega<\omega_{-}\), in which propagation with absorption occurs for \(\omega_{c}<\omega<\omega_{r}\) and free (metamaterial) propagation for \(\omega_{r}<\omega<\omega_{-}\). The usual counterpart index presents only pure absorption in this range. Optical effects of this system, involving birefringence and dichroism, were discussed in Sec. V, considering the refractive indices \(n_{L}\), \(n_{R}\), and \(n_{E}\). In Sec. V.1, the RP \(\delta_{LR}\) was introduced, see Eq. (56), exhibiting sign reversion at \(\omega=\hat{\omega}\) for the conditions (52) and (53). The RP \(\delta_{ER}\) also exhibits a sign change, at \(\omega=\omega^{\prime\prime}>\omega_{c}\) for the condition (52) and at \(\omega=\omega^{\prime\prime}<\omega_{c}\) under the condition (53), as shown in Figs. 13 and 14, respectively. Such an RP reversal is not usual in cold plasmas, being reported in graphene systems [84], rotating plasmas [95], Weyl metals and semimetals with low electron density and chiral conductivity [92; 93], and bi-isotropic dielectrics with magnetic chiral conductivity [96]. Comparing our results with the rotating plasma scenario of Ref. [95], differences appear. In the rotating plasma, the RP undergoes reversal and decays as \(1/\omega^{2}\) for high frequencies. In the present case, the rotatory power tends to the asymptotic value \(-V_{0}/2\), see Eq. (60), or increases with \(\omega\) when it involves the negative refraction index, see Eq. (63). 
These distinct RP properties may provide a channel to optically characterize chiral cold plasmas. Besides the nonconventional reversal effect, the RP can also be enhanced when it is defined in the negative refraction zone. Such an enhancement occurs for \(\delta_{ER}\), given in Eq. (62), for \(\omega>\omega_{+}\) (zone in which \(n_{E}\) is negative), being a topic of interest in metamaterial plasmas [116; 117; 118; 119]. Dichroism was examined in Sec. V.2, where the coefficients \(\delta_{dLR}\) and \(\delta_{dER}\) have been shown to be non-null only in the range \(\omega_{c}<\omega<\omega_{r}\) for the condition (52), see Fig. 15, and in the intervals \(\omega_{c}<\omega<\omega_{r}\) and \(\omega_{i}<\omega<\omega_{f}\) for the condition (53), in accordance with Figs. 16 and 17.

###### Acknowledgements.

The authors express their gratitude to FAPEMA, CNPq, and CAPES (Brazilian research agencies) for their invaluable financial support. M.M.F. is supported by FAPEMA Universal/01187/18, CNPq/Produtividade 311220/2019-3 and CNPq/Universal/422527/2021-1. P.D.S.S. is supported by FAPEMA BPD-12562/22. Furthermore, we are indebted to CAPES/Finance Code 001 and FAPEMA/POS-GRAD-02575/21.
2306.09071
An overview of desorption parameters of Volatile and Complex Organic Molecules: A systematic dig on experimental literature
Many molecules observed in the interstellar medium are thought to result from thermal desorption of ices. Parameters such as desorption energy and pre-exponential frequency factor are essential to describe the desorption of molecules. Experimental determinations of these parameters are missing for many molecules, including those found in the interstellar medium. The objective of this work is to expand the number of molecules for which desorption parameters are available, by collecting and re-analysing experimental temperature programmed desorption data that are present in the literature. Transition State Theory (TST) is used in combination with the Redhead equation to determine desorption parameters. Experimental data and molecular constants (e.g., mass, moment of inertia) are collected and given as input. Using the Redhead-TST method, the desorption parameters for 133 molecules have been determined. The Redhead-TST method is found to provide reliable results that agree well with desorption parameters determined with more rigorous experimental methods. The importance of using accurately determined pre-exponential frequency factors to simulate desorption profiles is emphasised. The large amount of data allows us to look for trends, the most important being the relationship log$_{10}$($\nu$) = 2.65ln($m$) + 8.07, where $\nu$ is the pre-exponential frequency factor and $m$ the mass of the molecule. The data collected in this work allow us to model the thermal desorption of molecules and help understand changes in chemical and elemental composition of interstellar environments.
N. F. W. Ligterink, M. Minissale
2023-06-15T12:01:09Z
http://arxiv.org/abs/2306.09071v1
# An overview of desorption parameters of Volatile and Complex Organic Molecules: A systematic dig on experimental literature

###### Abstract

Context:Many molecules observed in the interstellar medium are thought to result from thermal desorption of ices. Parameters such as desorption energy and pre-exponential frequency factor are essential to describe the desorption of molecules. Experimental determinations of these parameters are missing for many molecules, including those found in the interstellar medium.

Aims:The objective of this work is to expand the number of molecules for which desorption parameters are available, by collecting and re-analysing experimental temperature programmed desorption data that are present in the literature.

Methods:Transition State Theory (TST) is used in combination with the Redhead equation to determine desorption parameters. Experimental data and molecular constants (e.g., mass, moment of inertia) are collected and given as input.

Results:Using the Redhead-TST method, the desorption parameters for 133 molecules have been determined. The Redhead-TST method is found to provide reliable results that agree well with desorption parameters determined with more rigorous experimental methods. The importance of using accurately determined pre-exponential frequency factors to simulate desorption profiles is emphasised. The large amount of data allows us to look for trends, the most important being the relationship \(\log_{10}(\nu)=2.65\mathrm{ln}(m)+8.07\), where \(\nu\) is the pre-exponential frequency factor and \(m\) the mass of the molecule.

Conclusions:The data collected in this work allow us to model the thermal desorption of molecules and help understand changes in chemical and elemental composition of interstellar environments.

## 1 Introduction

Desorption of molecules from and adsorption of gaseous species on a surface play a pivotal role in regulating physical processes and setting the chemical composition of environments in the interstellar medium (ISM), star- and planet-forming regions, and solar system objects. For example, the chemical compositions of hot cores and corinos, compact regions of warm and molecule-rich gas surrounding protostars, are largely explained by the desorption of species from ice-coated dust grains (e.g., Ligterink et al. 2018, 2020, 2021, 2022; Bøgelund et al. 2019; Gorai et al. 2020; Yang et al. 2021; Hsu et al. 2022; Nazari et al. 2022; Bianchi et al. 2022; Zhang et al. 2023). The temperature of interstellar grains and their ice mantles dictates which molecules adsorb and consequently take part in chemical reactions (Jin & Garrod 2020; Garrod et al. 2022). The molecular composition of comets like 67P/Churyumov-Gerasimenko is largely set by which species have remained frozen since its formation or have frozen-out on its surface since (e.g., Mumma & Charnley 2011; Goesmann et al. 2015; Altwegg et al. 2016; Rubin et al. 2019). Frozen molecules are found on the surfaces of planets and moons in the solar system, where seasonal changes alter sublimation rates and in turn affect atmospheric processes and chemical composition (Fray & Schmitt 2009), for example on Triton (Bertrand et al. 2022) or Pluto (Johnson et al. 2021). To interpret observational data and model physical and chemical processes, empirical equations are employed to describe the desorption/adsorption process. For these equations, molecule-specific parameters need to be known.
The theoretical framework and experimental methods for thermal desorption studies are well described in astrophysical and astrochemical literature (see reviews by Burke & Brown 2010; Minissale et al. 2022). In short, to simulate the desorption rate the Polanyi-Wigner equation is generally used:
\[-\frac{dN}{dt}=\nu_{\mathrm{n}}\cdot N^{\mathrm{n}}\cdot\exp\left(-\frac{E_{\mathrm{des}}}{T}\right), \tag{1}\]
where \(\nu_{\mathrm{n}}\) is the pre-exponential frequency factor in units of molecules\({}^{1-\mathrm{n}}\) s\({}^{-1}\) (also often denoted as \(A_{\mathrm{n}}\)), \(N\) the surface coverage in molecules cm\({}^{-2}\) (also often denoted as \(\theta\)), n (= 0, 1, 2) the order of desorption, \(E_{\mathrm{des}}\) the desorption energy in K, and \(T\) the temperature of the surface. Zeroth order desorption is associated with multilayer desorption, while first order desorption is associated with (sub)monolayer desorption. The desorption energy can also be given in Joule by changing the exponent to \(-E_{\rm des}\,k_{\rm B}^{-1}\,T^{-1}\) (\(k_{\rm B}=\) Boltzmann constant) or in Joule mol\({}^{-1}\) by changing it to \(-E_{\rm des}\,R^{-1}\,T^{-1}\) (\(R=\) ideal gas constant). Second order desorption is possible, and is for example observed with processes such as recombinative desorption, where two species react to form the desorbing product. However, as this type of desorption is hardly encountered within the astrochemical literature, second order desorption is ignored in the remainder of this publication.
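To make the interplay of these parameters concrete, the following minimal sketch (not from this paper; the parameter values are illustrative) integrates Eq. (1) for first-order desorption along a linear heating ramp \(T=T_{0}+\beta t\) and locates the peak desorption temperature of the resulting TPD trace.

```python
import numpy as np

def tpd_trace(E_des, nu, beta, T0=10.0, T_end=250.0, N0=1e15, dT=0.01):
    """First-order Polanyi-Wigner TPD trace, Eq. (1), with E_des in K.

    Returns the temperature grid, the desorption rate -dN/dt, and the
    peak desorption temperature T_peak."""
    T = np.arange(T0, T_end, dT)
    N = np.empty_like(T)
    N[0] = N0                          # initial coverage, molecules cm^-2
    dt = dT / beta                     # linear ramp: dT = beta * dt
    for i in range(1, T.size):         # explicit Euler integration
        rate = nu * N[i - 1] * np.exp(-E_des / T[i - 1])
        N[i] = max(N[i - 1] - rate * dt, 0.0)
    rate = nu * N * np.exp(-E_des / T)
    return T, rate, T[np.argmax(rate)]

# Illustrative values only: E_des = 5000 K, nu = 1e15 s^-1, beta = 1 K min^-1.
T, rate, T_peak = tpd_trace(E_des=5000.0, nu=1e15, beta=1.0 / 60.0)
print(f"T_peak = {T_peak:.1f} K")      # ~126 K for these assumed inputs
```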
There are a variety of experimental methods to determine the desorption parameters n, \(\nu\), and \(E_{\rm des}\), most of them based on the Temperature Programmed Desorption (TPD) technique or a variation thereof. In a typical TPD experiment, a surface held at low temperature under vacuum conditions is exposed to and covered with a given adsorbate. Next, the temperature of the surface is linearly increased. At some point the adsorbate will start desorbing and continues to do so until the adsorbate is fully removed from the surface. The release of adsorbate to the gas-phase can be traced with a variety of instruments, but usually a mass spectrometry technique is employed. The measured desorption trace can be analysed to find the desorption parameters, for example with leading edge, Redhead, heating variation, inversion, or Arrhenius analysis (e.g., King 1975; De Jong & Niemantsverdriet 1990a,b; Tait et al. 2005a).

There are a number of limitations to the experimental determination of desorption parameters, specifically relating to experimental efforts and safety. Currently, the number of molecules detected in the interstellar medium is about 270\({}^{1,2}\) (McGuire 2022) and more are detected every year. Because experiments are time consuming, it is challenging to keep up with the number of detections and provide desorption parameters for all species. Furthermore, some molecules are difficult to work with, either because they are chemically unstable (e.g., PH\({}_{2}\)COOH, CH\({}_{3}\)OOH, c-C\({}_{3}\)H\({}_{4}\)O) or highly toxic (e.g., HCN, CH\({}_{3}\)NCO, H\({}_{2}\)P(O)OH). To bridge the gap between the availability of experimentally determined desorption parameters and the needs of the community, alternative approaches are needed. Computational techniques such as Machine Learning (Villadsen et al. 2022), Bayesian inference (Heyl et al. 2022), DFT calculations (Ferrero et al. 2022; Piacentino & Öberg 2022), or quantum mechanical methods (Germain et al. 2022; Bovolenta et al. 2022; Tinacci et al. 2022) help fill the gap. However, there also exists a rich experimental literature of TPD experiments that have been used for the identification of molecules produced in experiments that simulate chemical processes in extraterrestrial ice (for brevity named "chemical TPDs"), but not to assess their desorption parameters. Because this type of data is in essence the same as what is used for the determination of desorption parameters, it raises the question if chemical TPD traces can be used to determine pre-exponential factors and desorption energies and in this way contribute more of these essential parameters to the literature.

Footnote 1: [https://cdms.astro.uni-koeln.de/classic/molecules](https://cdms.astro.uni-koeln.de/classic/molecules)

Footnote 2: [http://astrochymist.org/astrochymist_mole.html](http://astrochymist.org/astrochymist_mole.html)

In this study, chemical TPD traces are collected from laboratory literature and analysed with a combination of Transition State Theory and the Redhead method (Redhead-TST) to determine the pre-factor and desorption energies of 133 molecules. The methods are presented in Sect. 2 and the resulting data in Sect. 3. Implications for astrophysical and astrochemical studies are discussed in Sect. 4.

## 2 Methods

This work makes use of an analysis method based on the Redhead equation and Transition State Theory, and is indicated as the Redhead-TST method throughout this manuscript. Furthermore, desorption energies and pre-exponential factors are obtained from TPD data that are used for molecule identification. The Redhead-TST method and the data set are introduced in the following sections and a visual summary is presented in Fig. 1.

### Redhead-TST formalism

To determine the desorption energies of molecules from TPD data, the Redhead equation is used (Redhead 1962; King 1975):
\[E_{\rm Redhead}=T_{\rm peak}\cdot\left(\ln\left(\frac{\nu_{\rm TST}T_{\rm peak}}{\beta}\right)-3.64\right) \tag{2}\]
This equation takes the peak of the desorption trace (\(T_{\rm peak}\), K) in combination with the heating rate (\(\beta\), K s\({}^{-1}\)) and a pre-exponential factor (\(\nu\), s\({}^{-1}\)) to determine \(E_{\rm des}\) in K energy units. This equation only applies to first order desorption, which generally applies to (sub-)monolayer coverage. It is not used for zeroth order desorption processes, which is usually the case for multilayer desorption. Redhead (1962) showed that for \(\nu/\beta\) values of \(10^{8}-10^{13}\) K\({}^{-1}\), the relation between \(E_{\rm des}\) and \(T_{\rm peak}\) is nearly linear, within a \(\pm 1.5\%\) accuracy. For values of \(\nu/\beta>10^{13}\) K\({}^{-1}\), we verified that this relationship holds by comparing literature results with those retrieved from the Redhead equation when the same parameters are used (see Sect. 3.2). While the Redhead equation is not the most accurate analysis method to determine desorption energies (De Jong & Niemantsverdriet 1990a,b), the simplicity of this equation makes it very suitable for the analysis of TPD data that have been recorded for other purposes, such as molecule identification, rather than their desorption parameters. Data of this type can be low in signal-to-noise ratio or have a poorly defined desorption trace shape, which makes other methods, such as the leading edge analysis (De Jong & Niemantsverdriet 1990a,b), less suitable to analyse it with.
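A minimal sketch of Eq. (2) (illustrative; the inputs echo the assumed values of the trace simulated above):

```python
import numpy as np

def redhead_energy(T_peak, beta, nu):
    """Desorption energy in K from the Redhead equation, Eq. (2)."""
    return T_peak * (np.log(nu * T_peak / beta) - 3.64)

# T_peak = 126 K, beta = 1 K min^-1, nu = 1e15 s^-1 (assumed, as above).
E_des = redhead_energy(T_peak=126.0, beta=1.0 / 60.0, nu=1e15)
print(f"E_Redhead = {E_des:.0f} K")   # ~5.0e3 K, recovering the input of the sketch above
```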
The value for \(\nu\) used in the Redhead equation is usually assumed and taken to be \(10^{12}-10^{13}\) s\({}^{-1}\). While these values are suitable for small molecules and atoms, they significantly underestimate the pre-exponential factor for larger species with more degrees of freedom. In this work, the pre-exponential factor is therefore calculated by Transition State Theory (TST), following the equation:
\[\nu_{\rm TST}=\frac{k_{\rm B}\cdot T_{\rm peak}}{h}\cdot q_{\rm tr,2D}^{\ddagger}\cdot q_{\rm rot,3D}^{\ddagger}, \tag{3}\]
where \(k_{\rm B}\) is the Boltzmann constant and \(h\) the Planck constant. This formalism is adopted from Minissale et al. (2022), which in turn is based on work by Tait et al. (2005b). In short, TST takes the difference in rotational and translational degrees of freedom between the adsorbed and transition state into account. In equation 3, \(q_{\rm tr,2D}^{\ddagger}\) and \(q_{\rm rot,3D}^{\ddagger}\) are the 2D translational partition function and the 3D rotational partition function, respectively. The translational partition function is two-dimensional because the dimension orthogonal to the surface is assumed to be common to both adsorbed and desorbed molecules. \(q_{\rm tr,2D}^{\ddagger}\) is given by:
\[q_{\rm tr,2D}^{\ddagger}=\frac{A}{\Lambda^{2}}. \tag{4}\]
The parameter \(A\) is the surface area of each adsorbed molecule, which is fixed to 10\({}^{-19}\) m\({}^{2}\) and is the inverse of the generally assumed number of binding sites (that is, 1\(\times\)10\({}^{15}\) cm\({}^{-2}\)). For large molecules this value could be different, but for simplicity we adopt a uniform value. \(\Lambda\) is the thermal wavelength of the molecule and calculated in the following way:
\[\Lambda=\frac{h}{\sqrt{2\ \pi\ m_{\rm molecule}\ k_{\rm B}\ T_{\rm peak}}} \tag{5}\]
In this equation, \(m_{\rm molecule}\) is the mass of the particle in kg. Finally, the rotational partition function, \(q_{\rm rot,3D}^{\ddagger}\), is given as:
\[q_{\rm rot,3D}^{\ddagger}=\frac{\sqrt{\pi}}{\sigma h^{3}}\cdot(8\pi^{2}k_{\rm B}T_{\rm peak})^{3/2}\cdot\sqrt{I_{\rm x}I_{\rm y}I_{\rm z}} \tag{6}\]
Here, \(\sigma\) is the symmetry factor of the molecule and indicates the number of indistinguishable orientations of the particle. \(I_{\rm x}\), \(I_{\rm y}\), and \(I_{\rm z}\) are the principal moments of inertia for rotation of the particle. The moments of inertia are determined using a rigid rotor approximation and chemical structures from the ChemSpider\({}^{3}\) database. These structures are calculated with a Dreiding force field based geometry optimisation and are not a full quantum mechanical treatment. For a handful of molecules their structures were not available in this database and in these instances they have been calculated with the Avogadro\({}^{4}\) software. These equations are only applicable to molecules consisting of more than two atoms. Tait et al. (2005b) note that this TST method gives a good approximation of the pre-exponential factor, but can overestimate the value. Adsorbates are assumed to be immobile on the surface and therefore have no rotational or translational degrees of freedom when bound (in other words, \(q_{\rm ads}=1\)). Since some molecules are found to migrate on the surface to sites with higher binding energies, they have some degrees of freedom on the surface, which results in \(q_{\rm ads}>1\). With a mobile adsorbate, \(q_{\rm tr,2D}^{\ddagger}\cdot q_{\rm rot,3D}^{\ddagger}/q_{\rm ads}\) will therefore be lower, thus lowering the pre-exponential factor.

Footnote 3: [http://www.chemspider.com](http://www.chemspider.com)

Footnote 4: Avogadro: an open-source molecular builder and visualisation tool. Version 1.2.0 [http://avogadro.cc/](http://avogadro.cc/)
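A compact sketch of Eqs. (3)-(6) (not the authors' code; the H\({}_{2}\)O constants below are approximate literature values used only as a sanity check):

```python
import numpy as np

KB, H, AMU = 1.380649e-23, 6.62607015e-34, 1.66053907e-27  # SI units
A_SITE = 1.0e-19                     # m^2, area per adsorbed molecule (Sect. 2.1)

def nu_tst(mass_amu, sigma, I_xyz, T_peak):
    """TST pre-exponential factor, Eqs. (3)-(6).

    mass_amu: molecule mass (amu); sigma: symmetry factor;
    I_xyz: the three principal moments of inertia (kg m^2); T_peak (K)."""
    m = mass_amu * AMU
    lam = H / np.sqrt(2 * np.pi * m * KB * T_peak)    # thermal wavelength, Eq. (5)
    q_tr2d = A_SITE / lam**2                          # Eq. (4)
    q_rot3d = (np.sqrt(np.pi) / (sigma * H**3)        # Eq. (6)
               * (8 * np.pi**2 * KB * T_peak)**1.5
               * np.sqrt(np.prod(I_xyz)))
    return (KB * T_peak / H) * q_tr2d * q_rot3d       # Eq. (3)

# Approximate H2O inputs (18 amu, sigma = 2, T_peak ~ 150 K):
nu = nu_tst(18.0, 2, [1.0e-47, 1.9e-47, 3.0e-47], 150.0)
print(f"nu_TST = {nu:.1e} s^-1")   # ~4e15 s^-1, of the order of the 4.96e15 s^-1 of Sect. 2.1
```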
We stress that TST suffers from some limitations. One of the main approximations is that all molecules in the transition state reach the Boltzmann distribution. This may not be the case for large molecules, and it is the reason why TST could fail to treat them. It is not easy to quantitatively define "large", but based on the results of the present work and on Minissale et al. (2022), we can tentatively claim that TST starts to fail when the molecule presents more than 20 atoms. Moreover, TST neglects quantum effects, which are important at low temperatures or when the chemical reaction involves tunnelling. We point out that this is a second-order limitation since, except for H\({}_{2}\) or D\({}_{2}\) or other peculiar cases, desorption occurs at temperatures where classical effects overcome quantum effects by some orders of magnitude.

Several species considered in this work are salt complexes, which consist of two molecules that have engaged in an acid-base reaction and are present as a cation-anion pair. For these species, it is generally not possible to determine the moments of inertia and subsequently the pre-exponential factor with TST. To determine the desorption energies of these salts, we take their pre-factor as the sum of the pre-factors of the individual acid and base. The following values are used for individual components: 4.96\(\times\)10\({}^{15}\) s\({}^{-1}\) for H\({}_{2}\)O, 1.63\(\times\)10\({}^{17}\) s\({}^{-1}\) for HCN, and 1.94\(\times\)10\({}^{15}\) s\({}^{-1}\) for NH\({}_{3}\) (Minissale et al. 2022) and 2.9\(\times\)10\({}^{17}\) s\({}^{-1}\) for HNCO, 3.9\(\times\)10\({}^{17}\) s\({}^{-1}\) for CH\({}_{3}\)NH\({}_{2}\), 6.9\(\times\)10\({}^{19}\) s\({}^{-1}\) for NH\({}_{2}\)COOH, 2.0\(\times\)10\({}^{18}\) s\({}^{-1}\) for HCOOH, 1.3\(\times\)10\({}^{19}\) s\({}^{-1}\) for CH\({}_{3}\)COOH, and 1.7\(\times\)10\({}^{21}\) s\({}^{-1}\) for the Acetaldehyde Ammonia Trimer (AAT, this work).

### Data set

The data used in this paper have been collected from a wide variety of publications in the laboratory astrochemistry and surface science literature. All these publications make use of TPD experiments in combination with a mass spectrometry technique, such as a quadrupole mass spectrometer (QMS) or time-of-flight MS (TOF-MS), to detect and identify molecules. To determine \(E_{\rm Redhead}\) and \(\nu_{\rm TST}\), the peak desorption temperature \(T_{\rm peak}\) and heating rate \(\beta\) are collected and presented in Table A.2, which also lists the relevant molecular constants: mass, \(\sigma\), \(I_{\rm x}\), \(I_{\rm y}\), and \(I_{\rm z}\). The heating rate is usually indicated in the publication, but in case \(\beta\) is not provided it is set to 1 K min\({}^{-1}\). This matches most heating rates applied in astrochemical laboratory work, but we note that heating rates in this data set range from 0.1 K min\({}^{-1}\) to over 1 K s\({}^{-1}\). Peak desorption temperatures can be indicated in the text of the publication or be determined by eye from TPD traces presented in figures.

Figure 1: Visual summary of the Redhead-TST method. The pre-exponential frequency factor (\(\nu\)) is determined with Transition State Theory (TST). The main inputs are the translational (\(q_{\rm tr,2D}^{\ddagger}\)) and rotational (\(q_{\rm rot,3D}^{\ddagger}\)) partition functions, which increase as the molecule transitions from a solid to gaseous state, and \(T_{\rm peak}\) of the molecule. The Redhead equation takes \(T_{\rm peak}\), the heating rate \(\beta\), and \(\nu_{\rm TST}\) as main inputs to calculate the desorption energy \(E_{\rm des}\).
The absolute uncertainty on the recorded temperature is generally 0.5-2.0 K, depending on the measurement technique used (e.g., thermocouple, diode). However, by-eye analysis of TPD data is inherently more inaccurate and the \(T_{\mathrm{peak}}\) uncertainty is therefore uniformly set to \(\pm\)5 K. Combined with the uncertainties of the equations used, we apply a uniform uncertainty of \(\pm\)10% on all determined desorption energies.

Table A.2 also lists details about the substrate material, precursor molecules (i.e., starting molecule or mixture of molecules), and the ice processing source. We note that most data are collected for molecules that are formed in-situ, instead of as pure or mixed deposited ices. This means that while compositions of the ice at the beginning of the experiment are given, by the time the TPD is started this composition has changed because new molecules have been formed. In fact, some of the listed molecules may form in reactions that are promoted by heating during the TPD. This makes characterising the binding environment of a molecule challenging, as the target molecule does not only interact with the substrate or surrounding precursor species, but potentially with a host of molecules that are formed during processing of the ice. Therefore, when the ice is processed and the target species is formed in-situ, the binding environment is assumed to consist of a combination of the substrate material (e.g., metal, carbon, etc.) and a residue that is a mix of different and undefined organic molecules. All molecules are assumed to be present at (sub)monolayer coverage (1 ML \(\approx\) 1\(\times\)10\({}^{15}\) molecules cm\({}^{-2}\)), unless there is clear mention that multilayer quantities (\(\gg\)1 ML) of product are formed, for example determined from IR spectroscopic measurements, in which case the molecule is excluded from the data set.

It is possible that the molecule under investigation co-desorbs with another matrix species, either a precursor molecule or a new species that is abundantly formed during the processing of the ice. Because co-desorbing species have a peak desorption temperature that is governed by the matrix molecule instead of the binding of the targeted species to the surface, these molecules are excluded from the data set. For precursor molecules (e.g., H\({}_{2}\)O, CH\({}_{3}\)OH) desorption temperatures are often known and therefore any target molecule that has a \(T_{\mathrm{peak}}\) close to this is considered to be co-desorbing and excluded. However, during processing new molecules that act as a co-desorption matrix can be formed. Because the full molecular inventory is not always described, it is difficult to identify co-desorption in these cases. Only if there is mention or a strong suspicion that co-desorption is occurring is the entry omitted.

A variety of ice processing techniques are used, ranging from UV and X-ray processing to hydrogenation and electron/ion bombardment. We note that thermal processing is also a possibility, but this is only listed when it is explicitly mentioned as a step in the molecule formation process.
The TPD process itself is not considered thermal processing. Finally, data of (sub)monolayer desorption studies presented in the literature have also been collected and are presented in Table A.2. In some cases only \(\nu_{\mathrm{lit}}\) and \(E_{\mathrm{lit}}\) are presented, while in other cases sufficient information is available (i.e., \(T_{\mathrm{peak}}\) and \(\beta\)) that \(\nu_{\mathrm{TST}}\) and \(E_{\mathrm{Redhead}}\) can also be determined.

## 3 Results and Discussion

In Table A.2 data of 133 molecules from 132 publications have been collected, for a total of 328 entries. Information on the CHNOPS elemental composition, functional groups, and binding surfaces in the data set is presented in Fig. 2. The vast majority of molecules contain one or more carbon and hydrogen atoms, while a substantial fraction contains at least one oxygen atom. About one third of the molecules included contain at least one nitrogen atom, while phosphorus and sulfur are present in a minor percentage of the data set. In terms of functional groups, we find that mostly alcohols, aldehydes or ketones, and amines are covered in the data set. Contrary to what is found in the ISM (e.g., Cernicharo et al., 2020; McGuire et al., 2020; Marcelino et al., 2021; Lee et al., 2021, 2022; Cernicharo et al., 2022), cyanides only make up a small portion of this data set and ethers and formates represent the smallest fraction of the functional groups.

Figure 2: Occurrences of CHNOPS atoms (top panel), functional groups (middle panel), and binding surfaces (bottom panel) of the entries present in the data set used in this paper. The category “other” in the bottom panel includes surfaces like KBr and MgF\({}_{2}\) windows.

This figure highlights the large availability of experimental data on oxygen-bearing molecules, while relatively few molecules are covered that contain nitrogen, phosphorus, or sulfur atoms. There are two primary explanations for this. First, many precursor molecules with N, P, or S, such as HCN, PH\({}_{3}\), and H\({}_{2}\)S, are more difficult to work with in the laboratory or toxic and therefore avoided by experimental researchers. In turn, fewer TPD data on the formation of molecules containing these kinds of atoms are available. At the same time the limited amount of data available on N-, P-, and S-bearing species may also indicate that ice chemistry is less efficient at forming such molecules or that molecules containing these atoms are refractory and do not desorb at temperatures \(\leq\)300 K. Finally, the cold surfaces for these studies are dominated by metallic ones (e.g., Au, Ag, Pt). Surfaces that are more relevant to the ISM, such as crystalline or amorphous water, highly oriented pyrolytic graphite (HOPG), graphene, silica (SiO\({}_{2}\)), and silicates (SiO\({}_{4}^{4-}\)) make up a smaller percentage of the data set. Only few ice chemistry experiments make use of surfaces that are not metallic (e.g., Potapov et al. 2022). Since many molecules in the data set are obtained from such experiments, this explains the dominance of metallic surfaces in the data set. However, as mentioned earlier, these molecules are produced in-situ, likely together with a mixture of other complex organic molecules that can remain on the surface at temperatures \(\geq\)300 K. Consequently, the binding surface of the target molecule will be a combination of the metallic surface and an organic molecular residue.
This environment may be a relevant analogue to dust grains in the ISM, which are presumably coated in a layer of organic molecules once water-ice is removed from their surfaces.

In the following sections a deeper look will be taken at the performance of the Redhead-TST method and at the collected desorption energy and pre-exponential factor data.

### Influence of TPD data on the Redhead-TST output

The Redhead-TST method relies on TPD data, which can show major differences in \(T_{\rm peak}\) between experiments. Consequently, this influences the derived \(\nu_{\rm TST}\) and \(E_{\rm Redhead-TST}\) values. Examples of this are shown in Fig. 3, where \(\nu_{\rm TST}\) and \(E_{\rm Redhead-TST}\) of all entries of CH\({}_{3}\)CH\({}_{2}\)CHO (propionaldehyde), CH\({}_{3}\)COCOCH\({}_{3}\) (2,3-butanedione), NH\({}_{2}\)OH (hydroxylamine), (CH\({}_{2}\)OH)\({}_{2}\) (ethylene glycol), HOCHCHOH (1,2-ethenediol), HOCH\({}_{2}\)CH(OH)CHO (glyceraldehyde), and HOCH(CH\({}_{2}\)OH)\({}_{2}\) (glycerol) are shown. In all cases, marginal variations of at most one order of magnitude are seen in the pre-exponential factors for each molecule. Variations on this level will have a negligible effect on the simulated desorption profiles. However, in several cases large shifts in the retrieved desorption energy are found, with the most extreme scatter seen in CH\({}_{3}\)COCOCH\({}_{3}\) values, which range from 6920 to 12690 K. These shifts are directly correlated with the peak desorption temperature used as input for the Redhead equation.

The scatter in peak desorption temperatures, and thus desorption energies, has several explanations. First and foremost, the coverage of a species of interest is not known in the majority of cases. Therefore, molecules can span a wide range of (sub)monolayer coverages. At lower coverage, molecules have a tendency to settle in deeper binding sites, which have higher desorption energies. Consequently, there is a correlation between coverage and desorption temperature, where lower coverage results in a shift of desorption to higher temperature (see e.g., Smith et al. 2016; He et al. 2016). This effect is more pronounced on rough surfaces, which have a large range of binding sites, like Amorphous Solid Water (ASW) or the organic residue that is presumably formed in the experiments included in this work. The surface on which the experiment is conducted can also play a role. Almost all entries depicted in Fig. 3 are measured on a metallic surface, but this classification groups many different materials, such as Au, Ag, and Cu, and structures, such as rough, Pt(111), or Mo(110), together. Each metal and surface structure will result in different desorption energies, which in turn shift \(T_{\rm peak}\). This shift is exacerbated when other categories of surfaces are used, such as water ice or carbon. For the selected molecules, a clear example is seen for NH\({}_{2}\)OH. One entry is measured on graphite (Ioppolo et al. 2014) and its retrieved desorption energy is a clear outlier with respect to the other entries of this molecule. Similarly, the type of organic residue formed on a surface during an experiment can also influence the desorption characteristics. This is highlighted by CH\({}_{3}\)COCOCH\({}_{3}\), for which all entries are measured on the same type of Ag surface, but produced with different irradiated precursors, such as CH\({}_{3}\)CHO, CH\({}_{3}\)CHO:CH\({}_{3}\)COCOCH\({}_{3}\), and H\({}_{2}\)O:CH\({}_{3}\)CHO.
Since the desorption parameters of molecules covered in this study have in most cases not been reported, the values listed in this work can be used as an approximation of the pre-factor and desorption energy, for example in chemical models. However, it is important to be aware that the provided values are limited by the quality of the TPD input data.

### Redhead-TST method performance

In Fig. 4 the Redhead-TST and literature \(E_{\rm des}\) and \(\nu\) values are compared for a number of publications where both these values are available. The top panel of this figure shows the pre-factor values, which are colored based on whether the difference \(\Delta(\nu_{\rm TST}-\nu_{\rm lit})\) is smaller than \(\pm 3\) orders of magnitude (red squares) or larger than \(\pm 3\) orders of magnitude (blue circles).

Figure 3: Values of \(\nu_{\rm TST}\) and \(E_{\rm Redhead-TST}\) of all the entries of CH\({}_{3}\)CH\({}_{2}\)CHO (propionaldehyde), CH\({}_{3}\)COCOCH\({}_{3}\) (2,3-butanedione), NH\({}_{2}\)OH (hydroxylamine), (CH\({}_{2}\)OH)\({}_{2}\) (ethylene glycol), HOCHCHOH (1,2-ethenediol), HOCH\({}_{2}\)CH(OH)CHO (glyceraldehyde), and HOCH(CH\({}_{2}\)OH)\({}_{2}\) (glycerol). Symbols indicate a metallic surface (squares) or carbon surface (pentagon, NH\({}_{2}\)OH panel).

The literature \(\nu\) values that are in close agreement with their TST counterpart are found to derive from studies that employ more rigorous methods to determine the pre-factor (e.g., Tait et al. 2005b; Chaabouni et al. 2018; Behmard et al. 2019; Tylinski et al. 2020), although a number of data points are also derived from studies that in fact employ the same TST to calculate \(\nu\) (Ulbricht et al. 2006). To some extent, the large differences in pre-factors are not surprising, as in many studies \(\nu\) is assumed to be 10\({}^{12}\) or 10\({}^{13}\) s\({}^{-1}\). This tendency is visible in the top panel of Fig. 4. We conclude that the TST method gives accurate approximations of the pre-exponential factor value.

The subsequent effect of the adopted \(\nu\) on the desorption energy is visible in the bottom panel of Fig. 4. The data points in the bottom panel are again labeled according to the difference in pre-factor values. When the difference in \(\nu\) is \(\leq\)3 decades, the desorption energies are in good agreement and fall on the parity line. On the contrary, when there is a large difference between \(\nu_{\rm lit}\) and \(\nu_{\rm TST}\) there is a prominent difference between \(E_{\rm Redhead}\) and \(E_{\rm lit}\). Because in all these cases the literature pre-factor is lower than \(\nu_{\rm TST}\), the corresponding desorption energy is also lower. This comparison shows that the Redhead-TST method gives accurate desorption parameters that are in good agreement with those derived in experiments, provided these experimental parameters are determined with rigorous laboratory and analysis methods.

One may argue that the large discrepancies between \(E_{\rm Redhead-TST}\) and \(E_{\rm lit}\) when \(\nu_{\rm lit}\) is assumed will actually not affect the simulated desorption behaviour, as in either case the best-fit parameters are generated. An example of this is shown in Fig. 5, where data of the molecules CH\({}_{3}\)NCO (methyl isocyanate), CH\({}_{3}\)C(O)NH\({}_{2}\) (acetamide), and NH\({}_{2}\)C(O)NH\({}_{2}\) (carbamide) are analysed with the Redhead-TST method (red lines) and the Redhead method, while assuming \(\nu=10^{13}\) s\({}^{-1}\).
The dashed lines show the simulated desorption profiles at a heating rate of 5 K min\({}^{-1}\), the same temperature ramp as used in the studies where the TPD data of these molecules are taken from (Ligterink et al. 2017, 2018a). The peak desorption temperatures determined with the TST and the assumed pre-factor are found to be identical and match with the \(T_{\rm peak}\) value determined in the laboratory. However, if the heating rate is changed to 1 K century\({}^{-1}\), a value that is applicable to interstellar environments, the peak desorption temperatures show a substantial deviation. The simulated profiles with the assumed \(\nu\) value, which is lower than the value determined with TST, underestimate the peak desorption temperature by about 10%. This example highlights the importance of determining desorption parameters, including the pre-exponential factor, as accurately as possible. However, it is important to be aware of the nuances of heating in interstellar environments. Grains can be heated by photons and, in the case of very small particles and energetic photons, the grains can be flash heated at rates in the order of K s\({}^{-1}\). In these cases desorption energies with assumed pre-exponential factors might be just as applicable, as Fig. 5 shows. In general, it is recommended to use as accurate a value as possible.

### Data trends

Due to the large amount of collected data, it is possible to investigate trends in pre-factor values and desorption energies. In Fig. 6 \(E_{\rm des}\) (top panels) and \(\nu\) (bottom panels) are plotted against the molecule mass in amu (left column) and the number of atoms of the molecule (right column). Data points obtained with the Redhead-TST method are indicated in red, while those taken from literature are presented in blue. In addition, the recommended desorption parameters of molecules from various surfaces listed in Minissale et al. (2022) have been added as green points (data from their Tables 2 and 3).

Figure 4: Comparison between \(\nu\) (top panel) and \(E_{\rm des}\) (bottom panel) values obtained from literature sources, but which have also been analysed with the Redhead-TST method. Red squares indicate entries for which the difference between \(\nu_{\rm TST}\) and \(\nu_{\rm lit}\) is smaller than three orders of magnitude, whereas the blue circles indicate those with a difference larger than three orders of magnitude.

Figure 5: Desorption traces of CH\({}_{3}\)NCO, CH\({}_{3}\)C(O)NH\({}_{2}\), and NH\({}_{2}\)C(O)NH\({}_{2}\) with desorption parameters obtained with the Redhead-TST method (red) and the Redhead method, assuming a pre-exponential factor of 10\({}^{13}\) s\({}^{-1}\) (blue). Peak desorption temperatures are indicated in the plot for each species. The desorption profile is simulated with a first-order Polanyi-Wigner equation, surface coverage of 1\(\times\)10\({}^{15}\) molecules cm\({}^{-2}\), and heating rate of 1 K century\({}^{-1}\) (solid lines), which is appropriate for the ISM, or the heating rate of 5 K min\({}^{-1}\) (dashed lines) used in these experiments.

Each data point is labelled by its surface category, which includes metallic (squares), carbon (pentagons), silicate (triangles), and water-ice (circles). Both Redhead-TST and literature desorption energies show a general tendency to increase with increasing molecule mass or number of atoms. However, most striking is the large spread in desorption energy values for a given molecular mass or number of atoms, which often spans more than 5000 K (top panels of Fig. 6).
This spread makes it difficult to retrieve any empirical relationship between the parameters. Even when only data of a specific surface or containing certain functional groups are selected (not shown), no trend or relationship is found. The situation is different for the pre-exponential factors. While the laboratory literature data are affected by the tendency to assume \(\nu=10^{13}\) s\({}^{-1}\), there is a clear trend in the TST data showing a relationship between \(\nu_{\rm TST}\) and the molecule mass. These data are fitted with an equation of the form \(\log_{10}(\nu)=\) a\(\ln(m)\) \(+\) b, where \(m\) is the mass of the molecule in amu (atomic mass units), and the best-fit values are found to be a = 2.65 and b = 8.07. Because both studies make use of Transition State Theory to determine the pre-factor, the data points of both this work (red) and those of Minissale et al. (2022) (green) are used for the fit. The TST data points show a scatter of about one decade around the best-fit line, an uncertainty that has a marginal effect on any simulated desorption profile. The recommended values presented by Minissale et al. (2022) are in good agreement with the found equation and show that the equation can also be used for lower mass molecules. This empirical relationship can be used to estimate the pre-exponential factor of a molecule solely based on its mass. This may find use when analysing laboratory data to determine \(E_{\rm des}\) values or as an easy way to determine pre-factors for molecules in astrochemical models (see also Sect. 4.1).
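As a one-line sketch of this empirical relationship (illustrative):

```python
import numpy as np

def nu_from_mass(mass_amu):
    """Empirical pre-factor estimate: log10(nu) = 2.65 ln(m) + 8.07."""
    return 10 ** (2.65 * np.log(mass_amu) + 8.07)

print(f"{nu_from_mass(18.0):.1e} s^-1")  # H2O: ~5e15 s^-1, close to the TST value
```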
### Desorption parameters and caveats

For several molecules a large number of entries of the same species have been collected in Table A.2. This amount of data can be useful in several cases, such as to analyse the influence of the desorption surface. However, often a single \(\nu\) and \(E_{\rm des}\) value is desired, for example to be used in chemical models. The mean values for \(\nu\) and \(E_{\rm des}\) for molecules analysed in this work with the Redhead-TST method are presented in Table 1.

It is important to note several caveats of the data provided in Table 1. First, these desorption parameters are the mean of the available data. Monolayer desorption is a more nuanced process and desorption energies will differ depending on coverage and available binding sites. The mean values presented in this table should only be used when no data are available from more rigorous studies and only for specific applications, such as input for astrochemical models. Second, binding environments are grouped together, but one has to be aware that the desorption energies of these molecules will be different when, for example, amorphous water ice or graphite are considered. The most prominent source of desorption parameter data comes from experiments that make use of metallic surfaces (e.g., Au, Ag, Pt substrates). Often the molecule is formed in situ together with other species, and then the binding surface can be seen as a combination of a metallic surface and a refractory organic residue, which makes these environments more realistic.

In connection to the above points, it is important to note that for some molecules extensive studies have been performed to determine the pure ice (sub)monolayer desorption parameters, including on different surfaces. In particular, molecules that are often and abundantly observed in the ISM are on this list, such as CH\({}_{3}\)NH\({}_{2}\), CH\({}_{3}\)CHO, HCOOH, CH\({}_{3}\)CH\({}_{2}\)OH, CH\({}_{3}\)OCHO, CH\({}_{3}\)COOH, and HOCH\({}_{2}\)CHO (e.g. Lattelais et al. 2011; Bertin et al. 2011; Burke et al. 2015a,b,c; Chaabouni et al. 2018; Corazzi et al. 2021; Ferrero et al. 2022). While these studies are invaluable to assess the influence of different binding environments, in many cases the pre-exponential frequency factor is assumed (e.g., to be \(10^{12}\) or \(10^{13}\) s\({}^{-1}\)), which will affect the determined desorption energy. In Table A.2 the Redhead-TST analysis of these data is presented when possible. If desorption parameters on surfaces like amorphous or crystalline water, HOPG, or silicates are required, it is recommended to adopt these values. While some molecules are well studied, it is also noteworthy that some prominent interstellar molecules have not rigorously been studied in the laboratory, such as dimethyl ether (CH\({}_{3}\)OCH\({}_{3}\)), ethylene glycol ((CH\({}_{2}\)OH)\({}_{2}\)), and acetamide (CH\({}_{3}\)C(O)NH\({}_{2}\)). For some molecules like ethyl cyanide (CH\({}_{3}\)CH\({}_{2}\)CN) and vinyl cyanide (CH\({}_{2}\)CHCN) only limited data can be found in the literature (Toumi et al. 2016; Kimber et al. 2018; Couturier-Tamburelli et al. 2018) and only for the case of multilayer desorption. Because these species are routinely observed in the ISM (e.g., Nazari et al. 2022), dedicated laboratory studies of these molecules are desired. The parameters of these molecules presented in this study rely solely on chemical TPD data and should be considered the best currently available.

## 4 Astrophysical implications

### Impact of the pre-factor on astrochemical models

Astrochemical models make use of the Polanyi-Wigner equation or a variation thereof to calculate the desorption rates of molecules on dust grains (e.g., Kulterer et al. 2020). While for these models desorption energies are generally taken from experimental and theoretical literature, the \(\nu\) value is calculated with an empirical formula, often the harmonic oscillator relation presented in Hasegawa et al. (1992):
\[\nu=\sqrt{2k_{\rm B}n_{\rm ss}E_{\rm des}/\pi^{2}m}, \tag{7}\]
which uses the molecule mass \(m\), the molecule binding energy \(E_{\rm des}\), and the number of binding sites per grain surface area \(n_{\rm ss}\) as input to calculate the pre-factor. Except for small molecules and atoms, this equation underestimates the value of \(\nu\), often by multiple decades (Minissale et al. 2022). This is also found for molecules considered in this study. Figure 7 shows the difference between \(\nu_{\rm TST}\) and \(\nu_{\rm Hasegawa}\) and visualises that for the majority of cases \(\nu_{\rm TST}\) is at least seven decades larger than the pre-factor calculated with the Hasegawa equation.
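A sketch of Eq. (7) (with \(n_{\rm ss}=10^{15}\) cm\({}^{-2}\) as assumed throughout this work; the comparison value is taken from Table 1):

```python
import numpy as np

KB, AMU = 1.380649e-23, 1.66053907e-27   # J/K, kg
N_SS = 1.0e19                            # binding sites per m^2 (= 1e15 cm^-2)

def nu_hasegawa(E_des_K, mass_amu):
    """Harmonic-oscillator pre-factor of Hasegawa et al. (1992), Eq. (7)."""
    m = mass_amu * AMU
    return np.sqrt(2 * N_SS * KB * E_des_K / (np.pi**2 * m))

# HNCO (43 amu) with E_des = 5154 K from Table 1:
print(f"{nu_hasegawa(5154.0, 43.0):.1e} s^-1")   # ~1.4e12 s^-1,
# about five decades below the nu_TST of 2.9e17 s^-1 listed for HNCO.
```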
The difference in adopted pre-factor can have a strong impact on astrochemical models. Combining a desorption energy that is determined with Redhead-TST or any other accurate method with a pre-factor determined by the Hasegawa equation can severely misrepresent the desorption behaviour of the molecule (see Fig. 2 in Ceccarelli et al. 2022). This is exemplified in Fig. 8 for the molecules CH\({}_{3}\)NCO, CH\({}_{3}\)C(O)NH\({}_{2}\), and NH\({}_{2}\)C(O)NH\({}_{2}\). For these species \(E_{\rm Redhead-TST}\) is determined with data from Ligterink et al. (2017) and Ligterink et al. (2018a) and subsequently plotted with \(\nu_{\rm TST}\) (red) and \(\nu_{\rm Hasegawa}\) (blue). The desorption profiles (left panel) plotted with \(\nu_{\rm Hasegawa}\) are all shifted to higher temperatures and misrepresent the actual desorption profile.

\begin{table}
\begin{tabular}{l c c c|l c c c}
\hline \hline
Molecule & \(\nu\) & \(E_{\rm des}\) & \(T_{\rm peak,ISM}\) & Molecule & \(\nu\) & \(E_{\rm des}\) & \(T_{\rm peak,ISM}\) \\
 & s\({}^{-1}\) & K & K & & s\({}^{-1}\) & K & K \\
\hline
C\({}_{2}\)H\({}_{4}\) & 9.1e+15 & 2602 & 45 & CH\({}_{3}\)CHCHOH & 1.6e+19 & 8330 & 125 \\
C\({}_{2}\)H\({}_{6}\) & 4.8e+16 & 2773 & 46 & CH\({}_{3}\)OCH\({}_{2}\)OH & 1.9e+19 & 8471 & 127 \\
C\({}_{3}\)H\({}_{8}\) & 6.5e+17 & 3721 & 59 & (CH\({}_{3}\))\({}_{2}\)NCHO & 6.7e+19 & 8785 & 129 \\
CH\({}_{2}\)CHCH\({}_{3}\) & 3.9e+17 & 3709 & 60 & [NH\({}_{4}^{+}\)][OCN\({}^{-}\)] & 2.9e+17 & 8117 & 129 \\
CH\({}_{3}\)P\({}_{2}\)H\({}_{2}\) & 4.4e+17 & 4662 & 74 & CH\({}_{3}\)OCH\({}_{2}\)CH\({}_{3}\) & 2.1e+19 & 8633 & 129 \\
CH\({}_{3}\)I & 9.3e+17 & 4758 & 75 & CH\({}_{3}\)CHCH & 1.1e+20 & 8915 & 130 \\
CH\({}_{3}\)CCH & 1.9e+17 & 4624 & 75 & CH\({}_{3}\)CH\({}_{2}\)CH\({}_{2}\)OH & 3.8e+19 & 8909 & 132 \\
C\({}_{4}\)H\({}_{10}\) & 5.7e+18 & 4946 & 76 & c-C\({}_{3}\)H\({}_{5}\)O & 5.8e+18 & 8766 & 133 \\
CH\({}_{3}\)CHO & 9.7e+17 & 4882 & 77 & CH\({}_{3}\)CH\({}_{2}\)O & 3.0e+20 & 9502 & 136 \\
H\({}_{2}\)SO\({}_{2}\) & 2.6e+18 & 5035 & 78 & CH\({}_{2}\)CH(OH)CH\({}_{3}\) & 3.3e+19 & 9415 & 139 \\
H\({}_{2}\)CO & 1.7e+17 & 4895 & 79 & CH\({}_{3}\)COOCH\({}_{3}\) & 1.4e+20 & 9621 & 139 \\
N\({}_{2}\)H\({}_{2}\) & 5.9e+16 & 4891 & 81 & NH\({}_{2}\)OH & 1.0e+18 & 8954 & 140 \\
C\({}_{3}\)H\({}_{2}\)O & 2.9e+15 & 5319 & 82 & [NH\({}_{4}^{+}\)][CH\({}_{3}\)COO\({}^{-}\)] & 1.3e+19 & 9408 & 141 \\
HNCO & 2.9e+17 & 5154 & 83 & NH\({}_{2}\)CH\({}_{2}\)CN & 2.4e+19 & 9529 & 142 \\
CH\({}_{2}\)NH & 5.3e+16 & 5106 & 84 & CH\({}_{3}\)CH\({}_{2}\)CH\({}_{5}\)H\({}_{5}\) & 3.5e+20 & 10126 & 144 \\
CH\({}_{3}\)OOCH\({}_{4}\) & 4.2e+18 & 5560 & 86 & H\({}_{2}\)POCOH & 6.4e+19 & 9852 & 144 \\
CH\({}_{3}\)CHCH\({}_{2}\)O & 4.4e+18 & 5715 & 88 & e-H\({}_{2}\)O\({}_{3}\)O & 1.2e+19 & 9605 & 144 \\
CH\({}_{3}\)CH\({}_{3}\)OCHO & 1.6e+19 & 5833 & 88 & H\({}_{2}\)POH & 3.0e+18 & 9446 & 145 \\
CH\({}_{3}\)OCHO & 4.7e+18 & 5715 & 88 & [NH\({}_{4}^{+}\)][HCOO\({}^{-}\)] & 2.0e+18 & 9426 & 145 \\
CH\({}_{3}\)NC & 2.9e+17 & 5657 & 91 & CH\({}_{3}\)CH\({}_{2}\)OH\({}_{3}\)OH & 4.2e+19 & 9910 & 146 \\
NO\({}_{2}\) & 3.7e+17 & 5676 & 91 & C\({}_{10}\)H\({}_{2}\) & 2.7e+21 & 10517 & 146 \\
HCOPH\({}_{2}\) & 3.7e+18 & 6006 & 92 & CH\({}_{3}\)CH\({}_{2}\)CH\({}_{2}\)SH & 9.6e+19 & 10070 & 147 \\
CH\({}_{3}\)CH\({}_{3}\)NH\({}_{2}\) & 2.3e+18 & 6033 & 94 & c-NCH\({}_{2}\)H\({}_{3}\) & 3.4e+18 & 9733 & 148 \\
P\({}_{2}\)H\({}_{4}\) & 2.2e+18 & 6183 & 96 & CH\({}_{3}\)OCOOH & 6.6e+19 & 10151 & 149 \\
CH\({}_{3}\)NH\({}_{2}\) & 4.9e+17 & 6025 & 96 & HOCH\({}_{2}\)OH & 1.1e+19 & 9911 & 149 \\
CH\({}_{3}\)CH\({}_{3}\)CHO & 8.8e+18 & 6359 & 97 & CH\({}_{3}\)CH\({}_{2}\)OH\({}_{3}\) & 3.3e+19 & 10139 & 150 \\
HCOSH & 3.8e+18 & 6301 & 97 & [CH\({}_{3}\)NH\({}_{3}^{+}\)][OCN\({}^{-}\)] & 4.2e+19 & 10214 & 151 \\
CH\({}_{3}\)COCH\({}_{3}\) & 1.0e+19 & 6459 & 98 & [CH\({}_{3}\)NH\({}_{3}^{+}\)][OCN\({}^{-}\)] & 6.8e+17 & 9688 & 152 \\
CH\({}_{3}\)CHNH\({}_{2}\) & 1.9e+18 & 6327 & 98 & HOCH\({}_{2}\)NH\({}_{2}\) & 1.2e+19 & 10138 & 152 \\
CH\({}_{2}\)CHNH\({}_{2}\) & 1.9e+18 & 6326 & 98 & CH\({}_{2}\)COH/COOH & 1.5e+20 & 10578 & 153 \\
H\({}_{3}\)SC\({}_{2}\)(HCO)O & 7.9e+18 & 6439 & 98 & C\({}_{3}\)H\({}_{3}\)OH\({}_{3}\) & 5.7e+20 & 10831 & 154 \\
C\({}_{6}\)H\({}_{14}\) & 8.5e+19 & 6874 & 101 & HOCH\({}_{2}\)CN & 3.0e+19 & 10375 & 154 \\
CH\({}_{3}\)P\({}_{2}\)H\({}_{3}\) & 1.4e+19 & 6675 & 101 & HCOCOOH & 7.8e+19 & 10838 & 158 \\
CH\({}_{3}\)SH & 1.8e+18 & 6522 & 101 & CH\({}_{3}\)NHCHOCHO & 3.1e+19 & 10708 & 158 \\
CH\({}_{3}\)OCH\({}_{3}\) & 2.0e+18 & 6503 & 101 & CH\({}_{3}\)COCOOH & 1.9e+20 & 11169 & 161 \\
CH\({}_{3}\)NCO & 6.8e+18 & 6842 & 104 & H\({}_{2}\)CO(OH)\({}_{2}\) & 3.4e+19 & 10985 & 162 \\
\hline
\end{tabular}
\end{table}
Table 1: Mean \(\nu\) and \(E_{\rm des}\) values of the molecules analysed in this work with the Redhead-TST method, and the corresponding peak desorption temperatures \(T_{\rm peak,ISM}\), calculated with Eq. (1) for a heating rate of 1 K century\({}^{-1}\).

For these examples the peak desorption temperature is about 30% higher than what it should be. In the right panel the gas and ice column densities are plotted against the radius of a protoplanetary disk, which has the average temperature profile \(T(r)=200\times(r/1\,{\rm AU})^{-0.62}\) K, as found by Andrews & Williams (2007). For each 1 K temperature step, the number of desorbed molecules is determined, which, summed, gives the gas-phase column density and, subtracted from a starting value of 1\(\times\)10\({}^{15}\) cm\({}^{-2}\), gives the ice column density. As expected from their peak desorption temperatures, the ice-gas inversion point lies at a larger radius when the accurate TST pre-factor is used. Using \(\nu_{\rm Hasegawa}\) results in the inversion point shifting \(\sim\)50% inwards.

Figure 8: Comparison of desorption profiles generated with the TST (red) or the Hasegawa (blue) pre-factor. Left panel: desorption profiles of CH\({}_{3}\)NCO, CH\({}_{3}\)C(O)NH\({}_{2}\), and NH\({}_{2}\)C(O)NH\({}_{2}\). The desorption energies are determined with the Redhead-TST method using data from Ligterink et al. (2017) and Ligterink et al. (2018a) (see Table A.2 and top left corners). Desorption profiles are plotted for \(\nu_{\rm TST}\) (red) and \(\nu_{\rm Hasegawa}\) (blue). The desorption profile is simulated with a first-order Polanyi-Wigner equation, surface coverage of 1\(\times\)10\({}^{15}\) molecules cm\({}^{-2}\), and heating rate of 1 K century\({}^{-1}\). Peak desorption temperatures decrease by \(\sim\)30% when realistic pre-factors are used. Right panel: ice (solid) and gas (dashed) abundances plotted against radius (au) of a protoplanetary disk for \(\nu_{\rm TST}\) (red) and \(\nu_{\rm Hasegawa}\) (blue). An average disk temperature profile of \(T(r)=200\times(r/1\,{\rm AU})^{-0.62}\) K is used (Andrews & Williams 2007). For each 1 K temperature step, the number of desorbed molecules is determined, which summed gives the gas-phase column density and subtracted from a starting value of 1\(\times\)10\({}^{15}\) cm\({}^{-2}\) gives the ice column density. The TST abundances are offset by a factor of 10 for easier viewing. The peak radii of the ice-gas inversion are indicated and are shown to shift outward by \(\sim\)50% when realistic pre-factors are used.

Due to its impact on astrochemical models, it is recommended to use more accurate pre-exponential factor values. One way is to implement the empirical equation presented in Sect. 3.3, with the caveat that this equation is ill suited to determine pre-factors for atoms and diatomic molecules. Better yet is to use the \(\nu\) and \(E_{\rm des}\) that belong together as presented in the literature source as direct input for the chemical model, to prevent any misrepresentation.

### Peak desorption temperatures

For simplicity and consistency, desorption profiles and peak desorption temperatures are calculated using Eq. 1 and a heating rate of 1 K century\({}^{-1}\) in this work. However, this heating rate is not appropriate for all interstellar environments and peak desorption temperatures are often determined with other equations, such as the adsorption-thermal-desorption balance presented by Hasegawa et al. (1992) (see also Nazari et al. 2021):
\[\frac{n_{\rm ice}}{n_{\rm gas}}=\frac{\pi a_{\rm grain}^{2}n_{\rm grain}\sqrt{3k_{\rm B}Tm^{-1}}}{e^{-E_{\rm des}/T}\sqrt{2k_{\rm B}n_{\rm ss}E_{\rm des}\pi^{-2}m^{-1}}}. \tag{8}\]
As input, this equation takes the grain size \(a_{\rm grain}\) (set to 0.1 \(\mu\)m), the grain number density \(n_{\rm grain}\) (set to 1.0\(\times\)10\({}^{-12}\)\(n_{\rm H}\), with \(n_{\rm H}\) the gas density), the sticking efficiency \(S\) (set to 1), the gas and grain temperature \(T\), the molecule mass \(m\), the molecule binding energy \(E_{\rm des}\), and the number of binding sites per grain surface area \(n_{\rm ss}\) to calculate the ice-to-gas ratio. The peak desorption temperature is taken as the point where ice and gas molecular abundances are equal, that is \(n_{\rm ice}/n_{\rm gas}=1\). Instead of the peak desorption temperature depending on the heating rate, in this equation it primarily depends on the gas density of the environment.

Figure 6: Desorption energy (top row) and pre-exponential factor (bottom row) plotted against molecule mass (left column) and number of atoms (right column). Values derived with the Redhead-TST method are coloured in red, recommended desorption parameters adopted from Minissale et al. (2022) in green, and other literature sources are presented in blue. Marker symbol indicates whether a molecule is desorbing from a metallic (squares), water (circles), silicate (triangle), or carbon (pentagon) surface.

Figure 7: Difference between \(\nu_{\rm TST}\) and \(\nu_{\rm Hasegawa}\) for molecules presented in this study. Frequencies normalised to one are presented.
Furthermore, we note that this equation in its original form makes use of the harmonic oscillator relation to calculate the pre-exponential frequency in the denominator. As discussed in Sect. 4.1, that relation gives inaccurate values for larger molecules and therefore the equation should rather take an accurate pre-factor into account:
\[\frac{n_{\rm ice}}{n_{\rm gas}}=\frac{\pi a_{\rm grain}^{2}n_{\rm grain}\sqrt{3k_{\rm B}Tm^{-1}}}{\nu_{\rm TST}\,e^{-E_{\rm des}/T}}. \tag{9}\]
Using the latter equation, the peak desorption temperatures were calculated for \(n_{\rm H}=1\times\)10\({}^{7}\) cm\({}^{-3}\) (appropriate for molecular clouds) and \(n_{\rm H}=1\times\)10\({}^{12}\) cm\({}^{-3}\) (appropriate for protoplanetary disk environments) and compared with the \(T_{\rm peak}\) determined from the peak of the Polanyi-Wigner desorption profile using a heating rate of \(\beta=1\) K century\({}^{-1}\). Desorption parameters listed in Table 1 were used. In Table 2 the results for a selection of twenty molecules are presented.

\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
Molecule & \(T_{\rm p,PW}\) & \(T_{\rm p,ATD}\) & Diff. & \(T_{\rm p,ATD}\) & Diff. \\
 & & \(n_{\rm H,7}\) & & \(n_{\rm H,12}\) & \\
 & K & K & \% & K & \% \\
\hline
C\({}_{2}\)H\({}_{4}\) & 45 & 43 & 96 & 54 & 119 \\
CH\({}_{3}\)CHO & 77 & 75 & 98 & 92 & 119 \\
H\({}_{2}\)CCO & 79 & 78 & 98 & 95 & 120 \\
CH\({}_{2}\)NH & 84 & 83 & 99 & 102 & 121 \\
CH\({}_{3}\)SH & 101 & 100 & 99 & 121 & 120 \\
CH\({}_{3}\)OCH\({}_{3}\) & 101 & 99 & 98 & 121 & 120 \\
CH\({}_{3}\)NCO & 104 & 102 & 98 & 124 & 119 \\
HC\({}_{2}\)CHO & 107 & 105 & 98 & 127 & 119 \\
CH\({}_{2}\)CHOH & 108 & 106 & 98 & 129 & 120 \\
(CHO)\({}_{2}\) & 109 & 107 & 99 & 130 & 119 \\
HCOOH & 111 & 109 & 98 & 133 & 119 \\
CH\({}_{3}\)CH\({}_{2}\)OH & 113 & 111 & 99 & 135 & 120 \\
CH\({}_{3}\)COOH & 116 & 114 & 98 & 138 & 119 \\
NH\({}_{2}\)OH & 140 & 139 & 99 & 169 & 121 \\
H\({}_{2}\)PCOOH & 144 & 143 & 99 & 172 & 119 \\
HOCH\({}_{2}\)CN & 154 & 152 & 99 & 184 & 119 \\
HCOCOOH & 158 & 157 & 99 & 188 & 119 \\
NH\({}_{2}\)CONH\({}_{2}\) & 196 & 195 & 99 & 235 & 120 \\
S\({}_{4}\) & 218 & 216 & 99 & 258 & 118 \\
C\({}_{60}\) & 399 & 417 & 104 & 487 & 122 \\
\hline
\end{tabular}
\end{table}
Table 2: Comparison of peak desorption temperatures obtained from the Polanyi-Wigner profile (\(T_{\rm p,PW}\), \(\beta=1\) K century\({}^{-1}\)) and the adsorption-thermal-desorption balance (\(T_{\rm p,ATD}\)) at \(n_{\rm H}=1\times10^{7}\) and \(1\times10^{12}\) cm\({}^{-3}\).

The adsorption-thermal-desorption equation at low density (\(n_{\rm H}=1\times\)10\({}^{7}\) cm\({}^{-3}\)) shows a small difference with the Polanyi-Wigner results. One aspect that affects this comparison is the definition of the peak desorption temperature of the adsorption-thermal-desorption balance equation, which is located at the point where \(n_{\rm ice}/n_{\rm gas}=1\). However, for the \(T_{\rm peak}\) from the Polanyi-Wigner equation this is usually \(n_{\rm ice}/n_{\rm gas}\leq 0.1\). Correcting for this discrepancy raises the adsorption-thermal-desorption balance peak desorption temperatures by several K. In turn, this makes \(T_{\rm peak,PW}\) and \(T_{\rm peak,ATD}\) for the low-density scenario virtually identical. Quite different is the situation for the high-density (\(n_{\rm H}=1\times\)10\({}^{12}\) cm\({}^{-3}\)) results of the adsorption-thermal-desorption balance equation, which show peak desorption temperatures that are approximately 20% higher than those obtained with the Polanyi-Wigner equation. The peak desorption temperatures listed in this study might therefore not be representative for all interstellar environments and source-specific modelling is required to accurately determine desorption fronts.
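A sketch of how Eq. (9) can be solved for the temperature where \(n_{\rm ice}/n_{\rm gas}=1\) (bisection in log space; grain parameters as given in Sect. 4.2, test values from Table 1):

```python
import numpy as np

KB, AMU = 1.380649e-23, 1.66053907e-27   # J/K, kg

def t_peak_balance(E_des_K, nu_tst, mass_amu, n_H_m3):
    """Temperature where n_ice/n_gas = 1 in Eq. (9).

    Assumes a_grain = 0.1 um and n_grain = 1e-12 * n_H, as in Sect. 4.2;
    n_H_m3 is the gas density in m^-3."""
    m = mass_amu * AMU
    a_grain, n_grain = 1.0e-7, 1.0e-12 * n_H_m3

    def log_ratio(T):        # log(n_ice/n_gas), avoids under/overflow
        accretion = np.pi * a_grain**2 * n_grain * np.sqrt(3 * KB * T / m)
        return np.log(accretion) + E_des_K / T - np.log(nu_tst)

    lo, hi = 5.0, 600.0      # bracketing temperatures (K)
    for _ in range(60):      # bisection
        mid = 0.5 * (lo + hi)
        if log_ratio(mid) > 0.0:   # still ice-dominated: temperature too low
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# HNCO (43 amu) with Table 1 values, n_H = 1e7 cm^-3 = 1e13 m^-3:
print(f"{t_peak_balance(5154.0, 2.9e17, 43.0, 1.0e13):.0f} K")  # ~81 K, cf. 83 K in Table 1
```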
### Salt desorption

Several molecules detected in the interstellar medium can be classified as acids ([AH], e.g., HCOOH, HNCO, CH\({}_{3}\)COOH) or bases ([B], e.g., NH\({}_{3}\), CH\({}_{3}\)NH\({}_{2}\)). In ice mantles, these species can engage in acid-base reactions and form organic salts via the reaction [AH] + [B] \(\rightarrow\) [A\({}^{-}\)][BH\({}^{+}\)]. Several of such salts are included in this study and their desorption energies are determined. The pre-factors of these salts are determined by adding the pre-exponential factor values of their individual components ([AH] and [B]), see Sect. 2.1. While there is chemical diversity in the salts, they often include ammonia as the base. In the interstellar medium, ammonia is also expected to be a prominent base, due to its high ice abundance (up to 10% w.r.t. H\({}_{2}\)O, Boogert et al. 2015). In all cases, \(E_{\rm des}\) of the salt is larger than the value of its individual components and therefore the salts will reside on dust grains at higher temperatures (see also e.g., Kruczkiewicz et al. 2021). This is exemplified in Fig. 9, where the desorption profiles of various salts and their corresponding acid and base are plotted. A first-order Polanyi-Wigner equation is used, with a monolayer coverage of 10\({}^{15}\) molecules cm\({}^{-2}\) and a heating rate of 1 K century\({}^{-1}\). Differences in peak desorption temperatures range from just over 10 K for the H\({}_{2}\)O-HNCO system, to more than 60 K for the CH\({}_{3}\)NH\({}_{2}\)-HNCO system.

Figure 9: Desorption profiles of acids (red), bases (blue), and the resulting salt (black). Desorption energies and pre-factors for H\({}_{2}\)O, NH\({}_{3}\), and HCN are obtained from Minissale et al. (2022), while the parameters for the remaining molecules are taken from Table 1. Peak desorption temperatures are indicated in the plot for each species. The desorption profile is simulated with a first-order Polanyi-Wigner equation, surface coverage of 10\({}^{15}\) molecules cm\({}^{-2}\), and heating rate of 1 K century\({}^{-1}\).

The ability to lock up molecules in the form of salts can have a number of implications. Large quantities of organic molecules with amino (-NH\({}_{2}\)) or carboxylic acid (-COOH) groups can be trapped in organic salt complexes and remain on dust grains and larger bodies at elevated temperatures. Recent analysis of samples collected and returned from the asteroid Ryugu shows that it contains high concentrations of amines (e.g., CH\({}_{3}\)NH\({}_{2}\)) and acids (HCOOH and CH\({}_{3}\)COOH, Naraoka et al. 2023). Because these molecules are found to not be trapped in minerals or other organic matter, the authors suggest that these species reside in the material as salts in order to explain why these volatiles are still present. Our results underline this conclusion, although questions remain as to whether the salts included in this study, which are still relatively volatile, could have survived the heating (\(\geq\) 300 K) and hydrothermal stages during the formation of Ryugu (Nakamura et al. 2022). Further desorption studies of salt complexes could help unravel in which salt form amines and acids are locked up in asteroid Ryugu.
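The salt-versus-parent temperature gap can be reproduced with the Polanyi-Wigner sketch from Sect. 2 at an ISM-like heating rate (illustrative; the HNCO and [NH\({}_{4}^{+}\)][OCN\({}^{-}\)] parameters are the values reported in this work):

```python
# Reuses tpd_trace() from the Sect. 2 sketch.
BETA_ISM = 1.0 / (100 * 3.156e7)          # 1 K per century, in K s^-1

for label, nu, E_des in [("HNCO", 2.9e17, 5154.0),
                         ("[NH4+][OCN-]", 2.9e17, 8117.0)]:
    _, _, T_peak = tpd_trace(E_des=E_des, nu=nu, beta=BETA_ISM, T_end=300.0)
    print(f"{label}: T_peak = {T_peak:.0f} K")
# -> ~82 K for HNCO versus ~128 K for the salt, close to the 83 K and 129 K of Table 1.
```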
Organic salt complexes also provide a molecular reservoir that remains available for grain-surface chemistry for a longer time (or rather, to higher temperatures), but not for gas-phase reactions. Ammonia is the dominant nitrogen carrier in observed interstellar ice, but it is also relatively volatile. Locking ammonia up in ammonium (NH\({}_{4}^{+}\)) salts could mean that a substantial atomic nitrogen reservoir is available on interstellar grains long after NH\({}_{3}\) or H\({}_{2}\)O have desorbed, see Fig. 9. Laboratory investigations of the processing of salts with energetic radiation at elevated temperatures (\(\geq\)100 K) and in water-free environments are relevant avenues to investigate the formation of prebiotic molecules (see for example Bossa et al. 2009b).

As salts desorb at higher temperatures, they can give misleading indications of sublimation fronts as determined with observations of interstellar environments. For example, Lee et al. (2022) measure the spatial distribution of various molecules toward the HH212 protoplanetary disk and find similar radial distributions of HNCO and NH\({}_{2}\)CHO. If the presence of these molecules in the gas can be entirely explained by ice desorption, this co-spatial distribution is peculiar, as these molecules have significantly different desorption parameters of 2.9\(\times\)10\({}^{17}\) s\({}^{-1}\) and 5154 K (this work) and 3.69\(\times\)10\({}^{18}\) s\({}^{-1}\) and 9561 K (Minissale et al. 2022) for HNCO and NH\({}_{2}\)CHO, respectively. One would expect that HNCO desorbs at a much lower temperature, or larger radius, than NH\({}_{2}\)CHO. However, if instead most of the HNCO available in the ice has reacted with NH\({}_{3}\), it would be locked up in the [NH\({}_{4}^{+}\)][OCN\({}^{-}\)] salt. This salt has the desorption parameters 2.9\(\times\)10\({}^{17}\) s\({}^{-1}\) and 8117 K, which are close to those of formamide. Thermal desorption of the [NH\({}_{4}^{+}\)][OCN\({}^{-}\)] salt, which upon desorption dissociates into its individual components, HNCO and NH\({}_{3}\), can therefore explain the co-spatial distribution of HNCO and NH\({}_{2}\)CHO (see also Lee et al. 2022). A short numerical illustration of this argument is given below.

Thus far, two molecules have been detected in the interstellar medium that are presumed to be part of salt complexes. These are the cyanate anion (OCN\({}^{-}\)) and the ammonium cation (NH\({}_{4}^{+}\)) (Lacy et al. 1984; Keane et al. 2001; Pontoppidan et al. 2003; Van Broekhuizen et al. 2005). Recently, these species have also been detected in observations with the new _James Webb Space Telescope_ (JWST, McClure et al. 2023).
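The following sketch (ours, not the authors' code) estimates first-order Polanyi-Wigner peak temperatures at \(\beta=1\) K century\({}^{-1}\) from the Redhead-type condition \(E_{\rm des}/T_{\rm peak}^{2}=(\nu/\beta)\exp(-E_{\rm des}/T_{\rm peak})\), using the (\(\nu\), \(E_{\rm des}\)) pairs quoted above; the century-to-seconds conversion is approximate.

```python
# Sketch: first-order Polanyi-Wigner peak temperature at beta = 1 K/century,
# from E/T^2 = (nu/beta) exp(-E/T) at the rate maximum (standard Redhead form).
import numpy as np
from scipy.optimize import brentq

BETA = 1.0 / (100.0 * 3.156e7)   # 1 K per century, in K/s (approximate)

def t_peak_pw(nu, E_des):
    f = lambda T: E_des / T**2 - (nu / BETA) * np.exp(-E_des / T)
    return brentq(f, 30.0, 500.0)

species = {"HNCO": (2.9e17, 5154.0),
           "NH2CHO": (3.69e18, 9561.0),
           "[NH4+][OCN-]": (2.9e17, 8117.0)}
for name, (nu, E) in species.items():
    print(f"{name:14s} T_peak ~ {t_peak_pw(nu, E):.0f} K")
# The salt peaks far closer to NH2CHO than bare HNCO does, illustrating the
# co-spatial HNCO/NH2CHO argument made above.
```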
\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
Molecule & \(T_{\rm p,PW}\) & \(T_{\rm p,ATD}\) (\(n_{\rm H,7}\)) & Diff. & \(T_{\rm p,ATD}\) (\(n_{\rm H,12}\)) & Diff. \\
 & K & K & \% & K & \% \\
\hline
C\({}_{2}\)H\({}_{4}\) & 45 & 43 & 96 & 54 & 119 \\
CH\({}_{3}\)CHO & 77 & 75 & 98 & 92 & 119 \\
H\({}_{2}\)CCO & 79 & 78 & 98 & 95 & 120 \\
CH\({}_{2}\)NH & 84 & 83 & 99 & 102 & 121 \\
CH\({}_{3}\)SH & 101 & 100 & 99 & 121 & 120 \\
CH\({}_{3}\)OCH\({}_{3}\) & 101 & 99 & 98 & 121 & 120 \\
CH\({}_{3}\)NCO & 104 & 102 & 98 & 124 & 119 \\
HC\({}_{2}\)CHO & 107 & 105 & 98 & 127 & 119 \\
CH\({}_{2}\)CHOH & 108 & 106 & 98 & 129 & 120 \\
(CHO)\({}_{2}\) & 109 & 107 & 99 & 130 & 119 \\
HCOOH & 111 & 109 & 98 & 133 & 119 \\
CH\({}_{3}\)CH\({}_{2}\)OH & 113 & 111 & 99 & 135 & 120 \\
CH\({}_{3}\)COOH & 116 & 114 & 98 & 138 & 119 \\
NH\({}_{2}\)OH & 140 & 139 & 99 & 169 & 121 \\
H\({}_{2}\)PCOOH & 144 & 143 & 99 & 172 & 119 \\
HOCH\({}_{2}\)CN & 154 & 152 & 99 & 184 & 119 \\
HCOCOOH & 158 & 157 & 99 & 188 & 119 \\
NH\({}_{2}\)CONH\({}_{2}\) & 196 & 195 & 99 & 235 & 120 \\
S\({}_{4}\) & 218 & 216 & 99 & 258 & 118 \\
C\({}_{60}\) & 399 & 417 & 104 & 487 & 122 \\
\hline
\end{tabular}
\end{table}
Table 2: Comparison of peak desorption temperatures.

Figure 9: Desorption profiles of acids (red), bases (blue), and the resulting salt (black). Desorption energies and pre-factors for H\({}_{2}\)O, NH\({}_{3}\), and HCN are obtained from Minissale et al. (2022), while the parameters for the remaining molecules are taken from Table 1. Peak desorption temperatures are indicated in the plot for each species. The desorption profile is simulated with a first-order Polanyi-Wigner equation, surface coverage of 10\({}^{15}\) molecules cm\({}^{-2}\), and heating rate of 1 K century\({}^{-1}\).

The unprecedented sensitivity of JWST opens up two avenues of research. First, it is possible to look for other salt components at lower abundance or with weaker spectroscopic features, for example the cyanide anion (CN\({}^{-}\)) and carboxylate anions (R-COO\({}^{-}\)). Second, searches for salts in water-poor or water-free interstellar environments, for example molecular clouds that interact with the warm gas of a protostellar outflow or the edges of photon-dominated regions (PDRs), can be conducted. Because the bulk ice species have been removed from the grains in these regions, the less abundant salt and organic species are easier to detect. Furthermore, the presence and shape of salt spectroscopic signatures will provide information about the chemical and physical history of the environment.

### Chemical and elemental composition above the water-snow line

Within star- and planet-forming environments, temperature gradients and the associated desorption fronts of molecules, also called snow lines, play an important role in forming planetary objects and setting their chemical and elemental composition. Examples of such lines are the nitrogen and water snow lines or the soot line, which is driven by the sublimation of large carbon-dominated molecules such as polycyclic aromatic hydrocarbons (PAHs). These lines are invoked to explain the high atomic nitrogen abundance in Jupiter or the comparatively low carbon content of Earth (Bosman et al., 2019; Oberg and Bergin, 2021; Li et al., 2021). Because of their high abundances, observational, modelling, and experimental efforts have focused on the main ice species to investigate desorption fronts. Since most organic molecules except for CH\({}_{3}\)OH and CH\({}_{4}\) are found at low abundances, these species will not affect the elemental composition of a protoplanetary disk or create snow lines that are observable.
However, the combined inventory of organic molecules may affect the elemental composition. _ISO_ and _Spitzer_ IR observations of various interstellar sources have found significant features in the 5-8 \(\mu\)m region (Gibb and Whittet, 2002; Boogert et al., 2008). These features have, at least in part, been assigned to an organic residue consisting of a variety of molecules, including HCOOH and HCOO\({}^{-}\). Therefore, the data presented in this paper are used in this section to investigate how a large reservoir of diverse organic molecules affects elemental ratios and the chemical composition, for example in a protoplanetary disk.

Elemental ratios of hydrogen, nitrogen, oxygen, phosphorus, and sulphur over carbon are determined for a molecular inventory that consists of the species listed in this work, with desorption parameters presented in Table 1, supplemented by bulk ice species with parameters taken from Table 3 of Minissale et al. (2022). The ice abundances are set with respect to water. The following fractions are used: CO, N\({}_{2}\), and CO\({}_{2}\) at 0.25, CH\({}_{3}\)OH at 0.1, O\({}_{2}\), CH\({}_{4}\), NH\({}_{3}\), and H\({}_{2}\)CO at 0.05, and H\({}_{2}\)S, CS, and HCN at 0.01. Combined, the aforementioned species are indicated as the bulk ice species. All remaining organic molecules listed in this work, as well as C\({}_{2}\)H\({}_{2}\), CH\({}_{3}\)CN, and NH\({}_{2}\)CHO taken from Minissale et al. (2022), are included in a low- and a high-fraction scenario, where each organic molecule contributes at the 10\({}^{-4}\) and 10\({}^{-3}\) level, respectively. The desorption profiles, including ice and gas abundances, are simulated with a first-order Polanyi-Wigner equation, variable abundances according to the fractions with respect to water, and a heating rate of 1 K century\({}^{-1}\). The elemental composition is obtained by multiplying the ice or gas abundance of each molecule with its elemental composition (e.g., for CH\({}_{3}\)OH: carbon = abundance \(\times\) 1, hydrogen = abundance \(\times\) 4, and oxygen = abundance \(\times\) 1) and subsequently summing all contributions to an element (a minimal sketch of this bookkeeping is given below).

The resulting ice elemental compositions for [Hydrogen]/[Carbon], [Nitrogen]/[Carbon], [Oxygen]/[Carbon], and [Sulphur]/[Carbon] are shown in Fig. 10. Figure 11 in the Appendix shows the same plot, but with elemental ratios plotted versus protoplanetary disk radius, using an average protoplanetary disk temperature profile \(T(r)=200\times(r/1\,{\rm au})^{-0.62}\) K (Andrews and Williams, 2007). The elemental ratio [Phosphorus]/[Carbon] is shown in Fig. 12 in the Appendix. The bulk ice elemental ratios are plotted in blue, while the addition of a low (10\({}^{-4}\)\(\times\)[H\({}_{2}\)O]) and a high (10\({}^{-3}\)\(\times\)[H\({}_{2}\)O]) fraction of organic molecules is plotted in red and green, respectively. The peak desorption temperatures of several bulk ice species are indicated, as well as the elemental composition of the extraterrestrial substances Soluble Organic Matter (SOM, Sephton, 2002) and Insoluble Organic Matter (IOM, Alexander et al., 2017). Both SOM and IOM are regularly extracted from meteorites, and SOM shows similarities with the organic material measured on several solar system objects, such as comet 67P/Churyumov-Gerasimenko (Hanni et al., 2022).
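A minimal sketch (ours, not the authors' pipeline) of the elemental bookkeeping described above: multiply each molecule's abundance by its element counts and sum. The abundances are the bulk-ice fractions listed in the text; the formula parser is an ad hoc helper for this illustration.

```python
# Sketch: elemental ratios [X]/[C] from abundances (w.r.t. H2O) and formulas.
import re
from collections import defaultdict

bulk_ice = {"H2O": 1.0, "CO": 0.25, "N2": 0.25, "CO2": 0.25, "CH3OH": 0.1,
            "O2": 0.05, "CH4": 0.05, "NH3": 0.05, "H2CO": 0.05,
            "H2S": 0.01, "CS": 0.01, "HCN": 0.01}

def element_counts(formula):
    # parse e.g. "CH3OH" into {"C": 1, "H": 4, "O": 1}
    counts = defaultdict(int)
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] += int(num) if num else 1
    return counts

totals = defaultdict(float)
for mol, abundance in bulk_ice.items():
    for elem, n in element_counts(mol).items():
        totals[elem] += abundance * n          # abundance times element count

for elem in ("H", "N", "O", "S"):
    print(f"[{elem}]/[C] = {totals[elem] / totals['C']:.2f}")
```

The same loop, run per 1 K temperature step on the surviving ice inventory, yields the temperature-dependent curves of Fig. 10.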
In this work, we use the average elemental composition of C\({}_{100}\)H\({}_{155}\)N\({}_{3}\)O\({}_{20}\)S\({}_{3}\) for SOM obtained from the Murchison meteorite (Schmitt-Kopplin et al., 2010) and the average IOM composition of C\({}_{100}\)H\({}_{77}\)O\({}_{14}\)N\({}_{3}\)S\({}_{2}\) presented in Alexander et al. (2017).

First, we note that the addition of organic material affects the elemental ratios at low temperatures, when bulk ice dominates. The combined contribution of organic elemental carbon lowers all elemental ratios. The effect is most pronounced for the high-fraction organic matter model, where it can lower ratios by up to 100%. In this scenario we only take the contribution of the 133 molecules studied in this work into account. In a natural environment the diversity of the organic components may be many times larger (e.g., Hanni et al., 2022) and therefore have an even larger impact on elemental ratios. Second, we note the importance of the organic material in setting the ice elemental composition once the bulk ice species are removed from the solid state. In our model, all the bulk ice species are lost before \(\sim\)115 K, while the species that contribute to the nitrogen and sulphur budget are gone at lower temperatures of \(\sim\)100 K and \(\sim\)75 K, respectively. These differences in bulk ice desorption explain the dips seen in [N]/[C] and [S]/[C] between 75 and 100 K. At these points the nitrogen (NH\({}_{3}\)) and sulphur (CS) reservoirs have desorbed, but the main carrier of carbon (CH\({}_{3}\)OH) is still present. It is likely that these dips will be less pronounced in reality, due to the trapping of the volatile nitrogen and sulphur carriers in the dominating water phase, which desorbs together with methanol.

After desorption of the bulk ice reservoir, the organic material starts to dominate the overall elemental composition on the dust grains. The elemental ratios are biased towards the molecules included in this work, and the assumption that molecules contribute equally to the total molecular budget is unlikely to be realistic. Therefore, care needs to be taken with any interpretation of Fig. 10. However, some general observations can be made. First, at low temperatures (\(\leq\)115 K) or larger radii (\(\geq\)3 au), the bulk ices dominate the elemental budget and the ratios are seen to vary in a stepwise manner. Objects formed in this regime will have predictable elemental compositions, depending on the "step" they form at. However, in the warmer disk regions closer to the (proto)star, where water has desorbed, the elemental budget is set by the large mixture of organic molecules, which desorb at different temperatures. Consequently, the elemental composition is seen to be much more varied, especially in [Hydrogen]/[Carbon] and [Oxygen]/[Carbon]. Objects that form in this regime could thus be expected to have a much more varied, or less predictable, elemental signature. It is interesting to point out that the organic molecules that remain on the grains at \(T\geq\)100 K show similarities with, and in some cases are identical to, the molecules used by Kudo et al. (2002) to determine the sticking velocity of organics-coated grains. These authors found that organics-coated grains rapidly coagulate in the 2.3-3.0 au region of a disk. This region is in good agreement with the location where we find most organic molecules still coating the grains after water has desorbed. Finally, the evolution of various functional chemical groups is plotted in Fig. 11.
The abundances of the functional groups have been determined in the same way as the elemental abundances. Ratios of amides ([-NC(O)-]), cyanides ([-CN]), alcohols ([-OH]), carboxylic acids ([-COOH]), aldehydes + ketones ([-C(O)-]), esters ([-C(O)O-]), and ethers ([-O-]) with respect to amines ([-NH\({}_{2}\)]) are plotted. This plot reflects several observations already made in Sect. 3 about the occurrence of certain functional groups in the data set, for example that the alcohol (-OH) group is most abundant. It also shows that some of these functional groups can be quickly depleted as the environment warms up. In particular, esters and ethers will be removed quickly. On the other hand, amides and amines seem to remain present up to higher temperatures. In this way we see how the chemical make-up and average properties of an organics-coated grain may change with temperature, which could potentially be used as a tracer of the thermal history of a particle in the solar system. However, it is important to acknowledge that this view is biased towards relatively small and volatile organic molecules. For example, IOM is rich in ether (-O-) cross-links (Remusat et al. 2005), and because this substance is very refractory, it will ensure that the ether functional group remains a prominent component of the organic coating of grains at elevated temperatures.

Figure 11: Functional chemical group composition [X]/[-NH\({}_{2}\)] for oxygen-bearing groups -OH, -COOH, -C(O)-, -C(O)O-, and -O- (solid red lines) and nitrogen-bearing groups -NC(O)- and -CN (dashed blue lines) as a function of temperature. The desorption profiles are simulated with a first-order Polanyi-Wigner equation, variable surface coverages, and a heating rate of 1 K century\({}^{-1}\).

Figure 10: Ice elemental composition as [X]/[Carbon] for Hydrogen, Nitrogen, Oxygen, and Sulphur as a function of temperature. Peak desorption temperatures for selected species are indicated in the plot (vertical dashed lines). Elemental compositions of Soluble Organic Matter (SOM) obtained from the Murchison meteorite (Schmitt-Kopplin et al. 2010) and averaged Insoluble Organic Matter (IOM, Alexander et al. 2017) are indicated (horizontal dashed lines). The desorption profiles are simulated with a first-order Polanyi-Wigner equation, variable surface coverages, and a heating rate of 1 K century\({}^{-1}\). Low and high OM represent the low and high fractions of organic molecules, respectively.

## 5 Conclusion

This study presents a large number of desorption parameters, that is, desorption energies (\(E_{\rm des}\)) and pre-exponential frequency factors (\(\nu\)). These parameters will find use in astrochemical models and help to understand the evolution of the chemical and elemental composition in star- and planet-forming regions. Because the list of molecules is dominated by organic molecules and salts of medium volatility, these data are of particular importance for assessing the surface chemistry in regions where water ice has been removed from dust grains. To expand the number of molecules for which desorption parameters are available, experimental temperature programmed desorption data have been collected from the literature and analysed with the Redhead Transition State Theory (Redhead-TST) method to determine the pre-exponential frequency factor (\(\nu\)) and the desorption energy (\(E_{\rm des}\)).
A comparison with literature \(\nu\) and \(E_{\rm des}\) values shows that the Redhead-TST method provides reliable results that are on par with the results of rigorous experimental methods. We emphasise that the usage of accurately determined pre-factor values, instead of assumed values or the often-used Hasegawa equation, is essential to properly simulate the desorption profiles of molecules. Due to the large amount of data collected in this study, trends can be searched for. No relationship between the desorption energy and molecule mass or number of atoms is found, but a relationship between the pre-factor and molecule mass, in the form of log\({}_{10}(\nu)\) = 2.65 ln(\(m\)) + 8.07, is; it can be used to determine this parameter in future studies. Mean desorption parameters are provided and used to highlight how the desorption of these species can affect chemical and elemental compositions.

###### Acknowledgements.

The authors thank E.F. van Dishoeck, E.G. Bøgelund, and C. Ceccarelli for helpful discussions and feedback. Thanks go out to K.-J. Chuang for making unpublished TPD plots of CH\({}_{2}\)CHNH\({}_{2}\) and CH\({}_{3}\)CHNH available for analysis. The authors thank the many astrochemists and surface scientists who have contributed data to the literature on which this study relies. N.F.W.L. acknowledges support from the Swiss National Science Foundation (SNSF) Ambizione grant 193453 and NCCR Planets. M.M. acknowledges the French national programme "Physique et Chimie du Milieu Interstellaire" (PCMI) of CNRS/INSU with INC/INP co-funded by CEA and CNES.
2303.14198
Non-standard modalities in paraconsistent Gödel logic
We introduce a paraconsistent expansion of the G\"{o}del logic with a De Morgan negation $\neg$ and modalities $\blacksquare$ and $\blacklozenge$. We equip it with Kripke semantics on frames with two (possibly fuzzy) relations: $R^+$ and $R^-$ (interpreted as the degree of trust in affirmations and denials by a given source) and valuations $v_1$ and $v_2$ (positive and negative support) ranging over $[0,1]$ and connected via $\neg$. We motivate the semantics of $\blacksquare\phi$ (resp., $\blacklozenge\phi$) as infima (suprema) of both positive and negative supports of $\phi$ in $R^+$- and $R^-$-accessible states, respectively. We then prove several instructive semantical properties of the logic. Finally, we devise a tableaux system for the finitely branching fragment and establish the complexity of satisfiability and validity.
Marta Bilkova, Sabine Frittella, Daniil Kozhemiachenko
2023-03-24T17:28:19Z
http://arxiv.org/abs/2303.14198v1
# Non-standard modalities in paraconsistent Gödel logic

###### Abstract

We introduce a paraconsistent expansion of the Gödel logic with a De Morgan negation \(\neg\) and modalities \(\blacksquare\) and \(\blacklozenge\). We dub the logic \(\mathsf{G}^{2\pm}_{\blacksquare,\blacklozenge}\) and equip it with Kripke semantics on frames with two (possibly fuzzy) relations: \(R^{+}\) and \(R^{-}\) (interpreted as the degree of trust in affirmations and denials by a given source) and valuations \(v_{1}\) and \(v_{2}\) (positive and negative support) ranging over \([0,1]\) and connected via \(\neg\). We motivate the semantics of \(\blacksquare\phi\) (resp., \(\blacklozenge\phi\)) as infima (suprema) of both positive and negative supports of \(\phi\) in \(R^{+}\)- and \(R^{-}\)-accessible states, respectively. We then prove several instructive semantical properties of \(\mathsf{G}^{2\pm}_{\blacksquare,\blacklozenge}\). Finally, we devise a tableaux system for \(\mathsf{G}^{2\pm}_{\blacksquare,\blacklozenge}\) over finitely branching frames and establish the complexity of satisfiability and validity.

Keywords: Gödel logic, modal logic, non-standard modalities, constraint tableaux

## 1 Introduction

When aggregating information from different sources, two of the simplest strategies are as follows: either one is sceptical and cautious regarding the information the sources provide, thus requiring that they agree, or one is credulous and trusts one's sources. In the classical setting, these two strategies can be modelled with \(\Box\) and \(\lozenge\) modalities defined on Kripke frames where states are sources, the accessibility relation represents references between them, and \(w\vDash\phi\) is construed as '\(w\) says that \(\phi\) is true'. However, the sources can contradict themselves or be silent regarding a given question (as opposed to providing a clear denial). Furthermore, a source can attach a degree to its confirmation or denial. In all of these cases, classical logic struggles to formalise reasoning with such information.
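As a speculative toy reading of the modal clauses sketched in the abstract (not the paper's official semantics, which is developed later): states carry a positive and a negative support \(v_1, v_2 \in [0,1]\), and \(\blacksquare\) takes infima of both supports over \(R^+\)- and \(R^-\)-accessible states, here weighted with the Gödel implication \(a \Rightarrow b\) (equal to 1 if \(a \leq b\), else \(b\)). All names and the exact clauses below are our illustrative guesses.

```python
# A toy, speculative sketch of bi-relational evaluation; not the paper's definition.
def godel_impl(a, b):
    return 1.0 if a <= b else b

def box(w, states, R_plus, R_minus, v1, v2, phi):
    # infimum of positive support over R+ and of negative support over R- (our guess)
    pos = min((godel_impl(R_plus[(w, u)], v1[(u, phi)]) for u in states), default=1.0)
    neg = min((godel_impl(R_minus[(w, u)], v2[(u, phi)]) for u in states), default=1.0)
    return pos, neg   # (positive support, negative support) of box(phi) at w

states = ["s", "t"]
R_plus = {("s", "s"): 1.0, ("s", "t"): 0.7, ("t", "s"): 0.0, ("t", "t"): 1.0}
R_minus = {("s", "s"): 0.4, ("s", "t"): 1.0, ("t", "s"): 0.0, ("t", "t"): 0.2}
v1 = {("s", "p"): 0.9, ("t", "p"): 0.3}   # degrees to which sources affirm p
v2 = {("s", "p"): 0.2, ("t", "p"): 0.8}   # degrees to which sources deny p
print(box("s", states, R_plus, R_minus, v1, v2, "p"))   # (0.3, 0.2)
```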
2305.06612
Spectral Analysis and Hydrodynamic Manifolds for the Linearized Shakhov Model
We perform a complete spectral analysis of the linearized Shakhov model involving two relaxation times $\tau_{\rm fast}$ and $\tau_{\rm slow}$. Our results are based on spectral functions derived from the theory of finite-rank perturbations, which allows us to infer the existence of a critical wave number $k_{\rm crit}$ limiting the number of discrete eigenvalues above the essential spectrum together with the existence of a finite-dimensional slow manifold defining non-local hydrodynamics. We discuss the merging of hydrodynamic modes as well as the existence of second sound and the appearance of ghost modes beneath the essential spectrum in dependence of the Prandtl number.
Florian Kogelbauer, Ilya Karlin
2023-05-11T07:19:02Z
http://arxiv.org/abs/2305.06612v1
# Spectral analysis and hydrodynamic manifolds for the linearized Shakhov model

###### Abstract.

We perform a complete spectral analysis of the linearized Shakhov model involving two relaxation times \(\tau_{\mathrm{fast}}\) and \(\tau_{\mathrm{slow}}\). Our results are based on spectral functions derived from the theory of finite-rank perturbations, which allows us to infer the existence of a critical wave number \(k_{\mathrm{crit}}\) limiting the number of discrete eigenvalues above the essential spectrum together with the existence of a finite-dimensional slow manifold defining non-local hydrodynamics. We discuss the merging of hydrodynamic modes as well as the existence of second sound and the appearance of ghost modes beneath the essential spectrum in dependence of the Prandtl number.

## 1. Introduction

Hydrodynamic closures derived from kinetic theory are a fruitful research direction in statistical physics [20, 43] and of basic interest for the investigation of kinetic models [12]. The fundamental question: What is the connection between kinetic equations (in the small relaxation regime) and the equations for the motion of continua? Or, to phrase it differently: Can the governing equations of fluid dynamics be rigorously derived from kinetic theory? This problem has a long history. Famously, in his speech at the International Congress of Mathematicians in Paris in 1900, Hilbert proposed a program to derive the passage from the atomistic view of fluids and gases to the motion of continua [29]. A modern interpretation of this challenge, known as "Hilbert's sixth problem" in this context, aims to prove the convergence of kinetic models, such as the Boltzmann equation, to hydrodynamic models, such as the Navier-Stokes equation [42, 43].

There are several ways to tackle this problem [47]. Assuming that the collision term scales as \(\varepsilon^{-1}\), a widely used approach is to expand the density function as a formal power series in \(\varepsilon\) (the Knudsen number), called the Chapman-Enskog series [12]. Indeed, the zeroth-order PDE obtained from this (singular) Taylor expansion gives the Euler equation, while the first-order PDE reproduces the Navier-Stokes equation. We stress, however, that this holds on a formal level only. While the solution to the underlying kinetic equation decays to the global equilibrium due to the increase of entropy, higher-order terms in the Chapman-Enskog expansion might exhibit instabilities. In [5], it was first shown that an expansion in terms of the Knudsen number can lead to nonphysical properties of the hydrodynamic models: at order two (the Burnett equation [12]), the dispersion relation shows a change of sign, thus leading to modes which grow in energy (Bobylev instability). Indeed, as pointed out by, e.g., Slemrod [45], convergence of the singular expansion to the leading-order equation is by no means obvious: the formation of shocks might be an obstacle to global uniform convergence in the sense of solutions [46]. Furthermore, the expansion of a non-local operator in frequency space in terms of (local) differential operators might be problematic. Therefore, Rosenau suggested a non-local closure [41]. A different approach is to sum the Chapman-Enskog series at all orders. This was achieved for the three-component Grad system by Gorban and Karlin in a series of papers [20, 21, 23].

In this work, we approach the problem from the angle of spectral theory. Investigations of the spectrum of linearized kinetic operators date back to Hilbert himself [28].
Carleman [8] proved that the essential spectrum remains the same under a compact perturbation (Weyl's theorem) in the hard-sphere case and was able to estimate the spectral gap. This result was generalized to a broader class of collision kernels by Grad [26] and to soft potentials in [7]. For spatially uniform Maxwell molecules, a complete spectral description was derived in [6] (together with exact special solutions and normal form calculations for the full, nonlinear problem), see also [11]. In [14], some fundamental properties of the spectrum of a comparably broad class of kinetic operators were derived. In particular, the existence of eigenvalue branches and the asymptotic expansion of the (small) eigenvalues for vanishing wave number were derived. The analysis carried out in [14], however, does not extend to large wave numbers or to properties of the discrete spectrum close to the essential spectrum (accumulation of eigenvalues). For convergence results to the Euler and Navier-Stokes equations based on spectral insights, we refer to the classical papers [4, 38].

This paper is devoted to the spectral analysis of the Shakhov model [44] linearized around a global Maxwellian, see also [3]. While the BGK equation only has one global relaxation time \(\tau\), the Shakhov model has two different time scales \(\tau_{\text{fast}}\) and \(\tau_{\text{slow}}\), which, in particular, allows one to consider a family of kinetic equations with varying Prandtl number. While the BGK equation has Prandtl number one, the Shakhov model admits Prandtl numbers \(0<\Pr<1\) (and also the somewhat nonphysical \(\Pr>1\), see Section 4.6), thus giving a more realistic approximation of relaxation times. In particular, it allows for the Prandtl number \(\Pr=2/3\), see, e.g., [12]. In our previous papers, similar considerations were already carried out for the three-component Grad system [33], the one-dimensional BGK equation with mass conservation only [34], and recently for the three-dimensional BGK equation with mass, momentum and energy conservation [36]. Similar considerations have been carried out in [9, 10] for the one-dimensional linear BGK equation with one conservation law, that of mass, in the context of grossly determined solutions (in the sense of [47]). In [2], (optimal) decay rates for various simplified BGK models were derived in the context of hypocoercivity [48].

In comparison with the previously mentioned kinetic models, Shakhov's equation admits more realistic behavior, i.e., it mimics properties of the full Boltzmann equation more closely. Indeed, due to the presence of two time scales \((\tau_{\text{fast}},\tau_{\text{slow}})\), Shakhov's model does not only admit hydrodynamic modes, but non-hydrodynamic (fast) modes as well. In contrast to the BGK equation, whose modal branches stay coherent until they mix with the essential spectrum [36], modal branches of the Shakhov equation can mix and produce new branches. For instance, two diffusion modes can collide and produce a second pair of acoustic modes, see Section 4.5. Moreover, Shakhov's model and its versions are widely used in gas-dynamics applications, in particular in the lattice Boltzmann computations of compressible flows, see, e.g., [18]. We emphasize that the techniques outlined in this paper can easily be applied to a wide class of similar kinetic models, such as the ES-BGK model [30].
In the following, we will give a complete and (up to a solution of a transcendental equation) explicit description of the spectrum of the Shakhov model linearized around a global Maxwellian. We will show the existence of _finitely many_ discrete eigenvalues above the essential spectrum as well as the existence of a critical wave number for each family of modes. More precisely, we prove the following:

_Theorem 1.1_.: The spectrum of the non-dimensional linearized Shakhov operator \(\mathcal{L}\) with relaxation times \(\tau_{\text{fast}}\) and \(\tau_{\text{slow}}\) around a global Maxwellian is given by \[\sigma(\mathcal{L})=\left\{-\frac{1}{\tau_{\text{fast}}}+\text{i}\mathbb{R}\right\}\cup\bigcup_{|\mathbf{k}|<k_{\text{crit}}}\bigcup_{N\in\text{Modes}(|\mathbf{k}|,\Pr)}\{\lambda_{N}(|\mathbf{k}|)\}, \tag{1.1}\] where Modes denotes the set of modes (branches), which might change with wave number and Prandtl number. This is due to a collision of roots and a subsequent bifurcation (for details, we refer to Section 4.5). The essential spectrum is given by the line \(\Re\lambda=-\frac{1}{\tau_{\text{fast}}}\), while the discrete spectrum consists of a _finite_ number of discrete, isolated eigenvalues. Along with each family of modes (up to merging), there exists a critical wave number \(k_{\text{crit},N}\) limiting the range of wave numbers for which \(\lambda_{N}\) exists. For small wave numbers and \(\Pr\) close to one, the set of modes is given by \[\text{Modes}_{1}=\{\text{shear}_{1},\text{diff}_{1},\text{ac}_{1},\text{ac}_{1}*,\text{shear}_{2},\text{diff}_{2}\}, \tag{1.2}\] while for higher wave numbers and \(\Pr\) closer to zero, the set of modes is given by \[\text{Modes}_{2}=\{\text{shear}_{1},\text{ac}_{1},\text{ac}_{1}*,\text{shear}_{2},\text{ac}_{2},\text{ac}_{2}*\}, \tag{1.3}\] where \(\lambda_{\text{shear},1},\lambda_{\text{shear},2}\) denote the (real) primary and secondary shear modes (each doubly degenerate), \(\{\lambda_{\text{ac},1},\lambda_{\text{ac},1}^{*}\},\{\lambda_{\text{ac},2},\lambda_{\text{ac},2}^{*}\}\) denote pairs of complex conjugate roots, the primary and secondary acoustic modes, and the real roots \(\lambda_{\text{diff},1},\lambda_{\text{diff},2}\) denote the primary and secondary diffusion modes, respectively.

For a discussion of different branches of eigenvalues in kinetic models, we refer to [14, 17]. Our proof is based on the theory of finite-rank perturbations (see, e.g., [51]), together with some properties of the plasma dispersion function, collected in the Appendix for the sake of completeness. Furthermore, we give a hydrodynamic interpretation of the results by considering the dynamics on the slow hydrodynamic manifold (a linear combination of eigenspaces).

The paper is structured as follows: In Section 2, we introduce some notation and give some basic definitions. In Section 3, we formulate the fundamental equations (the detailed linearization around a global Maxwellian as well as the non-dimensionalization are deferred to the Appendix). Section 4 is devoted to the spectral analysis of the linear part, including the derivation of a spectral function describing the discrete spectrum completely. We also give a proof of the finiteness of the hydrodynamic spectrum together with a description of the modes (primary shear, primary diffusion, primary acoustic, secondary shear and secondary diffusion) in frequency space.
We also comment on the merging of branches and the formation of a secondary acoustic branch, called _second sound_, for a certain range of Prandtl numbers, see, e.g., [40] or [27] for an analogous phenomenon in solids (phonons). Finally, in Section 5, we write down the hydrodynamic manifold as a linear combination of eigenvectors and derive a closed system for the linear hydrodynamic variables.

## 2. Notation and Basic Definitions

For a wave vector \(\mathbf{k}\in\mathbb{Z}^{3}\), \(\mathbf{k}=(k_{1},k_{2},k_{3})\), we denote its wave number as \[k:=|\mathbf{k}|=\sqrt{k_{1}^{2}+k_{2}^{2}+k_{3}^{2}}. \tag{2.1}\]

Let \(\mathcal{H}\) denote a Hilbert space and let \(\mathbf{T}:\mathcal{H}\rightarrow\mathcal{H}\) be a linear operator with domain of definition \(\mathcal{D}(\mathbf{T})\). We denote the spectrum of \(\mathbf{T}\) as \(\sigma(\mathbf{T})\) and its resolvent set as \(\rho(\mathbf{T})\). The spectral analysis of the main operator \(\mathcal{L}\) of the paper (to be defined later) will be carried out on the Hilbert space \[\mathcal{H}_{\mathbf{x},\mathbf{v}}=L_{\mathbf{x}}^{2}(\mathbb{T}^{3})\times L_{\mathbf{v}}^{2}(\mathbb{R}^{3},e^{-\frac{|\mathbf{v}|^{2}}{2}}), \tag{2.2}\] together with the inner product \[\langle f,g\rangle_{\mathbf{x},\mathbf{v}}=\frac{1}{(2\pi)^{3+\frac{3}{2}}}\int_{\mathbb{T}^{3}}\int_{\mathbb{R}^{3}}f(\mathbf{x},\mathbf{v})g^{*}(\mathbf{x},\mathbf{v})\,e^{-\frac{|\mathbf{v}|^{2}}{2}}d\mathbf{v}d\mathbf{x}, \tag{2.3}\] where the star denotes complex conjugation. Because of the unitary properties of the Fourier expansion, we can slice the space \(\mathcal{H}\) for each wave number \(\mathbf{k}\) and analyze the operator \(\mathcal{L}_{\mathbf{k}}\) (the restriction of \(\mathcal{L}\) to the wave number \(\mathbf{k}\)) on the Hilbert space \[\mathcal{H}_{\mathbf{v}}=L_{\mathbf{v}}^{2}(\mathbb{R}^{3},e^{-\frac{|\mathbf{v}|^{2}}{2}}), \tag{2.4}\] together with the inner product \[\langle f,g\rangle_{\mathbf{v}}:=\frac{1}{(2\pi)^{\frac{3}{2}}}\int_{\mathbb{R}^{3}}f(\mathbf{v})g^{*}(\mathbf{v})e^{-\frac{|\mathbf{v}|^{2}}{2}}d\mathbf{v}. \tag{2.5}\]

For \(\mathbf{n}=(n_{1},n_{2},n_{3})\), the three-dimensional (multidimensional) Hermite polynomials \(\{H_{\mathbf{n}}\}_{\mathbf{n}\in\mathbb{N}^{3}}\) are defined via the generating function \[e^{\mathbf{a}\cdot\mathbf{v}-\frac{|\mathbf{a}|^{2}}{2}}=\sum_{|\mathbf{n}|=0}^{\infty}c_{\mathbf{n}}(\mathbf{a})H_{\mathbf{n}}(\mathbf{v}), \tag{2.6}\] for the coefficients \[c_{\mathbf{n}}(\mathbf{a})=\frac{\mathbf{a}^{\mathbf{n}}}{\mathbf{n}!}. \tag{2.7}\] Let us denote the \(j\)-th standard basis vector of \(\mathbb{R}^{3}\) as \(\mathbf{e}_{j}\). The three-dimensional Hermite polynomials obey the recurrence relation \[H_{\mathbf{n}+\mathbf{e}_{j}}(\mathbf{v})=v_{j}H_{\mathbf{n}}(\mathbf{v})-n_{j}H_{\mathbf{n}-\mathbf{e}_{j}}(\mathbf{v}), \tag{2.8}\] for \(j=1,2,3\), as well as the orthogonality relation \[\frac{1}{(2\pi)^{\frac{3}{2}}}\int_{\mathbb{R}^{3}}H_{\mathbf{m}}(\mathbf{v})H_{\mathbf{n}}(\mathbf{v})e^{-\frac{|\mathbf{v}|^{2}}{2}}\,d\mathbf{v}=\mathbf{m}!\,\delta_{\mathbf{m},\mathbf{n}}, \tag{2.9}\] where \(\delta_{\mathbf{m},\mathbf{n}}\) is the Kronecker delta. In particular, the sequence of one-dimensional Hermite polynomials \(H_{n}(v)\) obeys the recurrence relation \[H_{n+1}(v)=vH_{n}(v)-nH_{n-1}(v), \tag{2.10}\] which, together with \(H_{n}^{\prime}(v)=nH_{n-1}(v)\), implies the differential recurrence relation \[H_{n+1}(v)=vH_{n}(v)-H_{n}^{\prime}(v). \tag{2.11}\]
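These conventions coincide with the probabilists' Hermite polynomials, implemented in numpy as the "HermiteE" family, which makes them easy to sanity-check numerically. The following sketch (ours, not from the paper) verifies (2.9)-(2.11) in one dimension.

```python
# Numerical sanity check of the Hermite conventions (2.9)-(2.11).
import numpy as np
from numpy.polynomial import hermite_e as He

n = 4
x = np.linspace(-3.0, 3.0, 7)
H = [He.hermeval(x, [0] * k + [1]) for k in range(n + 2)]   # He_0 ... He_{n+1}
# three-term recurrence (2.10): H_{n+1} = v H_n - n H_{n-1}
assert np.allclose(H[n + 1], x * H[n] - n * H[n - 1])
# differential recurrence (2.11): H_{n+1} = v H_n - H_n'
assert np.allclose(H[n + 1], x * H[n] - He.hermeval(x, He.hermeder([0] * n + [1])))

# orthogonality (2.9) in 1D via Gauss-HermiteE quadrature (weight e^{-v^2/2})
nodes, weights = He.hermegauss(40)
gram = np.array([[np.sum(weights * He.hermeval(nodes, [0] * i + [1])
                                 * He.hermeval(nodes, [0] * j + [1]))
                  for j in range(5)] for i in range(5)]) / np.sqrt(2 * np.pi)
print(np.round(gram, 6))   # diagonal matrix diag(0!, 1!, 2!, 3!, 4!)
```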
We introduce the _plasma dispersion function_ as the integral \[Z(\zeta)=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\frac{e^{-\frac{v^{2}}{2}}}{v-\zeta}\,dv, \tag{2.12}\] for any \(\zeta\in\mathbb{C}\setminus\mathbb{R}\). The function \(Z\) is analytic on each half plane \(\{\Im(\zeta)>0\}\) and \(\{\Im(\zeta)<0\}\) and satisfies the complex differential equation \[\frac{dZ}{d\zeta}=-\zeta Z-1. \tag{2.13}\] We collect further useful properties of \(Z\) in the Appendix.

_Remark 2.1_.: The plasma dispersion function (2.12), as its name suggests, appears in plasma physics in the context of Landau damping [16]. A detailed description of this widely used non-elementary function is presented in [19].

## 3. Preliminaries, Linearization and Non-Dimensionalization

Consider the kinetic model \[\frac{\partial F}{\partial t}+\mathbf{v}\cdot\nabla F=-\frac{1}{\tau_{\rm fast}}Q_{fs}(F), \tag{3.1}\] for an unknown distribution function \(F\) and the collision operator \[Q_{fs}(F)=F-F^{eq}(n[F],\mathbf{u}[F],T[F])\left(1+(1-\Pr)\frac{\mathbf{q}[F]\cdot(\mathbf{v}-\mathbf{u}[F])}{Rp[F]T[F]}\left(\frac{|\mathbf{v}-\mathbf{u}[F]|^{2}}{5RT[F]}-1\right)\right). \tag{3.2}\] Here, \(R\) denotes the gas constant, while \[\Pr=\frac{\tau_{\rm fast}}{\tau_{\rm slow}}\leq 1 \tag{3.3}\] denotes the Prandtl number for the fast time scale \(\tau_{\rm fast}\) and the slow time scale \(\tau_{\rm slow}\). The physical units are given as \([R]=m^{2}s^{-2}K^{-1}\) and \([RT]=m^{2}s^{-2}\), respectively, while the Prandtl number is dimensionless. The number density \(n\), the velocity \(\mathbf{u}\), the pressure \(p\) and the heat flux \(\mathbf{q}\) are defined by \[\begin{split} n[F]&=\int_{\mathbb{R}^{3}}F\,d\mathbf{v},\\ \mathbf{u}[F]&=\frac{1}{n[F]}\int_{\mathbb{R}^{3}}\mathbf{v}F\,d\mathbf{v},\\ p[F]&=\frac{m}{3}\int_{\mathbb{R}^{3}}|\mathbf{v}-\mathbf{u}[F]|^{2}F\,d\mathbf{v},\\ \mathbf{q}[F]&=\frac{m}{2}\int_{\mathbb{R}^{3}}\left(\mathbf{v}-\mathbf{u}[F]\right)|\mathbf{u}[F]-\mathbf{v}|^{2}F\,d\mathbf{v},\end{split} \tag{3.4}\] while the temperature \(T\) is defined through the relation \[T[F]=\frac{p[F]}{mRn[F]}. \tag{3.5}\] We stress that the macroscopic quantities \((n,\mathbf{u},T,\mathbf{q})\) depend on \(F\) through the functional relationship (3.4), but are independent of the velocity \(\mathbf{v}\). Equation (3.1) is called _Shakhov's S-model_ and was first derived in [44] as a generalization of the BGK equation, allowing for two time scales, which, in particular, allows one to define a family of models with varying Prandtl number.

To ease notation in the following calculations, we set \[\tau=\tau_{\text{fast}} \tag{3.6}\] and define the dimensionless parameter \[r:=1-\Pr, \tag{3.7}\] which satisfies \(0\leq r\leq 1\) as well.

_Remark 3.1_.: The Prandtl number (and the parameter \(r\)) allow us to define an interpolation between the three-dimensional BGK equation (\(\Pr=1\) or \(r=0\)), see e.g. [49], and a model with maximal separation between the fast and slow time scales (\(\Pr=0\) or \(r=1\)). The different properties of the spectra, see Section 4, vary for different values of \(r\).
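As an aside before continuing: the plasma dispersion function \(Z\) of (2.12) is easy to evaluate numerically. With the \(e^{-v^{2}/2}/\sqrt{2\pi}\) weight used here one gets, for \(\Im\zeta>0\), the reduction \(Z(\zeta)=\mathrm{i}\sqrt{\pi/2}\,w(\zeta/\sqrt{2})\) in terms of the Faddeeva function \(w\). This identity is our own rescaling of the standard plasma dispersion function, not a formula stated in the paper; the sketch below checks it against direct quadrature and against the ODE (2.13).

```python
# Sketch: Z(zeta) = i sqrt(pi/2) w(zeta/sqrt(2)) for Im(zeta) > 0 (our reduction).
import numpy as np
from scipy.special import wofz
from scipy.integrate import quad

def Z_faddeeva(zeta):                     # valid in the upper half plane
    return 1j * np.sqrt(np.pi / 2.0) * wofz(zeta / np.sqrt(2.0))

def Z_quad(zeta):                         # direct quadrature of (2.12)
    f = lambda v: np.exp(-v * v / 2.0) / (v - zeta)
    re = quad(lambda v: f(v).real, -np.inf, np.inf)[0]
    im = quad(lambda v: f(v).imag, -np.inf, np.inf)[0]
    return (re + 1j * im) / np.sqrt(2.0 * np.pi)

zeta = 0.8 + 0.5j
print(abs(Z_faddeeva(zeta) - Z_quad(zeta)))             # ~1e-12
h = 1e-6                                                # check ODE (2.13)
dZ = (Z_faddeeva(zeta + h) - Z_faddeeva(zeta - h)) / (2 * h)
print(abs(dZ - (-1.0 - zeta * Z_faddeeva(zeta))))       # small
```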
For \(n\geq 0\), we define the linear moments \[\mathbf{M}_{n}(\mathbf{x},t)=\int_{\mathbb{R}^{3}}F(\mathbf{x},\mathbf{v},t)\mathbf{v}^{\otimes n}\,d\mathbf{v}, \tag{3.8}\] where \(\mathbf{v}^{\otimes 0}=1\), \(\mathbf{v}^{\otimes 1}=\mathbf{v}\) and \[\mathbf{v}^{\otimes n}=\underbrace{\mathbf{v}\otimes...\otimes\mathbf{v}}_{n\text{-times}}, \tag{3.9}\] for \(n\geq 2\) is the \(n\)th tensor power. For compactness in the presentation, we also define the special vector \[\tilde{\mathbf{M}}_{3}=\int_{\mathbb{R}^{3}}F\mathbf{v}|\mathbf{v}|^{2}\,d\mathbf{v}. \tag{3.10}\] The hydrodynamic variables \((n,\mathbf{u},T,\mathbf{q})\) are related to the moments (3.8) via \[\begin{split}\mathbf{M}_{0}&=n,\\ \mathbf{M}_{1}&=n\mathbf{u},\\ \mathrm{trace}\,\mathbf{M}_{2}&=n\left(|\mathbf{u}|^{2}+3RT\right)=n|\mathbf{u}|^{2}+3\frac{p}{m},\\ n|\mathbf{u}|^{2}\mathbf{u}-2\mathbf{M}_{2}\mathbf{u}+\tilde{\mathbf{M}}_{3}&=\frac{2}{m}\mathbf{q}+\frac{3p}{m}\mathbf{u},\end{split} \tag{3.11}\] which can be inverted to \[\begin{split} n&=\mathbf{M}_{0},\\ \mathbf{u}&=\frac{\mathbf{M}_{1}}{\mathbf{M}_{0}},\\ p&=\frac{m}{3}\mathrm{trace}\,\mathbf{M}_{2}-\frac{m}{3}\frac{|\mathbf{M}_{1}|^{2}}{\mathbf{M}_{0}},\\ \mathbf{q}&=-\frac{3}{2}\left(\frac{m}{3}\mathrm{trace}\,\mathbf{M}_{2}-\frac{m}{3}\frac{|\mathbf{M}_{1}|^{2}}{\mathbf{M}_{0}}\right)\frac{\mathbf{M}_{1}}{\mathbf{M}_{0}}+\frac{m}{2}\tilde{\mathbf{M}}_{3}+\frac{m}{2}n|\mathbf{u}|^{2}\mathbf{u}-m\mathbf{M}_{2}\mathbf{u}.\end{split} \tag{3.12}\] Equation (3.1) admits a global Maxwellian, \[F_{0}^{eq}(\mathbf{v})=\frac{n_{0}}{(2\pi RT_{0})^{\frac{3}{2}}}e^{-\frac{|\mathbf{v}|^{2}}{2RT_{0}}}, \tag{3.13}\] as an equilibrium solution. Here, the equilibrium number density \(n_{0}\) and the equilibrium temperature \(T_{0}\) are constants. In the following, we will be interested in the dynamics of (3.1) close to the stationary solution (3.13), i.e., the linearized dynamics of (3.1) around (3.13).

The Shakhov model linearized around (3.13) in non-dimensional form is given by \[\begin{split}\frac{\partial f}{\partial t}=-\mathbf{v}\cdot\nabla_{\mathbf{x}}f-\frac{1}{\tau}f+\frac{1}{\tau}(2\pi)^{-3/2}e^{-\frac{|\mathbf{v}|^{2}}{2}}&\left[\left(\frac{5-|\mathbf{v}|^{2}}{2}\right)\mathbf{m}_{0}+\left(1+\frac{r}{2}\left(\frac{|\mathbf{v}|^{2}-5}{5}\right)\right)\mathbf{v}\cdot\mathbf{m}_{1}\right.\\ &\left.+\left(\frac{|\mathbf{v}|^{2}-3}{2}\right)\mathrm{trace}\,\mathbf{m}_{2}+r\left(\frac{|\mathbf{v}|^{2}-5}{10}\right)\mathbf{v}\cdot\tilde{\mathbf{m}}_{3}\right].\end{split} \tag{3.14}\] For the details, we refer to Appendix I. Equation (3.14) will serve as the basis for the spectral analysis performed in the following section.

## 4. Spectral Analysis of the Linearized Two-Timescale Operator

In this section, we will carry out a complete spectral analysis of the right-hand side of (3.14), following the approach in [36]. This will allow us to draw conclusions on the decay properties of hydrodynamic variables, the existence of a critical wave number and the hydrodynamic closure. After reformulating the problem in frequency space, we will use the resolvent calculus to formulate a condition for the discrete spectrum (Subsection 4.1). Then, we will use properties of the plasma dispersion function (see Appendix) to define a spectral function \(\Sigma_{|\mathbf{k}|,\tau}\), whose zeros coincide with the discrete, isolated eigenvalues (Subsection 4.2). These families of eigenvalues are described in more detail in Subsection 4.3.
Then, in Subsection 4.4, we prove the existence of a critical wave number \(k_{\mathrm{crit}}\) such that \(\Sigma_{|\mathbf{k}|,\tau}\) has no zeros (i.e., there exist no eigenvalues) for \(|\mathbf{k}|>k_{\mathrm{crit}}\). In Subsection 4.5, we take a closer look at the branches of eigenvalues (modes) and the merging of diffusive branches (second sound). Finally, in Subsection 4.6, we discuss the existence of eigenvalues below the essential spectrum (ghost modes) for \(r<0\) (\(\mathrm{Pr}>1\)).

### Description of the discrete spectrum

In the following, we rescale the density \(f\) with a global, non-dimensional Maxwellian, \[f\mapsto(2\pi)^{-3/2}e^{-\frac{|\mathbf{v}|^{2}}{2}}f, \tag{4.1}\] which allows us to divide by the Gaussian in (3.14) and interpret the moments (A.4) as projections relative to the inner product (2.5). We define the following set of basis functions \[e_{0}(\mathbf{v})=1,\qquad e_{4}(\mathbf{v})=\frac{|\mathbf{v}|^{2}-3}{\sqrt{6}}, \tag{4.2}\] \[e_{1}(\mathbf{v})=v_{1},\qquad e_{5}(\mathbf{v})=v_{1}\frac{|\mathbf{v}|^{2}-5}{\sqrt{10}}, \tag{4.3}\] \[e_{2}(\mathbf{v})=v_{2},\qquad e_{6}(\mathbf{v})=v_{2}\frac{|\mathbf{v}|^{2}-5}{\sqrt{10}}, \tag{4.4}\] \[e_{3}(\mathbf{v})=v_{3},\qquad e_{7}(\mathbf{v})=v_{3}\frac{|\mathbf{v}|^{2}-5}{\sqrt{10}}, \tag{4.5}\] which satisfy the orthonormality condition \[\langle e_{n},e_{m}\rangle_{\mathbf{v}}=\delta_{nm},\quad\text{for}\quad 0\leq n,m\leq 7. \tag{4.6}\] Defining \[f_{j}=\langle e_{j},f\rangle_{\mathbf{v}}, \tag{4.7}\] we can infer the following relations between the moments and the coefficients (4.7): \[\begin{split}\frac{5-|\mathbf{v}|^{2}}{2}\mathbf{m}_{0}&=\frac{5-|\mathbf{v}|^{2}}{2}f_{0}=f_{0}e_{0}-\frac{\sqrt{6}}{2}f_{0}e_{4},\\ \left(\frac{r}{2}\left(\frac{|\mathbf{v}|^{2}-5}{5}\right)+1\right)\mathbf{v}\cdot\mathbf{m}_{1}&=f_{1}e_{1}+f_{2}e_{2}+f_{3}e_{3}+\frac{r}{\sqrt{10}}(f_{1}e_{5}+f_{2}e_{6}+f_{3}e_{7}),\\ \frac{|\mathbf{v}|^{2}-3}{6}\,\mathrm{trace}\,\mathbf{m}_{2}&=e_{4}\frac{1}{\sqrt{6}}\int_{\mathbb{R}^{3}}f|\mathbf{v}|^{2}\,d\mathbf{v}=e_{4}\frac{1}{\sqrt{6}}\left(\int_{\mathbb{R}^{3}}f(|\mathbf{v}|^{2}-3)\,d\mathbf{v}+3\mathbf{m}_{0}\right)=f_{4}e_{4}+\frac{3}{\sqrt{6}}f_{0}e_{4},\\ r\left(\frac{|\mathbf{v}|^{2}-5}{10}\right)\mathbf{v}\cdot\tilde{\mathbf{m}}_{3}&=\frac{r}{\sqrt{10}}(e_{5},e_{6},e_{7})\cdot\int_{\mathbb{R}^{3}}f\mathbf{v}|\mathbf{v}|^{2}\,d\mathbf{v}=\frac{r}{\sqrt{10}}(e_{5},e_{6},e_{7})\cdot\left(\int_{\mathbb{R}^{3}}f\mathbf{v}(|\mathbf{v}|^{2}-5)\,d\mathbf{v}+5\int_{\mathbb{R}^{3}}\mathbf{v}f\,d\mathbf{v}\right)\\ &=r(f_{5}e_{5}+f_{6}e_{6}+f_{7}e_{7})+\frac{5r}{\sqrt{10}}(f_{1}e_{5}+f_{2}e_{6}+f_{3}e_{7}).\end{split} \tag{4.8}\]

We bundle the basis functions (4.2)-(4.5) into a vector \[\mathbf{e}=\{e_{j}\}_{0\leq j\leq 7}, \tag{4.9}\] define the matrix \[\mathbf{D}_{r}=\mathrm{diag}(1,1,1,1,1,r,r,r), \tag{4.10}\] and introduce the operator \[\mathbb{B}_{8,r}f=(\mathbb{P}_{5}+r\mathbb{P}_{8})f=\langle f,\mathbf{e}\rangle_{\mathbf{v}}\cdot\mathbf{D}_{r}\mathbf{e}, \tag{4.11}\] the scaled sum of two finite-rank projections \[\mathbb{P}_{5}f=\sum_{n=0}^{4}\langle f,e_{n}\rangle e_{n},\qquad\mathbb{P}_{8}f=\sum_{n=5}^{7}\langle f,e_{n}\rangle e_{n}. \tag{4.12}\] The linear operator appearing as the right-hand side of equation (3.14) (together with the rescaling (4.1)) then takes the simple form \[\mathcal{L}=-\mathbf{v}\cdot\nabla_{\mathbf{x}}-\frac{1}{\tau}+\frac{1}{\tau}\mathbb{B}_{8,r}, \tag{4.13}\] and equation (3.14) becomes \[\frac{\partial f}{\partial t}=\mathcal{L}f. \tag{4.14}\]
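As a quick numerical sanity check (our sketch, not part of the paper), the orthonormality (4.6) of the basis (4.2)-(4.5) under the Gaussian inner product (2.5) can be verified with a tensorised Gauss-HermiteE rule, whose weights already include the factor \(e^{-v^{2}/2}\) per axis.

```python
# Sketch: verify <e_n, e_m> = delta_nm for the eight basis functions (4.2)-(4.5).
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

x, w = hermegauss(20)
V1, V2, V3 = np.meshgrid(x, x, x, indexing="ij")
W = (w[:, None, None] * w[None, :, None] * w[None, None, :]) / (2 * np.pi) ** 1.5
r2 = V1**2 + V2**2 + V3**2

basis = [np.ones_like(V1), V1, V2, V3, (r2 - 3) / np.sqrt(6),
         V1 * (r2 - 5) / np.sqrt(10), V2 * (r2 - 5) / np.sqrt(10),
         V3 * (r2 - 5) / np.sqrt(10)]
gram = np.array([[np.sum(W * ei * ej) for ej in basis] for ei in basis])
print(np.allclose(gram, np.eye(8), atol=1e-10))   # True
```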
_Remark 4.1_.: Let us recall that any function \(f\in\mathcal{H}_{\mathbf{v}}\) admits a unique expansion as a multi-dimensional _Hermite series_: \[f(\mathbf{v})=\sum_{n=0}^{\infty}\mathbf{f}_{n}:\mathbf{H}_{n}(\mathbf{v}), \tag{4.15}\] where \(\mathbf{H}_{n}\) is defined in (2.6) and \(\mathbf{f}_{n}\) is an \(n\)-tensor. Since the eight basis vectors (4.2)-(4.5) appear in the expansion (4.15) via an orthogonal splitting, we have that \[\langle\mathbb{P}_{5}f,(1-\mathbb{P}_{5})f\rangle_{\mathbf{v}}=0,\qquad\langle\mathbb{P}_{8}f,(1-\mathbb{P}_{8})f\rangle_{\mathbf{v}}=0, \tag{4.16}\] for any \(f\in\mathcal{H}_{\mathbf{v}}\). Hermite expansions were famously used by Grad in his seminal paper [25] to establish finite-moment closures.

We compute \[\begin{split}\langle\mathcal{L}f,f\rangle_{\mathbf{x},\mathbf{v}}&=\langle-\mathbf{v}\cdot\nabla_{\mathbf{x}}f-\frac{1}{\tau}f+\frac{1}{\tau}\mathbb{B}_{8,r}f,f\rangle_{\mathbf{x},\mathbf{v}}\\ &=\int_{\mathbb{T}^{3}}\int_{\mathbb{R}^{3}}\left(-\mathbf{v}\cdot\nabla_{\mathbf{x}}f-\frac{1}{\tau}f+\frac{1}{\tau}(\mathbb{P}_{5}+r\mathbb{P}_{8})f\right)fe^{-\frac{|\mathbf{v}|^{2}}{2}}\,d\mathbf{x}d\mathbf{v}\\ &=-\frac{1}{\tau}\int_{\mathbb{T}^{3}}\int_{\mathbb{R}^{3}}\Big{[}(1-\mathbb{P}_{5}-\mathbb{P}_{8})f+(1-r)\mathbb{P}_{8}f\Big{]}fe^{-\frac{|\mathbf{v}|^{2}}{2}}\,d\mathbf{x}d\mathbf{v}\\ &=-\frac{1}{\tau}\|(1-\mathbb{P}_{5}-\mathbb{P}_{8})f\|_{\mathbf{x},\mathbf{v}}^{2}-\frac{(1-r)}{\tau}\|\mathbb{P}_{8}f\|_{\mathbf{x},\mathbf{v}}^{2},\end{split} \tag{4.17}\] where we have assumed that \(f\) is sufficiently regular to justify the application of the divergence theorem in \(\mathbf{x}\) in order to remove the gradient term, and where we have used (4.16). For \(0<r<1\) (or \(0<\Pr<1\)), it follows that the operator \(\mathcal{L}\) is dissipative and that \[\Re\sigma(\mathcal{L})\leq 0. \tag{4.18}\] Moreover, for \(r>0\), since \(\mathbb{P}_{5}\) and \(\mathbb{P}_{8}\) are orthogonal projections, it follows that \[\begin{split}\langle\mathcal{L}f,f\rangle_{\mathbf{x},\mathbf{v}}&=-\frac{1}{\tau}\int_{\mathbb{T}^{3}}\int_{\mathbb{R}^{3}}\Big{[}f-\mathbb{P}_{5}f-r\mathbb{P}_{8}f\Big{]}fe^{-\frac{|\mathbf{v}|^{2}}{2}}\,d\mathbf{x}d\mathbf{v}\\ &=-\frac{1}{\tau}(\|f\|_{\mathbf{x},\mathbf{v}}^{2}-\|\mathbb{P}_{5}f\|_{\mathbf{x},\mathbf{v}}^{2}-r\|\mathbb{P}_{8}f\|_{\mathbf{x},\mathbf{v}}^{2})\\ &\geq-\frac{1}{\tau}\|f\|_{\mathbf{x},\mathbf{v}}^{2}.\end{split} \tag{4.19}\] This shows that any solution to (4.14) has to converge to zero, i.e., the global Maxwellian is a stable equilibrium, up to the conserved quantities from the projected modes. On the other hand, we infer that the overall convergence rate to equilibrium can be at most \(-\frac{1}{\tau}\) for \(r>0\). For \(r<0\), we can estimate \[\begin{split}\langle\mathcal{L}f,f\rangle_{\mathbf{x},\mathbf{v}}&=-\frac{1}{\tau}\int_{\mathbb{T}^{3}}\int_{\mathbb{R}^{3}}\Big{[}f-\mathbb{P}_{5}f-r\mathbb{P}_{8}f\Big{]}fe^{-\frac{|\mathbf{v}|^{2}}{2}}\,d\mathbf{x}d\mathbf{v}\\ &=-\frac{1}{\tau}(\|f\|_{\mathbf{x},\mathbf{v}}^{2}-\|\mathbb{P}_{5}f\|_{\mathbf{x},\mathbf{v}}^{2}-r\|\mathbb{P}_{8}f\|_{\mathbf{x},\mathbf{v}}^{2})\\ &\geq-\frac{1}{\tau}(\|f\|_{\mathbf{x},\mathbf{v}}^{2}-r\|\mathbb{P}_{8}f\|_{\mathbf{x},\mathbf{v}}^{2})\\ &\geq\frac{r-1}{\tau}\|f\|_{\mathbf{x},\mathbf{v}}^{2},\end{split} \tag{4.20}\] where we have used that \(\|\mathbb{P}_{8}f\|^{2}\leq\|f\|^{2}\) as well.
Consequently, for \(r<0\), we can only infer the weaker decay rate \(\frac{r-1}{\tau}\), as opposed to \(-\frac{1}{\tau}\) for \(r>0\).

_Remark 4.2_.: The weaker decay rate in (4.20) for negative \(r\), i.e., Prandtl number larger than one, is already indicative of the existence of parts of the spectrum located below the essential spectrum (\(\{\Re\lambda=-\frac{1}{\tau}\}\)). Indeed, we will show in Section 4.6 that for \(r<0\), there exist eigenvalues below the essential spectrum for a certain range of wave numbers.

Let us proceed with the spectral analysis by switching to frequency space. Since \(\mathbf{x}\in\mathbb{T}^{3}\), we can decompose \(f\) in a Fourier series as \[f(\mathbf{x},\mathbf{v})=\sum_{|\mathbf{k}|=0}^{\infty}\hat{f}(\mathbf{k},\mathbf{v})e^{\mathrm{i}\mathbf{x}\cdot\mathbf{k}}, \tag{4.21}\] for the Fourier coefficients \[\hat{f}(\mathbf{k},\mathbf{v})=\frac{1}{(2\pi)^{3}}\int_{\mathbb{T}^{3}}f(\mathbf{x},\mathbf{v})e^{-\mathrm{i}\mathbf{x}\cdot\mathbf{k}}\,d\mathbf{x}. \tag{4.22}\] In frequency space, (4.13) becomes \[\hat{\mathcal{L}}_{\mathbf{k}}=-\mathrm{i}\mathbf{k}\cdot\mathbf{v}-\frac{1}{\tau}+\frac{1}{\tau}\mathbb{B}_{8,r}, \tag{4.23}\] and the spectrum of \(\mathcal{L}\) can be calculated from the corresponding operator at each wave vector \(\mathbf{k}\): \[\sigma(\mathcal{L})=\bigcup_{\mathbf{k}\in\mathbb{Z}^{3}}\sigma(\hat{\mathcal{L}}_{\mathbf{k}}). \tag{4.24}\]

For \(\mathbf{k}=0\), we can read off the spectrum of (4.23) explicitly. Indeed, since \(\hat{\mathcal{L}}_{0}=-\frac{1}{\tau}(1-\mathbb{B}_{8,r})\), we find that \[\hat{\mathcal{L}}_{0}e_{j}=-\frac{1}{\tau}(1-\mathbb{B}_{8,r})e_{j}=0, \tag{4.25}\] for \(0\leq j\leq 4\), while \[\hat{\mathcal{L}}_{0}e_{l}=-\frac{1}{\tau}(1-\mathbb{B}_{8,r})e_{l}=-\frac{1}{\tau}(1-r)e_{l}=-\frac{1}{\tau_{\mathrm{slow}}}e_{l}, \tag{4.26}\] for \(5\leq l\leq 7\). On the other hand, \(\hat{\mathcal{L}}_{0}\) acts as multiplication by \(-\frac{1}{\tau}\) on the orthogonal complement of \(\mathrm{span}\{e_{j}\}_{0\leq j\leq 7}\). This shows that \[\sigma(\hat{\mathcal{L}}_{0})=\left\{-\frac{1}{\tau_{\mathrm{fast}}},-\frac{1}{\tau_{\mathrm{slow}}},0\right\}, \tag{4.27}\] and the dimensions of the corresponding eigenspaces are given by \[\mathrm{codim}\,\mathrm{eig}\left(-\frac{1}{\tau_{\mathrm{fast}}}\right)=8,\quad\dim\mathrm{eig}\left(-\frac{1}{\tau_{\mathrm{slow}}}\right)=3,\quad\dim\mathrm{eig}\left(0\right)=5. \tag{4.28}\]

Now, for the following, let us assume that \(\mathbf{k}\neq 0\). For a more compact calculation, we define the operator \[\mathcal{S}_{\mathbf{k}}f=\mathbf{v}\cdot\mathbf{k}f. \tag{4.29}\] In the calculation of the discrete spectrum of the operator \(\hat{\mathcal{L}}_{\mathbf{k}}\), based on the second resolvent identity and finite-rank perturbations, we follow the presentation in [36] closely.
The spectrum of \(\hat{\mathcal{L}}_{\mathbf{k}}\) is then given by \[\begin{split}\sigma(\hat{\mathcal{L}}_{\mathbf{k}})&=-\frac{1}{\tau}-\sigma\left(\mathrm{i}\mathcal{S}_{\mathbf{k}}-\frac{1}{\tau}\mathbb{B}_{8,r}\right)\\ &=-\frac{1}{\tau}-\frac{1}{\tau}\sigma\left(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-\mathbb{B}_{8,r}\right).\end{split} \tag{4.30}\] Since \(\mathbb{B}_{8,r}\) has finite rank, the operator \(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-\mathbb{B}_{8,r}\) is a compact perturbation of \(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}\), and we conclude that \[\sigma_{ess}(\hat{\mathcal{L}}_{\mathbf{k}})=-\frac{1}{\tau}-\sigma_{ess}\left(\mathrm{i}\mathcal{S}_{\mathbf{k}}\right)=-\frac{1}{\tau}+\mathrm{i}\mathbb{R}, \tag{4.31}\] where we have used that \(\sigma(\mathcal{S}_{\mathbf{k}})=\mathbb{R}\) for \(\mathbf{k}\neq 0\). We define the Green's function matrices as \[\begin{split}G_{T}(z,n,m)&=\langle(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-\mathbb{B}_{8,r}-z)^{-1}e_{n},e_{m}\rangle_{\mathbf{v}},\\ G_{S}(z,n,m)&=\langle(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-z)^{-1}e_{n},e_{m}\rangle_{\mathbf{v}},\end{split} \tag{4.32}\] for \(0\leq n,m\leq 7\), and set \(G_{S}(z)=\{G_{S}(z,n,m)\}_{0\leq n,m\leq 7}\), \(G_{T}(z)=\{G_{T}(z,n,m)\}_{0\leq n,m\leq 7}\). By the second resolvent identity, \[\mathcal{R}(z;A)-\mathcal{R}(z;B)=\mathcal{R}(z;A)(B-A)\mathcal{R}(z;B), \tag{4.33}\] for any operators \(A,B\) and \(z\in\rho(A)\cap\rho(B)\), we have for \(A=\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}\) and \(B=\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-\mathbb{B}_{8,r}\) that \[(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-\mathbb{B}_{8,r}-z)^{-1}=(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-z)^{-1}+(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-z)^{-1}\mathbb{B}_{8,r}(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-\mathbb{B}_{8,r}-z)^{-1}. \tag{4.34}\] Applying equation (4.34) to \(e_{n}\) for \(0\leq n\leq 7\) and rearranging gives \[\begin{split}(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-\mathbb{B}_{8,r}-z)^{-1}e_{n}&=(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-z)^{-1}e_{n}+(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-z)^{-1}\mathbb{B}_{8,r}(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-\mathbb{B}_{8,r}-z)^{-1}e_{n}\\ &=(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-z)^{-1}e_{n}+(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-z)^{-1}\sum_{j=0}^{7}\langle(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-\mathbb{B}_{8,r}-z)^{-1}e_{n},e_{j}\rangle_{\mathbf{v}}(\mathbf{D}_{r}\mathbf{e})_{j}\\ &=(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-z)^{-1}e_{n}+\sum_{j=0}^{7}G_{T}(z,n,j)(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-z)^{-1}(\mathbf{D}_{r}\mathbf{e})_{j}, \end{split} \tag{4.35}\] for \(z\in\mathbb{C}\setminus\mathrm{i}\mathbb{R}\). Thus, the resolvent of \((\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-\mathbb{B}_{8,r}-z)\) includes the resolvent of \(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}\) as well as information from the matrix \(\{G_{T}(z,n,m)\}_{0\leq n,m\leq 7}\) as coefficients. Taking the inner product of (4.35) with \(e_{m}\) gives \[\begin{split}G_{T}(z,n,m)&=G_{S}(z,n,m)+\sum_{j=0}^{7}G_{T}(z,n,j)\langle(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-z)^{-1}(\mathbf{D}_{r}\mathbf{e})_{j},e_{m}\rangle_{\mathbf{v}}\\ &=G_{S}(z,n,m)+\sum_{j=0}^{7}G_{T}(z,n,j)\mathbf{D}_{r}G_{S}(z,j,m),\end{split} \tag{4.36}\] for \(0\leq n,m\leq 7\) and \(z\in\mathbb{C}\setminus\mathrm{i}\mathbb{R}\). System (4.36) defines sixty-four equations for the coefficients \(G_{T}(z,n,m)\), which can be re-written more compactly as \[G_{T}=G_{S}+G_{T}\mathbf{D}_{r}G_{S}, \tag{4.37}\] or, equivalently, \[G_{T}(\mathrm{Id}-\mathbf{D}_{r}G_{S})=G_{S}. \tag{4.38}\]
Consequently, we can solve for the entries of \(G_{T}\) unless \(\det(\mathrm{Id}-\mathbf{D}_{r}G_{S})=0\). This implies that \(\sigma_{\mathrm{disc}}(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}-\mathbb{B}_{8,r})=\{z\in\mathbb{C}:\det(\mathrm{Id}-\mathbf{D}_{r}G_{S}(z))=0\}\), and hence the explicit formula \[\sigma_{\mathrm{disc}}(\hat{\mathcal{L}}_{\mathbf{k}})=-\frac{1}{\tau}-\frac{1}{\tau}\left\{z\in\mathbb{C}:\det\left(\int_{\mathbb{R}^{3}}\mathbf{D}_{r}\mathbf{e}(\mathbf{v})\otimes\mathbf{e}(\mathbf{v})\frac{e^{-\frac{|\mathbf{v}|^{2}}{2}}}{\mathrm{i}\tau\mathbf{k}\cdot\mathbf{v}-z}\;\frac{d\mathbf{v}}{(2\pi)^{\frac{3}{2}}}-\mathrm{Id}\right)=0\right\}. \tag{4.39}\] An eigenvalue \(\lambda\) of the operator \(\hat{\mathcal{L}}_{\mathbf{k}}\) is then related to the zero \(z\) in (4.39) via \[z=-\tau\lambda-1. \tag{4.40}\] To ease notation, we define the _spectral function_ \[\Sigma_{\mathbf{k},\tau}(\lambda)=\det\left(\int_{\mathbb{R}^{3}}\mathbf{D}_{r}\mathbf{e}(\mathbf{v})\otimes\mathbf{e}(\mathbf{v})\frac{e^{-\frac{|\mathbf{v}|^{2}}{2}}}{\mathrm{i}\tau\mathbf{k}\cdot\mathbf{v}+\lambda\tau+1}\,\frac{d\mathbf{v}}{(2\pi)^{\frac{3}{2}}}-\mathrm{Id}\right), \tag{4.41}\] such that \[\sigma_{\mathrm{disc}}(\hat{\mathcal{L}}_{\mathbf{k}})=\{\lambda\in\mathbb{C}:\Sigma_{\mathbf{k},\tau}(\lambda)=0\}. \tag{4.42}\]

### Derivation of a Spectral Function

To evaluate the integral expression in (4.41), we decompose the wave vector \(\mathbf{k}\) into its magnitude along a coordinate direction and a rotation: \[\mathbf{k}=\mathbf{Q}_{\mathbf{k}}(k,0,0)^{T}, \tag{4.43}\] where \(\mathbf{Q}_{\mathbf{k}}\mathbf{Q}_{\mathbf{k}}^{T}=\mathrm{Id}\). Setting \(\mathbf{w}=\mathbf{Q}_{\mathbf{k}}^{T}\mathbf{v}\), we have that \[\mathbf{k}\cdot\mathbf{v}=\mathbf{Q}_{\mathbf{k}}(|\mathbf{k}|,0,0)^{T}\cdot\mathbf{v}=(k,0,0)\cdot\mathbf{w}=kw_{1}, \tag{4.44}\] while the vector of basis functions \(\mathbf{e}\) transforms according to \[\begin{split}\mathbf{e}(\mathbf{v})&=\left(1,\mathbf{v},\frac{|\mathbf{v}|^{2}-3}{\sqrt{6}},\mathbf{v}\frac{|\mathbf{v}|^{2}-5}{\sqrt{10}}\right)=\left(1,\mathbf{Q}_{\mathbf{k}}\mathbf{w},\frac{|\mathbf{w}|^{2}-3}{\sqrt{6}},\mathbf{Q}_{\mathbf{k}}\mathbf{w}\frac{|\mathbf{w}|^{2}-5}{\sqrt{10}}\right)\\ &=\begin{pmatrix}1&0&0&0\\ 0&\mathbf{Q}_{\mathbf{k}}&0&0\\ 0&0&1&0\\ 0&0&0&\mathbf{Q}_{\mathbf{k}}\end{pmatrix}\mathbf{e}(\mathbf{w}).\end{split} \tag{4.45}\]

_Remark 4.3_.: The first column of the rotation matrix \(\mathbf{Q}_{\mathbf{k}}\) can be chosen as \(\frac{1}{k}\mathbf{k}\), which, by the orthonormality of the columns of \(\mathbf{Q}_{\mathbf{k}}\), implies (4.44). The change of coordinates \(\mathbf{v}=\mathbf{Q}_{\mathbf{k}}\mathbf{w}\) can then be interpreted as writing the velocity vector \(\mathbf{v}\) as the sum of a component parallel to the wave vector \(\mathbf{k}\) and a component orthogonal to it: \(\mathbf{v}=\mathbf{v}_{\parallel}+\mathbf{v}_{\perp}\) with \(\mathbf{v}_{\parallel}=\frac{\mathbf{v}\cdot\mathbf{k}}{k^{2}}\mathbf{k}\).
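Before carrying out this reduction, the determinant criterion behind (4.39) can be illustrated in a finite-dimensional toy setting (our illustration, not from the paper): for a diagonal matrix \(S\) standing in for \(\mathrm{i}\tau\mathcal{S}_{\mathbf{k}}\) and a rank-two perturbation \(B=UDU^{T}\) standing in for \(\mathbb{B}_{8,r}\), every eigenvalue \(z\) of \(S-B\) off \(\sigma(S)\) is a zero of \(\det(\mathrm{Id}-DG_{S}(z))\) with \(G_{S}(z)=U^{T}(S-z)^{-1}U\), the analogue of (4.32).

```python
# Toy finite-dimensional check of the finite-rank determinant criterion.
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 2
S = np.diag(np.sort(rng.uniform(0.0, 1.0, n)))
U, _ = np.linalg.qr(rng.standard_normal((n, m)))       # orthonormal columns
D = np.diag([1.0, 0.4])
B = U @ D @ U.T                                        # rank-two perturbation

for z in np.linalg.eigvalsh(S - B):
    if np.min(np.abs(np.diag(S) - z)) > 1e-9:          # z off sigma(S)
        G_S = U.T @ np.linalg.solve(S - z * np.eye(n), U)
        det = np.linalg.det(np.eye(m) - D @ G_S)
        print(f"z = {z:+.4f}   det(Id - D G_S) = {det:+.2e}")   # ~0 each time
```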
By the orthogonality of \(\mathbf{Q}_{\mathbf{k}}\), the volume element transforms as \(d\mathbf{v}=d\mathbf{w}\) and we can calculate \[\begin{split}&\det\left(\int_{\mathbb{R}^{3}}\mathbf{D}_{r}\mathbf{e}(\mathbf{v})\otimes\mathbf{e}(\mathbf{v})\frac{e^{-\frac{|\mathbf{v}|^{2}}{2}}}{\mathrm{i}\tau\mathbf{k}\cdot\mathbf{v}-z}\,\frac{d\mathbf{v}}{(2\pi)^{\frac{3}{2}}}-\mathrm{Id}\right)\\ &\quad=\det\left(\int_{\mathbb{R}^{3}}\left(\mathbf{D}_{r}\begin{pmatrix}1&0&0&0\\ 0&\mathbf{Q}_{\mathbf{k}}&0&0\\ 0&0&1&0\\ 0&0&0&\mathbf{Q}_{\mathbf{k}}\end{pmatrix}\mathbf{e}(\mathbf{w})\right)\otimes\begin{pmatrix}1&0&0&0\\ 0&\mathbf{Q}_{\mathbf{k}}&0&0\\ 0&0&1&0\\ 0&0&0&\mathbf{Q}_{\mathbf{k}}\end{pmatrix}\mathbf{e}(\mathbf{w})\frac{e^{-\frac{|\mathbf{w}|^{2}}{2}}}{\mathrm{i}\tau kw_{1}-z}\,\frac{d\mathbf{w}}{(2\pi)^{\frac{3}{2}}}-\mathrm{Id}\right)\\ &=\det\left(\int_{\mathbb{R}^{3}}\mathbf{D}_{r}\mathbf{e}(\mathbf{w})\otimes\mathbf{e}(\mathbf{w})\frac{e^{-\frac{|\mathbf{w}|^{2}}{2}}}{\mathrm{i}\tau kw_{1}-z}\,\frac{d\mathbf{w}}{(2\pi)^{\frac{3}{2}}}-\mathrm{Id}\right),\end{split} \tag{4.46}\] where we have used that \(\mathbf{D}_{r}\) only acts on the last three columns by multiplication with a constant and hence commutes with the block-rotation matrix, together with the orthogonality of \(\mathbf{Q}_{\mathbf{k}}\) in factoring the determinant. From equation (4.46), we see that the spectral function \(\Sigma_{\mathbf{k},\tau}\) depends on the wave vector \(\mathbf{k}\) only through \(\tau k\) and we define \[\kappa:=\tau k. \tag{4.47}\] A lengthy but elementary calculation allows us to integrate out the variables \(w_{2}\) and \(w_{3}\) in (4.46), which simplifies the spectral function according to \[\det\left(\int_{\mathbb{R}^{3}}\mathbf{D}_{r}\mathbf{e}(\mathbf{v})\otimes\mathbf{e}(\mathbf{v})\frac{e^{-\frac{|\mathbf{v}|^{2}}{2}}}{\mathrm{i}\tau\mathbf{k}\cdot\mathbf{v}-z}\,\frac{d\mathbf{v}}{(2\pi)^{\frac{3}{2}}}-\mathrm{Id}\right) =\det\left(\int_{\mathbb{R}}\mathbf{D}_{r}\mathbf{M}(w)\frac{e^{-\frac{w^{2}}{2}}}{\mathrm{i}\kappa w-z}\,\frac{dw}{\sqrt{2\pi}}-\mathrm{Id}\right)\] \[=\frac{1}{(\mathrm{i}\kappa)^{8}}\det\left(\int_{\mathbb{R}}\mathbf{D}_{r}\mathbf{M}(w)\frac{e^{-\frac{w^{2}}{2}}}{w-\zeta}\,\frac{dw}{\sqrt{2\pi}}-(\mathrm{i}\kappa)\mathrm{Id}\right)_{\zeta=\frac{z}{\mathrm{i}\kappa}}, \tag{4.48}\] where \[\mathbf{M}(w)=\begin{pmatrix}\mathbf{M}_{11}(w)&\mathbf{M}_{12}(w)\\ \mathbf{M}_{12}^{T}(w)&\mathbf{M}_{22}(w)\end{pmatrix}, \tag{4.49}\] and the coefficient matrices are given by \[\mathbf{M}_{11}(w)=\begin{pmatrix}1&0&0&w\\ 0&1&0&0\\ 0&0&1&0\\ w&0&0&w^{2}\end{pmatrix},\quad\mathbf{M}_{12}(w)=\begin{pmatrix}\frac{w^{2}-1}{\sqrt{6}}&0&0&\frac{w\left(w^{2}-3\right)}{\sqrt{10}}\\ 0&\frac{\left(w^{2}-1\right)}{\sqrt{10}}&0&0\\ 0&0&\frac{\left(w^{2}-1\right)}{\sqrt{10}}&0\\ \frac{w\left(w^{2}-1\right)}{\sqrt{6}}&0&0&\frac{w^{2}\left(w^{2}-3\right)}{\sqrt{10}}\end{pmatrix},\] \[\mathbf{M}_{22}(w)=\begin{pmatrix}\frac{1}{6}\left(w^{4}-2w^{2}+5\right)&0&0&\frac{w\left(w^{4}-4w^{2}+7\right)}{2\sqrt{15}}\\ 0&\frac{1}{10}\left(w^{4}-2w^{2}+9\right)&0&0\\ 0&0&\frac{1}{10}\left(w^{4}-2w^{2}+9\right)&0\\ \frac{w\left(w^{4}-4w^{2}+7\right)}{2\sqrt{15}}&0&0&\frac{1}{10}w^{2}\left(w^{4}-6w^{2}+13\right)\end{pmatrix}. \tag{4.50}\] Since the entries of \(\mathbf{M}\) are polynomials of degree at most six, we can expand \[\mathbf{M}(w)=\sum_{n=0}^{6}\mathbf{M}_{n}w^{n}, \tag{4.51}\] for some matrix coefficients \(\mathbf{M}_{n}\in\mathbb{R}^{8\times 8}\).
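The reduction (4.48)-(4.50) can be checked symbolically by integrating out the transverse velocities. A sympy sketch (a consistency check, assuming the basis ordering \((1,w_{2},w_{3},w_{1},\dots)\) implicit in (4.50); any other ordering only permutes rows and columns and leaves the determinant unchanged; the integration may take a little while):

```python
import sympy as sp

w1, w2, w3 = sp.symbols('w1 w2 w3', real=True)
v2 = w1**2 + w2**2 + w3**2
# basis e = (1, v, (|v|^2-3)/sqrt(6), v(|v|^2-5)/sqrt(10)),
# with the velocity components ordered (w2, w3, w1)
e = sp.Matrix([1, w2, w3, w1, (v2 - 3)/sp.sqrt(6),
               w2*(v2 - 5)/sp.sqrt(10), w3*(v2 - 5)/sp.sqrt(10),
               w1*(v2 - 5)/sp.sqrt(10)])
weight = sp.exp(-(w2**2 + w3**2)/2)/(2*sp.pi)
M = sp.Matrix(8, 8, lambda i, j: sp.simplify(
    sp.integrate(e[i]*e[j]*weight, (w2, -sp.oo, sp.oo), (w3, -sp.oo, sp.oo))))
# coefficient matrices of M(w) = sum_n M_n w^n, cf. (4.51)
M_n = [M.applyfunc(lambda p: sp.expand(p).coeff(w1, n)) for n in range(7)]
```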
To evaluate the integral expression on the right-hand side of (4.48), we will use the plasma dispersion function (2.12). To this end, we have to calculate expressions for the derivatives of \(Z\). By repeated application of (2.13), we conclude that \[\frac{d^{n}Z}{d\zeta^{n}}(\zeta)=p_{n}(\zeta)+q_{n}(\zeta)Z(\zeta),\qquad n\geq 1 \tag{4.52}\] for some polynomials \(p_{n},q_{n}\) with integer coefficients. The first few derivatives of \(Z\) can be expressed as \[\begin{split}\frac{dZ}{d\zeta}&=-1-\zeta Z,\\ \frac{d^{2}Z}{d\zeta^{2}}&=\zeta+(\zeta^{2}-1)Z,\\ \frac{d^{3}Z}{d\zeta^{3}}&=2-\zeta^{2}+(3\zeta-\zeta^{3})Z,\\ \frac{d^{4}Z}{d\zeta^{4}}&=-5\zeta+\zeta^{3}+(\zeta^{4}-6\zeta^{2}+3)Z.\end{split} \tag{4.53}\] We claim that \(q_{n}(\zeta)=(-1)^{n}H_{n}(\zeta)\) for the \(n^{th}\) Hermite polynomial. Indeed, from the recurrence relation of \(Z\) in (2.13), we have that \[\begin{split}p_{n+1}+q_{n+1}Z&=\frac{d^{n+1}Z}{d\zeta^{n+1}}=\frac{d}{d\zeta}(p_{n}+q_{n}Z)=p_{n}^{\prime}+q_{n}^{\prime}Z+q_{n}Z^{\prime}\\ &=p_{n}^{\prime}+q_{n}^{\prime}Z+q_{n}(-\zeta Z-1)=(p_{n}^{\prime}-q_{n})+(q_{n}^{\prime}-\zeta q_{n})Z,\end{split} \tag{4.54}\] showing that \(q_{n+1}=q_{n}^{\prime}-\zeta q_{n}\), which is - up to a sign flip - the recurrence relation of the Hermite polynomials (2.11). With the relation \[\begin{split}\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}H_{k}(v)\frac{e^{-\frac{v^{2}}{2}}}{v-\zeta}\,dv&=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\left[\left(-\frac{d}{dv}\right)^{k}e^{-\frac{v^{2}}{2}}\right]\frac{dv}{v-\zeta}=\frac{(-1)^{k}k!}{\sqrt{2\pi}}\int_{\mathbb{R}}e^{-\frac{v^{2}}{2}}\frac{dv}{(v-\zeta)^{k+1}}\\ &=\frac{(-1)^{k}}{\sqrt{2\pi}}\,\frac{d^{k}}{d\zeta^{k}}\int_{\mathbb{R}}e^{-\frac{v^{2}}{2}}\,\frac{dv}{v-\zeta}=(-1)^{k}\frac{d^{k}Z}{d\zeta^{k}},\end{split} \tag{4.55}\] we can now conclude that for any polynomial expanded in Hermite basis \(P(w)=\sum_{n=0}^{N}P_{n}H_{n}(w)\): \[\begin{split}\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}P(w)\frac{e^{-\frac{w^{2}}{2}}}{w-\zeta}\,dw&=\sum_{n=0}^{N}P_{n}\,\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}H_{n}(w)\frac{e^{-\frac{w^{2}}{2}}}{w-\zeta}\,dw=\sum_{n=0}^{N}P_{n}(-1)^{n}\frac{d^{n}Z}{d\zeta^{n}}(\zeta)\\ &=\sum_{n=0}^{N}P_{n}(-1)^{n}(p_{n}(\zeta)+(-1)^{n}H_{n}(\zeta)Z(\zeta))\\ &=P(\zeta)Z(\zeta)+\sum_{n=0}^{N}P_{n}(-1)^{n}p_{n}(\zeta).\end{split} \tag{4.56}\] With the help of the plasma dispersion function \(Z\) and the insight (4.56), we readily calculate \[\begin{split}\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\mathbf{M}(w)\frac{e^{-\frac{w^{2}}{2}}}{w-\zeta}\,dw-\mathbf{M}(\zeta)Z(\zeta)&=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}[\mathbf{M}(w)-\mathbf{M}(\zeta)]\frac{e^{-\frac{w^{2}}{2}}}{w-\zeta}\,dw\\ &=\sum_{n=0}^{6}\mathbf{M}_{n}\,\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\frac{w^{n}-\zeta^{n}}{w-\zeta}e^{-\frac{w^{2}}{2}}\,dw\\ &=\sum_{n=0}^{6}\sum_{j=0}^{n-1}\mathbf{M}_{n}\left(\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}w^{n-j-1}e^{-\frac{w^{2}}{2}}\,dw\right)\zeta^{j}.\end{split} \tag{4.57}\] Using (4.57), we can reduce the integral expression in (4.48) to an explicit matrix whose entries are polynomial in \(\zeta\) and linear in \(Z(\zeta)\).
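The identity (4.55) is straightforward to verify numerically. A Python sketch, assuming \(\Im\zeta>0\) so that the Faddeeva-function representation \(Z(\zeta)=\mathrm{i}\sqrt{\pi/2}\,w(\zeta/\sqrt{2})\) of Appendix B applies; the derivatives of \(Z\) are generated by the recurrence \(Z^{(n+1)}=-\zeta Z^{(n)}-nZ^{(n-1)}\), obtained by differentiating (2.13) repeatedly:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import wofz, eval_hermitenorm

def Z(zeta):
    """Plasma dispersion function (2.12) for Im(zeta) > 0."""
    return 1j*np.sqrt(np.pi/2)*wofz(zeta/np.sqrt(2))

def Z_derivatives(zeta, N):
    """Z, Z', ..., Z^(N) via Z' = -1 - zeta*Z and
    Z^(n+1) = -zeta*Z^(n) - n*Z^(n-1) for n >= 1."""
    d = [Z(zeta), -1 - zeta*Z(zeta)]
    for n in range(1, N):
        d.append(-zeta*d[n] - n*d[n - 1])
    return d

zeta = 0.7 + 0.9j
w = np.linspace(-30, 30, 400001)
dZ = Z_derivatives(zeta, 5)
for n in range(6):   # probabilists' Hermite polynomials H_n
    lhs = trapezoid(eval_hermitenorm(n, w)*np.exp(-w**2/2)/(w - zeta), w)/np.sqrt(2*np.pi)
    print(n, abs(lhs - (-1)**n*dZ[n]))   # small, at quadrature accuracy
```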
Indeed, with the help of (4.57), we have that \[\det\left(\int_{\mathbb{R}}\mathbf{D}_{r}\mathbf{M}(w)\frac{e^{-\frac{w^{2}}{2}}}{w-\zeta}\,\frac{dw}{\sqrt{2\pi}}-(\mathrm{i}\kappa)\mathrm{Id}\right)_{\zeta=\frac{z}{\mathrm{i}\kappa}}=\det\left(\mathbf{D}_{r}\mathbf{N}(\zeta)-(\mathrm{i}\kappa)\mathrm{Id}\right), \tag{4.58}\] for the matrix \[\mathbf{N}(\zeta)=\begin{pmatrix}\mathbf{N}_{11}(\zeta)&\mathbf{N}_{12}(\zeta)\\ \mathbf{N}_{12}^{T}(\zeta)&\mathbf{N}_{22}(\zeta)\end{pmatrix}, \tag{4.59}\] and the coefficient matrices are given by \[\mathbf{N}_{11}(\zeta) =\begin{pmatrix}Z(\zeta)&0&0&\zeta Z(\zeta)+1\\ 0&Z(\zeta)&0&0\\ 0&0&Z(\zeta)&0\\ \zeta Z(\zeta)+1&0&0&\zeta+\zeta^{2}Z(\zeta)\end{pmatrix},\] \[\mathbf{N}_{12}(\zeta) =\begin{pmatrix}\frac{\zeta}{\sqrt{6}}+\frac{\left(\zeta^{2}-1\right)Z(\zeta)}{\sqrt{6}}&0&0&\frac{\left(\zeta^{2}-2\right)}{\sqrt{10}}+\frac{\zeta\left(\zeta^{2}-3\right)Z(\zeta)}{\sqrt{10}}\\ 0&\frac{\zeta}{\sqrt{10}}+\frac{\left(\zeta^{2}-1\right)Z(\zeta)}{\sqrt{10}}&0&0\\ 0&0&\frac{\zeta}{\sqrt{10}}+\frac{\left(\zeta^{2}-1\right)Z(\zeta)}{\sqrt{10}}&0\\ \frac{\zeta^{2}}{\sqrt{6}}+\frac{\left(\zeta^{2}-1\right)\zeta Z(\zeta)}{\sqrt{6}}&0&0&\frac{\left(\zeta^{2}-2\right)\zeta}{\sqrt{10}}+\frac{\left(\zeta^{2}-3\right)\zeta^{2}Z(\zeta)}{\sqrt{10}}\end{pmatrix},\] \[\mathbf{N}_{22}(\zeta) =\begin{pmatrix}N_{22,1}&0&0&\frac{\left(\zeta^{4}-3\zeta^{2}+6\right)}{2\sqrt{15}}+\frac{\zeta\left(\zeta^{4}-4\zeta^{2}+7\right)Z(\zeta)}{2\sqrt{15}}\\ 0&N_{22,2}&0&0\\ 0&0&N_{22,3}&0\\ \frac{\zeta^{4}-3\zeta^{2}+6}{2\sqrt{15}}+\frac{\zeta\left(\zeta^{4}-4\zeta^{2}+7\right)Z(\zeta)}{2\sqrt{15}}&0&0&N_{22,4}\end{pmatrix},\] \[N_{22,1} =\frac{1}{6}\zeta\left(\zeta^{2}-1\right)+\frac{1}{6}\left(\zeta^{4}-2\zeta^{2}+5\right)Z(\zeta),\] \[N_{22,2} =\frac{1}{10}\zeta\left(\zeta^{2}-1\right)+\frac{1}{10}\left(\zeta^{4}-2\zeta^{2}+9\right)Z(\zeta),\] \[N_{22,3} =\frac{1}{10}\zeta\left(\zeta^{2}-1\right)+\frac{1}{10}\left(\zeta^{4}-2\zeta^{2}+9\right)Z(\zeta),\] \[N_{22,4} =\frac{1}{10}\left(\zeta^{4}-5\zeta^{2}+10\right)\zeta+\frac{1}{10}\left(\zeta^{4}-6\zeta^{2}+13\right)\zeta^{2}Z(\zeta).
\tag{4.60}\] Using the change of coordinates \[\zeta=\frac{-\tau\lambda-1}{\mathrm{i}\kappa}=\mathrm{i}\frac{\tau\lambda+1}{ \kappa}, \tag{4.61}\] the spectral function \(\Sigma_{\mathbf{k},\tau}\) takes the explicit form \[\Sigma_{\mathbf{k},\tau}(\lambda)=\frac{1}{3000(\mathrm{i}\kappa)^{8}}\Big{[} \Sigma_{0}(\zeta)+\Sigma_{1}(\zeta)Z(\zeta)+\Sigma_{2}(\zeta)Z(\zeta)^{2} \Big{]}^{2}\Big{[}\Sigma_{3}(\zeta)+\Sigma_{4}(\zeta)Z(\zeta)+\Sigma_{5}( \zeta)Z^{2}(\zeta)\Big{]}_{\zeta=\mathrm{i}\frac{\tau\lambda+1}{\kappa}}, \tag{4.62}\] for the polynomials \[\Sigma_{0}(\zeta) =10\kappa^{2}+\zeta r\left(\mathrm{i}\zeta^{2}\kappa+\zeta-\mathrm{ i}\kappa\right),\] \[\Sigma_{1}(\zeta) =10\mathrm{i}\kappa+r\left(\mathrm{i}\zeta^{4}\kappa+\zeta^{3}-2 \mathrm{i}\zeta^{2}\kappa-\zeta+9\mathrm{i}\kappa\right),\] \[\Sigma_{2}(\zeta) =-8r,\] \[\Sigma_{3}(\zeta) =5\kappa\left(\mathrm{i}\zeta^{3}\kappa^{2}+2\zeta^{2}\kappa+ \mathrm{i}\zeta\left(5\kappa^{2}-1\right)+6\left(\kappa^{3}+\kappa\right)\right)\] \[\qquad+r\left(3\mathrm{i}\zeta^{5}\kappa^{3}+9\zeta^{4}\kappa^{2 }-3\mathrm{i}\zeta^{3}\kappa\left(5\kappa^{2}+3\right)-\zeta^{2}\left(43 \kappa^{2}+3\right)+2\mathrm{i}\zeta\kappa\left(15\kappa^{2}+23\right)+6 \left(5\kappa^{2}+3\right)\right),\] \[\Sigma_{4}(\zeta) =\mathrm{i}\left(5\kappa\left(\zeta^{4}\kappa^{2}-2\mathrm{i} \zeta^{3}\kappa+\zeta^{2}\left(4\kappa^{2}-1\right)+11\kappa^{2}+5\right)+r \left(3\zeta^{6}\kappa^{3}-9\mathrm{i}\zeta^{5}\kappa^{2}-9\zeta^{4}\left(2 \kappa^{3}+\kappa\right)\right.\right.\] \[\qquad+\left.\left.\left.\mathrm{i}\zeta^{3}\left(64\kappa^{2}+3 \right)+\zeta^{2}\kappa\left(39\kappa^{2}+79\right)-\mathrm{i}\zeta\left(23 \kappa^{2}+33\right)+16\kappa\right)\right),\] \[\Sigma_{5}(\zeta) =-4\left(5\kappa\left(\zeta^{2}\kappa-\mathrm{i}\zeta+\kappa \right)+\zeta r\left(3\zeta^{3}\kappa^{2}-6i\zeta^{2}\kappa+5\zeta\kappa^{2}-3 \zeta-5\mathrm{i}\kappa\right)\right). \tag{4.63}\] Equation (4.62) shows that the spectral function factors into two parts, which we denote as \[\Sigma_{\mathrm{shear}}(\zeta) :=\frac{1}{10\kappa^{2}}[\Sigma_{0}(\zeta)+\Sigma_{1}(\zeta)Z( \zeta)+\Sigma_{2}(\zeta)Z^{2}(\zeta)],\] \[\Sigma_{\mathrm{diff,ac}}(\zeta) :=\frac{1}{30\kappa^{4}}[\Sigma_{3}(\zeta)+\Sigma_{4}(\zeta)Z( \zeta)+\Sigma_{5}(\zeta)Z^{2}(\zeta)]. \tag{4.64}\] Let us conclude this section with some remarks. The main result of the preceding calculations - the derivation of the spectral function (4.62) with coefficient polynomials and factorization (4.64) - allows us to conclude specific features of the spectrum by solving for the zeros of a holomorphic function. This is a tremendous simplification compared to the study of (4.13) directly, where the transport and the collision term interact in a delicate manner. This is as close an analog of a determinant in finite-dimensional systems as one could hope for. In the following section, we will take a closer look at the structure of the zero set of (4.62). In particular, we will identify different families of zeros (branches) that relate to hydrodynamics. ### Hydrodynamic Modes In this section, we will identify the branches of the zero set of the spectral function \(\Sigma_{\mathbf{k},\tau}\) in dependence on the modified wave number \(\kappa\) and the parameter \(r\). First, let us show that in the limit \(\kappa\to 0\) (for fixed \(\tau\)), we recover the spectral structure of (4.27). 
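Since \(\Sigma_{0},\Sigma_{1},\Sigma_{2}\) are short polynomials, the zeros of \(\Sigma_{\mathrm{shear}}\) can be located directly. A minimal Python sketch (parameter values and the seed eigenvalue are illustrative; note that \(\Im\zeta>0\) throughout the strip \(-\frac{1}{\tau}<\Re\lambda\), so the Faddeeva representation of \(Z\) from Appendix B applies):

```python
import numpy as np
from scipy.special import wofz

def Z(zeta):                      # plasma dispersion function, Im(zeta) > 0
    return 1j*np.sqrt(np.pi/2)*wofz(zeta/np.sqrt(2))

def sigma_shear(zeta, kappa, r):  # first factor of (4.62), cf. (4.63)-(4.64)
    s0 = 10*kappa**2 + zeta*r*(1j*zeta**2*kappa + zeta - 1j*kappa)
    s1 = 10j*kappa + r*(1j*zeta**4*kappa + zeta**3 - 2j*zeta**2*kappa
                        - zeta + 9j*kappa)
    s2 = -8*r
    return (s0 + s1*Z(zeta) + s2*Z(zeta)**2)/(10*kappa**2)

def newton(f, z, h=1e-7, tol=1e-12):
    """Newton iteration with a central-difference derivative."""
    for _ in range(100):
        step = f(z)*2*h/(f(z + h) - f(z - h))
        z -= step
        if abs(step) < tol:
            break
    return z

tau, r, k = 0.5, 0.6, 0.5
kappa = tau*k                            # cf. (4.47)
zeta0 = 1j*(tau*(-0.1) + 1)/kappa        # seed from a guessed lambda = -0.1
zeta = newton(lambda x: sigma_shear(x, kappa, r), zeta0)
lam = (-1j*kappa*zeta - 1)/tau           # invert (4.61)
```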
To this end, we use the asymptotic expansion \[Z(\zeta)\sim-\sum_{n=0}^{\infty}\frac{(2n-1)!!}{\zeta^{2n+1}},\quad\text{ for }|\mathrm{arg}(\zeta)|\leq\frac{\pi}{2}-\delta,\quad\zeta\to\infty, \tag{4.65}\] under the assumption that \(\Im\zeta>0\). For a proof of (4.65), we refer to the Appendix. The limit \(\kappa\to 0\) with \(\Re\lambda<0\) satisfies the assumptions of (4.65). A lengthy calculation shows that, indeed, \[\Sigma_{\mathbf{k},\tau}(\lambda)\sim\frac{\lambda^{5}\tau^{5}(\lambda\tau+1-r)^{3}}{(\lambda\tau+1)^{8}},\quad\text{ for }\kappa\to 0\text{ and }\Re\lambda<0, \tag{4.66}\] which is consistent with (4.27). Since, for small \(\kappa\), the spectral function \(\Sigma_{\mathbf{k},\tau}\) is continuous in \(\kappa\), it follows that there exist five branches of eigenvalues (indexed by \(\kappa\)) emerging out of the five-fold degenerate zero \(\lambda=0\), which we call _primary hydrodynamic modes_. Furthermore, there exist three branches of eigenvalues emerging from the zero \(\lambda=-\frac{1}{\tau_{\mathrm{slow}}}\) which we call _secondary hydrodynamic modes_. From the factorization in (4.62) and (4.64), we see that any zero of \(\Sigma_{\mathrm{shear}}\) will occur with algebraic multiplicity of two. For small wave number, the primary hydrodynamic modes consist of a pair of complex-conjugate eigenvalues, the _primary acoustic modes_, a real eigenvalue of algebraic multiplicity two (but geometric multiplicity one), called _primary shear mode_, and an algebraically and geometrically simple, real eigenvalue, called _primary diffusion mode_. Similarly, for small wave number, the secondary hydrodynamic modes consist of a real eigenvalue of algebraic multiplicity two (but geometric multiplicity one), called _secondary shear mode_, and an algebraically and geometrically simple, real eigenvalue, called _secondary diffusion mode_, see Figure 4.1. For larger wave numbers, depending on \(Pr\), the primary and secondary diffusion modes may collide and produce another pair of complex-conjugate eigenvalues, called _secondary acoustic modes_ or _second sound_, see Figure 4.1. This occurs through a saddle-node bifurcation, see Section 4.5. We denote the families of modes indexed by the wave number as \[\kappa\mapsto\lambda_{N}(\kappa),\quad\text{ for }N\in\text{Modes}(\kappa), \tag{4.67}\] where the low wave-number set of modes is given by \[\text{Modes}_{1}=\{\text{shear}_{1},\text{diff}_{1},\text{ac}_{1},\text{ac}_{1}*,\text{shear}_{2},\text{diff}_{2}\}, \tag{4.68}\] while the higher wave-number set of modes is given by \[\text{Modes}_{2}=\{\text{shear}_{1},\text{ac}_{1},\text{ac}_{1}*,\text{shear}_{2},\text{ac}_{2},\text{ac}_{2}*\}. \tag{4.69}\] As mentioned before, in a certain range of Prandtl numbers, the set of modes can change from (4.68) to (4.69) if the wave number is increased. ### Existence of a Critical Wave Number and Finiteness of the Hydrodynamic Spectrum Along the same lines as in [36], we will show that for each family of modes, there exists a critical wave number \(k_{\text{crit}}\) such that \[\sigma_{\text{disc}}(\hat{\mathcal{L}}_{\mathbf{k}})=\emptyset,\quad\text{ for }|\mathbf{k}|>k_{\text{crit}}. \tag{4.70}\] In fact, each mode has its own critical wave number, which depends on the specific properties of the branch. Proof.: The claim follows from a combination of Rouché's theorem applied to the spectral function \(\Sigma_{\mathbf{k},\tau}\) with the asymptotic expansion (4.65).
Indeed, for fixed \(\kappa\), we find that \[\begin{split}\Sigma_{shear}(\zeta)&\sim\frac{1}{10\kappa^{2}}\left[\Sigma_{0}(\zeta)+\Sigma_{1}(\zeta)\left(-\frac{1}{\zeta}-\frac{1}{\zeta^{3}}+\mathcal{O}(\zeta^{-5})\right)+\Sigma_{2}(\zeta)\left(-\frac{1}{\zeta}-\frac{1}{\zeta^{3}}+\mathcal{O}(\zeta^{-5})\right)^{2}\right]\\ &\sim\frac{10\zeta^{3}\kappa\left(\zeta^{3}\kappa-\mathrm{i}\zeta^{2}-\mathrm{i}\right)+r\left(-7\mathrm{i}\zeta^{5}\kappa-7\zeta^{4}-9\mathrm{i}\zeta^{3}\kappa-16\zeta^{2}-8\right)}{10\kappa^{2}\zeta^{6}}\\ &\sim 1,\end{split} \tag{4.71}\] for \(|\arg(\zeta)|\leq\frac{\pi}{2}-\delta\), \(\zeta\to\infty\), for any real number \(0<\delta\leq\frac{\pi}{2}\). Analogously, we find that \[\begin{split}\Sigma_{diff,ac}(\zeta)&\sim\frac{1}{30\kappa^{4}}\left[\Sigma_{3}(\zeta)+\Sigma_{4}(\zeta)\left(-\frac{1}{\zeta}-\frac{1}{\zeta^{3}}+\mathcal{O}(\zeta^{-5})\right)+\Sigma_{5}(\zeta)\left(-\frac{1}{\zeta}-\frac{1}{\zeta^{3}}+\mathcal{O}(\zeta^{-5})\right)^{2}\right]\\ &\sim\frac{\mathrm{i}}{30\kappa^{4}\zeta^{10}}\Big{[}5\kappa\left(-6i\zeta^{10}\kappa^{3}-18\zeta^{9}\kappa^{2}+18i\zeta^{8}\kappa+\zeta^{7}\left(6-23\kappa^{2}\right)\right.\\ &\qquad\left.+36i\zeta^{6}\kappa+\zeta^{5}\left(13-33\kappa^{2}\right)+52i\zeta^{4}\kappa+24\zeta^{3}+60i\zeta^{2}\kappa+36\zeta+36i\kappa\right)\\ &\qquad+\zeta r\left(15\zeta^{8}\kappa^{3}-45i\zeta^{7}\kappa^{2}-9\zeta^{6}\kappa\left(13\kappa^{2}+5\right)+i\zeta^{5}\left(281\kappa^{2}+15\right)+236\zeta^{4}\kappa\right.\\ &\qquad\left.+12i\zeta^{3}\left(19\kappa^{2}-6\right)+336\zeta^{2}\kappa+36i\zeta\left(5\kappa^{2}-3\right)+180\kappa\right)\Big{]}\\ &\sim 1,\end{split} \tag{4.72}\] for \(|\arg(\zeta)|\leq\frac{\pi}{2}-\delta\), \(\zeta\to\infty\), for any real number \(0<\delta\leq\frac{\pi}{2}\). Since \(\Sigma_{|\mathbf{k}|,\tau}\) is an analytic function in the strip \(\{-\frac{1}{\tau}<\Re\lambda<0\}\) and continuously extends to the boundary, it follows from (4.71) and (4.72) that \(\lambda\mapsto|\Sigma_{|\mathbf{k}|,\tau}(\lambda)-1|\) is bounded and converges to zero for \(\lambda\to\infty\) with \(-\frac{1}{\tau}\leq\Re\lambda\leq 0\). Next, we observe that \(|\Sigma_{|\mathbf{k}|,\tau}(\lambda)-1|\) only contains terms in \(k\) of order \(\mathcal{O}(k^{-1})\). This shows that there exists a number \(k_{\mathrm{crit}}\) such that \[|\Sigma_{|\mathbf{k}|,\tau}(\lambda)-1|<1, \tag{4.73}\] for \(k>k_{\mathrm{crit}}\), uniformly on any rectangle of the form \(\mathbf{R}_{a}=\{-a,a,a+\mathrm{i}\frac{1}{\tau|\mathbf{k}|},-a+\mathrm{i}\frac{1}{\tau|\mathbf{k}|}\}\) for \(a>0\).

Figure 4.1. Argument plot of the spectral function (4.41) for relaxation time \(\tau=0.5\), Prandtl number \(Pr=0.4\) and different wave numbers \(k\). The zeros of (4.41) define eigenvalues of the linearized Shakhov model (points where a small, counter-clockwise loop runs through the whole rainbow at least once). All eigenvalues have negative real part and are located above the essential spectrum \(\{\Re\lambda=-\frac{1}{\tau}\}\) (solid black line), which is consistent with the decay estimates (4.19). At small wave numbers (\(k=0.4\) and \(k=0.5\)), the primary diffusion mode decreases along the real axis, while the secondary diffusion mode increases along the real axis. Around \(k\approx 0.6\) they collide, a bifurcation takes place and a pair of complex-conjugate modes, the secondary acoustics, is created (\(k=0.7\)).
Consequently, by Rouché's theorem, since the constant function \(1\) does not have any zeros, the spectral function \(\lambda\mapsto\Sigma_{|\mathbf{k}|,\tau}(\lambda)\) cannot have any zeros with \(-\frac{1}{\tau}\leq\Re\lambda\leq 0\) for \(k>k_{\mathrm{crit}}\) either. This proves the claim. _Remark 4.4_.: The critical wave number obtained before depends inversely on the (non-dimensional) relaxation parameter. Defining the typical length of a mean free path as \[L_{mfp}=\tau v_{thermal}, \tag{4.74}\] and transforming back to physical units, we see that the critical wave number is numerically proportional to the inverse of (4.74). Indeed, we obtain that \[k_{\mathrm{crit}}\sim\sqrt{\frac{k_{B}T_{0}}{m}}\frac{1}{\tau_{phys}}\sim\frac{1}{L_{mfp}}. \tag{4.75}\] _Remark 4.5_.: In fact, each family of modes has its own critical wave number \(k_{crit,N}\). Since branches can merge, these critical wave numbers need not be distinct. ### Global Bifurcation of Eigenvalues, Merging of Branches and Second Sound In this section, we discuss the phenomenon of branch merging already indicated by (4.68) and (4.69) in more detail. Throughout this section, we assume \(0\leq r\leq 1\) (for a discussion of \(r<0\) and the existence of ghost modes, we refer to the following section). In general, any zero of the spectral function can be expanded in a Puiseux-Newton series with appropriately chosen exponent, as already noted in, e.g., [37]. Namely, the shear modes defined by \(\Sigma_{\mathrm{shear}}\) can be expanded in a Puiseux-Newton series with exponent \(\frac{1}{4}\) (four-fold degeneracy for \(k=0\) and \(r<1\)). The other modes defined by \(\Sigma_{\mathrm{diff,ac}}\) can be expanded in a Puiseux-Newton series with exponent \(\frac{1}{4}\) as well (also a four-fold degeneracy for \(k=0\) and \(r<1\)). A lengthy (and cumbersome) expansion calculation shows, however, that each branch is actually analytic in \(k\) and we can therefore expand \[\lambda(k,r)=\sum_{n=0}^{\infty}\lambda_{n}(r)k^{n}, \tag{4.76}\] where \(\lambda_{0}\) and \(\lambda_{1}\) determine the branch of eigenvalues. This is consistent with the asymptotic expansions for the hydrodynamic branches derived in [14] for general kinetic equations and small wave number. Indeed, the instantaneous directional motion of an eigenvalue \(k\mapsto\lambda(k)\) is given by \[\frac{\partial\lambda}{\partial k}=-\frac{\partial_{k}\Sigma_{\mathbf{k},\tau}(\lambda)}{\partial_{\lambda}\Sigma_{\mathbf{k},\tau}(\lambda)}, \tag{4.77}\] whenever \(k\mapsto\lambda(k)\) is differentiable. To obtain the coefficients \(\lambda_{n}(r)\), we plug (4.76) into the spectral function (4.41) and compare powers of \(k\) (using the asymptotic expansion (B.16)): \[\Sigma_{|\mathbf{k}|,\tau}\left(\sum_{n=0}^{\infty}\lambda_{n}(r)k^{n}\right)=\sum_{n=0}^{\infty}G_{n}k^{n}. \tag{4.78}\] Then, we can solve the equations \(G_{n}=0\) for \(\lambda_{n}\) successively. A lengthy calculation shows that \[\lambda_{0}\in\left\{\frac{r-1}{\tau},0\right\},\quad r<1, \tag{4.79}\] which is consistent with (4.27). To ease notation, we define the two families of branches as \[\lambda_{n,fast}(r):=\lambda_{n}\left(r\Big{|}\lambda_{0}=\frac{r-1}{\tau}\right),\] \[\lambda_{n,slow}(r):=\lambda_{n}\left(r|\lambda_{0}=0\right). \tag{4.80}\] Expanding further reveals that \[\lambda_{1,fast}=0,\quad r<1, \tag{4.81}\] while \[\lambda_{1,slow}\in\left\{0,\pm\mathrm{i}\sqrt{\frac{5}{3}}\right\},\quad r<1. \tag{4.82}\]
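Relation (4.77) also suggests a simple numerical continuation of each branch: an Euler predictor along (4.77) followed by a Newton corrector in \(\lambda\). A sketch, assuming `Sigma(lam, k)` is a callable evaluating the spectral function (names and step sizes are illustrative):

```python
import numpy as np

def trace_branch(Sigma, lam0, ks, h=1e-6, tol=1e-12):
    """Follow k -> lambda(k) with Sigma(lambda(k), k) = 0 along the
    wave numbers ks, starting from a zero lam0 at ks[0]."""
    lams = [lam0]
    for k0, k1 in zip(ks[:-1], ks[1:]):
        lam = lams[-1]
        dS_dk = (Sigma(lam, k0 + h) - Sigma(lam, k0 - h))/(2*h)
        dS_dl = (Sigma(lam + h, k0) - Sigma(lam - h, k0))/(2*h)
        lam = lam - (dS_dk/dS_dl)*(k1 - k0)   # Euler predictor, cf. (4.77)
        for _ in range(50):                   # Newton corrector at k1
            step = Sigma(lam, k1)*2*h/(Sigma(lam + h, k1) - Sigma(lam - h, k1))
            lam -= step
            if abs(step) < tol:
                break
        lams.append(lam)
    return np.array(lams)
```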
Furthermore, we find that \[\lambda_{2,fast}(r)=\frac{(81r-56)\tau}{15(1-r)r}, \tag{4.83}\] which implies that the secondary diffusion mode initially moves to the right for \(\frac{56}{81}<r<1\), while it moves to the left for \(0<r<\frac{56}{81}\). This shows that, for \(r\) sufficiently close to one, the primary and secondary diffusion branches will inevitably collide and produce a pair of complex-conjugate zeros (secondary acoustics) through a saddle-node bifurcation. On the other hand, we see that for \(r\) close to \(0\), the secondary diffusion mode will travel to the left from the very beginning and will leave the domain \(-\frac{1}{\tau}<\Re\lambda<0\) before it gets the chance to collide with the primary diffusion branch, see Figure 4.3. Consequently, the phenomenon of branch merging and second sound does not occur. Indeed, in the limit \(r\to 0\), the Shakhov S-model reduces to the BGK model, for which no second sound exists. We summarize: if the Prandtl number is close to one, i.e., close to the dynamics of the three-dimensional linear BGK equation, there exists no second sound (clearly, there is no second sound for the BGK equation), while for Prandtl numbers close to zero, there will always be branch merging. _Remark 4.6_.: While the coefficient (4.83) governs the motion of the secondary diffusion mode for small wave numbers, the behavior for larger \(k\) might be different. Indeed, for a certain value of \(r\), the secondary diffusion mode might start out by moving to the left, but then turn to the right and collide with the primary diffusion mode anyway. The above considerations thus only imply that there is a certain range for which second sound exists and a certain range for which it does not exist. ### Spectral Properties for Prandtl number greater than one: existence of ghost modes In this section, we derive the behavior of the spectrum of (3.14) for \(r<0\) (or \(Pr>1\)). As already indicated by the estimate (4.20), it is only for \(r<0\) that potential eigenvalues are no longer guaranteed to lie above the essential spectrum. Figure 4.4 shows some typical argument plots of the spectral function for \(Pr=1.5\) (\(r=-0.5\)) with eigenvalues below the essential spectrum. Obviously, from the considerations around the spectrum of \(\hat{\mathcal{L}}_{0}\) in (4.27), we see that for \(Pr>1\) (which is equivalent to \(\tau_{\text{fast}}>\tau_{\text{slow}}\)), the eigenvalue \(-\frac{1}{\tau_{\text{slow}}}\) is indeed located below the essential spectrum \(\{\Re\lambda=-\frac{1}{\tau_{\text{fast}}}\}\): \[-\frac{1}{\tau_{\text{slow}}}<-\frac{1}{\tau_{\text{fast}}}. \tag{4.84}\]

Figure 4.2. The instantaneous second-order directional motion at \(k=0\) of the secondary diffusion mode in dependence of the parameter \(r\). For \(\frac{56}{81}<r<1\), the secondary diffusion mode moves towards zero until it collides with the primary diffusion mode (second sound). For \(0<r<\frac{56}{81}\), the diffusion mode moves towards the essential spectrum at \(k=0\) - even with a singularity at \(r=0\) (BGK equation). For \(r<0\), the secondary diffusion mode is a ghost mode (below the essential spectrum) and always moves towards it instantaneously.

Figure 4.3. Argument plot of the spectral function (4.41) for relaxation time \(\tau=0.5\), Prandtl number \(Pr=0.6\) and different wave numbers \(k\). The zeros of (4.41) define eigenvalues of the linearized Shakhov model (points where a small, counter-clockwise loop runs through the whole rainbow at least once).
All eigenvalues have negative real part and are located above the essential spectrum \(\{\Re\lambda=-\frac{1}{\tau}\}\) (solid black line), which is consistent with the decay estimates (4.19). Already at small wave numbers (\(k=0.4\) and \(k=0.5\)), the secondary diffusion and secondary shear mode are close to the essential spectrum (\(\{\Re\lambda=-2\}\)). For this Prandtl number, the secondary diffusion mode is smaller than the secondary shear mode. At larger wave numbers (\(k=0.5\) and \(k=0.6\)), these modes decrease. At \(k=0.7\), the secondary diffusion mode has already disappeared and no merging of branches can occur at this Prandtl number. From the instantaneous directional motion of the secondary diffusion mode (4.83), we see that \[\lambda_{2,fast}(r)=\frac{(81r-56)\tau}{15(1-r)r}>0,\quad r<0, \tag{4.85}\] which shows that the ghost (diffusion) mode increases with wave number until it is absorbed by the essential spectrum, see Figure 4.4. A similar argument holds true for the secondary shear modes. The existence of non-hydrodynamic modes below the essential spectrum suggests that Shakhov's model becomes unrealistic for \(\mathrm{Pr}>1\), and other kinetic models, such as the ES-BGK model, should be used instead for \(\mathrm{Pr}>1\). ## 5. Linear Hydrodynamic Manifolds We define a hydrodynamic manifold as the (unique) slow spectral submanifold associated to a set of eigenvectors. In the case of linear dynamics, the manifold itself is given by the invariant linear subspace spanned by the eigenvectors associated to a set of slow eigenvalues. In particular, the hydrodynamic manifold has the following properties: 1. It contains an appropriately scaled, spatially independent stationary distribution (e.g. a global Maxwellian) as a base solution. 2. The projection onto the hydrodynamic moments along the manifold provides a closure of the hydrodynamic moments (mass-density, velocity and temperature). 3. It attracts _all_ trajectories in the space of probability-density functions (which are close enough to the base solution) exponentially fast, thus acting as a slow manifold. 4. It is unique. We write the hydrodynamic manifold in frequency space as a superposition of eigenfunctions associated to each mode \[\hat{f}_{\mathrm{hydro}}(\mathbf{k},\mathbf{v},t)=\sum_{n\in\mathrm{Modes}(k)}\beta_{n}(t,\mathbf{k})\hat{f}_{n}^{eig}(\mathbf{k},\mathbf{v}), \tag{5.1}\] where the set of modes is given by \(\mathrm{Modes}=\{\mathrm{ac}_{1},\mathrm{ac}_{1}*,\mathrm{diff}_{1},\mathrm{shear}_{1},\mathrm{diff}_{2},\mathrm{shear}_{2}\}\) or \(\mathrm{Modes}=\{\mathrm{ac}_{1},\mathrm{ac}_{1}*,\mathrm{shear}_{1},\mathrm{ac}_{2},\mathrm{ac}_{2}*,\mathrm{shear}_{2}\}\) depending on \(k\) and \(r\). To avoid cluttering the notation, we omit the second elements in the Jordan blocks generated by the shear modes. The frequency-dependent eigenfunctions solve the equation \[-\mathrm{i}\mathbf{k}\cdot\mathbf{v}\hat{f}_{n}^{eig}-\frac{1}{\tau}\hat{f}_{n}^{eig}+\mathbb{B}_{8,r}\hat{f}_{n}^{eig}=\lambda_{n}\hat{f}_{n}^{eig}, \tag{5.2}\] which is solved by \[\hat{f}_{n}^{eig}(\mathbf{k},\mathbf{v})=\frac{\mathbf{e}(\mathbf{v})\cdot\boldsymbol{\alpha}_{n}}{\tau\mathrm{i}\mathbf{k}\cdot\mathbf{v}+1+\tau\lambda_{n}(k\tau)}, \tag{5.3}\] where the coefficient vector satisfies the self-consistency condition \[\boldsymbol{\alpha}_{n}=\langle\hat{f}_{n}^{eig},\mathbf{e}\rangle_{\mathbf{v}}.
\tag{5.4}\] Indeed, equation (5.4) is equivalent to \[\boldsymbol{\alpha}_{n}\in\ker(\mathbf{D}_{r}G_{S}-\mathrm{Id})_{z=-1-\tau\lambda_{n}}, \tag{5.5}\] for the matrix \[G_{S}(z)=\int_{\mathbb{R}^{3}}\mathbf{e}(\mathbf{v})\otimes\mathbf{e}(\mathbf{v})\frac{e^{-\frac{|\mathbf{v}|^{2}}{2}}}{\tau\mathrm{i}\mathbf{k}\cdot\mathbf{v}-z}\,\frac{d\mathbf{v}}{(2\pi)^{\frac{3}{2}}}. \tag{5.6}\]

Figure 4.4. Argument plot of the spectral function (4.41) for relaxation time \(\tau=0.5\), Prandtl number \(Pr=1.5\) (\(r=-0.5\)) and different wave numbers \(k\). The zeros of (4.41) define eigenvalues of the linearized Shakhov model (points where a small, counter-clockwise loop runs through the whole rainbow at least once). All eigenvalues have negative real part, but some of them are located below the essential spectrum \(\{\Re\lambda=-\frac{1}{\tau}\}\) (solid black line), which is consistent with the decay estimates (4.20). At small wave numbers, three ghost modes, namely a doubly degenerate shear mode and a diffusion mode, appear below the essential spectrum (\(k=0.4\)). As the wave number is increased, the ghost modes move closer towards the essential spectrum until the diffusion mode is absorbed (\(k=0.5\), \(k=0.8\)). Finally, also the ghost shear mode is absorbed (\(k=1.2\)) and the spectrum consists of primary hydrodynamic modes above the essential spectrum only (up to the critical wave number).

We define the hydrodynamic variables as \[\hat{\mathbf{h}}=\langle\hat{f}_{\mathrm{hydro}},\mathbf{e}\rangle_{\mathbf{v}}, \tag{5.7}\] which gives \[\hat{\mathbf{h}}=\langle\hat{f}_{\mathrm{hydro}},\mathbf{e}\rangle_{\mathbf{v}}=\sum_{n\in M}\beta_{n}\langle\hat{f}_{n}^{eig},\mathbf{e}\rangle_{\mathbf{v}}=\mathbf{A}\boldsymbol{\beta}, \tag{5.8}\] where, depending on \(r\) and \(k\), either \[\mathbf{A}=(\boldsymbol{\alpha}_{\mathrm{ac1}},\boldsymbol{\alpha}_{\mathrm{ac1}*},\boldsymbol{\alpha}_{\mathrm{diff1}},\boldsymbol{\alpha}_{\mathrm{shear1}},\boldsymbol{\alpha}_{\mathrm{diff2}},\boldsymbol{\alpha}_{\mathrm{shear2}}), \tag{5.9}\] or \[\mathbf{A}=(\boldsymbol{\alpha}_{\mathrm{ac1}},\boldsymbol{\alpha}_{\mathrm{ac1}*},\boldsymbol{\alpha}_{\mathrm{shear1}},\boldsymbol{\alpha}_{\mathrm{ac2}},\boldsymbol{\alpha}_{\mathrm{ac2}*},\boldsymbol{\alpha}_{\mathrm{shear2}}). \tag{5.10}\] The vector \(\boldsymbol{\beta}\) describes the evolution of the hydrodynamic variables in terms of the basis of eigenfunctions (spectral basis). The time-evolution of the hydrodynamics can then be written as \[\hat{\mathbf{h}}_{t}=\langle\partial_{t}\hat{f}_{\mathrm{hydro}},\mathbf{e}\rangle_{\mathbf{v}}=\sum_{n\in\mathrm{Modes}(k)}\langle\hat{f}_{n}^{eig},\mathbf{e}\rangle_{\mathbf{v}}\lambda_{n}\beta_{n}=\mathbf{A}\Lambda\boldsymbol{\beta}, \tag{5.11}\] or, solving for \(\boldsymbol{\beta}\) in (5.8): \[\hat{\mathbf{h}}_{t}=\mathbf{A}\Lambda\mathbf{A}^{-1}\hat{\mathbf{h}}. \tag{5.12}\] Equation (5.12) defines the exact hydrodynamics derived from the slow motion along the hydrodynamic manifold. _Remark 5.1_.: The evaluation of the right-hand side of (5.12) is by no means trivial and involves properties of the spectral projection. The exact, spectrally closed hydrodynamics together with its physical properties will be discussed in a forthcoming paper [35]. ## 6. Conclusion and Further Perspectives We performed a complete spectral analysis for the Shakhov model linearized around a global Maxwellian. The discrete eigenvalues above the essential spectrum \(\{\Re\lambda=-\frac{1}{\tau_{\mathrm{fast}}}\}\) are described as zeros of a spectral function at each wave number.
In this way, we identified families of modes (branches), depending on the wave number. For small wave numbers, the family of modes is given by \(\mathrm{Modes}=\{\mathrm{ac}_{1},\mathrm{ac}_{1}*,\mathrm{diff}_{1},\mathrm{diff}_{2},\mathrm{shear}_{1},\mathrm{shear}_{2}\}\), the pair of primary acoustic modes, the primary and secondary diffusion modes as well as the primary and secondary shear modes. Within a certain range of Prandtl numbers, a merging of branches can occur at a specific wave number and the modes \(\mathrm{diff}_{1}\) and \(\mathrm{diff}_{2}\) may collide, producing another pair of acoustic modes \(\mathrm{ac}_{2}\) and \(\mathrm{ac}_{2}*\) via a saddle-node bifurcation. This phenomenon is known as _second sound_. The approach presented in [34, 36] as well as in the current paper is general enough to infer spectral properties for any finitely-truncated collision operator, such as quasi-equilibrium approximations [24] or even Maxwell molecules [47]. The explicit knowledge and quantitative properties of the spectra identified for several kinetic model equations [34, 36] also allow us to move to the existence theory of non-linear hydrodynamic equations for various (finitely-truncated) kinetic models. Indeed, the fact that the discrete spectrum is well separated from the essential spectrum allows us to define a spectral projection for the _whole_ set of eigenvalues, thus giving the first-order approximation (in terms of nonlinear deformations) to the hydrodynamic manifolds. In particular, we expect that the theory of thermodynamic projectors [22] may be helpful in proving the nonlinear extension. The quantitative insights into the structure of the spectrum could also be used to derive simplified, but still non-local, approximate hydrodynamics. This could also improve present numerical methods [32]. ## Acknowledgement This work was supported by European Research Council (ERC) Advanced Grant 834763-PonD. Computational resources at the Swiss National Super Computing Center CSCS were provided under the grant s1066. ## Declaration of Interest The authors declare that there is no conflict of interest. ## Appendix A Linearization and Non-Dimensionalization of the Shakhov Equation Around a Global Maxwellian In this section we perform - for the sake of completeness - the linearization of the Shakhov model around a global Maxwellian.
To obtain the linearization of (3.1) around \(F_{0}^{eq}\), we write \(F=F_{0}^{eq}+\varepsilon f\) and calculate (A.1) \[\frac{d}{d\varepsilon}\bigg{|}_{\varepsilon=0}Q_{fs}[F_{0}^{eq}+\varepsilon f] =f-\left(\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}F^{eq}[F_{0}^{eq}+\varepsilon f]\right)\left(1+(1-Pr)\frac{\mathbf{q}[F_{0}^{eq}]\cdot(\mathbf{v}-\mathbf{u}[F_{0}^{eq}])}{Rp[F_{0}^{eq}]T[F_{0}^{eq}]}\left(\frac{|\mathbf{v}-\mathbf{u}[F_{0}^{eq}]|^{2}}{5RT[F_{0}^{eq}]}-1\right)\right)\] \[\quad-F^{eq}[F_{0}^{eq}]r\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\frac{\mathbf{q}[F_{0}^{eq}+\varepsilon f]\cdot(\mathbf{v}-\mathbf{u}[F_{0}^{eq}+\varepsilon f])}{Rp[F_{0}^{eq}+\varepsilon f]T[F_{0}^{eq}+\varepsilon f]}\left(\frac{|\mathbf{v}-\mathbf{u}[F_{0}^{eq}+\varepsilon f]|^{2}}{5RT[F_{0}^{eq}+\varepsilon f]}-1\right)\] Using the relations (A.2) \[n[F_{0}^{eq}]=n_{0},\quad\mathbf{u}[F_{0}^{eq}]=0,\quad\mathbf{q}[F_{0}^{eq}]=0,\quad p[F_{0}^{eq}]=mRn_{0}T_{0},\quad T[F_{0}^{eq}]=T_{0},\] which in particular imply that \(F^{eq}[F_{0}^{eq}]=F_{0}^{eq}\), we can reduce (A.1) to (A.3) \[\frac{d}{d\varepsilon}\bigg{|}_{\varepsilon=0}Q_{fs}[F_{0}^{eq}+\varepsilon f] =f-\left(\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}F^{eq}[F_{0}^{eq}+\varepsilon f]\right)\] \[\quad-rF_{0}^{eq}\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}\frac{\mathbf{q}[F_{0}^{eq}+\varepsilon f]\cdot(\mathbf{v}-\mathbf{u}[F_{0}^{eq}+\varepsilon f])}{Rp[F_{0}^{eq}+\varepsilon f]T[F_{0}^{eq}+\varepsilon f]}\left(\frac{|\mathbf{v}-\mathbf{u}[F_{0}^{eq}+\varepsilon f]|^{2}}{5RT[F_{0}^{eq}+\varepsilon f]}-1\right).\] Denoting by (A.4) \[\mathbf{m}_{n}=\int_{\mathbb{R}^{3}}f(\mathbf{x},\mathbf{v},t)\mathbf{v}^{\otimes n}\,d\mathbf{v},\] the moments of the perturbation density \(f\), the moments (3.8) transform according to (A.5) \[\mathbf{M}_{0} =n_{0}+\varepsilon\mathbf{m}_{0},\] \[\mathbf{M}_{1} =\varepsilon\mathbf{m}_{1},\] \[\mathbf{M}_{2} =Rn_{0}T_{0}\mathrm{Id}_{3\times 3}+\varepsilon\mathbf{m}_{2},\] \[\mathbf{M}_{3} =\varepsilon\mathbf{m}_{3},\] which in turn implies that (A.6) \[\begin{split}& n=n_{0}+\varepsilon\mathbf{m}_{0},\\ &\mathbf{u}=\frac{\varepsilon\mathbf{m}_{1}}{n_{0}+\varepsilon\mathbf{m}_{0}},\\ & p=\frac{m}{3}(3Rn_{0}T_{0}+\varepsilon\,\mathrm{trace}\,\mathbf{m}_{2})-\varepsilon^{2}\frac{m}{3}\frac{|\mathbf{m}_{1}|^{2}}{n_{0}+\varepsilon\mathbf{m}_{0}},\\ &\mathbf{q}=-\frac{3}{2}\left(\frac{m}{3}(3Rn_{0}T_{0}+\varepsilon\,\mathrm{trace}\,\mathbf{m}_{2})-\varepsilon^{2}\frac{m}{3}\frac{|\mathbf{m}_{1}|^{2}}{n_{0}+\varepsilon\mathbf{m}_{0}}\right)\frac{\varepsilon\mathbf{m}_{1}}{n_{0}+\varepsilon\mathbf{m}_{0}}\\ &\qquad+\varepsilon\frac{m}{2}\tilde{\mathbf{m}}_{3}+\frac{m}{2}\frac{\varepsilon^{3}|\mathbf{m}_{1}|^{2}}{(n_{0}+\varepsilon\mathbf{m}_{0})^{3}}\mathbf{m}_{1}-m(Rn_{0}T_{0}\mathrm{Id}_{3\times 3}+\varepsilon\mathbf{m}_{2})\frac{\varepsilon\mathbf{m}_{1}}{n_{0}+\varepsilon\mathbf{m}_{0}}.\end{split}\] Consequently, the \(\varepsilon\)-derivatives of the hydrodynamic moments become (A.7) \[\begin{split}&\frac{\partial n}{\partial\varepsilon}\bigg{|}_{\varepsilon=0}=\mathbf{m}_{0},\\ &\frac{\partial\mathbf{u}}{\partial\varepsilon}\bigg{|}_{\varepsilon=0}=\frac{\mathbf{m}_{1}}{n_{0}},\\ &\frac{\partial p}{\partial\varepsilon}\bigg{|}_{\varepsilon=0}=\frac{m}{3}\,\mathrm{trace}\,\mathbf{m}_{2},\\ &\frac{\partial\mathbf{q}}{\partial\varepsilon}\bigg{|}_{\varepsilon=0}=-\frac{3mRT_{0}}{2}\mathbf{m}_{1}+\frac{m}{2}\tilde{\mathbf{m}}_{3}-mRT_{0}\mathbf{m}_{1}=-\frac{5mRT_{0}}{2}\mathbf{m}_{1}+\frac{m}{2}\tilde{\mathbf{m}}_{3}.\end{split}\]
With (A.7), we can calculate (A.8) \[\begin{split}\frac{\partial}{\partial\varepsilon}\bigg{|}_{\varepsilon=0}F^{eq}[F^{eq}_{0}+\varepsilon f]&=(2\pi RT_{0})^{-\frac{3}{2}}\,e^{-\frac{|\mathbf{v}|^{2}}{2RT_{0}}}\left(\mathbf{m}_{0}-\frac{\mathrm{trace}\,\mathbf{m}_{2}-3RT_{0}\mathbf{m}_{0}}{2RT_{0}}\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\left.+\frac{\mathbf{m}_{1}\cdot\mathbf{v}}{RT_{0}}+\frac{\mathrm{trace}\,\mathbf{m}_{2}-3RT_{0}\mathbf{m}_{0}}{6}\frac{|\mathbf{v}|^{2}}{(RT_{0})^{2}}\right),\end{split}\] while (A.9) \[\begin{split}F^{eq}_{0}[F^{eq}_{0}]\left.\frac{d}{d\varepsilon}\right|_{\varepsilon=0}&\frac{\mathbf{q}[F^{eq}_{0}+\varepsilon f]\cdot(\mathbf{v}-\mathbf{u}[F^{eq}_{0}+\varepsilon f])}{Rp[F^{eq}_{0}+\varepsilon f]T[F^{eq}_{0}+\varepsilon f]}\left(\frac{|\mathbf{v}-\mathbf{u}[F^{eq}_{0}+\varepsilon f]|^{2}}{5RT[F^{eq}_{0}+\varepsilon f]}-1\right)\\ &=n_{0}\left(2\pi RT_{0}\right)^{-\frac{3}{2}}e^{-\frac{|\mathbf{v}|^{2}}{2RT_{0}}}\left(\frac{|\mathbf{v}|^{2}}{5RT_{0}}-1\right)\frac{1}{Rp_{0}T_{0}}\left.\frac{d\mathbf{q}[F^{eq}_{0}+\varepsilon f]}{d\varepsilon}\right|_{\varepsilon=0}\cdot\mathbf{v}\\ &=\left(2\pi RT_{0}\right)^{-\frac{3}{2}}e^{-\frac{|\mathbf{v}|^{2}}{2RT_{0}}}\left(\frac{|\mathbf{v}|^{2}}{5RT_{0}}-1\right)\frac{n_{0}}{Rp_{0}T_{0}}\left(-\frac{5mRT_{0}}{2}\mathbf{m}_{1}+\frac{m}{2}\tilde{\mathbf{m}}_{3}\right)\cdot\mathbf{v}.\end{split}\] Combining (A.8) with (A.9), equation (A.3) reads (A.10) \[\begin{split}\frac{d}{d\varepsilon}\bigg{|}_{\varepsilon=0}&Q_{fs}[F^{eq}_{0}+\varepsilon f]=f-(2\pi RT_{0})^{-\frac{3}{2}}e^{-\frac{|\mathbf{v}|^{2}}{2RT_{0}}}\left(\mathbf{m}_{0}-\frac{\mathrm{trace}\,\mathbf{m}_{2}-3RT_{0}\mathbf{m}_{0}}{2RT_{0}}+\frac{\mathbf{m}_{1}\cdot\mathbf{v}}{RT_{0}}\right.\\ &\qquad\qquad\qquad\qquad\left.+\frac{\mathrm{trace}\,\mathbf{m}_{2}-3RT_{0}\mathbf{m}_{0}}{6}\frac{|\mathbf{v}|^{2}}{(RT_{0})^{2}}+r\left(\frac{|\mathbf{v}|^{2}}{5RT_{0}}-1\right)\frac{1}{(RT_{0})^{2}}\left(-\frac{5RT_{0}}{2}\mathbf{m}_{1}+\frac{1}{2}\tilde{\mathbf{m}}_{3}\right)\cdot\mathbf{v}\right)\end{split}\] Defining the _thermal velocity_ as (A.11) \[v_{thermal}=\sqrt{RT_{0}},\] and re-scaling according to (A.12) \[\mathbf{v}\mapsto v_{thermal}\mathbf{v},\] implies that (A.13) \[\mathbf{m}_{n}\mapsto(RT_{0})^{\frac{3+n}{2}}\,\mathbf{m}_{n},\] which allows us to simplify (A.14) \[\begin{split}\frac{\partial}{\partial\varepsilon}\bigg{|}_{\varepsilon=0}Q_{fs}[F_{0}^{eq}+\varepsilon f]=f-(2\pi)^{-3/2}e^{-\frac{|\mathbf{v}|^{2}}{2}}\left(\frac{5\mathbf{m}_{0}-\mathrm{trace}\,\mathbf{m}_{2}}{2}+\mathbf{m}_{1}\cdot\mathbf{v}\right.\\ \left.+\frac{\mathrm{trace}\,\mathbf{m}_{2}-3\mathbf{m}_{0}}{6}|\mathbf{v}|^{2}{+}r\left(\frac{|\mathbf{v}|^{2}}{5}-1\right)\left(\frac{\tilde{\mathbf{m}}_{3}-5\mathbf{m}_{1}}{2}\right)\cdot\mathbf{v}\right)\\ =f-(2\pi)^{-3/2}e^{-\frac{|\mathbf{v}|^{2}}{2}}\left[\left(\frac{5-|\mathbf{v}|^{2}}{2}\right)\mathbf{m}_{0}+\left(1-\frac{5r}{2}\left(\frac{|\mathbf{v}|^{2}}{5}-1\right)\right)\mathbf{v}\cdot\mathbf{m}_{1}\right.\\ \left.+\left(\frac{|\mathbf{v}|^{2}{-}3}{6}\right)\mathrm{trace}\,\mathbf{m}_{2}+r\left(\frac{|\mathbf{v}|^{2}{-}5}{10}\right)\mathbf{v}\cdot\tilde{\mathbf{m}}_{3}\right]\end{split}\] Similarly, we re-scale (A.15) \[\mathbf{x}\mapsto L\mathbf{x},\] which implies that \(\mathbf{x}\in\mathbb{T}^{3}\) henceforth.
Defining the thermal time (A.16) \[t_{thermal}=L\sqrt{\frac{m}{k_{B}T_{0}}},\] we can re-scale and non-dimensionalize (A.17) \[t\mapsto tt_{thermal},\qquad\tau\mapsto\tau t_{thermal},\] and finally arrive at the linearized and non-dimensionalized Shakhov equation: (A.18) \[\begin{split}\frac{\partial f}{\partial t}=-\mathbf{v}\cdot\nabla_{\mathbf{x}}f-\frac{1}{\tau}f+\frac{1}{\tau}(2\pi)^{-3/2}e^{-\frac{|\mathbf{v}|^{2}}{2}}&\left[\left(\frac{5-|\mathbf{v}|^{2}}{2}\right)\mathbf{m}_{0}+\left(1-\frac{5r}{2}\left(\frac{|\mathbf{v}|^{2}{-}5}{5}\right)\right)\mathbf{v}\cdot\mathbf{m}_{1}\right.\\ &\left.+\left(\frac{|\mathbf{v}|^{2}{-}3}{6}\right)\mathrm{trace}\,\mathbf{m}_{2}+r\left(\frac{|\mathbf{v}|^{2}{-}5}{10}\right)\mathbf{v}\cdot\tilde{\mathbf{m}}_{3}\right].\end{split}\] ## Appendix B Properties of the Plasma Dispersion Function \(Z\) In the following, we collect some properties of the plasma dispersion function \(Z\), defined through the integral expression (2.12). In our presentation, we will closely follow the calculations performed in [36]. First, let us derive an expression of the integral (2.12) in terms of less exotic functions. To this end, we rely on the identities in [1, p.297]. Let (B.1) \[w(\zeta)=e^{-\zeta^{2}}(1-\mathrm{erf}(-\mathrm{i}\zeta)),\quad\zeta\in\mathbb{C},\] which satisfies the functional identity (B.2) \[w(-\zeta)=2e^{-\zeta^{2}}-w(\zeta),\quad\zeta\in\mathbb{C}.\] The function (B.1) is called the _Faddeeva function_ and is frequently encountered in problems related to kinetic equations [16]. We then have that (B.3) \[w(\zeta)=\frac{\mathrm{i}}{\pi}\int_{\mathbb{R}}\frac{e^{-s^{2}}}{\zeta-s}\,ds,\quad\Im\zeta>0,\] and, by relation (B.2), we have for \(\Im\zeta<0\): (B.4) \[\begin{split}\frac{\mathrm{i}}{\pi}\int_{\mathbb{R}}\frac{e^{-s^{2}}}{\zeta-s}\,ds&=-\frac{\mathrm{i}}{\pi}\int_{\mathbb{R}}\frac{e^{-s^{2}}}{(-\zeta)+s}\,ds\\ &=-\frac{\mathrm{i}}{\pi}\int_{\mathbb{R}}\frac{e^{-s^{2}}}{(-\zeta)-s}\,ds\\ &=-w(-\zeta)\\ &=e^{-\zeta^{2}}[-1-\operatorname{erf}(-\mathrm{i}\zeta)].\end{split}\] Consequently, we obtain (B.5) \[\begin{split}\int_{\mathbb{R}}\frac{1}{s-\zeta}e^{-\frac{s^{2}}{2}}\,ds&=\int_{\mathbb{R}}\frac{e^{-s^{2}}}{s-\frac{\zeta}{\sqrt{2}}}\,ds\\ &=\mathrm{i}\pi\frac{\mathrm{i}}{\pi}\int_{\mathbb{R}}\frac{e^{-s^{2}}}{\frac{\zeta}{\sqrt{2}}-s}\,ds\\ &=\begin{cases}\mathrm{i}\pi e^{-\frac{\zeta^{2}}{2}}\left[1-\operatorname{erf}\left(\frac{-\mathrm{i}\zeta}{\sqrt{2}}\right)\right],&\qquad\text{if }\Im\zeta>0,\\ \mathrm{i}\pi e^{-\frac{\zeta^{2}}{2}}\left[-1-\operatorname{erf}\left(\frac{-\mathrm{i}\zeta}{\sqrt{2}}\right)\right],&\qquad\text{if }\Im\zeta<0,\end{cases}\end{split}\] where in the first step, we have re-scaled \(s\mapsto\sqrt{2}s\) in the integral. Written more compactly, we arrive at (B.6) \[Z(\zeta)=\mathrm{i}\sqrt{\frac{\pi}{2}}e^{-\frac{\zeta^{2}}{2}}\left[\operatorname{sign}(\Im\zeta)-\operatorname{erf}\left(\frac{-\mathrm{i}\zeta}{\sqrt{2}}\right)\right],\quad\Im\zeta\neq 0.\] An argument plot together with a modulus-argument plot of \(Z\) is shown in Figure B.1. Clearly, \(Z\) is discontinuous across the real line (albeit \(Z|_{\mathbb{R}}\) exists in the sense of principal values as the Hilbert transform of a real Gaussian [13]).
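For numerical work, (B.6) and the Faddeeva representation (B.1)-(B.3) give two interchangeable ways of evaluating \(Z\). A Python sketch checking their consistency and the differential equation (B.10) derived below (the test point is arbitrary):

```python
import numpy as np
from scipy.special import erf, wofz

def Z(zeta):
    """Plasma dispersion function via (B.6)."""
    s = np.sign(zeta.imag)
    return 1j*np.sqrt(np.pi/2)*np.exp(-zeta**2/2)*(s - erf(-1j*zeta/np.sqrt(2)))

zeta = 1.3 + 0.4j
# Faddeeva form, valid for Im(zeta) > 0: Z = i*sqrt(pi/2)*w(zeta/sqrt(2))
assert np.isclose(Z(zeta), 1j*np.sqrt(np.pi/2)*wofz(zeta/np.sqrt(2)))
# ODE (B.10): dZ/dzeta = -zeta*Z - 1, checked by central differences
h = 1e-6
assert np.isclose((Z(zeta + h) - Z(zeta - h))/(2*h), -zeta*Z(zeta) - 1)
```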
The properties (B.7) \[\begin{split}|Z(\zeta)|&\leq\sqrt{\frac{\pi}{2}},\,\text{for }\zeta\in\mathbb{C}\setminus\mathbb{R},\\ 0&<\arg Z(\zeta)<\pi\text{ for }\Im(\zeta)>0,\\ -\pi&<\arg Z(\zeta)<0\text{ for }\Im(\zeta)<0,\end{split}\] are easy to show and can be read off from the plots in Figure B.1 directly as well. We also note that (B.8) \[\begin{split}\lim_{\zeta\to 0,\Im\zeta>0}Z(\zeta)=\mathrm{i}\sqrt{\frac{\pi}{2}},\\ \lim_{\zeta\to 0,\Im\zeta<0}Z(\zeta)=-\mathrm{i}\sqrt{\frac{\pi}{2}},\end{split}\] as can be seen from (B.6). The function (B.6) satisfies an ordinary differential equation (in the sense of complex analytic functions) on the upper and on the lower half-plane. Indeed, integrating (2.12) by parts gives (B.9) \[\begin{split}1&=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}(v-\zeta)\frac{e^{-\frac{v^{2}}{2}}}{v-\zeta}\,dv=-\zeta Z+\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}v\frac{e^{-\frac{v^{2}}{2}}}{v-\zeta}\,dv\\ &=-\zeta Z-\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\frac{e^{-\frac{v^{2}}{2}}}{(v-\zeta)^{2}}\,dv=-\zeta Z-\frac{d}{d\zeta}Z,\end{split}\] which implies that \(Z\) satisfies the differential equation (B.10) \[\frac{d}{d\zeta}Z=-\zeta Z-1,\] for \(\zeta\in\mathbb{C}\setminus\mathbb{R}\). Formula (B.10) can also be used as a recurrence relation for the higher derivatives of \(Z\). Since we will be interested in the function (B.6) for \(\Im\zeta\) positive and negative as global functions, we define (B.11) \[\begin{split}Z_{+}(\zeta)&=\mathrm{i}\sqrt{\frac{\pi}{2}}e^{-\frac{\zeta^{2}}{2}}\left[1-\mathrm{erf}\left(\frac{-\mathrm{i}\zeta}{\sqrt{2}}\right)\right],\\ Z_{-}(\zeta)&=\mathrm{i}\sqrt{\frac{\pi}{2}}e^{-\frac{\zeta^{2}}{2}}\left[-1-\mathrm{erf}\left(\frac{-\mathrm{i}\zeta}{\sqrt{2}}\right)\right],\end{split}\] for all \(\zeta\in\mathbb{C}\). Both functions can be extended to analytic functions on the whole complex plane via analytic continuation. Recall that the error function has the properties (B.12) \[\mathrm{erf}(-\zeta)=-\mathrm{erf}(\zeta),\qquad\mathrm{erf}(\zeta^{*})=\mathrm{erf}(\zeta)^{*},\] for all \(\zeta\in\mathbb{C}\), which implies that for \(x\in\mathbb{R}\), (B.13) \[\mathrm{erf}(\mathrm{i}x)=-\mathrm{erf}(-\mathrm{i}x)=-\mathrm{erf}(\mathrm{i}x)^{*},\] i.e., the error function maps imaginary numbers to imaginary numbers.

Figure B.1. Complex plots of the function \(Z\).

Defining the _imaginary error function_, (B.14) \[\operatorname{erfi}(\zeta):=-\mathrm{i}\,\mathrm{erf}(\mathrm{i}\zeta),\] for \(\zeta\in\mathbb{C}\), which, by (B.13), satisfies \(\operatorname{erfi}|_{\mathbb{R}}\subset\mathbb{R}\), it follows that for \(x\in\mathbb{R}\): (B.15) \[\Re Z_{+}(x)=-\sqrt{\frac{\pi}{2}}e^{-\frac{x^{2}}{2}}\operatorname{erfi}\left(\frac{x}{\sqrt{2}}\right),\quad\Im Z_{+}(x)=\sqrt{\frac{\pi}{2}}e^{-\frac{x^{2}}{2}},\] and similarly for \(Z_{-}(x)\). Next, let us prove the following asymptotic expansion of \(Z_{+}\): (B.16) \[Z_{+}(\zeta)\sim-\sum_{n=0}^{\infty}\frac{(2n-1)!!}{\zeta^{2n+1}},\qquad\text{ for }|\mathrm{arg}(\zeta)|\leq\frac{\pi}{2}-\delta,\qquad\zeta\to\infty,\] for any \(0<\delta\leq\frac{\pi}{2}\), see also [31]. The proof will be based on a generalized version of Watson's Lemma [50]. To this end, let us define the Laplace transform (B.17) \[\mathcal{L}[f](\zeta)=\int_{0}^{\infty}f(x)e^{-\zeta x}\,dx,\quad\zeta\in\mathbb{C},\] of an integrable function \(f:[0,\infty)\to\mathbb{C}\).
_Lemma B.1_.: [Generalized Watson's Lemma] Assume that (B.17) exists for some \(\zeta=\zeta_{0}\in\mathbb{C}\) and assume that \(f\) admits an asymptotic expansion of the form (B.18) \[f(x)=\sum_{n=0}^{N}a_{n}x^{\beta_{n}-1}+o(x^{\beta_{N}-1}),\qquad x>0,\quad x\to 0,\] where \(a_{n}\in\mathbb{C}\) and \(\beta_{n}\in\mathbb{C}\) with \(\Re\beta_{0}>0\) and \(\Re\beta_{n}>\Re\beta_{n-1}\) for \(1\leq n\leq N\). Then \(\mathcal{L}[f](\zeta)\) admits an asymptotic expansion of the form (B.19) \[\mathcal{L}[f](\zeta)=\sum_{n=0}^{N}a_{n}\Gamma(\beta_{n})\zeta^{-\beta_{n}}+o(\zeta^{-\beta_{N}}),\quad\text{for }|\arg(\zeta)|\leq\frac{\pi}{2}-\delta,\quad\zeta\to\infty,\] for any real number \(0<\delta\leq\frac{\pi}{2}\), where \(\Gamma\) is the standard Gamma function. For a proof of the above lemma, we refer e.g. to [15]. Classically, Lemma B.1 is applied to prove that the imaginary error function admits an asymptotic expansion for \(x\in\mathbb{R}\) of the form (B.20) \[\operatorname{erfi}(x)\sim\frac{e^{x^{2}}}{\sqrt{\pi}x}\sum_{k=0}^{\infty}\frac{(2k-1)!!}{(2x^{2})^{k}},\qquad\text{ for }x>0,\quad x\to\infty,\] see also [39], based on the classical version of Watson's Lemma, whose assumptions are, however, unnecessarily restrictive [52]. For completeness, we recall the derivation of (B.16) based on Lemma B.1. First, let us rewrite erfi as a Laplace transform using the change of variables \(t=\sqrt{1-s}\) with \(dt=-\frac{ds}{2\sqrt{1-s}}\): (B.21) \[\begin{split}\operatorname{erfi}(\zeta)&=\int_{0}^{1}\frac{d}{dt}\operatorname{erfi}(t\zeta)\,dt=\frac{2\zeta}{\sqrt{\pi}}\int_{0}^{1}e^{t^{2}\zeta^{2}}\,dt=\frac{2\zeta}{\sqrt{\pi}}\int_{0}^{1}e^{\zeta^{2}(1-s)}\,\frac{ds}{2\sqrt{1-s}}\\ &=\frac{\zeta e^{\zeta^{2}}}{\sqrt{\pi}}\int_{0}^{1}\frac{1}{\sqrt{1-s}}e^{-s\zeta^{2}}\,ds=\frac{\zeta e^{\zeta^{2}}}{\sqrt{\pi}}\int_{0}^{\infty}\frac{\chi_{[0,1]}(s)}{\sqrt{1-s}}e^{-s\zeta^{2}}\,ds.\end{split}\] From the binomial series, we know that (B.22) \[\frac{1}{\sqrt{1-s}}=\sum_{n=0}^{\infty}\binom{-1/2}{n}(-s)^{n}=\sum_{n=0}^{\infty}4^{-n}\binom{2n}{n}s^{n},\] which allows us to apply Lemma B.1 with \(\beta_{n}=n+1\) and \(a_{n}=4^{-n}\binom{2n}{n}\), thus leading to (B.23) \[\begin{split}\operatorname{erfi}(\zeta)&\sim\frac{\zeta e^{\zeta^{2}}}{\sqrt{\pi}}\sum_{n=0}^{\infty}4^{-n}\binom{2n}{n}\Gamma(n+1)\zeta^{-2(n+1)}\\ &\sim\frac{e^{\zeta^{2}}}{\sqrt{\pi}}\sum_{n=0}^{\infty}\frac{(2n)!}{4^{n}n!}\zeta^{-2n-1}\\ &\sim\frac{e^{\zeta^{2}}}{\zeta\sqrt{\pi}}\sum_{n=0}^{\infty}\frac{(2n-1)!!}{(2\zeta^{2})^{n}},\end{split}\] for \(\zeta\to\infty\) and \(|\arg(\zeta)|\leq\frac{\pi}{2}-\delta\), \(0<\delta\leq\frac{\pi}{2}\). This is consistent with formula (B.20) for the limit along the real line. Finally, we arrive at the following asymptotic expansion for \(Z\): (B.24) \[Z_{+}(\zeta)\sim\mathrm{i}\sqrt{\frac{\pi}{2}}e^{-\frac{\zeta^{2}}{2}}-\sum_{n=0}^{\infty}\frac{(2n-1)!!}{\zeta^{2n+1}},\qquad\text{ for }|\arg(\zeta)|\leq\frac{\pi}{2}-\delta,\qquad\zeta\to\infty,\] which is, of course, equivalent to (B.25) \[Z_{+}(\zeta)\sim-\sum_{n=0}^{\infty}\frac{(2n-1)!!}{\zeta^{2n+1}},\qquad\text{ for }|\arg(\zeta)|\leq\frac{\pi}{2}-\delta,\qquad\zeta\to\infty,\] since \(|e^{-\zeta^{2}/2}|=e^{-\frac{x^{2}-y^{2}}{2}}\to 0\) for \(\Re\zeta=x\to\infty\).
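The truncated series (B.25) is easy to compare against a direct evaluation of \(Z_{+}\); a short Python sketch (truncation orders and the test point are illustrative):

```python
import numpy as np
from scipy.special import wofz

def Z_plus(zeta):
    """Z_+ via the Faddeeva function, cf. (B.1)-(B.6)."""
    return 1j*np.sqrt(np.pi/2)*wofz(zeta/np.sqrt(2))

def Z_asymptotic(zeta, N):
    """Truncation of (B.25): -sum_{n=0}^N (2n-1)!!/zeta^(2n+1)."""
    term, total = -1.0/zeta, 0.0
    for n in range(N + 1):
        total += term
        term *= (2*n + 1)/zeta**2   # (2n+1)!! = (2n-1)!!*(2n+1)
    return total

zeta = 8.0 + 2.0j                   # inside |arg(zeta)| <= pi/2 - delta
for N in (2, 5, 10):
    print(N, abs(Z_plus(zeta) - Z_asymptotic(zeta, N)))  # error shrinks with N
```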
2307.08046
NANOGrav Signal and PBH from the Modified Higgs Inflation
This study investigates the classical Higgs inflation model with a modified Higgs potential featuring a dip. We examine the implications of this modification on the generation of curvature perturbations, stochastic gravitational wave production, and the potential formation of primordial black holes (PBHs). Unlike the classical model, the modified potential allows for enhanced power spectra and the existence of PBHs within a wide mass range $1.5\times10^{20}$ g -- $9.72\times10^{32}$ g. We identify parameter space regions that align with inflationary constraints and have the potential to contribute significantly to the observed dark matter content. Additionally, the study explores the consistency of the obtained parameter space with cosmological constraints and discusses the implications for explaining the observed excess in gravitational wave signals, particularly in the NANOGrav experiment. Overall, this investigation highlights the relevance of the modified Higgs potential in the classical Higgs inflation model, shedding light on the formation of PBHs, the nature of dark matter, and the connection to gravitational wave observations.
Kingman Cheung, C. J. Ouseph, Po-Yan Tseng
2023-07-16T14:02:14Z
http://arxiv.org/abs/2307.08046v1
# NANOGrav Signal and PBH from the Modified Higgs Inflation ###### Abstract This study investigates the classical Higgs inflation model with a modified Higgs potential featuring a dip. We examine the implications of this modification on the generation of curvature perturbations, stochastic gravitational wave production, and the potential formation of primordial black holes (PBHs). Unlike the classical model, the modified potential allows for enhanced power spectra and the existence of PBHs within a wide mass range \(1.5\times 10^{20}\) g - \(9.72\times 10^{32}\) g. We identify parameter space regions that align with inflationary constraints and have the potential to contribute significantly to the observed dark matter content. Additionally, the study explores the consistency of the obtained parameter space with cosmological constraints and discusses the implications for explaining the observed excess in gravitational wave signals, particularly in the NANOGrav experiment. Overall, this investigation highlights the relevance of the modified Higgs potential in the classical Higgs inflation model, shedding light on the formation of PBHs, the nature of dark matter, and the connection to gravitational wave observations. ## I Introduction Cosmological inflation is the most favored theory of the early universe [1]. It not only explains the absence of a number of relics that should have existed from the Big Bang, but also provides the seeds for the growth of structures in the Universe. In the last two decades, people have been attempting to identify the most promising candidate for cosmological inflation. Primordial black holes (PBHs) can also be created as a result of cosmological inflation: the seeds generated during inflation can collapse into PBHs during the radiation- or matter-dominated epochs. Consequently, studying the formation and evolution of PBHs offers an effective means to investigate the early period of cosmology. The existence of primordial black holes was initially postulated by Zel'dovich and Novikov [2], and later supported by Hawking and Carr [3; 4; 5], suggesting that these hypothetical entities emerged in the early universe. According to the theory, PBHs may form in regions with significant density perturbations. The primary motivation for studying PBHs lies in their potential as a natural candidate for dark matter. Despite recent observations imposing strict limitations on the abundance of PBHs, there exists a mass range, specifically from \(10^{16}\) g to \(10^{20}\) g, where PBHs could play a significant role in contributing to the overall dark matter content. Production of gravitational waves through the second-order effect is closely intertwined with the formation of PBHs, occurring simultaneously as certain modes re-enter the Hubble radius. These gravitational waves, once generated, propagate freely throughout subsequent epochs of the Universe due to their low interaction rate. Millisecond pulsars (MPs) are characterized by their stable rotation periods, which are comparable to the timing precision of atomic clocks. They are ideal astrophysical objects for pulsar-timing arrays (PTAs), which probe low-frequency gravitational waves (GWs) from nanohertz to microhertz frequencies. The NANOGrav collaboration has been observing 67 pulsars over 15 years and recently reported evidence for correlations following the Hellings-Downs pattern [6], pointing to a stochastic GW background as the origin.
Furthermore, they confirmed the excess of the red common-spectrum signal with a strain amplitude of \(\mathcal{O}(10^{-14})\) at the frequency \(\simeq 3\times 10^{-8}\) Hz. To explain the GW signal, many plausible mechanisms and hypothetical candidates have been proposed; in particular, the population of supermassive black-hole binaries [6; 7; 8], inflation scenarios [9; 10; 11; 12; 13], cosmological first-order phase transitions [14; 15; 16; 17], and alternative interpretations [18; 19; 20; 21]. There have been numerous attempts to incorporate inflation into the standard model (SM) and theories beyond. The SM Higgs field has always been an intriguing candidate for the inflaton, since it requires no additional scalar degrees of freedom. However, the minimal Higgs inflation model is not favored, and possibly even ruled out, due to the fine-tuned value of the Higgs self-coupling constant, \(\lambda\). To address this issue, a non-minimal coupling between the SM Higgs field and the Ricci scalar, \(\mathcal{R}\), was introduced in an attempt to reduce the value of \(\lambda\) [22]. However, such attempts may potentially lead to violations of unitarity [23; 24; 25], although our current study does not focus on these issues. The fundamental Higgs inflation model [22] fails to account for both the inflationary phase and the formation of PBHs simultaneously. Numerous studies have explored both inflation and PBH formation within the framework of Higgs inflation by introducing new interactions to the Higgs field. By incorporating these additional interactions, it is possible to achieve a successful inflationary epoch while creating the conditions necessary for the generation of PBHs. In this investigation, we examine a modified form of the Higgs potential that aims to address both the phenomenon of inflation and the formation of PBHs, and at the same time the excess in the GW signal reported by NANOGrav. The paper is organized as follows. In Sec. II, we revisit the classical Higgs inflation model and demonstrate that it is incapable of generating the correct power spectrum required for PBH formation. In Sec. III, we delve into possible modifications to the Higgs potential. Specifically, we explore the introduction of a dip in the potential, which can accommodate PBH formation during the inflationary scenario. We also discuss the viable parameter space by considering the characteristics of the dip for PBH formation. Sections IV and V are dedicated to presenting the PBH abundance and gravitational wave (GW) spectrum resulting from the modified Higgs potential. In Sec. V, we discuss the gravitational wave signals within this model and demonstrate that the obtained results can potentially explain the NANOGrav 15-year signal in a straightforward manner. We conclude with the implications of these findings. ## II Revisiting Higgs Inflation Model In this section, we revisit the classical Higgs inflation model, which considers the SM Higgs boson as a promising candidate for the inflaton. The Higgs inflation model [22] was proposed a long time ago to bridge the gap between the two most successful models of physics: the standard model of particle physics and the standard model of cosmology. Numerous studies [26; 27; 28; 29; 30; 31; 32] discussed the possibility of the SM Higgs as the inflaton in different contexts. In our discussion, we focus on the simplest model of Higgs inflation [22].
This model addresses inflation by introducing a non-minimal coupling, where the SM Higgs is coupled to the Ricci scalar \(\mathcal{R}\) with a non-minimal coupling strength \(\xi\). The effective action for this theory is given as \[\mathcal{S}_{\mathcal{J}}=\int\ d^{4}x\ \sqrt{-g}\big{[}-\frac{M_{\rm PL}^{2}+\xi h^{2}}{2}\mathcal{R}+\frac{\partial_{\mu}h\partial^{\mu}h}{2}-\frac{\lambda}{4}(h^{2}-v^{2})^{2}\big{]}. \tag{1}\] This Lagrangian, the scalar sector of the SM coupled to gravity in a non-minimal way, has been studied in detail in many works on inflation [33; 34; 35]. Here the unitary gauge \(H=\frac{h}{\sqrt{2}}\) is adopted and the gauge interactions are neglected for the time being. An action in the Einstein frame is obtained by the conformal transformation [36; 37] \(\hat{g}_{\mu\nu}=\Omega g_{\mu\nu}\), where \(\Omega=1+\frac{\xi h^{2}}{M_{\rm PL}^{2}}\). The conformal transformation eliminates the non-minimal coupling to gravity, but leads to a non-canonical kinetic term for the Higgs field. It is therefore convenient to change to a new scalar field \(\phi\) with \[\frac{d\phi}{dh}=\sqrt{\frac{1}{\Omega}+\frac{3M_{\rm PL}^{2}}{2}\frac{\left(\frac{d\Omega}{dh}\right)^{2}}{\Omega^{2}}}. \tag{2}\] The action in the Einstein frame is \[\mathcal{S}_{\mathcal{E}}=\int d^{4}x\sqrt{-\hat{g}}\big{[}-\frac{M_{\rm PL}^{2}}{2}\hat{\mathcal{R}}+\frac{\partial_{\mu}\phi\partial^{\mu}\phi}{2}-U(\phi)\big{]}. \tag{3}\] The exponentially flat effective potential for the Higgs field is given by \[U(\phi)=\frac{\lambda M_{\rm PL}^{4}}{4\xi^{2}}\,e^{-\frac{2\sqrt{\frac{2}{3}}\phi}{M_{\rm PL}}}\left(e^{\frac{\sqrt{\frac{2}{3}}\phi}{M_{\rm PL}}}-1\right)^{2}. \tag{4}\] The slow-roll parameters of this model of inflation can be expressed as functions of \(h(\phi)\) as follows: \[\epsilon=\frac{M_{\rm PL}^{2}}{2}\Bigg{(}\frac{\frac{\partial U}{\partial\phi}}{U}\Bigg{)}^{2}=\frac{4M_{\rm PL}^{4}}{3\xi^{2}h^{4}}, \tag{5}\] \[\eta=M_{\rm PL}^{2}\Bigg{(}\frac{\frac{\partial^{2}U}{\partial\phi^{2}}}{U}\Bigg{)}=-\frac{4M_{\rm PL}^{2}}{3\xi h^{2}}. \tag{6}\] Slow roll ends when \(\epsilon=1\); the field value at the end of inflation is given by \[h_{end}=\bigg{(}\frac{4}{3}\bigg{)}^{\frac{1}{4}}\bigg{(}\frac{M_{\rm PL}}{\sqrt{\xi}}\bigg{)}. \tag{7}\] The number of \(e\)-folds required for the field \(h\) to evolve from \(h_{int}\) to \(h_{end}\) is given by \[N_{e}=\int_{h_{end}}^{h_{int}}\frac{1}{M_{\rm PL}^{2}}\Bigg{(}\frac{U}{\frac{\partial U}{\partial h}}\Big{(}\frac{\partial\phi}{\partial h}\Big{)}^{2}\Bigg{)}dh. \tag{8}\] The field value \(h_{int}\) at the beginning of inflation can be expressed as a function of the number of \(e\)-folds, \[h_{int}=\frac{2M_{\rm PL}}{\sqrt{3\xi}}\big{[}\frac{\sqrt{3}}{2}+N_{e}\big{]}^{1/2}. \tag{9}\] The constraint on the Higgs self-coupling constant \(\lambda\) and the non-minimal coupling constant \(\xi\) can be obtained from the COBE normalization \(\frac{U}{\epsilon}=(0.027M_{\rm PL})^{4}\) [38]. Plugging Eq. (9) into the COBE normalization, we can express \(\frac{\lambda}{\xi^{2}}\) as a function of the number of \(e\)-folds; for \(N_{e}=60\), the constraint is \(\frac{\lambda}{\xi^{2}}=4.41026\times 10^{-10}\). The inflationary parameters, the scalar spectral index \(n_{s}\) and the tensor-to-scalar ratio \(r\), are defined as \(n_{s}=1-6\epsilon+2\eta\) and \(r=16\epsilon\).
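For concreteness, the closed-form expressions above are easy to evaluate numerically. The following is a minimal sketch in reduced Planck units (\(M_{\rm PL}=1\)); the choice \(\xi=10^{4}\) is purely illustrative, and the slow-roll relation \(\mathcal{P}_{\mathcal{R}}=U/(24\pi^{2}M_{\rm PL}^{4}\epsilon)\), which is equivalent to Eq. (10) below via \(U^{\prime 2}=2\epsilon U^{2}/M_{\rm PL}^{2}\), is used for the power spectrum.

```python
import math

M_PL = 1.0  # reduced Planck units

def h_int(N_e, xi):
    # Field value at the start of inflation, Eq. (9)
    return 2.0 * M_PL / math.sqrt(3.0 * xi) * math.sqrt(math.sqrt(3.0) / 2.0 + N_e)

def slow_roll(h, xi):
    # Slow-roll parameters, Eqs. (5) and (6)
    eps = 4.0 * M_PL**4 / (3.0 * xi**2 * h**4)
    eta = -4.0 * M_PL**2 / (3.0 * xi * h**2)
    return eps, eta

def U_over_lam(h, xi):
    # Einstein-frame potential divided by lambda: U = lambda h^4 / (4 Omega^2)
    Omega = 1.0 + xi * h**2 / M_PL**2
    return h**4 / (4.0 * Omega**2)

xi = 1.0e4                   # illustrative non-minimal coupling
h = h_int(60.0, xi)          # N_e = 60
eps, eta = slow_roll(h, xi)
n_s, r = 1.0 - 6.0 * eps + 2.0 * eta, 16.0 * eps
# COBE normalization U / eps = (0.027 M_PL)^4 fixes lambda, hence lambda / xi^2
lam = 0.027**4 * eps / U_over_lam(h, xi)
# slow-roll power spectrum, equivalent to Eq. (10) with U'^2 = 2 eps U^2 / M_PL^2
P_R = lam * U_over_lam(h, xi) / (24.0 * math.pi**2 * M_PL**4 * eps)
print(f"n_s = {n_s:.4f}, r = {r:.4f}, lambda/xi^2 = {lam / xi**2:.4e}, P_R = {P_R:.2e}")
# -> n_s ~ 0.966, r ~ 0.0032, lambda/xi^2 ~ 4.41e-10, P_R ~ 2.2e-9
```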
With \(N_{e}=60\), this model gives \(r=0.0032\) and \(n_{s}=0.9633\), which are well within the Planck bounds [39] of \(n_{s}=0.9677\pm 0.0060\) and \(r<0.11\) at 95% C.L. The scalar power spectrum \({\cal P}_{\cal R}\) is defined as \[{\cal P}_{\cal R}=\frac{1}{12\pi^{2}}\frac{U^{3}(\phi)}{M_{PL}^{6}U^{\prime 2}(\phi)}\, \tag{10}\] where \(U^{\prime}(\phi)\) is the derivative of \(U(\phi)\) with respect to \(\phi\), and both \(U^{\prime}(\phi)\) and \(U(\phi)\) are evaluated at \(\phi_{int}\)1. Recent CMB observations [40] suggest the value \({\cal P}_{\cal R}=2.1\times 10^{-9}\) at the CMB pivot scale. Using Eq. (10) and Eq. (9), we can express the power spectrum as a function of the number of e-folds for different choices of \(\lambda/\xi^{2}\). It is evident from Fig. 1 that \({\cal P}_{\cal R}\) attains a value of \(2.1\times 10^{-9}\) at \(N_{e}\sim 60\) e-folds with \(\lambda/\xi^{2}=10^{-10}\). Footnote 1: Eq. (2) gives the relation connecting \(\phi\) and \(h\): \(\phi=\sqrt{\frac{3}{2}}M_{\rm PL}\log\Big{(}\frac{h^{2}\xi}{M_{\rm PL}^{2}}+1\Big{)}\). The concept of generating PBHs is examined within a class of single-field models of inflation. In this scenario, there is a notable contrast between the dynamics on small cosmological scales and the dynamics on large scales observed through the cosmic microwave background (CMB). This disparity proves advantageous in establishing the appropriate conditions for generating PBHs. Consequently, as the perturbed scales re-enter our Universe's horizon during the later stages of radiation and subsequent matter dominance, these initial seeds undergo collapse, leading to the formation of PBHs. The generation of substantial scalar fluctuations during the inflationary period can lead to the formation of significant density fluctuations, which play a vital role in the emergence of PBHs. The study of PBHs has garnered considerable attention over the years, as PBHs have the potential to contribute a significant portion or even the entirety of the dark matter content of the Universe. However, the abundance of PBHs is subject to a number of stringent constraints imposed by their gravitational effects and evaporation rate. To produce PBHs in the early Universe, the magnitude of the curvature power spectrum needs to be approximately of the order of \(10^{-3}\) to \(10^{-2}\). At the same time, for a successful inflation model, the curvature power spectrum is expected to yield a value of approximately \(2.1\times 10^{-9}\) at the scale of the CMB. Based on the information presented in Fig. 1, it is evident that the basic Higgs inflation model [22] lacks the ability to simultaneously address both the inflationary period and the production of PBHs. Several attempts have been made to address both scenarios, inflation and the production of PBHs, within the framework of Higgs inflation. These attempts involve the introduction of new interactions to the Higgs field. By incorporating these additional interactions, it is possible to achieve a successful inflationary period while also generating the necessary conditions for the production of PBHs [41; 42; 43; 44; 45].

## III Modified Higgs potential

This study examines a modified version of the Higgs potential that aims to address both inflation and the production of PBHs. Additionally, we investigate the implications of this modified potential in light of the recent NANOGrav signal [46]. Here we add a Gaussian dip (bump) [47] to the Higgs potential in Eq. (1).
The structure of the Gaussian bump (dip) is given as follows: \[\pm\Bigg{[}Ae^{-\frac{(h(\phi)-h_{0})^{2}}{2\sigma^{2}}}\Bigg{]}. \tag{11}\] After the conformal transformation, the potential transforms as \[U_{eff}(\phi)=\frac{\lambda h^{4}(\phi)}{4}\frac{\left(1\pm Ae^{-\frac{(h(\phi)-h_{0})^{2}}{2\sigma^{2}}}\right)}{\left(\frac{h^{2}(\phi)\xi}{M_{PL}^{2}}+1\right)^{2}}. \tag{12}\] The Gaussian bump (dip) described in the above potential is characterized by its height (depth) \(A\), its position \(h_{0}\), and its width \(\sigma\). The potential can be expressed in terms of the redefined field \(\phi\) using Eqs. (12) and (2). The slow-roll parameters and the power spectrum for the modified Higgs potential are given in Appendix A.

### Parameter space search

In order to investigate the appropriate parameter space of the power spectrum \(\mathcal{P}_{\mathcal{R}}[A,\sigma,h_{0},\lambda,\xi]\) that yields a value of \(\mathcal{P}_{\mathcal{R}}=10^{-2}-10^{-3}\) for primordial black hole (PBH) formation and \(\mathcal{P}_{\mathcal{R}}=2.1\times 10^{-9}\) for a successful inflation model at the cosmic microwave background (CMB) scale, we conduct a scan over the characteristics of the bump and dip. For this analysis, we fix the Higgs self-coupling constant at \(\lambda=0.1\) and the Higgs-gravity coupling at \(\xi=10^{4}\). Our parameter scans reveal that the addition of a dip feature to the potential produces the desired power spectrum for PBH formation in the early stages of inflation, as depicted in Fig. 2. On the other hand, the inclusion of a bump feature in the potential only results in the required power spectrum for PBH formation near CMB scales (see Appendix B, Fig. 6). Figure 2 illustrates the parameter space of \(\sigma\) and \(N_{e}\) with varying values of \(A\) and \(h_{0}\). The red contour represents combinations of \(\sigma\) and \(N_{e}\) that yield \(\mathcal{P}_{\mathcal{R}}=2.1\times 10^{-9}\) with fixed \(A\) and \(h_{0}\), while the green contour represents \(\mathcal{P}_{\mathcal{R}}=1\times 10^{-2}\). We consider a range of \(\sigma\) from \(10^{15}\) to \(10^{18}\) GeV and choose four arbitrary values for the depth of the dip \(A\) (0.075, 0.1, 0.29, and 0.3). Similarly, we select four arbitrary values for the position of the dip \(h_{0}\) (\(1.76\times 10^{17}\) GeV, \(1.8\times 10^{17}\) GeV, \(2\times 10^{17}\) GeV, and \(2.1\times 10^{17}\) GeV). For each combination we identify the corresponding values of \(\sigma\) that yield the desired \(\mathcal{P}_{\mathcal{R}}\) values for both inflation and PBH formation. From Fig. 3 it becomes evident that certain parameter combinations can generate the correct power spectrum for both PBH formation and inflation at CMB scales. A comparison between Fig. 3 and Fig. 1 shows that the inclusion of a Gaussian dip in the potential has a significant impact on the power spectrum, enabling the formation of PBH seeds.

## IV PBH formation

PBH formation requires the power spectrum to be at least \(\mathcal{O}(0.01)\); the power spectra recorded in Fig. 3 satisfy this requirement. The power spectrum has a narrow peak without oscillations. The curvature perturbation \(\mathcal{R}_{k}\) is related to the density contrast by \[\delta(t,k)=\frac{2(1+\omega)}{5+3\omega}\frac{k}{aH}\mathcal{R}_{k}.
\tag{13}\] When the perturbations re-enter the horizon in the radiation-dominated era, the over-dense regions in the Universe (with \(\delta>\delta_{c}\)) collapse into PBHs due to the amplified curvature perturbations. This collapse occurs with \(\omega=1/3\) and \(\delta(t,k)=\frac{4}{9}\mathcal{R}_{k}\). Since the specifics of the PBH formation process are still unclear [48; 49], the precise value of the threshold \(\delta_{c}\) remains uncertain. However, if we assume a Gaussian probability distribution function for the perturbations, the mass fraction \(\beta\) of PBHs at the time of their formation can be calculated as \[\beta(M_{PBH})=2\gamma\int_{\delta_{c}}^{\infty}\frac{d\delta}{\sqrt{2\pi}\sigma_{M_{PBH}}}\exp\Big{(}-\frac{\delta^{2}}{2\sigma_{M_{PBH}}^{2}}\Big{)}\simeq\sqrt{\frac{2}{\pi}}\frac{\gamma}{\nu_{c}}\exp\Big{(}-\frac{\nu_{c}^{2}}{2}\Big{)}\;, \tag{14}\] where \(\gamma\) is the fraction of the mass with \(\delta>\delta_{c}\) that is transformed into PBHs; in this study, we choose \(\gamma=0.4\) [50; 51; 52; 41]. Here \(\nu_{c}=\delta_{c}/\sigma_{M_{PBH}}\) and the variance \(\sigma_{M_{PBH}}\) is defined as \[\sigma_{M_{PBH}}^{2}=\int_{0}^{\infty}\frac{dk}{k}\frac{16}{81}(kR)^{4}W^{2}(kR)\mathcal{P}_{R}(k)\;, \tag{15}\] where \(W(kR)\) is the window used to smooth the density contrast on the comoving scale \(R\). We use a Gaussian-type window function in this work, \[W(kR)=\exp\Big{(}-\frac{k^{2}R^{2}}{2}\Big{)}. \tag{16}\]

Figure 2: The contour plot illustrates the permitted parameter space of \(\sigma\) and \(N_{e}\) for different choices of \(A\) and \(h_{0}\) in the scenario of adding a dip structure, where the green contours correspond to \(\mathcal{P}_{\mathcal{R}}=1\times 10^{-2}\) and the red contours correspond to \(\mathcal{P}_{\mathcal{R}}=2.1\times 10^{-9}\).

\begin{table} \begin{tabular}{c c c c c c c} Region & A & h\({}_{0}\) (GeV) & \(\sigma\) (GeV) & N & \(n_{s}\) & \(r\) \\ \hline a & 0.29 & \(1.76\times 10^{17}\) & \(1.31\times 10^{17}\) & 47 & 0.950907 & 0.0242479 \\ \hline b & 0.3 & \(1.8\times 10^{17}\) & \(1.40\times 10^{17}\) & 50 & 0.980819 & 0.0244824 \\ \hline c & 0.1 & \(1.8\times 10^{17}\) & \(6.75\times 10^{16}\) & 51 & 0.988732 & 0.0300489 \\ \hline d & 0.3 & \(2\times 10^{17}\) & \(1.74\times 10^{17}\) & 65 & 0.98924 & 0.0206994 \\ \hline e & 0.1 & \(2\times 10^{17}\) & \(8.45\times 10^{16}\) & 68 & 0.98953 & 0.0279209 \\ \hline f & 0.075 & \(2.1\times 10^{17}\) & \(7.83\times 10^{16}\) & 78 & 0.989654 & 0.0275894 \\ \end{tabular} \end{table} Table 1: Inflationary observables for the different choices of the potential parameters. The resulting PBH fraction \(f_{PBH}\) of each region is indicated in Fig. 4. Here \(N\) is the number of e-folds, and \(A\), \(h_{0}\), and \(\sigma\) are the depth, the position, and the width of the dip, respectively.

Figure 3: The power spectra \({\cal P}_{\cal R}\) versus the number of e-folds \(N_{e}\). Here the power spectra exhibit new characteristics as a result of the addition of a Gaussian dip in the Higgs potential.

Carrying out the integral under the above assumptions, the mass fraction becomes [53; 54] \[\beta(M_{PBH})=\gamma\sqrt{\frac{2}{\pi}}\ \frac{4\sqrt{\mathcal{P}_{R}(k)}}{9\delta_{c}}\exp\Big{(}-\frac{81\delta_{c}^{2}}{32\mathcal{P}_{R}(k)}\Big{)}\;.
\tag{17}\] The mass fraction of PBHs can be related to the abundance \(f_{PBH}\) as follows, when considering PBHs as a fraction of dark matter: \[\beta(M_{PBH})=3.7\times 10^{-9}\Big{(}\frac{\gamma}{0.2}\Big{)}^{-1/2}\times\Big{(}\frac{g_{*form}}{10.75}\Big{)}^{1/4}\Big{(}\frac{M_{PBH}}{M_{\odot}}\Big{)}^{1/2}f_{PBH}\;, \tag{18}\] where \(M_{\odot}\) is the solar mass and \(g_{*form}\) is the number of relativistic degrees of freedom at formation. The mass of a PBH at formation can be written as a fraction of the horizon mass, \[M_{PBH}=\gamma\frac{4\pi M_{P}^{2}}{H_{N}}e^{2N}, \tag{19}\] where \(N\) is the number of e-folds at horizon exit and \(H_{N}\) is the Hubble expansion rate evaluated near the inflection point. One can calculate the mass fraction of PBHs using Eqs. (17)-(19). We obtain the abundance of PBHs as dark matter, denoted as \(f_{PBH}\), for a critical threshold parameter \(\delta_{c}=0.414\) [50]. The corresponding results are presented in Fig. 4. Information regarding the distinct regions marked in Fig. 4 can be found in Table 1.

Figure 4: The fraction of PBHs as a dark matter candidate for the parameter sets of regions \(a\)-\(f\) in Table 1. The relevant observational constraints on the current primordial black hole (PBH) mass spectrum are represented by solid lines with shades. These constraints include extra-galactic gamma-ray (EG\(\gamma\)) observations [55], femtolensing data (Femto) [56], the presence of white dwarfs in our local galaxy (WD) [57], Subaru HSC microlensing (HSC) results [58], Kepler milli/microlensing (Kepler) measurements [59], EROS/MACHO microlensing observations (EROS/MACHO) [60], dynamical heating of ultra-faint dwarf galaxies (UFD) [61], constraints from X-ray/radio observations [62], and the accretion constraints from the CMB [63; 64; 65].

A dip with a depth of \(A=0.29\) or \(0.3\), a width of \(\sigma=1.31\times 10^{17}\) GeV or \(1.40\times 10^{17}\) GeV, and a position of \(h_{0}=1.76\times 10^{17}\) GeV or \(1.8\times 10^{17}\) GeV generates PBHs that potentially contribute approximately 100% of the dark matter, as indicated by the regions labeled \(a\) and \(b\) in Fig. 4. The parameters associated with regions \(a\), \(b\), \(c\), \(d\), \(e\), and \(f\) produce PBHs with a spectral index \(n_{s}\) situated on the edge of the allowed values obtained from cosmic microwave background (CMB) observations. Moreover, these parameters have the capability to generate heavier PBHs. In our study, we conducted several parameter space scans to generate PBHs. One common feature emerged: increasing the depth \(A\) of the dip, while keeping the other parameters fixed, results in the production of lighter PBHs. For instance, in Table 1, we observe this behavior for regions \(b\) and \(c\), where \(h_{0}\) remains fixed and \(A\) ranges from 0.1 to 0.3. Similarly, decreasing the value of \(h_{0}\) while keeping the other parameters fixed yields lighter PBHs. This is exemplified by the behavior of regions \(d\) and \(b\) in Table 1, where \(A\) is fixed and \(h_{0}\) varies from \(2\times 10^{17}\) GeV to \(1.8\times 10^{17}\) GeV. Another noteworthy feature of the dip is that fixing both the depth \(A\) and the position \(h_{0}\) while increasing the width \(\sigma\) results in a reduction and eventual disappearance of the \(f_{PBH}\) curve. Higher values of \(\sigma\) indicate the absence of a dip, with the potential reverting to its original form and no dip effect. We have demonstrated this behavior in Appendix C, where we fixed \(A=0.3\) and \(h_{0}=1.8\times 10^{17}\) GeV and progressively increased the width \(\sigma\).
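To make the pipeline from power spectrum to abundance concrete, here is a minimal sketch of Eqs. (17) and (18), together with the frequency relation of Eq. (20) quoted in the next section. It assumes the values \(\delta_{c}=0.414\) and \(\gamma=0.4\) used above; the sample amplitudes and the \(10^{20}\) g mass are illustrative only.

```python
import math

M_SUN_G = 1.989e33  # solar mass in grams

def beta_pbh(P_R, delta_c=0.414, gamma=0.4):
    # Mass fraction at formation, Eq. (17)
    return (gamma * math.sqrt(2.0 / math.pi) * 4.0 * math.sqrt(P_R) / (9.0 * delta_c)
            * math.exp(-81.0 * delta_c**2 / (32.0 * P_R)))

def f_pbh(P_R, M_pbh_g, delta_c=0.414, gamma=0.4, g_form=10.75):
    # Present-day dark-matter fraction of PBHs, inverting Eq. (18)
    prefactor = (3.7e-9 * (gamma / 0.2) ** -0.5 * (g_form / 10.75) ** 0.25
                 * (M_pbh_g / M_SUN_G) ** 0.5)
    return beta_pbh(P_R, delta_c, gamma) / prefactor

def f_gw_hz(M_pbh_g):
    # GW frequency associated with a given PBH mass, Eq. (20)
    return 1e-9 * (M_pbh_g / (30.0 * M_SUN_G)) ** -0.5

# f_PBH is exponentially sensitive to the peak amplitude of P_R:
for P_R in (1.2e-2, 1.3e-2, 1.4e-2):
    print(P_R, f_pbh(P_R, 1e20), f_gw_hz(1e20))
```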
## V Stochastic second-order gravitational wave background

Production of gravitational waves through the second-order effect occurs simultaneously with the formation of PBHs, specifically when the modes re-enter the Hubble radius. Following their production, the gravitational waves propagate freely during subsequent epochs of the Universe due to their low interaction rates. The frequency of these gravitational waves corresponds to the Hubble mass at that particular time. Considering that the mass of PBHs is proportional to the Hubble mass, we can establish a relationship between the PBH mass and the present-day frequency of gravitational waves [66]: \[f_{GW}\simeq 10^{-9}\Big{(}\frac{M_{PBH}}{30M_{\odot}}\Big{)}^{-\frac{1}{2}}\ \text{Hz}. \tag{20}\] The large density perturbations not only produce the PBH dark matter but also generate a second-order gravitational wave signal [67; 68]. The current relative energy density of gravitational waves is obtained from the power spectra recorded above as [69; 70; 71] \[\Omega_{GW}=10\mathcal{P}_{\mathcal{R}}^{2}a_{eq}. \tag{21}\] We choose the current scale factor \(a=1\), and \(a_{eq}\) is the value of the scale factor at matter-radiation equality, defined as \[a_{eq}=\frac{a_{0}}{3.1\times 10^{4}\ \Omega_{M}h^{2}}, \tag{22}\] where \(h=\frac{H_{0}}{100\,\text{km/s/Mpc}}\), \(H_{0}=67.27\,\text{km/s/Mpc}\), and \(\Omega_{M}=0.3\). We plot \(\Omega_{GW}h^{2}\) in Fig. 5. Our results for \(\Omega_{GW}h^{2}\) are divided into several distinct regions; each region is denoted and explained in Table 1. Region \(f\), depicted in Fig. 5, is particularly relevant as it can potentially explain the recent findings from NANOGrav [46]. The parameter space associated with region \(f\) is characterized by a dip in the Higgs potential with a depth of \(A=0.075\) and a width of \(7.83\times 10^{16}\) GeV, positioned at \(h_{0}=2.1\times 10^{17}\) GeV. The relationship expressed in Eq. (20) reveals that \(f_{GW}\) is proportional to \((M_{PBH})^{-1/2}\), i.e., the frequency of the gravitational waves depends inversely on the mass of the primordial black holes. Additionally, it is worth noting that increasing the depth \(A\) and shifting the position \(h_{0}\) of the dip in the Higgs potential can shift the corresponding curves towards higher frequency regimes. This observation suggests that adjustments in the parameters controlling the dip can influence the frequency spectrum of the generated gravitational waves.

Figure 5: The gravitational wave abundance \(\Omega_{\rm GW}h^{2}\) versus the frequency \(f\), corresponding to the benchmark parameter sets listed in Table 1. They are compared with the recent NANOGrav 15-year sensitivity [46] (black curve) and the projected SKA/THEIA sensitivities [72; 73], which utilize pulsar-timing-array observations for stochastic GWs of \(\mathcal{O}\)(nHz). The planned GW interferometers LISA/\(\mu\)Ares [74; 75; 76] will cover the range from \(\mu\)Hz to Hz.

## VI Conclusions

In this study, we have investigated possible modifications to the classical Higgs inflation model proposed by M. Shaposhnikov and F. L. Bezrukov. We have introduced a modification to the Higgs potential by incorporating a dip into the potential.
This modification has a significant impact on the generation of curvature perturbations, resulting in the amplification of second-order stochastic gravitational wave production and the potential formation of PBHs. These effects are absent in the classical model. The introduction of the dip in the potential leads to an enhancement of the power spectrum, which in turn allows for the potential existence of PBHs with masses ranging from \(1.5\times 10^{20}\) g to \(9.72\times 10^{32}\) g, depending on the chosen parameter space. Additionally, we have demonstrated that the selected parameter values for PBH production align with the allowed values for the inflationary parameters. This suggests a consistent and viable framework for understanding the origins of PBHs and their role as potential contributors to the dark matter content of our universe. Furthermore, we have identified specific regions within the parameter space that could account for a significant portion of the observed dark matter. By considering various cosmological constraints, we have established the consistency of these parameter regions. Additionally, we found that the resulting gravitational wave signals from our model can explain the excess observed by NANOGrav. In conclusion, our study highlights the importance of the modified Higgs potential in the context of the classical Higgs inflation model. The introduced dip in the potential not only enhances the power spectrum and allows for the formation of PBHs, but also provides a promising avenue for explaining the observed excess in gravitational wave signals and addressing the dark matter puzzle in our Universe.

## Acknowledgement

Special thanks are extended to Yogesh for engaging in an enlightening and productive discussion. K.C. also thanks Hyun Min Lee for the discussion related to the NANOGrav data. K.C. and C.J.O. are supported by MoST under Grant no. 110-2112-M-007-017-MY3. P.Y.Tseng is supported in part by the National Science and Technology Council with Grant No. NSTC-111-2112-M-007-012-MY3.

## Appendix A Slow-Roll Parameters and Power Spectrum for the Modified Higgs Potential

The slow-roll and other inflationary parameters for the effective potential of Eq. (12) can be expressed by following Eqs. (5)-(10): \[\epsilon=\frac{\left(A\text{Mpl}^{2}\left(h\left(h^{2}-4\sigma^{2}\right)\pm h^{2}h_{0}\right)+Ah^{4}\xi\left(h\pm h_{0}\right)+4h\text{Mpl}^{2}\sigma^{2}e^{\frac{\left(h-h_{0}\right)^{2}}{2\sigma^{2}}}\right)^{2}}{12h^{6}\xi^{2}\sigma^{4}\left(A\pm e^{\frac{\left(h-h_{0}\right)^{2}}{2\sigma^{2}}}\right)^{2}} \tag{A1}\] Using \(\epsilon=1\), we can obtain the Higgs field value at the end of inflation.
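In practice \(h_{end}\) is found numerically. Below is a minimal sketch that evaluates \(\epsilon\) for the dip potential directly from Eqs. (12) and (2) by numerical differentiation and root-finds \(\epsilon=1\). The parameter values are those of region \(b\) in Table 1, and the conversion \(M_{\rm PL}\simeq 2.435\times 10^{18}\) GeV for the reduced Planck mass is an assumption of this sketch.

```python
import numpy as np
from scipy.optimize import brentq

M_PL_GEV = 2.435e18          # reduced Planck mass in GeV (assumption of this sketch)
M = 1.0                      # work in Planck units
xi, lam = 1e4, 0.1
A = 0.3                      # region b of Table 1 (dip)
h0 = 1.8e17 / M_PL_GEV
sigma = 1.40e17 / M_PL_GEV

def U_eff(h, sign=-1.0):
    # Eq. (12); sign = -1 for a dip, +1 for a bump
    feature = 1.0 + sign * A * np.exp(-((h - h0) ** 2) / (2.0 * sigma**2))
    return 0.25 * lam * h**4 * feature / (1.0 + xi * h**2 / M**2) ** 2

def dphi_dh(h):
    # Field redefinition, Eq. (2), with Omega = 1 + xi h^2 / M^2
    Omega = 1.0 + xi * h**2 / M**2
    return np.sqrt(1.0 / Omega + 1.5 * M**2 * (2.0 * xi * h / M**2) ** 2 / Omega**2)

def epsilon(h, sign=-1.0):
    # First slow-roll parameter via dU/dphi = (dU/dh) / (dphi/dh)
    dh = 1e-6 * h
    dUdh = (U_eff(h + dh, sign) - U_eff(h - dh, sign)) / (2.0 * dh)
    return 0.5 * M**2 * (dUdh / dphi_dh(h) / U_eff(h, sign)) ** 2

# Bracket the root around the classical value (4/3)^(1/4) M / sqrt(xi), Eq. (7)
h_cl = (4.0 / 3.0) ** 0.25 * M / np.sqrt(xi)
h_end = brentq(lambda h: epsilon(h) - 1.0, 0.1 * h_cl, 10.0 * h_cl)
print(f"h_end = {h_end * M_PL_GEV:.3e} GeV")
```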
The second slow-roll parameter reads \[\begin{split}\eta&=\frac{\pm A\left(h^{6}\xi\left(2\left(h-2h_{0}\right)\text{Mpl}^{2}+\xi\left(-2h\sigma^{2}+hh_{0}^{2}+h_{0}\sigma^{2}\right)\right)\right.}{6h^{4}h\xi^{2}\sigma^{4}\left(e^{\frac{\left(h-h_{0}\right)^{2}}{2\sigma^{2}}}\pm A\right)}\\ &\quad+\frac{h^{4}\text{Mpl}^{2}\left(\left(h-2h_{0}\right)\text{Mpl}^{2}+2\xi\left(-5h\sigma^{2}+hh_{0}^{2}+4h_{0}\sigma^{2}\right)\right)}{6h^{4}h\xi^{2}\sigma^{4}\left(e^{\frac{\left(h-h_{0}\right)^{2}}{2\sigma^{2}}}\pm A\right)}\\ &\quad+\frac{h\text{Mpl}^{2}\left(h^{2}\left(h_{0}^{2}\text{Mpl}^{2}-8\sigma^{2}\left(\text{Mpl}^{2}+\xi\sigma^{2}\right)\right)+8\text{Mpl}^{2}\sigma^{4}\right)}{6h^{4}h\xi^{2}\sigma^{4}\left(e^{\frac{\left(h-h_{0}\right)^{2}}{2\sigma^{2}}}\pm A\right)}\\ &\quad+\frac{7h_{0}h^{2}\text{Mpl}^{4}\sigma^{2}+\left(h-2h_{0}\right)h^{8}\xi^{2}\right)-8h\text{Mpl}^{2}\sigma^{4}e^{\frac{\left(h-h_{0}\right)^{2}}{2\sigma^{2}}}\left(h^{2}\xi-\text{Mpl}^{2}\right)}{6h^{4}h\xi^{2}\sigma^{4}\left(e^{\frac{\left(h-h_{0}\right)^{2}}{2\sigma^{2}}}\pm A\right)}\end{split} \tag{A2}\] The number of e-folds for each case is calculated as \[N_{e}=6\xi^{2}\int_{h_{end}}^{h_{int}}\frac{h^{3}}{\left(h^{2}\xi+\text{Mpl}^{2}\right)^{2}\left(\frac{A\left(h^{2}\pm hh_{0}\right)}{\sigma^{2}\left(e^{\frac{\left(h-h_{0}\right)^{2}}{2\sigma^{2}}}\pm A\right)}+\frac{4\text{Mpl}^{2}}{h^{2}\xi+\text{Mpl}^{2}}\right)}\,dh \tag{A3}\] The \(\pm\) signs in Eqs. (A1)-(A3) represent the bump and the dip, respectively. Compared to the original model of inflation, the new modification yields considerably more complicated expressions for \(\epsilon\), \(\eta\), and \(N_{e}\). Analytical evaluation of these quantities is involved; in particular, Eq. (A3) cannot be solved in closed form, so we employ a numerical approach to compute the inflationary observables. The calculations of these observables, as well as the analysis of PBHs, are performed using a Python code developed by the authors. The power spectrum is obtained as \[\mathcal{P}_{\mathcal{R}}=\begin{cases}\dfrac{h^{8}\lambda\xi^{2}\sigma^{4}e^{-\frac{(h-h_{0})^{2}}{2\sigma^{2}}}\left(A+e^{\frac{(h-h_{0})^{2}}{2\sigma^{2}}}\right)^{3}}{8\pi^{2}\left(h^{2}\xi+\text{Mpl}^{2}\right)^{2}\left(A\left(hh_{0}\left(h^{2}\xi+\text{Mpl}^{2}\right)-h^{2}\text{Mpl}^{2}-h^{4}\xi+4\text{Mpl}^{2}\sigma^{2}\right)+4\text{Mpl}^{2}\sigma^{2}e^{\frac{(h-h_{0})^{2}}{2\sigma^{2}}}\right)^{2}}&\text{potential with bump},\\[3ex] \dfrac{h^{8}\lambda\xi^{2}\sigma^{4}e^{-\frac{(h-h_{0})^{2}}{2\sigma^{2}}}\left(e^{\frac{(h-h_{0})^{2}}{2\sigma^{2}}}-A\right)^{3}}{8\pi^{2}\left(h^{2}\xi+\text{Mpl}^{2}\right)^{2}\left(A\left(-hh_{0}\left(h^{2}\xi+\text{Mpl}^{2}\right)+h^{2}\text{Mpl}^{2}+h^{4}\xi-4\text{Mpl}^{2}\sigma^{2}\right)+4\text{Mpl}^{2}\sigma^{2}e^{\frac{(h-h_{0})^{2}}{2\sigma^{2}}}\right)^{2}}&\text{potential with dip}.\end{cases} \tag{A4}\] The new power spectrum is characterized by five parameters, \([A,\sigma,h_{0},\lambda,\xi]\). By tuning these variables we can obtain the proper parameter space for inflation, PBH production, and the stochastic gravitational wave background (SGWB).

## Appendix B Bump parameter space

The addition of a Gaussian bump to the Higgs inflation model can indeed amplify the power spectrum. However, such enhancements are only observed within a specific region characterized by a large number of e-folds.
For PBH formation to occur, the inflationary power spectrum needs to be enhanced by a factor of \(10^{7}\) within fewer than 40 e-folds of expansion. By comparing Figure 2 with Figure 6, it becomes evident that the bump on the potential does not provide an adequate parameter space for PBH formation. In Fig. 6 we show some examples of such behavior. We explore a specific parameter space characterized by a bump with a height of \(A=0.1\) (or 0.3) positioned at \(1.8\times 10^{17}\) GeV (or \(2\times 10^{17}\) GeV). By scanning the values of \(\sigma\) and \(N_{e}\), we generate the power spectra necessary for PBH formation. It is observed that these parameter combinations are only effective for generating PBHs at large e-fold numbers, corresponding to heavier PBHs (cf. Figure 2). Conversely, when a similar set of values is used with a dip feature in the potential, PBH formation is facilitated at smaller e-fold numbers.

## Appendix C PBH abundance vs. \(\sigma\) values

In this section, we demonstrate, using Fig. 7, that when both the depth \(A\) and the position \(h_{0}\) of the dip in the potential are fixed at specific values, increasing the width \(\sigma\) leads to a reduction and eventual disappearance of the \(f_{PBH}\) curve. Higher values of \(\sigma\) indicate the absence of a dip, causing the potential to revert to its original form without a dip effect. Specifically, we set \(A=0.3\) and \(h_{0}=1.8\times 10^{17}\) GeV to illustrate this feature. It is important to note that this behavior can also be observed with other choices of \(A\) and \(h_{0}\).
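The washing-out of the dip at large \(\sigma\) can be checked directly from the slow-roll power spectrum. The sketch below evaluates \(\mathcal{P}_{\mathcal{R}}=U/(24\pi^{2}M_{\rm PL}^{4}\epsilon)\) for the dip potential of Eq. (12) on a grid in \(h\) and reports the peak value as \(\sigma\) grows; the grid range and the clipping of \(\epsilon\) are numerical safeguards of this sketch rather than part of the model.

```python
import numpy as np

M_PL_GEV = 2.435e18                  # reduced Planck mass in GeV (assumption)
M, xi, lam, A = 1.0, 1e4, 0.1, 0.3
h0 = 1.8e17 / M_PL_GEV

def peak_power(sigma_gev):
    sigma = sigma_gev / M_PL_GEV
    h = np.linspace(0.3 * h0, 3.0 * h0, 200_000)
    Omega = 1.0 + xi * h**2 / M**2
    U = 0.25 * lam * h**4 * (1.0 - A * np.exp(-((h - h0) ** 2) / (2.0 * sigma**2))) / Omega**2
    dUdh = np.gradient(U, h)
    dphidh = np.sqrt(1.0 / Omega + 1.5 * M**2 * (2.0 * xi * h / M**2) ** 2 / Omega**2)
    eps = 0.5 * M**2 * (dUdh / dphidh / U) ** 2
    # slow-roll estimate only; clip eps to avoid spurious divergences where U' ~ 0
    return np.max(U / (24.0 * np.pi**2 * M**4 * np.maximum(eps, 1e-12)))

# the peak decreases toward the classical value as sigma grows and the dip flattens out
for sigma_gev in (1.40e17, 3.0e17, 1.0e18):
    print(f"sigma = {sigma_gev:.2e} GeV -> peak P_R ~ {peak_power(sigma_gev):.2e}")
```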
2303.15286
Unsupervised Adaptation from Repeated Traversals for Autonomous Driving
For a self-driving car to operate reliably, its perceptual system must generalize to the end-user's environment -- ideally without additional annotation efforts. One potential solution is to leverage unlabeled data (e.g., unlabeled LiDAR point clouds) collected from the end-users' environments (i.e. target domain) to adapt the system to the difference between training and testing environments. While extensive research has been done on such an unsupervised domain adaptation problem, one fundamental problem lingers: there is no reliable signal in the target domain to supervise the adaptation process. To overcome this issue we observe that it is easy to collect unsupervised data from multiple traversals of repeated routes. While different from conventional unsupervised domain adaptation, this assumption is extremely realistic since many drivers share the same roads. We show that this simple additional assumption is sufficient to obtain a potent signal that allows us to perform iterative self-training of 3D object detectors on the target domain. Concretely, we generate pseudo-labels with the out-of-domain detector but reduce false positives by removing detections of supposedly mobile objects that are persistent across traversals. Further, we reduce false negatives by encouraging predictions in regions that are not persistent. We experiment with our approach on two large-scale driving datasets and show remarkable improvement in 3D object detection of cars, pedestrians, and cyclists, bringing us a step closer to generalizable autonomous driving.
Yurong You, Cheng Perng Phoo, Katie Z Luo, Travis Zhang, Wei-Lun Chao, Bharath Hariharan, Mark Campbell, Kilian Q. Weinberger
2023-03-27T15:07:55Z
http://arxiv.org/abs/2303.15286v1
# Unsupervised Adaptation from Repeated Traversals for Autonomous Driving ###### Abstract For a self-driving car to operate reliably, its perceptual system must generalize to the end-user's environment -- ideally without additional annotation efforts. One potential solution is to leverage unlabeled data (e.g., unlabeled LiDAR point clouds) collected from the end-users' environments (i.e. target domain) to adapt the system to the difference between training and testing environments. While extensive research has been done on such an unsupervised domain adaptation problem, one fundamental problem lingers: there is no reliable signal in the target domain to supervise the adaptation process. To overcome this issue we observe that it is easy to collect unsupervised data from multiple traversals of repeated routes. While different from conventional unsupervised domain adaptation, this assumption is extremely realistic since many drivers share the same roads. We show that this simple additional assumption is sufficient to obtain a potent signal that allows us to perform iterative self-training of 3D object detectors on the target domain. Concretely, we generate pseudo-labels with the out-of-domain detector but reduce false positives by removing detections of supposedly mobile objects that are persistent across traversals. Further, we reduce false negatives by encouraging predictions in regions that are not persistent. We experiment with our approach on two large-scale driving datasets and show remarkable improvement in 3D object detection of cars, pedestrians, and cyclists, bringing us a step closer to generalizable autonomous driving. Code is available at [https://github.com/YurongYou/Rote-DA](https://github.com/YurongYou/Rote-DA). ## 1 Introduction Autonomous vehicles and driver-assist systems require 3D object detectors to accurately identify and locate other traffic participants (cars, pedestrians and so on) to drive safely [26; 31; 12; 25; 21]. Modern 3D object detectors achieve high accuracy on benchmark datasets [9; 11; 41; 35; 36; 33; 42]. However, most benchmark data sets train and test classifiers on essentially the same locations (city, country), time, and weather conditions, and therefore represent the "best case" of an end-user using the self-driving car in precisely the same conditions it was trained on. A more realistic scenario is that self-driving cars trained in, for example, Germany will be driven in the USA. Unfortunately, past work has shown that this domain gap results in a catastrophic drop in accuracy [32]. Given that an end-user may choose to operate their car wherever they please, adapting the perception pipeline effectively to such domain shifts is a critical challenge. An obvious solution is to retrain the detector in the target domain (i.e., the end-user's location/environment). Unfortunately, this requires large amounts of labeled data, where expert human annotators painstakingly locate every object in LiDAR scans in every conceivable location. Such labeled data is all but impossible to obtain in sufficient quantity. However, _unlabeled_ data is not. No matter where the end-user intends to use their car, likely thousands of cars drive there every day already. By simply logging the data collected by cars with adequate drive-assist sensors, one obtains a wealth of information about the local environment, which should be useful to adapt a detector to this new domain. 
But it is unclear how to use this data: in particular, how can the detector learn to correct its many mistakes in this new domain if it has no labels at all? The key here is the fact that this unlabeled data is not just an arbitrary collection of unrelated scenes. If we look at a population of cars driving around in a city, we observe that they all visit a shared set of roads and intersections. Indeed, as pointed out by [40], any single vehicle will probably be driven on the same route, day in and day out (e.g., commute, grocery shopping, patrol routes). Even when one end-user takes their car on a new route, it is likely that other cars have taken that very route not long before. This fact implies that the unlabeled data obtained from cars will typically contain _multiple traversals of the same route_, obtained for free without any targeted data collection. Previous work has already shown that aggregating data from multiple traversals can aid visual odometry [2] and unsupervised object discovery [40]. In this paper we argue that multiple traversals are particularly suited for end-user domain adaptation. We assume the existence of unlabeled LiDAR data from several repeated traversals of routes within the target domain (e.g. collected a few hours or days apart). For a LiDAR point captured in any one of these traversals, we use the other traversals to compute a persistency prior (PP-score) [40], capturing how persistent this LiDAR point has been across traversals: persistent points are likely static background. The PP-score thus yields a proxy signal for foreground vs. background. This provides a powerful signal to correct both false positives and false negatives: detector outputs that mostly capture background points are likely false positives, and foreground points that are not captured by any detection reveal false negatives. To formalize this intuition, we propose a new iterative fine-tuning approach. We use the detector to generate 3D bounding boxes along the recorded traversals but remove boxes with lots of persistent (and thus static) points as false positives. We fine-tune the detector on this filtered data, and then "rinse and repeat". To reduce false negatives during this training, we introduce a new auxiliary loss that forces the detector to classify non-persistent LiDAR points as foreground. We refer to our method as _Rote Domain Adaptation (Rote-DA)_. The resulting approach is a simple modification of existing object detectors, but offers substantial accuracy gains in unsupervised domain adaptation. We demonstrate on the Lyft [11] and Ithaca-365 [5] benchmark data sets that our approach consistently leads to drastic improvements when adapting a detector trained on KITTI [9] to the local environments -- in some categories even outperforming a dedicated model trained on hand-labeled data in the target domain (which we intended as an upper bound). ## 2 Related Works We seek to adapt a 3D object detector from a source to a target domain with the help of unlabeled target data of _repeated traversals_. **Unsupervised Domain Adaptation in 3D.** Improving generalizability of visual recognition systems (trained on a source domain) without annotated data from the testing environment (target domain) falls under the purview of unsupervised domain adaptation (UDA). The key to successful adaptation is leveraging the right information about the target domain. After all, without any knowledge of the target domain, adapting any learning system would be extremely challenging, if not impossible.
The most common source of information used for adaptation in the literature is the unlabeled data from the target domain; ST3D [37] improves conventional self-training adaptation using stronger data augmentations and maintaining a memory bank of high-quality predictions for self-training throughout adaptation; inspired by the success of 2D UDA approaches that leverage feature alignment techniques [16; 29; 10; 20; 23], MLC-Net [17] proposes to encourage domain alignment by imposing consistency between a source detector and its exponential moving average [28] at the point, instance, and neural-statistics levels on the target unlabeled data. Beyond unlabeled data, other work has sought to use additional information from the target domain to improve adaptation. One notable work along these lines is _statistical normalization_ [32], where the authors identify the difference in car sizes as the biggest source of domain gap and propose to rescale the source data with the target-domain car sizes for adaptation. Knowing that the difference in weather conditions between source and target domain would cause changes in point cloud distributions, SPG [36] seeks to fill in points at foreground regions to address the domain gap. In addition to unlabeled data, other work [24; 38] has also explored temporal consistency -- or more precisely, tracking of rigid 3D objects -- to improve adaptation. Our work explores another rich yet easily attainable source of information -- repeated traversals. In principle, one could combine our approach with prior UDA methods that use only the unlabeled data [37; 17] for additional marginal gains at the cost of increased algorithmic complexity. We choose to keep our contribution simple and clear and focus only on self-training with repeated traversals, which in itself is very effective and straightforward to replicate. **Repeated Traversals.** Repeated traversals contain rich information that has already been used in a variety of scenarios. Early works utilize multiple traversals of the same route for localization [2; 14]. Repeated traversals of the same location allow the discovery of non-stationary points in a point cloud captured by modern self-driving sensors, since non-stationary points are less likely to persist across different traversals of the same location. To formalize this intuition, [2] develop an entropy-based measure, termed the ephemerality score (see Sec. 3.1), to determine dynamic points in a scene and subsequently use this signal to learn a representation for 2D visual odometry in a self-supervised manner. Building upon ephemerality, [40] utilize multiple common-sense rules to discover a set of mobile objects for self-training a mobile object detector without any human supervision. Similar to [40], we leverage information from repeated traversals and use self-training; however, that work focuses on single-class object discovery, whereas ours is the first to show how multiple traversals can be used for domain adaptation. In addition to detecting foreground points/objects, repeated traversals have also been utilized by Hindsight [39] to decorate 3D point clouds with learned features for better 3D object detection. In principle, we could combine our approach with Hindsight for better generalizability, but for simplicity we did not explore such a combination and leave it for future work. ## 3 Rote Domain Adaptation (Rote-DA) We seek to adapt a 3D object detector pretrained on a certain area/domain (e.g.
KITTI [8] in Germany) for reliable deployment to a different target area/domain (e.g., Lyft [11] in the USA). Without loss of generality, we assume all objects of interest are _dynamic_ (e.g., cars, pedestrians, and cyclists). Similar to prior work [37; 36; 17; 32], we assume access to unlabeled target data for adaptation. Crucially different from previous work, we assume that the unlabeled target data are collected from the same routes repeatedly, and that localization information is available for adaptation. We note that such an additional assumption is highly realistic, since with current localization technology [4; 39] these data could be easily collected by the end-users going about their daily lives. For simplicity, we will focus on adapting point-based detectors [38; 39; 40; 18; 24], specifically PointRCNN [26], which is one of the current state-of-the-art 3D object detectors.

Figure 1: A schematic layout of Rote-DA. The PointRCNN Proposal network classifies each input LiDAR point as car/pedestrian/cyclist (Class Predictions) with three binary classifiers. The PP-Score is used during fine-tuning in an auxiliary loss function \(L_{prop}^{cls}\) to reduce false-negatives. The Refinement network produces bounding boxes for the target data (Raw Detection Outputs). For the next self-training round, these are filtered with posterior and foreground/background filtering to reduce false positives, giving rise to the next pseudo-labels (bottom right).

### Background

Our work leverages the persistence prior score (PP-score) from multiple traversals [40] to adapt PointRCNN to a new, target domain. We review key concepts relevant to the understanding of our approach. **Persistence prior score (PP-score) from multiple traversals.** The PP-score [40; 2] is an entropy-based measure that quantifies how persistent a single LiDAR point is across multiple traversals. We assume access to unlabeled LiDAR data that are collected from multiple traversals of a set of locations \(L\); each traversal contains a series of LiDAR scans. To calculate the PP-score, we further assume that these LiDAR scans of a traversal \(t\) have been pre-processed, such that the LiDAR points around a location \(g\in L\) are aggregated to form a dense point cloud \(\mathbf{S}_{g}^{t}\). We note that \(\mathbf{S}_{g}^{t}\) is only used for PP-score computation, not as an input to 3D object detectors. Given a single 3D point \(\mathbf{q}\) around location \(g\), we can calculate its PP-score by the following steps. First, we count the number of its neighboring points within a certain radius \(r\) (say \(0.3\)m) in each \(\mathbf{S}_{g}^{t}\): \[N_{t}(\mathbf{q})=\left|\{\mathbf{p}_{i}\mid\|\mathbf{p}_{i}-\mathbf{q}\|_{2}<r,\mathbf{p}_{i}\in\mathbf{S}_{g}^{t}\}\right|. \tag{1}\] We then normalize \(N_{t}(\mathbf{q})\) across traversals \(t\in\{1,\cdots,T\}\) into a categorical probability: \[P(t;\mathbf{q})=\frac{N_{t}(\mathbf{q})}{\sum_{t^{\prime}=1}^{T}N_{t^{\prime}}(\mathbf{q})}. \tag{2}\] With \(P(t;\mathbf{q})\), we can then compute the PP-score \(\tau(\mathbf{q})\) by \[\tau(\mathbf{q})=\begin{cases}0&\text{if }N_{t}(\mathbf{q})=0\ \ \forall t;\\ \frac{H(P(t;\mathbf{q}))}{\log(T)}&\text{otherwise,}\end{cases} \tag{3}\] where \(H\) is the information entropy. Essentially, the more uniform \(P(t;\mathbf{q})\) is across traversals, the higher the PP-score is. This happens when the neighborhood of \(\mathbf{q}\) is stationary across traversals; i.e., \(\mathbf{q}\) is likely a background point.
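As an illustration, the PP-score of Eqs. (1)-(3) can be computed with a few lines of NumPy/SciPy; the data layout, one aggregated array per traversal, is an assumption of this sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def pp_score(query_points, traversal_clouds, r=0.3):
    """PP-score of Eqs. (1)-(3) for (N, 3) query points, given a list of
    (M_t, 3) aggregated point clouds S_g^t, one per traversal."""
    T = len(traversal_clouds)
    # N_t(q): number of neighbours within radius r in each traversal, Eq. (1)
    counts = np.stack(
        [cKDTree(cloud).query_ball_point(query_points, r, return_length=True)
         for cloud in traversal_clouds], axis=1).astype(float)   # shape (N, T)
    total = counts.sum(axis=1, keepdims=True)
    # P(t; q): categorical probability across traversals, Eq. (2)
    P = np.divide(counts, total, out=np.zeros_like(counts), where=total > 0)
    with np.errstate(divide="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)   # convention 0 log 0 = 0
    H = -(P * logP).sum(axis=1)                  # entropy of P(t; q)
    # tau(q) = H / log(T), and 0 if the point has no neighbours at all, Eq. (3)
    return np.where(total[:, 0] > 0, H / np.log(T), 0.0)
```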
In contrast, a low PP-score indicates that some traversals \(t\) have much higher \(P(t;\mathbf{q})\) than others. This suggests that the neighborhood of \(\mathbf{q}\) is sometimes empty (so low probability) and sometimes occupied (e.g., by a foreground car, so high probability), and when \(\mathbf{q}\) is detected by LiDAR, it is likely reflected from a foreground object. **PointRCNN.** PointRCNN [26] is a two-stage detector. In the first stage, each LiDAR point is classified into a foreground class or background, and a 3D box proposal is generated around each foreground point. The proposals are then passed along to the second stage for bounding box refinement, which refines both the class label and box pose. It is worth noting that this two-stage pipeline is widely adopted in many other detectors [6; 22; 34; 1]. An understanding of, and solution to, the error patterns of PointRCNN, especially when it is applied to new environments, is thus very much applicable to other detectors. By taking a deeper look at the inner workings of PointRCNN, we found that if a foreground LiDAR point is misclassified as background in the first stage, then it is removed from consideration for the refinement (i.e., bound to be a false negative). In our approach, we thus propose to incorporate the PP-score to correct this error during iterative fine-tuning. In the following, we describe the original loss function used to train PointRCNN's first stage. Let us denote by \(N_{c}\) the number of foreground classes and by \(N_{p}\) the number of points in a scene. An annotated point cloud can be represented by a set of tuples \(\{(\mathbf{q_{i}},\mathbf{y_{i}},\mathbf{b_{i}})\}_{i=1}^{N_{p}}\), where \(\mathbf{y_{i}}\) is a one-hot \(N_{c}\)-dimensional class label vector and \(\mathbf{b_{i}}\) is the bounding box pose that encapsulates \(\mathbf{q_{i}}\). The loss function can be decomposed into two terms: \[L(\{\mathbf{q_{i}},\mathbf{y_{i}},\mathbf{b_{i}}\}_{i=1}^{N_{p}})=\sum_{i=1}^{N_{p}}L_{\text{cls}}(\mathbf{q_{i}},\mathbf{y_{i}})+L_{\text{reg}}(\mathbf{q_{i}},\mathbf{b_{i}}). \tag{4}\] The first term \(L_{\text{cls}}\) is for per-point classification (or equivalently, _segmentation_ of the point cloud). The second term \(L_{\text{reg}}\) is for proposal regression. For the former, a focal loss [15] is used: \[\frac{1}{\alpha}L_{\text{cls}}(\mathbf{q_{i}},\mathbf{y_{i}})=-\sum_{c=1}^{N_{c}}\Big{[}y_{ic}(1-p_{c})^{\gamma}\log(p_{c})+(1-y_{ic})(p_{c})^{\gamma}\log(1-p_{c})\Big{]} \tag{5}\] where \(p_{c}\) is the one-vs-all probability of class \(c\), produced by PointRCNN's first stage; \(y_{ic}\) indexes the \(c\)-th position of \(\mathbf{y_{i}}\); \(\alpha\) and \(\gamma\) are hyperparameters for the focal loss (we use the default values \(\alpha=0.25\) and \(\gamma=2.0\)). ### Adaptation Strategy **Approach overview.** Our adaptation approach is built upon the conceptually simple but highly effective self-training for adaptation [13; 38]. The core idea is to iteratively apply the current model to obtain _pseudo-labels_ on the unlabeled target data, and use the pseudo-labels to fine-tune the current model. The current model is initialized by the source model (i.e., a pre-trained PointRCNN). Self-training works when the pseudo-labels are of high quality -- in the ideal case that the pseudo-labels are exactly the ground truths, self-training is equivalent to supervised fine-tuning. In practice, the pseudo-labels can be refined through the iterative process, but errors may also get reinforced.
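As a reference for the modifications described next, here is a minimal sketch of the per-point classification loss of Eq. (5); the sigmoid one-vs-all probabilities are an assumption consistent with typical implementations.

```python
import torch

def per_point_focal_loss(logits, y, alpha=0.25, gamma=2.0):
    """Per-point focal loss of Eq. (5).

    logits: (N_p, N_c) raw one-vs-all scores from the proposal network;
    y:      (N_p, N_c) binary (float 0/1) foreground labels.
    Returns one loss value per point.
    """
    p = torch.sigmoid(logits)
    pos = y * (1.0 - p).pow(gamma) * torch.log(p.clamp_min(1e-8))
    neg = (1.0 - y) * p.pow(gamma) * torch.log((1.0 - p).clamp_min(1e-8))
    return -alpha * (pos + neg).sum(dim=1)
```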
It is therefore important to have a "quality control" mechanism on pseudo-labels. In this section, we propose a set of novel approaches to improve the quality of pseudo-labels, taking advantage of the PP-scores, illustrated in Figure 1. **Pseudo-label refinement for false-positive removal.** Pseudo-labels generated by the source model are often noisy when the source model is first applied to target data that differ from the source data. As shown in [32] and our experiments, PointRCNN suffers a serious performance drop in new environments. Many of the detected boxes are false positives or false negatives. To control the quality, we first leverage the PP-score to identify and filter out false positives. To assess the quality of a bounding box \(b\), we first crop out the points \(\{\mathbf{q}_{j}\}_{j\in b}\) in it, and query their PP-scores \(\{\tau(\mathbf{q}_{j})\}_{j\in b}\). A bounding box is highly likely to be a false positive if it contains many _persistent_ points, i.e., points with high PP-scores. To this end, we summarize the PP-scores \(\{\tau(\mathbf{q}_{j})\}_{j\in b}\) of a box by the \(\alpha_{\text{{FB-F}}}\) percentile, and remove the box if the value is larger than a threshold \(\gamma_{\text{{FB-F}}}\). In our experiments, we set \(\alpha_{\text{{FB-F}}}=20\) and \(\gamma_{\text{{FB-F}}}=0.5\) (we find the results are not sensitive to these values). Since we are effectively filtering out boxes that do not respect the foreground/background segmentation obtained from multiple traversals, we term this filtering approach _Foreground Background Filtering_ **(FB-F)**. In addition, we present a complementary way to identify another kind of false positive. Essentially, FB-F can effectively identify false positives that should have been detected as background. However, it cannot identify false positives that result from wrong classification or size estimates. Indeed, after FB-F, we still see a decent amount of such false positives; the remaining pseudo-labels are more numerous than the ground-truth boxes. One naive way to remove them is by thresholding the model's confidence. However, setting a suitable threshold is nontrivial in self-training, since the model's confidence grows over the iterations. We thus propose to directly set a cap on the average number of pseudo-labels per class \(c\) in a scene. We make the following assumption: as long as the source and target domains are not from drastically different areas (e.g., a city vs. barren land), the object frequency in the source domain can serve as a good indicator for what a well-performing object detector should see in the target domain. To this end, we set the cap to \(\beta\times\frac{N_{c}^{\mathcal{S}}}{N_{\text{scenes}}^{\mathcal{S}}}\), where \(N_{\text{scenes}}^{\mathcal{S}}\) and \(N_{c}^{\mathcal{S}}\) are the total number of source training scenes and the number of ground-truth objects of class \(c\) in them, respectively. The value \(\beta\in[0,1]\) is a hyperparameter that controls the tightness of the cap. With this cap, after creating pseudo-labels on \(N_{\text{scenes}}^{\mathcal{T}}\) target scenes we keep the top \(\beta\times\frac{N_{c}^{\mathcal{S}}}{N_{\text{scenes}}^{\mathcal{S}}}\times N_{\text{scenes}}^{\mathcal{T}}\) of them for each class \(c\) according to the model's confidence. Given that we control the distribution of objects (similar to posterior regularization [7]), we term this filtering step Posterior Filtering (PO-F).
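A minimal sketch of both filtering steps for a single object class; the data layout (per-box arrays of PP-scores, per-scene lists of boxes and confidence scores) is hypothetical.

```python
import numpy as np

def fb_filter(boxes, pp_scores_per_box, alpha_fbf=20, gamma_fbf=0.5):
    """Foreground Background Filtering (FB-F): drop boxes whose alpha-th
    percentile of contained PP-scores exceeds gamma (mostly persistent,
    hence likely static background). Boxes without points are dropped too."""
    return [box for box, pp in zip(boxes, pp_scores_per_box)
            if len(pp) > 0 and np.percentile(pp, alpha_fbf) <= gamma_fbf]

def po_filter(boxes_per_scene, conf_per_scene, n_src_objects, n_src_scenes, beta=1.0):
    """Posterior Filtering (PO-F): keep only the most confident pseudo-labels,
    capped at beta * (source objects per scene) * (number of target scenes)."""
    n_target_scenes = len(boxes_per_scene)
    cap = int(beta * n_src_objects / n_src_scenes * n_target_scenes)
    ranked = sorted(
        ((c, i, box) for i, (boxes, confs) in enumerate(zip(boxes_per_scene, conf_per_scene))
         for box, c in zip(boxes, confs)),
        key=lambda t: -t[0])
    kept = [[] for _ in range(n_target_scenes)]
    for _, i, box in ranked[:cap]:
        kept[i].append(box)
    return kept
```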
**Foreground Background Supervision (FB-S) for false-negative reduction.** FB-F, as discussed above, can effectively filter out false positives that should have been background. Now we show that the PP-scores are also useful for correcting false negatives. As mentioned in subsection 3.1, the first stage of PointRCNN is the key to false negatives: if a foreground point is misclassified as background, then it is bound to be a false negative. To rectify this, we incorporate the PP-score into the fine-tuning process. Specifically, we modify the pseudo-class-label \(\mathbf{\hat{y}_{i}}\) of a point \(\mathbf{q_{i}}\) in Equation 5 with the PP-score \(\tau(\mathbf{q_{i}})\) to obtain the training label \(\mathbf{y_{i}}\): \[\mathbf{y_{i}}=\begin{cases}\mathbf{0}&\text{if $\tau(\mathbf{q_{i}})>\tau_{U}$,}\\ \mathbf{1}&\text{if $\tau(\mathbf{q_{i}})<\tau_{L}$ and $\mathbf{\hat{y}_{i}}=\mathbf{0}$,}\\ \mathbf{\hat{y}_{i}}&\text{otherwise,}\end{cases} \tag{6}\] where \(\mathbf{0}\) is a zero vector and \(\mathbf{1}\) is an all-one vector. Essentially, if a point is persistent (i.e., high \(\tau(\mathbf{q_{i}})\)), we set the pseudo-class-label to background \(\mathbf{0}\). On the contrary, for a non-persistent point (i.e., low \(\tau(\mathbf{q_{i}})\)) that is deemed background (i.e., \(\mathbf{\hat{y}_{i}}=\mathbf{0}\)) by the current model, we encourage the scores of all the foreground classes to be as high as possible, so that a foreground proposal can be generated. We note that while this foreground class label may be wrong, the subsequent refinement by PointRCNN's second stage can effectively correct it. ## 4 Experiments **Datasets.** We validate our approach on a single source dataset, the KITTI dataset [8], and two target datasets: the Lyft Level 5 Perception dataset [11] and the Ithaca-365 dataset [5]. The KITTI dataset is collected in Karlsruhe, Germany, while the Lyft and Ithaca-365 datasets are collected in Palo Alto (California) and Ithaca (New York) in the US, respectively. This setup is chosen to simulate a large domain difference [32]. To show good generalizability, we use _exactly the same_ hyper-parameters for adaptation experiments on these two target datasets. To the best of our knowledge3, Lyft and Ithaca-365 are the only two publicly available autonomous driving datasets that have both bounding box annotations and multiple traversals with accurate 6-DoF localization. We use these two datasets to test out two different adaptation scenarios. The first scenario is that the detector is trained on data from nearby locations, _but not from the roads and intersections it will be driven on_. Thus, following [40], we split the Lyft dataset so that the "train"/test sets are _geographically disjoint_; we also discard locations with fewer than 2 traversals in the "train" set. This results in a "train"/test split of 11,873/4,901 point clouds for the Lyft dataset. We use all traversals available (2-10 in the dataset) to compute the PP-score for each scene. Footnote 3: We note that though there are some scenes with multiple traversals in the nuScenes dataset [3] as used in [39; 40], the localization in the \(z\)-axis is not accurate ([https://www.nuscenes.org/nuscenes#data-format](https://www.nuscenes.org/nuscenes#data-format)). The second adaptation scenario is when the detector uses unlabeled data _from the same routes that it sees at test time_. This scenario is highly likely in practice since, as mentioned before, a self-driving car can leverage data collected by other cars on the same route previously.
To test this, we split the Ithaca-365 dataset based on the data collection date, keeping the same geographical locations in both train and test. This results in 4445/1644 point clouds. The "train" sets of these two target datasets are used without labels. We use the roof LiDAR (40/60-beam in Lyft; 128-beam in Ithaca-365), and the global 6-DoF localization with the calibration matrices directly from the raw data. We do not use the intensity channel of the LiDAR data due to the drastic difference in sensor setups between datasets. We use 5 traversals to compute the PP-score for each scene. We pre-train the 3D object detection models on the train split (3,712 point clouds) of the KITTI dataset to detect the _Car_, _Pedestrian_ and _Cyclist_ classes, and adapt them to detect the same objects in Lyft, and _Car_ and _Pedestrian_ in Ithaca-365, since there are too few cyclists in that dataset to provide a reasonable performance estimate. Since KITTI only provides 3D object labels within the frontal view, we focus on frontal-view object detection only during adaptation and evaluation. **Evaluation metric.** On the Lyft dataset, we follow [39] to evaluate object detection in the bird's-eye view (BEV) and in 3D for the mobile objects by KITTI [9] metrics and conventions: we report average precision (AP) with the intersection over union (IoU) thresholds at 0.7/0.5 for Car and 0.5/0.25 for Pedestrian and Cyclist. We further follow [32] to evaluate the AP at various depth ranges. Due to space constraints, we present AP\({}_{\text{BEV}}\) at IoU=0.7 for Car and 0.5 for Pedestrian and Cyclist in the main text and defer the rest of the results to the supplementary materials. On the Ithaca-365 dataset, the default match criterion is the minimum distance to ground-truth bounding boxes. We evaluate the mean of AP with match thresholds of {0.5, 1, 2, 4} meters for Car and Pedestrian. We follow [39] to evaluate detection only in the frontal view. **Implementation of PointRCNN.** We use the default implementation/configuration of PointRCNN [26] from OpenPCDet [19]. For fine-tuning, we fine-tune the model for 10 epochs with learning rate \(1.5\times 10^{-3}\) (pseudo-labels are regenerated and refined after each epoch). All models are trained/fine-tuned with 4 GPUs (NVIDIA 2080Ti/3090/A6000). **Comparisons.** We compare the proposed method against two methods with publicly available implementations: Statistical Normalization (SN) [32] and ST3D [37]. SN _assumes access to the mean car size of the target domain_, and applies object-size scaling to address the domain gap brought by different car sizes. Since there is less variability in box sizes among pedestrians and cyclists, we only scale the car class using the target-domain statistics. ST3D achieves adaptation via self-training on the target data with stronger augmentations, maintaining a memory bank of high-quality pseudo-labels. ### Adaptation performance on KITTI \(\rightarrow\) Lyft and Ithaca-365 In Table 1 and Table 2 we show the performance of adapting a KITTI pre-trained PointRCNN detection model to the Lyft and the Ithaca-365 datasets. We observe that despite its simplicity, Rote-DA outperforms all baselines on almost all metrics, across both datasets and across object types, confirming the potent learning signal from multiple traversals. Note that the hyper-parameters are kept exactly the same between the experiments on these two datasets, showing the strong generalizability of Rote-DA.
While SN is more accurate than Rote-DA for cars on Lyft, it uses external information about car sizes that is unavailable to the other techniques, and that is not useful for other classes.

\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline \hline
 & \multicolumn{4}{c}{Car} & \multicolumn{4}{c}{Pedestrian} \\
\cline{2-9}
Method & 0-30 & 30-50 & 50-80 & 0-80 & 0-30 & 30-50 & 50-80 & 0-80 \\
\hline
No Adaptation & 54.3 & 32.1 & 1.2 & 29.3 & 52.1 & 21.1 & 0.0 & 25.4 \\
ST3D (R10) & 66.2 & 38.8 & 3.8 & 36.3 & 53.0 & 23.9 & 0.0 & 26.5 \\
ST3D (R30) & 65.4 & 28.5 & 9.9 & 33.3 & 47.9 & 24.9 & 0.0 & 25.7 \\
Rote-DA (Ours) & **66.9** & **43.5** & **15.6** & **43.5** & **53.6** & **33.0** & **0.2** & **31.2** \\
\hline
SN & 54.7 & 33.0 & 2.0 & 30.0 & 52.3 & 22.2 & 0.0 & 26.0 \\
\hline
In Domain & 72.4 & 50.1 & 24.4 & 50.5 & 55.3 & 29.9 & 2.6 & 32.7 \\
\hline \hline
\end{tabular}
\end{table} Table 2: **Detection performance of KITTI \(\rightarrow\) Ithaca-365 adaptation.** We evaluate the mAP as described in section 4 by different depth ranges and object types. Please refer to Table 1 for naming.

\begin{table}
\begin{tabular}{l c c c c c c c c c c c c}
\hline \hline
 & \multicolumn{4}{c}{Car} & \multicolumn{4}{c}{Pedestrian} & \multicolumn{4}{c}{Cyclist} \\
\cline{2-13}
Method & 0-30 & 30-50 & 50-80 & 0-80 & 0-30 & 30-50 & 50-80 & 0-80 & 0-30 & 30-50 & 50-80 & 0-80 \\
\hline
\multicolumn{13}{c}{(numerical entries not recovered from the source)} \\
\hline \hline
\end{tabular}
\end{table} Table 1: **Detection performance of KITTI \(\rightarrow\) Lyft adaptation.** Given a PointRCNN detector [26] pre-trained on the KITTI dataset, adaptation strategies improve its detection performance on the target Lyft dataset. We break down the detection AP\({}_{\text{BEV}}\) by depth ranges. We also show the in-domain performance of the same model (training and testing on the Lyft dataset) as a reference. Please refer to the supplementary material for the corresponding AP\({}_{\text{3D}}\) results and results under other IoU metrics, where we observe a similar trend. * ST3D’s adaptation involves 30 epochs of self-training by default, so for a fair comparison we also show ST3D’s results early-stopped at the 10th epoch.

Rote-DA works especially well on the challenging categories of pedestrians and cyclists, almost doubling the performance on cyclists and even outperforming an in-domain detector in some scenarios (pedestrians, 0-30 m range). In contrast, prior domain adaptation strategies actually _hurt_ performance for these categories. For example, ST3D through the course of self-training gradually over-fits to cars and "forgets" pedestrians and cyclists (compare rows ST3D (R10) and ST3D (R30); see also Figure 4). Interestingly, when Rote-DA has access to unlabeled data from past traversals of the test routes (as on the Ithaca-365 dataset), the performance gains are even more significant, especially at the mid-to-far ranges (30-80m), improving accuracy by more than 10\(\times\) for cars in the 50-80m range.

### Analysis

Unless otherwise stated, we conduct the following studies on the Lyft dataset.

**Effects of different components.**
We ablate the different components of Rote-DA, pseudo-label refinement and Foreground Background Supervision (FB-S), in Table 3. To start, vanilla self-training without any of the components yields only marginal improvements for detecting cars, whereas the performance of the adapted detectors on the rarer classes (pedestrians, cyclists) degrades significantly compared to no adaptation. Posterior Filtering (PO-F) is an effective strategy to prevent this performance degradation. Combining Foreground Background Filtering (FB-F) with PO-F yields significant improvements across all classes, showing the usefulness of the PP-score for filtering and the efficacy of our pseudo-label refinement strategy. Combining Foreground Background Supervision (FB-S) with PO-F alone is not always effective, but combining FB-S with the full pseudo-label refinement procedure brings significant improvements, especially on cyclists.

**Effects of different rounds of iterative fine-tuning.** As is customary for any iterative approach, we analyze the effect of the number of rounds of self-training in Figure 2. One conclusion is immediate: vanilla self-training degrades (even underperforming no adaptation) over more rounds of self-training, potentially due to learning from erroneous pseudo-labels. Rote-DA (and its variants) improves for the first few rounds of training (before the 10th round) and experiences little to no performance degradation over more rounds of training.

**Effect of Foreground Background Supervision (FB-S).** FB-S seeks to reduce false negatives by correcting the foreground predictions of the model. To validate this claim, we plot the precision-recall curves of various detectors in Figure 3. Comparing Rote-DA and PO-F + FB-F, we observe that the max recall for Rote-DA is much higher than for PO-F + FB-F, suggesting FB-S is encouraging the detector to produce more meaningful boxes at foreground regions, thus reducing false negatives.

**Qualitative visualization.** In Figure 4, we visualize the adaptation results of various adaptation strategies on both the Lyft and Ithaca-365 datasets. We observe that, in line with the quantitative results, ST3D has a good coverage of cars but usually ignores pedestrians and cyclists and generates many

\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c}
\hline \hline
 & & & \multicolumn{4}{c}{Car} & \multicolumn{4}{c}{Pedestrian} & \multicolumn{4}{c}{Cyclist} \\
\cline{4-15}
PO-F & FB-F & FB-S & 0-30 & 30-50 & 50-80 & 0-80 & 0-30 & 30-50 & 50-80 & 0-80 & 0-30 & 30-50 & 50-80 & 0-80 \\
\hline
 & & & 58.3 & 38.3 & 14.0 & 38.3 & 22.7 & 12.4 & 0.2 & 10.7 & 27.6 & 0.2 & 0.0 & 9.4 \\
✓ & & & 68.8 & 52.7 & 12.3 & 46.0 & 46.3 & 30.9 & 0.0 & 22.5 & **66.3** & 6.3 & 0.0 & 35.2 \\
 & ✓ & ✓ & 61.2 & 40.1 & 14.6 & 40.9 & 41.4 & 29.7 & 0.8 & 23.4 & 47.5 & 2.7 & 0.0 & 23.3 \\
✓ & & ✓ & 68.4 & 55.4 & 19.6 & 49.0 & 41.7 & 30.8 & 1.4 & 21.8 & 43.3 & 1.7 & 0.0 & 21.2 \\
✓ & ✓ & & **71.6** & 54.9 & 19.1 & 50.4 & **52.0** & 38.1 & **4.2** & **29.4** & 58.3 & 19.6 & 0.0 & 34.8 \\
✓ & ✓ & ✓ & 69.0 & **58.8** & **22.6** & **52.1** & 48.1 & **40.8** & 2.6 & 28.7 & 64.7 & **26.4** & 0.0 & **40.0** \\
\hline
\multicolumn{3}{c}{No Adaptation} & 57.1 & 33.9 & 9.0 & 35.4 & 37.1 & 21.6 & 0.6 & 19.1 & 43.7 & 8.2 & 0.0 & 24.4 \\
\hline \hline
\end{tabular}
\end{table} Table 3: **Ablation study on different components in Rote-DA. Different from vanilla self-training, Rote-DA includes two additional components: pseudo-label refinement and Foreground Background Supervision (FB-S).
In particular, the pseudo-label refinement can be further subdivided into two subcomponents: FB-F and PO-F. We show the detection performance (AP\({}_{\text{BEV}}\)) of variants of Rote-DA without either of these two parts. We report performance at R10 for all variants.** false positive cars; SN successfully corrects the car size bias, but can hardly improve the recall of the detection; Rote-DA adapts to the object size bias in the target domain while having a good recall rate for all three object classes.

**Additional results, analyses, and qualitative visualizations.** Please refer to the supplementary material for evaluation with more metrics, results on a different detection model (PVRCNN [25]) and on a different adaptation scenario (Waymo Open Dataset [27] to Ithaca-365), and more qualitative results.

## 5 Discussion

**Privacy concerns.** As our method relies on collecting unlabeled repeated traversals of the same routes, there are privacy concerns that have to be addressed before public deployment. This could be achieved by making data collection an opt-in option for drivers. Also, the collected data should be properly anonymized, or reduced to random road segments, to remove any potentially personally identifiable information.

**Limitations.** Our method currently focuses on adapting _dynamic_, i.e. mobile, object detectors to target domains using multiple traversals. However, Rote-DA could easily be extended to _static_ objects by selecting appropriate thresholds for Foreground Background Filtering and Foreground Background Supervision. We leave this exploration for future work. Also, we assume that the source and target domains share the same object frequency for Posterior Filtering. However, this assumption could be alleviated by querying local authorities for the object frequency or by estimating the object frequencies from similar regions (we assume access to the locations of the target domain).

Figure 3: **Precision-recall curves on KITTI \(\rightarrow\) Lyft with 10 rounds of self-training.** We show the P-R curves of ablated variants of Rote-DA; please refer to Figure 2 for naming. The precision and recall are calculated under \(\text{AP}_{\text{BEV}}\) with IoU=0.7 for Cars and 0.5 for Pedestrians and Cyclists.

Figure 2: Performance of various detectors on KITTI \(\rightarrow\) Lyft for different rounds of self-training (averaged across 3 runs, with the mean and one standard deviation reported). Van. ST stands for vanilla self-training without any modification; Dir. Apply stands for directly applying the source detector without any adaptation. We observe that the performance of vanilla self-training degrades over more rounds of self-training, whereas variants of Rote-DA experience little to no degradation in performance after 10 rounds of training.

## 6 Conclusion

End-user domain adaptation is one of the key challenges towards safe and reliable self-driving vehicles. In this paper we claim that, unlike most domain adaptation settings in machine learning, the self-driving car setting naturally gives rise to a weak supervision signal that is exceptionally well-suited to adapting a 3D object detector to a new environment. As drivers share roads, unlabeled LiDAR data automatically comes in the form of multiple traversals of the same routes. We show that with such data we can iteratively refine a detector to new domains. This is effective because we prevent the detector from reinforcing its mistakes with three "safeguards": 1. Posterior Filtering, 2. Foreground Background Filtering, 3. Foreground Background Supervision.
Although the experiments in this paper already indicate that Rote Domain Adaptation may currently be the most effective approach for adaptation in the self-driving context, we believe that the true potential of this method may be even greater than our paper suggests. As cars with driver-assist features become commonplace, collecting unlabeled data will become easier and cheaper. This could give rise to unlabeled datasets that are several orders of magnitude larger than the original source dataset, possibly yielding consistently more accurate detectors than are obtainable with purely hand-labeled training sets.

## 7 Acknowledgement

This research is supported by grants from the National Science Foundation NSF (IIS-1724282, TRIPODS-1740822, IIS-2107077, OAC-2118240, OAC-2112606 and IIS-2107161), the Office of Naval Research DOD (N00014-17-1-2175), the DARPA Learning with Less Labels program (HR001118S0044), and the Cornell Center for Materials Research with funding from the NSF MRSEC program (DMR-1719875).
2308.06836
Global Weak Solutions for the Half-Wave Maps Equation in $\mathbb{R}$
We establish the existence of weak global solutions of the half-wave maps equation with the target $S^2$ on $\mathbb{R}^{1+1}$ with large initial data in $\dot{H}^1 \cap \dot{H}^{\frac{1}{2}}(\mathbb{R})$. We first prove the global well-posedness of a regularized equation. Then we show that the weak limit of the regularized solutions is a weak solution of the half-wave maps equation as the regularization parameter $\varepsilon \rightarrow 0$.
Yang Liu
2023-08-13T19:19:18Z
http://arxiv.org/abs/2308.06836v1
# Global weak solutions for the half-wave maps equation in \(\mathbb{R}\)

###### Abstract.

We establish the existence of weak global solutions of the half-wave maps equation with the target \(S^{2}\) on \(\mathbb{R}^{1+1}\) with large initial data in \(\dot{H}^{1}\cap\dot{H}^{\frac{1}{2}}(\mathbb{R})\). We first prove the global well-posedness of a regularized equation. Then we show that the weak limit of the regularized solutions is a weak solution of the half-wave maps equation as the regularization parameter \(\varepsilon\to 0\).

## 1. Introduction

Let \(u:\mathbb{R}^{1+1}\to S^{2}\subseteq\mathbb{R}^{3}\) be smooth and bounded with the property that \(\nabla_{t,x}u(t,\cdot)\in L^{r}(\mathbb{R}^{n})\) for some \(r\in(1,\infty)\), and furthermore \(\lim_{|x|\to+\infty}u(t,x)=Q\) for some fixed \(Q\in S^{2}\), for each \(t\). We define the operator \((-\Delta)^{\frac{1}{2}}u=-\sum_{j=1}^{n}(-\triangle)^{-\frac{1}{2}}\partial_{j}(\partial_{j}u)\). The Cauchy problem for the _half-wave map_ is given by \[\begin{cases}&\partial_{t}u=u\times(-\Delta)^{\frac{1}{2}}u:=f(x,t)\\ &u(0,x)=u_{0}:\mathbb{R}\to S^{2},\end{cases} \tag{1.1}\] where \(u_{0}\in\dot{H}^{1}\cap\dot{H}^{\frac{1}{2}}(\mathbb{R},S^{2})\) is smooth and constant outside of a compact domain (this ensures that \((-\Delta)^{\frac{1}{2}}u_{0}\) is well-defined). We consider the Cauchy problem (1.1) with large data in \(\dot{H}^{1}\cap\dot{H}^{\frac{1}{2}}(\mathbb{R})\). We show that there exists a weak solution to the half-wave maps equation (1.1) in \(L^{2}_{t,loc}([0,\infty),\,\dot{H}^{\frac{1}{2}}(\mathbb{R},S^{2}))\) for smooth initial data \(u_{0}\in\dot{H}^{1}\cap\dot{H}^{\frac{1}{2}}(\mathbb{R},S^{2})\). We say \(u\) is a global weak solution of (1.1) if it satisfies \[-\int_{0}^{\infty}\int_{\mathbb{R}}u\cdot\varphi_{t}dxdt-\int_{\mathbb{R}}u_{0}\varphi(x)dx=\int_{0}^{\infty}\int_{\mathbb{R}}(-\Delta)^{\frac{1}{4}}(u\times\varphi)\cdot(-\Delta)^{\frac{1}{4}}u\ dxdt, \tag{1.2}\] for all \(\varphi\in C^{\infty}_{c}([0,\infty)\times\mathbb{R},\mathbb{R}^{3})\).

**1.1 Theorem**.: _Let \(u_{0}\in\dot{H}^{1}\cap\dot{H}^{\frac{1}{2}}(\mathbb{R},S^{2})\) be smooth initial data, constant outside of a compact domain, such that \(\lim_{|x|\to\infty}\ u_{0}(x)=Q\) for a fixed \(Q\in S^{2}\). Then the Cauchy problem (1.1) admits a global weak solution \(u\in L^{2}_{t,loc}([0,\infty),\dot{H}^{\frac{1}{2}}(\mathbb{R},S^{2}))\) such that \(\lim_{|x|\to\infty}\ u(t,x)=Q\) for all \(t\in[0,\infty)\)._

We introduce the following parabolic regularization of (1.1): \[u_{t}-\varepsilon\triangle u=u\times(-\Delta)^{\frac{1}{2}}u. \tag{1.3}\] Using the theory of parabolic equations, we can establish the existence of classical global solutions \(u_{\varepsilon}\in L^{\infty}_{t}L^{\infty}_{x}\cap C^{0}_{t}([0,\infty),\dot{H}^{\frac{1}{2}}(\mathbb{R}))\) of (1.3). Then we show that, as \(\varepsilon\to 0\), the weak limit of these regularized solutions is a weak solution of the half-wave maps equation in the sense of (1.2). In this work, we only consider initial data that is smooth and constant outside of a compact domain, since we need the extra regularity to establish the classical well-posedness theory of (1.3) and the convergence of the solution \(u\) to the fixed point \(Q\). We expect the theorem to hold for more general initial data in \(\dot{H}^{\frac{1}{2}}(\mathbb{R})\). We leave this for future work.
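As a formal remark (a sketch, for smooth solutions; the same computation reappears for the regularized flow in Section 4): the pointwise constraint \(|u|=1\) is at least formally propagated by (1.1), since
\[\partial_{t}|u|^{2}=2\,u\cdot\partial_{t}u=2\,u\cdot\big(u\times(-\Delta)^{\frac{1}{2}}u\big)=0,\]
because \(a\cdot(a\times b)=0\) for all \(a,b\in\mathbb{R}^{3}\).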
## 2. Background

The half-wave maps equation is related to the well-studied Schrodinger maps equation, of the form \[u_{t}=u\times\triangle u,\] and to the classical wave maps equation \[\Box u=\partial_{\alpha}\partial^{\alpha}u=-u\partial_{\alpha}u^{T}\partial^{\alpha}u.\] Moreover, we can also view the half-wave maps equation as the Landau-Lifshitz equation \[u_{t}=u\times(-\Delta)^{\frac{1}{2}}u+\lambda u\times(u\times(-\Delta)^{\frac{1}{2}}u)\] without the Gilbert damping term, in the limit \(\lambda\to 0\). Weak solutions of the half-wave maps equation on the torus \(\mathbb{T}^{n}\) were studied in [12], [19] and [20] in the context of the well-posedness problem for the fractional Landau-Lifshitz equation without Gilbert damping. Pu and Guo [20] established the existence of weak solutions of the half-wave maps equation on the torus \(\mathbb{T}^{n}\) via the vanishing viscosity method and Kato's method.

The half-wave maps equation (1.1) admits a conserved energy \[E(t):=\int_{\mathbb{R}^{n}}|(-\Delta)^{\frac{1}{4}}u|^{2}dx, \tag{2.1}\] where \((-\Delta)^{\frac{1}{4}}u:=-\sum_{j=1}^{n}(-\triangle)^{-\frac{3}{4}}\partial_{j}(\partial_{j}u)\). This gives the a priori condition that \(u(t)\in\dot{H}^{\frac{1}{2}}(\mathbb{R}^{n})\), which implies that the half-wave maps equation is energy-critical when \(n=1\). In the work [16], Lenzmann and Schikorra give a full classification of the traveling solitary waves for the energy-critical problem with target \(S^{2}\) for \(n=1\). It is also worth noting that the critical points of (2.1) are the \(\frac{1}{2}\)-harmonic maps. Fractional harmonic maps were studied in the works [4], [5] and [6].

The one-dimensional energy-critical half-wave maps are of notable physical interest, and have been intensively studied in the works [2, 8, 9, 15, 24]. The one-dimensional half-wave maps equation arises as a continuum limit of the discrete _Calogero-Moser (CM) spin system_. Interested readers may refer to [17] for the derivation of the half-wave maps equation from the CM system. For more on CM systems, we refer the reader to [3, 10], in which the authors study the theory of completely integrable systems. In addition, the classical CM spin systems can be obtained by taking a suitable semiclassical limit of the quantum spin chains related to the well-known _Haldane-Shastry (HS) spin chains_ (see e.g. [11, 21]), which are exactly solvable quantum models.

The global well-posedness of the Cauchy problem (1.1) with target \(S^{2}\) for small \(\dot{B}^{\frac{n}{2}}_{2,1}\times\dot{B}^{\frac{n}{2}-1}_{2,1}\) initial data was established by Krieger and Sire [14] for \(n\geq 5\). The result was later improved by Kiesenhofer and Krieger [13] to \(n=4\). In previous work [18], the author established global well-posedness for the half-wave map with \(S^{2}\) target for small \(\dot{H}^{\frac{n}{2}}\times\dot{H}^{\frac{n}{2}-1}\) initial data for \(n\geq 5\). Global well-posedness for the equation with \(\mathbb{H}^{2}\) target for small smooth \(\dot{B}^{\frac{n}{2}}_{2,1}\times\dot{B}^{\frac{n}{2}-1}_{2,1}\) initial data was also proven in [18]. These works are based on the strategy of transforming (1.1) into a nonlinear wave equation. One can then utilize the well-established theory for wave maps (e.g. [22, 23]) to show global well-posedness for the half-wave maps equation with small initial data.
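Since the conservation of (2.1) underlies the a priori bounds used throughout this paper, we record the formal computation behind it (a sketch, for smooth solutions with sufficient decay, using the self-adjointness of \((-\Delta)^{\frac{1}{4}}\)):
\[\frac{d}{dt}E(t)=2\int_{\mathbb{R}^{n}}(-\Delta)^{\frac{1}{4}}u\cdot(-\Delta)^{\frac{1}{4}}\partial_{t}u\,dx=2\int_{\mathbb{R}^{n}}(-\Delta)^{\frac{1}{2}}u\cdot\big(u\times(-\Delta)^{\frac{1}{2}}u\big)\,dx=0,\]
since \(b\cdot(a\times b)=0\) for all \(a,b\in\mathbb{R}^{3}\).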
## 3. Regularization of the Half-Wave Map Equation

For \(\varepsilon,T>0\), we define the regularized half-wave map equation as \[\begin{cases}&\partial_{t}u_{\varepsilon}-\varepsilon\triangle u_{\varepsilon}=u_{\varepsilon}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon}:=N(u_{\varepsilon})(x,t)\\ &u_{\varepsilon}(0,x)=u_{0}:\mathbb{R}\to S^{2}\end{cases} \tag{3.1}\] After the regularization, equation (3.1) is a nonlinear parabolic equation. We define its corresponding fundamental solution \(K_{\varepsilon}(x,t)\) for each \(\varepsilon\) as \[K_{\varepsilon}(x,t)=\begin{cases}\int_{\mathbb{R}}e^{2\pi ix\cdot\xi-\varepsilon|\xi|^{2}t}d\xi=\frac{1}{\sqrt{4\varepsilon\pi t}}e^{-\frac{|x|^{2}}{4\varepsilon t}}&\text{if $t>0$}\\ 0&\text{if $t=0$}\end{cases} \tag{3.2}\] Moreover, we know that \(\widehat{K}_{\varepsilon}(\xi,t)=e^{-\varepsilon|\xi|^{2}t}\) and \(\int_{\mathbb{R}}K_{\varepsilon}(x,t)\,dx=1\). In particular, \(K_{\varepsilon}(x,t)\) is a solution of \[\begin{cases}&\partial_{t}K_{\varepsilon}(x,t)-\varepsilon\triangle K_{\varepsilon}(x,t)=0\\ &K_{\varepsilon}(x,0)=\delta(x)\end{cases} \tag{3.3}\] By the Duhamel principle, we can define a solution \(u_{\varepsilon}\) of (3.1) as \[u_{\varepsilon}(t,x)=\int_{0}^{t}\int_{\mathbb{R}}K_{\varepsilon}(x-y,t-s)\,\,N(u_{\varepsilon})(y,s)\,dy\,ds+\int_{\mathbb{R}}K_{\varepsilon}(x-y,t)u_{0}(y)\,dy \tag{3.4}\] We first use an iteration scheme to find a local solution \(u_{\varepsilon}\) of (3.1) in \(C^{0}_{t}\dot{H}^{1}\cap L^{\infty}_{t}L^{\infty}_{x}([0,T]\times\mathbb{R},\mathbb{R}^{3})\).

**3.1 Theorem**.: _Let \(\varepsilon>0\), and let \(u_{0}\in\dot{H}^{1}\cap\dot{H}^{\frac{1}{2}}(\mathbb{R})\) be smooth and constant outside of a compact set of \(\mathbb{R}\). Then there exists a maximal time \(T=T(u_{0})>0\) such that (3.1) admits a unique solution \(u_{\varepsilon}\) in \(C^{0}_{t}\dot{H}^{1}_{x}\cap L^{\infty}_{t}L^{\infty}_{x}([0,T]\times\mathbb{R},\mathbb{R}^{3})\) such that \(\lim_{|x|\to\infty}\,\,u_{\varepsilon}(t,x)=Q\) for all \(t\in[0,T)\)._

Proof.: We use an iteration scheme to show that there exists a solution of equation (3.1) in the solution space \(X_{T}=L^{\infty}_{t}L^{\infty}_{x}\cap C^{0}_{t}\dot{H}^{1}_{x}([0,T]\times\mathbb{R})\). We start with \(u^{(0)}_{\varepsilon}\), which solves the homogeneous equation: \[\begin{cases}&\partial_{t}u^{(0)}_{\varepsilon}-\varepsilon\triangle u^{(0)}_{\varepsilon}=0\\ &u^{(0)}_{\varepsilon}(0,\cdot)=u_{0}\end{cases} \tag{3.5}\] By the fundamental solution, we know that \(u^{(0)}_{\varepsilon}(t,x)=K_{\varepsilon}\star u_{0}(t,x)=\int_{\mathbb{R}}K_{\varepsilon}(t,x-y)u_{0}(y)\,dy\). Hence, we have \[\|u^{(0)}_{\varepsilon}\|_{L^{\infty}_{t}L^{\infty}_{x}}\leq\|u_{0}\|_{L^{\infty}_{x}}\Big{|}\int_{\mathbb{R}}K_{\varepsilon}(t,x-y)\,dy\Big{|}\leq\|u_{0}\|_{L^{\infty}_{x}}\] Hence we know that \[\|u_{\varepsilon}^{(0)}\|_{L^{\infty}_{t}L^{\infty}_{x}}\leq\|u_{0}\|_{L^{\infty}_{x}} \tag{3.6}\] Next, we consider the \(C^{0}_{t}\dot{H}^{1}\) norm of \(u_{\varepsilon}^{(0)}\).
\[\begin{split}\|u_{\varepsilon}^{(0)}(t,\cdot)\|_{\dot{H}^{1}}&=\|\int_{\mathbb{R}}K_{\varepsilon}(t,y)\nabla_{x}u_{0}(x-y)\,dy\|_{L^{2}_{x}}\\ &\leq\|K_{\varepsilon}(t,\cdot)\|_{L^{1}_{x}}\|\nabla_{x}u_{0}\|_{L^{2}_{x}}\\ &\leq\|u_{0}\|_{\dot{H}^{1}}\end{split} \tag{3.7}\] Hence, we have \[\|u_{\varepsilon}^{(0)}\|_{L^{\infty}_{t}\dot{H}^{1}}\leq\|u_{0}\|_{\dot{H}^{1}} \tag{3.8}\] Therefore, we conclude that \(u_{\varepsilon}^{(0)}\in X_{T}\). Moreover, \(u_{\varepsilon}^{(0)}\) is the unique solution of (3.5) by the uniqueness theory for the homogeneous heat equation. Next, we define \(u_{\varepsilon}^{(1)}\) as a solution of \[\begin{cases}&\partial_{t}u_{\varepsilon}^{(1)}-\varepsilon\triangle u_{\varepsilon}^{(1)}=u_{\varepsilon}^{(0)}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}:=N(u_{\varepsilon}^{(0)})\\ &u_{\varepsilon}^{(1)}(0,\cdot)=u_{0}\end{cases} \tag{3.9}\] By Duhamel's principle, we have \[u_{\varepsilon}^{(1)}=K_{\varepsilon}\star N(u_{\varepsilon}^{(0)})+K_{\varepsilon}\star u_{0}.\] First of all, we know that \[\begin{split}&\|K_{\varepsilon}\star N(u_{\varepsilon}^{(0)})(t,\cdot)\|_{L^{\infty}_{x}}\\ &\leq\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\ \int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}|u_{\varepsilon}^{(0)}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}|(s,y)\,dy\,ds\\ &\leq\|u_{\varepsilon}^{(0)}\|_{L^{\infty}_{t}L^{\infty}_{x}}\int_{0}^{t}\frac{1}{\sqrt{4\pi\varepsilon(t-s)}}\ \|e^{-\frac{|x-\cdot|^{2}}{4\varepsilon(t-s)}}\|_{L^{2}_{y}}\ \|(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}(s,\cdot)\|_{L^{2}_{y}}\,ds\\ &\lesssim\|u_{\varepsilon}^{(0)}\|_{L^{\infty}_{t}\dot{H}^{1}}\int_{0}^{t}\frac{1}{(t-s)^{\frac{1}{4}}}\,ds\\ &\lesssim t^{\frac{3}{4}}\ \|u_{\varepsilon}^{(0)}\|_{L^{\infty}_{t}\dot{H}^{1}}\end{split}\] Therefore, we have \[\max_{t\in[0,T]}\|u_{\varepsilon}^{(1)}\|_{L^{\infty}_{x}}\leq CT^{\frac{3}{4}}\|u_{\varepsilon}^{(0)}\|_{L^{\infty}_{t}\dot{H}^{1}}+\|u_{0}\|_{L^{\infty}_{x}} \tag{3.10}\] For the \(C^{0}_{t}\dot{H}^{1}\) norm, we use Minkowski's inequality for integrals to derive: \[\begin{split}&\|K_{\varepsilon}\star N(u_{\varepsilon}^{(0)})(t,\cdot)\|_{\dot{H}^{1}_{x}}\\ &=\|\nabla_{x}K_{\varepsilon}\star N(u_{\varepsilon}^{(0)})(t,\cdot)\|_{L^{2}_{x}}\\ &\leq\int_{0}^{t}\|\nabla_{x}K_{\varepsilon}(t-s)\star N(u_{\varepsilon}^{(0)})(s)\|_{L^{2}_{x}}\,ds.\end{split}\] Through Young's inequality, we have \[\begin{split}&\|\nabla_{x}K_{\varepsilon}(t-s)\star N(u_{\varepsilon}^{(0)})(s)\|_{L^{2}_{x}}\\ &\leq\|\nabla_{x}K_{\varepsilon}(t-s)\|_{L^{1}_{x}}\|N(u_{\varepsilon}^{(0)})(s)\|_{L^{2}_{x}}\end{split}\] We know that \[\begin{split}&\|N(u_{\varepsilon}^{(0)})(s)\|_{L^{2}_{x}}^{2}\\ =&\int_{\mathbb{R}}|u_{\varepsilon}^{(0)}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}|^{2}\,dx\\ \leq&\int_{\mathbb{R}}|u_{\varepsilon}^{(0)}|^{2}|(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}|^{2}\,dx\\ \lesssim&\|u_{\varepsilon}^{(0)}\|^{2}_{L^{\infty}_{t}L^{\infty}_{x}}\|u_{\varepsilon}^{(0)}\|^{2}_{L^{\infty}_{t}\dot{H}^{1}_{x}},\end{split}\] and \[\begin{split}&\|\nabla_{x}K_{\varepsilon}(t-s)\|_{L^{1}_{x}}\\ &=\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\frac{1}{2\varepsilon(t-s)}\ \int_{\mathbb{R}}|x-y|e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}\,dx\\ &\leq\frac{C}{\sqrt{\varepsilon(t-s)}}.\end{split}\] Thus we have \[\|K_{\varepsilon}\star N(u_{\varepsilon}^{(0)})\|_{L^{\infty}_{t}\dot{H}^{1}_{x}}\leq C\sup_{t\in[0,T]}\int_{0}^{t}\frac{1}{\sqrt{\varepsilon(t-s)}}\,ds\leq C\sqrt{T}, \tag{3.11}\] where \(C\) depends on \(\varepsilon\) and on \(\|u_{\varepsilon}^{(0)}\|_{X_{T}}\). Therefore, we know that the solution
\(u_{\varepsilon}^{(1)}\in X_{T}\) as well. We then show that \(u_{\varepsilon}^{(1)}\) is the unique solution of (3.9) in the space \(X_{T}\). Assume we have another solution \(v_{\varepsilon}^{(1)}\in X_{T}\) of (3.9). We consider \(w_{\varepsilon}=u_{\varepsilon}^{(1)}-v_{\varepsilon}^{(1)}\), and we have \[\left\{\begin{array}{c}\partial_{t}w_{\varepsilon}-\varepsilon\triangle w_{\varepsilon}=0\\ w_{\varepsilon}(\cdot,0)=0\end{array}\right. \tag{3.12}\] For the energy functional \(E(t)=\int_{\mathbb{R}}|w_{\varepsilon}|^{2}dx\), we have \[\frac{d}{dt}\int_{\mathbb{R}}|w_{\varepsilon}|^{2}dx=-2\varepsilon\int_{\mathbb{R}}|\nabla_{x}w_{\varepsilon}|^{2}dx\leq 0 \tag{3.13}\] Hence \[\frac{d}{dt}E(t)\leq 0 \tag{3.14}\] Since \(E(0)=0\), we know that \(E(t)\equiv 0\) for all \(t\in[0,T]\). Thus \(w_{\varepsilon}(t,\cdot)=0\) for all \(t\in[0,T]\), so \(u_{\varepsilon}^{(1)}=v_{\varepsilon}^{(1)}\) on \([0,T]\times\mathbb{R}\). Therefore, \(u_{\varepsilon}^{(1)}\) is the unique solution of (3.9) in \(X_{T}\). For \(j\geq 2\), we define the general iterative scheme inductively as follows: \[\left\{\begin{array}{c}\partial_{t}u_{\varepsilon}^{(j)}-\varepsilon\triangle u_{\varepsilon}^{(j)}=u_{\varepsilon}^{(j-1)}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(j-1)}:=N(u_{\varepsilon}^{(j-1)})\\ u_{\varepsilon}^{(j)}(0,\cdot)=u_{0}\end{array}\right.\] We have \[u_{\varepsilon}^{(j)}=K_{\varepsilon}\star N(u_{\varepsilon}^{(j-1)})+K_{\varepsilon}\star u_{0}.\] Following the same procedure, we obtain a unique solution \(u_{\varepsilon}^{(j)}\in X_{T}\). Therefore, we conclude that \[u_{\varepsilon}^{(j)}\in X_{T}\quad\forall j\geq 1. \tag{3.15}\] Next, we want to show that \(\{u_{\varepsilon}^{(j)}\}\) is a Cauchy sequence in \(X_{T}\). For any \(j,k\) large enough, we have \[\begin{split}\|u_{\varepsilon}^{(j)}-u_{\varepsilon}^{(k)}\|_{L^{\infty}_{t}\dot{H}^{1}}=&\|K_{\varepsilon}\star(N(u_{\varepsilon}^{(j-1)})-N(u_{\varepsilon}^{(k-1)}))\|_{L^{\infty}_{t}\dot{H}^{1}_{x}}\\ \leq&\|K_{\varepsilon}\star((u_{\varepsilon}^{(j-1)}-u_{\varepsilon}^{(k-1)})\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(j-1)})\|_{L^{\infty}_{t}\dot{H}^{1}_{x}}\\ &+\|K_{\varepsilon}\star(u_{\varepsilon}^{(k-1)}\times(-\Delta)^{\frac{1}{2}}(u_{\varepsilon}^{(j-1)}-u_{\varepsilon}^{(k-1)}))\|_{L^{\infty}_{t}\dot{H}^{1}_{x}}\end{split} \tag{3.16}\] We estimate the two terms in the above equation separately.
For the first term, we have \[\begin{split}&\|K_{\varepsilon}\star((u_{\varepsilon}^{(j-1)}-u_{\varepsilon}^{(k-1)})\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(j-1)})\|_{L^{\infty}_{t}\dot{H}^{1}}\\ &\leq\int_{0}^{T}\|\nabla_{x}K_{\varepsilon}(t-s)\|_{L^{1}_{x}}\|(u_{\varepsilon}^{(j-1)}-u_{\varepsilon}^{(k-1)})\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(j-1)}(\cdot,s)\|_{L^{2}_{x}}ds\\ &\leq C\sqrt{T}\|u_{\varepsilon}^{(j-1)}\|_{L^{\infty}_{t}\dot{H}^{1}}\|u_{\varepsilon}^{(j-1)}-u_{\varepsilon}^{(k-1)}\|_{L^{\infty}_{t}L^{\infty}_{x}}\end{split} \tag{3.17}\] Similarly, for the second term, we have \[\begin{split}&\|K_{\varepsilon}\star(u_{\varepsilon}^{(k-1)}\times(-\Delta)^{\frac{1}{2}}(u_{\varepsilon}^{(j-1)}-u_{\varepsilon}^{(k-1)}))\|_{L^{\infty}_{t}\dot{H}^{1}_{x}}\\ &\leq\int_{0}^{T}\|\nabla_{x}K_{\varepsilon}(t-s)\|_{L^{1}_{x}}\|u_{\varepsilon}^{(k-1)}\times(-\Delta)^{\frac{1}{2}}(u_{\varepsilon}^{(j-1)}-u_{\varepsilon}^{(k-1)})\|_{L^{2}_{x}}\\ &\leq C\sqrt{T}\|u_{\varepsilon}^{(k-1)}\|_{L^{\infty}_{t}L^{\infty}_{x}}\|u_{\varepsilon}^{(j-1)}-u_{\varepsilon}^{(k-1)}\|_{L^{\infty}_{t}\dot{H}^{1}}\end{split} \tag{3.18}\] Hence, we know that \[\|u_{\varepsilon}^{(j)}-u_{\varepsilon}^{(k)}\|_{L^{\infty}_{t}\dot{H}^{1}}\leq C\sqrt{T}\,\|u_{\varepsilon}^{(j-1)}-u_{\varepsilon}^{(k-1)}\|_{X_{T}}, \tag{3.19}\] where \(C\) depends on \(\|u_{\varepsilon}^{(k-1)}\|_{L^{\infty}_{t}L^{\infty}_{x}}\) and \(\|u_{\varepsilon}^{(j-1)}\|_{L^{\infty}_{t}\dot{H}^{1}}\). Moreover, \[\begin{split}&\|u_{\varepsilon}^{(j)}-u_{\varepsilon}^{(k)}(t,\cdot)\|_{L^{\infty}_{x}}\\ \leq&\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\ \int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}|N(u_{\varepsilon}^{(j-1)})-N(u_{\varepsilon}^{(k-1)})|(y,s)\,dy\,ds\\ \leq&\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\ \int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}|(u_{\varepsilon}^{(j-1)}-u_{\varepsilon}^{(k-1)})\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(j-1)}|(y,s)\,dy\,ds\\ &+\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\ \int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}|u_{\varepsilon}^{(k-1)}\times(-\Delta)^{\frac{1}{2}}(u_{\varepsilon}^{(j-1)}-u_{\varepsilon}^{(k-1)})|(y,s)\,dy\,ds\\ \leq&\|u_{\varepsilon}^{(j-1)}-u_{\varepsilon}^{(k-1)}\|_{L^{\infty}_{t}L^{\infty}_{x}}\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\ \|e^{-\frac{|x-\cdot|^{2}}{4\varepsilon(t-s)}}\|_{L^{2}_{y}}\ \|(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(j-1)}(s,\cdot)\|_{L^{2}_{y}}\,ds\\ &+\|u_{\varepsilon}^{(k-1)}\|_{L^{\infty}_{t}L^{\infty}_{x}}\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\ \|e^{-\frac{|x-\cdot|^{2}}{4\varepsilon(t-s)}}\|_{L^{2}_{y}}\ \|(-\Delta)^{\frac{1}{2}}(u_{\varepsilon}^{(j-1)}-u_{\varepsilon}^{(k-1)})(s,\cdot)\|_{L^{2}_{y}}\,ds\\ \leq& C\|u_{\varepsilon}^{(j-1)}-u_{\varepsilon}^{(k-1)}\|_{X_{T}}\int_{0}^{t}\frac{1}{(t-s)^{\frac{1}{4}}}\,ds\\ \leq& Ct^{\frac{3}{4}}\ \|u_{\varepsilon}^{(j-1)}-u_{\varepsilon}^{(k-1)}\|_{X_{T}}\end{split} \tag{3.20}\] Hence, \[\|u_{\varepsilon}^{(j)}-u_{\varepsilon}^{(k)}\|_{L^{\infty}_{t}L^{\infty}_{x}}\leq CT^{\frac{3}{4}}\|u_{\varepsilon}^{(j-1)}-u_{\varepsilon}^{(k-1)}\|_{X_{T}} \tag{3.21}\] Therefore, we have \[\|u_{\varepsilon}^{(j)}-u_{\varepsilon}^{(k)}\|_{L^{\infty}_{t}L^{\infty}_{x}\cap C^{0}_{t}\dot{H}^{1}}\leq CT^{\alpha}\|u_{\varepsilon}^{(j-1)}-u_{\varepsilon}^{(k-1)}\|_{L^{\infty}_{t}L^{\infty}_{x}\cap C^{0}_{t}\dot{H}^{1}}, \tag{3.22}\] for some \(\alpha\in(0,1)\), where \(C\) depends on
\(\|u_{\varepsilon}^{(j-1)}\|_{X_{T}}\) and \(\|u_{\varepsilon}^{(k-1)}\|_{X_{T}}\). We choose \(T>0\) small enough so that \(CT^{\alpha}<1\). Then we have a contraction mapping from \(L^{\infty}_{t}L^{\infty}_{x}\cap C^{0}_{t}\dot{H}^{1}\) to itself. Thus, we conclude that there exists a \(u_{\varepsilon}\in L^{\infty}_{t}L^{\infty}_{x}\cap C^{0}_{t}\dot{H}^{1}\) s.t. \(u_{\varepsilon}^{(j)}\to u_{\varepsilon}\) in \(L^{\infty}_{t}L^{\infty}_{x}\cap C^{0}_{t}\dot{H}^{1}\) as \(j\to\infty\). Moreover, \(u_{\varepsilon}\) is the solution of (3.1) in \(X_{T}\), and it has the form: \[u_{\varepsilon}=K_{\varepsilon}\star N(u_{\varepsilon})+K_{\varepsilon}\star u_{0}\] Furthermore, we have \[\|u_{\varepsilon}\|_{X_{T}}\leq C\big{(}\|u_{0}\|_{L^{\infty}_{x}}+\|u_{0}\|_{\dot{H}^{1}}\big{)} \tag{3.23}\] Hence, we have shown that there exists a unique solution \(u_{\varepsilon}\) of (3.1) in \(L^{\infty}_{t}L^{\infty}_{x}\cap C^{0}_{t}\dot{H}^{1}([0,T]\times\mathbb{R})\).

**3.2 Lemma**.: _For the solution \(u_{\varepsilon}\) we obtained, we have_ \[\lim_{|x|\to\infty}u_{\varepsilon}(t,x)=Q,\ \forall t\in[0,T). \tag{3.24}\]

Proof.: We show the convergence iteratively for each \(u_{\varepsilon}^{(j)}\); hence the limit \(u_{\varepsilon}\) also converges to \(Q\) as \(|x|\to\infty\). For \(t\in[0,T)\) and \(x\) large, we first have \[|x\cdot(u_{\varepsilon}^{(0)}-Q)| \leq\Big{|}\int_{\mathbb{R}}\frac{1}{\sqrt{4\varepsilon\pi t}}e^{-\frac{|x-y|^{2}}{4\varepsilon t}}x(u_{0}(y)-Q)\,dy\Big{|}\] \[\leq\Big{|}\int_{\mathbb{R}}\frac{1}{\sqrt{4\varepsilon\pi t}}e^{-\frac{|x-y|^{2}}{4\varepsilon t}}(x-y)(u_{0}(y)-Q)\,dy\Big{|} \tag{3.25}\] \[+\Big{|}\int_{\mathbb{R}}\frac{1}{\sqrt{4\varepsilon\pi t}}e^{-\frac{|x-y|^{2}}{4\varepsilon t}}y(u_{0}(y)-Q)\,dy\Big{|} \tag{3.26}\] For (3.26), we have \[\int_{\mathbb{R}}\frac{1}{\sqrt{4\varepsilon\pi t}}e^{-\frac{|x-y|^{2}}{4\varepsilon t}}|y||u_{0}(y)-Q|\,dy\leq C\|y(u_{0}(y)-Q)\|_{L^{\infty}_{y}}.\] Since \(u_{0}\) is constant outside of a compact domain, we know that \(y(u_{0}(y)-Q)\) is bounded. Hence we have \[\|x\cdot(u_{\varepsilon}^{(0)}-Q)\|_{L^{\infty}_{x}}\leq C. \tag{3.27}\] So we know that \[\lim_{|x|\to\infty}u_{\varepsilon}^{(0)}=Q,\ \text{for all}\ t\in[0,T).\] Next, we consider \(u_{\varepsilon}^{(1)}\). We have \[\lim_{|x|\to\infty}|u_{\varepsilon}^{(1)}(t,x)-Q|\leq\lim_{|x|\to\infty}|K_{\varepsilon}\star N(u_{\varepsilon}^{(0)})|+\lim_{|x|\to\infty}|K_{\varepsilon}\star u_{0}-Q|.\] We already know that \(\lim_{|x|\to\infty}|K_{\varepsilon}\star u_{0}-Q|=0\). We then show that \[\lim_{|x|\to\infty}K_{\varepsilon}\star N(u_{\varepsilon}^{(0)})=0.\] We first show that
\[|x\cdot K_{\varepsilon}\star N(u_{\varepsilon}^{(0)})|\leq C,\quad\text{for large }x. \tag{3.28}\] We have \[|x\cdot K_{\varepsilon}\star N(u_{\varepsilon}^{(0)})| =\Big{|}\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\ \int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}x(u_{\varepsilon}^{(0)}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)})(s,y)\,dy\,ds\Big{|}\] \[\leq\Big{|}\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\ \int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}(x-y)(u_{\varepsilon}^{(0)}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)})(s,y)\,dy\,ds\Big{|} \tag{3.29}\] \[+\Big{|}\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\ \int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}y(u_{\varepsilon}^{(0)}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)})(s,y)\,dy\,ds\Big{|} \tag{3.30}\] For (3.29), we have \[\Big{|}\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\ \int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}(x-y)(u_{\varepsilon}^{(0)}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)})(s,y)\,dy\,ds\Big{|}\] \[\leq\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\|e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}(x-y)\|_{L_{y}^{2}}\|u_{\varepsilon}^{(0)}\|_{L_{y}^{\infty}}\|(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}\|_{L_{y}^{2}}\,ds\] \[\leq C\sqrt{t}\|u_{\varepsilon}^{(0)}\|_{\dot{H^{1}}}\] \[\leq C.\] Then we consider (3.30). We have \[\Big{|}\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\ \int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}y(u_{\varepsilon}^{(0)}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)})(s,y)\,dy\,ds\Big{|}\] \[\leq\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\|e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}\|_{L_{y}^{1}}\|u_{\varepsilon}^{(0)}\|_{L_{y}^{\infty}}\|y(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}\|_{L_{y}^{\infty}}\,ds\] \[\leq C\sqrt{t}\|y(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}\|_{L_{y}^{\infty}}\] So we reduce the problem to showing that \[\|y(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}(y)\|_{L_{y}^{\infty}}\leq C. \tag{3.31}\] We use integration by parts to derive \[|x\partial_{x}u_{\varepsilon}^{(0)}(t,x)| =|\frac{1}{\sqrt{4\varepsilon\pi t}}\int_{\mathbb{R}}x\partial_{x}e^{-\frac{|x-y|^{2}}{4\varepsilon t}}u_{0}(y)\,dy|\] \[=|\frac{1}{\sqrt{4\varepsilon\pi t}}\int_{\mathbb{R}}x\partial_{y}e^{-\frac{|x-y|^{2}}{4\varepsilon t}}u_{0}(y)\,dy|\] \[=|\frac{1}{\sqrt{4\varepsilon\pi t}}\int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon t}}x\partial_{y}u_{0}(y)\,dy|\] \[\leq|\frac{1}{\sqrt{4\varepsilon\pi t}}\int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon t}}(x-y)\partial_{y}u_{0}(y)\,dy|\] \[+|\frac{1}{\sqrt{4\varepsilon\pi t}}\int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon t}}y\partial_{y}u_{0}(y)\,dy|\] So we have \[|\frac{1}{\sqrt{4\varepsilon\pi t}}\int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon t}}(x-y)\partial_{y}u_{0}(y)\,dy|\] \[\leq C\|u_{0}\|_{\dot{H}^{1}}\|e^{-\frac{|x-y|^{2}}{4\varepsilon t}}(x-y)\|_{L^{2}_{y}} \tag{3.32}\] \[\leq C.\] For the second term, we have \[|\frac{1}{\sqrt{4\varepsilon\pi t}}\int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon t}}y\partial_{y}u_{0}(y)\,dy|\] \[\leq C\|y\partial_{y}u_{0}\|_{L^{\infty}_{y}}\|e^{-\frac{|x-y|^{2}}{4\varepsilon t}}\|_{L^{1}_{y}} \tag{3.33}\] \[\leq C\|y\partial_{y}u_{0}\|_{L^{\infty}_{y}}\] \[\leq C,\] since \(\partial_{y}u_{0}\) is compactly supported.
Hence we have \[|x\partial_{x}u_{\varepsilon}^{(0)}(t,x)|\leq C.\] We can iteratively consider the quantity \(x^{n}\partial_{x}u_{\varepsilon}^{(0)}(t,x)\) for \(n\geq 1\), and we have \[|x^{n}\partial_{x}u_{\varepsilon}^{(0)}(t,x)|\leq C. \tag{3.34}\] So we know that \(\partial_{x}u_{\varepsilon}^{(0)}(t,x)\) decays arbitrarily fast as \(|x|\to\infty\). We write \[(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}=(-\triangle)^{-\frac{1}{2}}\partial_{x}\partial_{x}u_{\varepsilon}^{(0)}. \tag{3.35}\] Note here that \((-\triangle)^{-\frac{1}{2}}\partial_{x}\) is the Hilbert transform, given by the Fourier multiplier \(i\,\mathrm{sgn}(\xi)\), so we have \[(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}=\frac{1}{\pi}\text{p.v. }\int_{-\infty}^{\infty}\frac{\partial_{y}u_{\varepsilon}^{(0)}(t,y)}{x-y}\,dy. \tag{3.36}\] Therefore, for \(x\cdot(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}(t,x)\), we have \[\begin{split}|x\cdot(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}(t,x)|&=\Big{|}\frac{1}{\pi}\text{ p.v. }\int_{-\infty}^{\infty}\frac{x\partial_{y}u_{\varepsilon}^{(0)}(t,y)}{x-y}\,dy\Big{|}\\ &\leq C\Big{|}\text{p.v. }\int_{-\infty}^{\infty}\frac{1}{(x-y)x^{n}}\,dy\Big{|}\\ &\leq C.\end{split} \tag{3.37}\] Moreover, we can also conclude that \(|x^{n}\cdot(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}(t,x)|\) is bounded for large \(x\). Hence \((-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}(t,x)\) decays arbitrarily fast as \(|x|\to\infty\). Thus we have \[\lim_{|x|\to\infty}u_{\varepsilon}^{(1)}=Q.\] From the above argument, we know that \(\lim_{|x|\to\infty}u_{\varepsilon}^{(j)}=Q\) holds whenever \[\|x\cdot(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(j-1)}(x)\|_{L_{x}^{\infty}}\leq C. \tag{3.38}\] We show (3.38) by iteration again. We first show that \[\|x\cdot(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(1)}\|_{L_{x}^{\infty}}\leq C. \tag{3.39}\] We know that \[(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(1)}\simeq K_{\varepsilon}\star(-\Delta)^{\frac{1}{2}}(N(u_{\varepsilon}^{(0)}))+(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}.\] With (3.37), we only need to show that \[\|x\cdot K_{\varepsilon}\star(-\Delta)^{\frac{1}{2}}(N(u_{\varepsilon}^{(0)}))\|_{L_{x}^{\infty}}\leq C.\] We have \[(-\Delta)^{\frac{1}{2}}N(u_{\varepsilon}^{(0)})(t,x)=\frac{1}{\pi}\text{ p.v. }\int_{-\infty}^{\infty}\frac{\partial_{y}N(u_{\varepsilon}^{(0)})(t,y)}{x-y}\,dy\] We know that \[\partial_{y}N(u_{\varepsilon}^{(0)})=\partial_{y}u_{\varepsilon}^{(0)}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}+u_{\varepsilon}^{(0)}\times\partial_{y}(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}\] As shown in (3.34) and (3.37), \(\partial_{y}u_{\varepsilon}^{(0)}\) and \((-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}\) decay arbitrarily fast as \(|x|\to\infty\). Hence \(\partial_{y}N(u_{\varepsilon}^{(0)})\) also decays arbitrarily fast as \(|x|\to\infty\). So we can conclude that
\[|x^{n}\cdot(-\Delta)^{\frac{1}{2}}N(u_{\varepsilon}^{(0)})|\leq C,\quad\text{for }n\geq 1. \tag{3.40}\] Thus we have that \[|x\cdot(K_{\varepsilon}\star(-\Delta)^{\frac{1}{2}}N(u_{\varepsilon}^{(0)}))|\] \[\leq|\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}x(-\Delta)^{\frac{1}{2}}N(u_{\varepsilon}^{(0)})(s,y)\,dyds|\] \[\leq|\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}(x-y)(-\Delta)^{\frac{1}{2}}N(u_{\varepsilon}^{(0)})(s,y)\,dyds| \tag{3.41}\] \[+|\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}y(-\Delta)^{\frac{1}{2}}N(u_{\varepsilon}^{(0)})(s,y)\,dyds| \tag{3.42}\] For (3.41), we have \[|\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}(x-y)(-\Delta)^{\frac{1}{2}}N(u_{\varepsilon}^{(0)})(s,y)\,dyds|\] \[\leq\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\|e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}(x-y)\|_{L_{y}^{2}}\|u_{\varepsilon}^{(0)}\|_{L_{y}^{\infty}}\|(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(0)}\|_{L_{y}^{2}}\,ds\] \[\leq C\sqrt{t}\|u_{\varepsilon}^{(0)}\|_{\dot{H^{1}}}\] \[\leq C.\] For (3.42), we have \[|\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\int_{\mathbb{R}}e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}y(-\Delta)^{\frac{1}{2}}N(u_{\varepsilon}^{(0)})(s,y)\,dyds|\] \[\leq\int_{0}^{t}\frac{1}{\sqrt{4\varepsilon\pi(t-s)}}\|e^{-\frac{|x-y|^{2}}{4\varepsilon(t-s)}}\|_{L_{y}^{1}}\|u_{\varepsilon}^{(0)}\|_{L_{y}^{\infty}}\|y(-\Delta)^{\frac{1}{2}}N(u_{\varepsilon}^{(0)})\|_{L_{y}^{\infty}}\,ds\] \[\leq C\sqrt{t}\|y(-\Delta)^{\frac{1}{2}}N(u_{\varepsilon}^{(0)})\|_{L_{y}^{\infty}}\] \[\leq C.\] Hence we have \[|x\cdot(K_{\varepsilon}\star(-\Delta)^{\frac{1}{2}}N(u_{\varepsilon}^{(0)}))(t,x)|\leq C.\] So we have shown that (3.39) holds. Then we use the same argument to show that \[\|x\cdot(-\Delta)^{\frac{1}{2}}u_{\varepsilon}^{(j)}\|_{L_{x}^{\infty}}\leq C\] holds for all \(j\geq 1\), which concludes the lemma.

## 4. Global Well-Posedness for the Regularized Equation

We consider the solution \(u_{\varepsilon}\) of the regularized equation (3.1). Although the solution \(u_{\varepsilon}\) no longer maps into \(S^{2}\), it is still bounded in \(L_{t}^{\infty}L_{x}^{\infty}([0,T]\times\mathbb{R})\). We consider \(v_{\varepsilon}=u_{\varepsilon}\cdot u_{\varepsilon}\). Then \(v_{\varepsilon}\) satisfies the following equation: \[\begin{split}\partial_{t}(u_{\varepsilon}\cdot u_{\varepsilon})&=2u_{\varepsilon}\cdot(\varepsilon\triangle u_{\varepsilon}+u_{\varepsilon}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon})\\ &=2\varepsilon\ u_{\varepsilon}\cdot\triangle u_{\varepsilon}\\ &=\varepsilon\triangle(u_{\varepsilon}\cdot u_{\varepsilon})-2\varepsilon|\nabla u_{\varepsilon}|^{2}\end{split} \tag{4.1}\] Hence \[\begin{cases}&\partial_{t}v_{\varepsilon}-\varepsilon\triangle v_{\varepsilon}=-2\varepsilon|\nabla u_{\varepsilon}|^{2}\leq 0\\ &v_{\varepsilon}(0,\cdot)=u_{0}\cdot u_{0}=1\end{cases} \tag{4.2}\] We want to show an a priori bound for \(v_{\varepsilon}\) in \(L_{t}^{\infty}L_{x}^{\infty}([0,T]\times\mathbb{R})\).
**4.1 Proposition** (Local maximum principle).: _Let \(\Omega\subseteq\mathbb{R}\) be an open and bounded set, and let \(u\in C^{2}(\Omega_{T})\cap C^{0}(\overline{\Omega}_{T})\), where \(\Omega_{T}=[0,T]\times\Omega\), satisfy_ \[\partial_{t}u-\triangle u\leq 0\quad\text{in}\quad\Omega_{T} \tag{4.3}\] _Then we have_ \[\sup_{\overline{\Omega}_{T}}u=\sup_{\partial\Omega_{T}}u, \tag{4.4}\] _where \(\partial\Omega_{T}:=([0,T]\times\partial\Omega)\cup(\{0\}\times\Omega)\)._

Proof.: It suffices to show the result for all \(T^{\prime}<T\). For each \(T^{\prime}\), we assume the maximum is attained at \((t_{0},x_{0})\). We first consider the case \(\triangle u>\partial_{t}u\) in \(\Omega_{T}\), and show that an interior maximum cannot be attained. Assume \((t_{0},x_{0})\notin\partial\Omega_{T}\); then at \((t_{0},x_{0})\) we have \[\begin{cases}\partial_{t}u(t_{0},x_{0})\geq 0\\ \triangle u(t_{0},x_{0})\leq 0\end{cases} \tag{4.5}\] Hence, we have \(\triangle u(t_{0},x_{0})\leq\partial_{t}u(t_{0},x_{0})\). This contradicts the assumption that \(\triangle u>\partial_{t}u\) in \(\Omega_{T}\). Hence, we have \((t_{0},x_{0})\in\partial\Omega_{T}\). For the general case \(\partial_{t}u-\triangle u\leq 0\), we consider \(u_{\lambda}(t,x)=u(t,x)-\lambda t\), for \(\lambda>0\). We see that \[\partial_{t}u_{\lambda}-\triangle u_{\lambda}=\partial_{t}u-\triangle u-\lambda<0\quad\text{in}\quad\Omega_{T}\] Hence, by the strict case above, we obtain \[\sup_{\overline{\Omega}_{T}}u_{\lambda}=\sup_{\partial\Omega_{T}}u_{\lambda}.\] Then we can conclude that \[\sup_{\overline{\Omega}_{T}}u\leq\sup_{\overline{\Omega}_{T}}u_{\lambda}+\lambda T=\sup_{\partial\Omega_{T}}u_{\lambda}+\lambda T\leq\sup_{\partial\Omega_{T}}u+\lambda T\] Letting \(\lambda\to 0\), we have \[\sup_{\overline{\Omega}_{T}}u=\sup_{\partial\Omega_{T}}u.\]

**4.2 Theorem** (Global maximum principle).: _Let \(T>0\), and let \(u_{\varepsilon}:[0,T]\times\mathbb{R}\to\mathbb{R}^{3}\) be a smooth function solving (3.1). We further assume that there is a point \(Q\in S^{2}\) s.t. \(u_{\varepsilon}(t,x)\to Q\) as \(|x|\to\infty\) for every \(t>0\). Then \(v_{\varepsilon}:=u_{\varepsilon}\cdot u_{\varepsilon}\) satisfies (4.2) and we have_ \[\max_{t\in[0,T]}\|v_{\varepsilon}\|_{L^{\infty}}\leq C \tag{4.6}\]

Proof.: For any \(R>0\), we consider the time slab \(\Omega_{T,R}=(0,T)\times B_{R}\), where \(B_{R}=(-R,R)\). On \(\Omega_{T,R}\), we know that \(v_{\varepsilon}\) satisfies \[\partial_{t}v_{\varepsilon}-\varepsilon\triangle v_{\varepsilon}\leq 0.\] Hence, by the local maximum principle, we have \[|v_{\varepsilon}|\leq\max_{\partial\Omega_{T,R}}|v_{\varepsilon}|. \tag{4.7}\] Furthermore, we know that \(u_{\varepsilon}(t,x)\to Q\) as \(|x|\to\infty\); hence \(v_{\varepsilon}(t,x)\to 1\) as \(|x|\to\infty\) for every \(t>0\), while \(v_{\varepsilon}(0,\cdot)=1\). Therefore, for any \(\lambda>0\), there exists an \(R>0\) s.t. \[|v_{\varepsilon}|\leq 1+\lambda\quad\text{on }[0,T]\times(\mathbb{R}\setminus B_{R}). \tag{4.8}\] Letting \(\lambda\to 0\), we conclude that \[\max_{t\in[0,T]}\|v_{\varepsilon}\|_{L^{\infty}_{x}(\mathbb{R})}\leq 1 \tag{4.9}\]

**4.3 Theorem**.: _For \(u_{\varepsilon}\) given by Theorem 3.1, we can extend the solution to a global solution \(u_{\varepsilon}\in L^{\infty}_{t}L^{\infty}_{x}\cap C^{0}_{t}([0,\infty),\dot{H}^{1}(\mathbb{R}))\)._

Proof.: We consider the energy term
\[E(u_{\varepsilon})=\frac{1}{2}\int_{\mathbb{R}}|\nabla u_{\varepsilon}|^{2}\,dx. \tag{4.10}\] We have \[\begin{split}&\frac{d}{dt}\Big{(}\frac{1}{2}\int_{\mathbb{R}}|\nabla u_{\varepsilon}|^{2}\,dx\Big{)}\\ &=\int\nabla\partial_{t}u_{\varepsilon}\cdot\nabla u_{\varepsilon}\,dx\\ &=-\varepsilon\int|\triangle u_{\varepsilon}|^{2}dx-\int(u_{\varepsilon}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon})\cdot\triangle u_{\varepsilon}\,dx\end{split} \tag{4.11}\] As \(u_{\varepsilon}\) is bounded by Theorem 4.2, we use the Cauchy-Schwarz inequality to conclude that \[\begin{split}&|\int(u_{\varepsilon}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon})\cdot\triangle u_{\varepsilon}\,dx|\\ &\leq\varepsilon\int|\triangle u_{\varepsilon}|^{2}dx+\frac{C}{\varepsilon}\int|\nabla u_{\varepsilon}|^{2}dx,\end{split} \tag{4.12}\] where \(C\) is a constant depending on \(\|u_{\varepsilon}\|_{L^{\infty}_{t}L^{\infty}_{x}}\), which is bounded by Theorem 4.2. Therefore, we have \[\partial_{t}E(u_{\varepsilon})\leq\frac{C}{\varepsilon}E(u_{\varepsilon}).\] By the Gronwall inequality, we conclude that \[E(u_{\varepsilon}(t))\leq E(u_{0})\,e^{\frac{C}{\varepsilon}t}\quad\forall t\geq 0. \tag{4.13}\] Therefore, we know that \[\|u_{\varepsilon}\|_{X_{T}}\leq C\,e^{\frac{C}{\varepsilon}T} \tag{4.14}\] Let \(T_{\max}\) denote the maximal existence time of the solution \(u_{\varepsilon}\). We show that \(T_{\max}=\infty\) by contradiction. Assume that \(T_{\max}<\infty\). For the solution \(u_{\varepsilon}\) on \([0,T_{\max})\times\mathbb{R}\), we consider the Cauchy problem with initial data \(\widetilde{u}_{\varepsilon}(0,\cdot)=u_{\varepsilon}(T_{\max}-\delta)\) for a small \(\delta>0\). The local existence result of Theorem 3.1 implies that there exists a unique solution \(\widetilde{u}_{\varepsilon}(t)\) in \(L^{\infty}_{t}L^{\infty}_{x}\cap C^{0}_{t}\dot{H}^{1}([T_{\max}-\delta,\widetilde{T}]\times\mathbb{R})\) for some \(\widetilde{T}>T_{\max}\) and small \(\delta>0\). The uniqueness of the solution implies that \(u_{\varepsilon}\) and \(\widetilde{u}_{\varepsilon}\) coincide on \([T_{\max}-\delta,T_{\max})\times\mathbb{R}\), which contradicts the maximality of \(T_{\max}\). Thus \(T_{\max}=\infty\), and the solution \(u_{\varepsilon}\) is a global solution. Moreover, the global \(\dot{H}^{1}\) solution can further be shown to be a global \(\dot{H}^{\frac{1}{2}}\) solution.

**4.4 Theorem**.: _For \(T,\varepsilon>0\) and smooth initial data \(u_{0}\in\dot{H}^{1}\cap\dot{H}^{\frac{1}{2}}(\mathbb{R},S^{2})\) which is constant outside of a compact set, the Cauchy problem (3.1) admits a unique solution \(u_{\varepsilon}\in L^{\infty}_{t}L^{\infty}_{x}\cap C^{0}_{t}([0,\infty),\dot{H}^{\frac{1}{2}}(\mathbb{R}))\) such that \(\lim_{|x|\to\infty}\;u_{\varepsilon}(t,x)=Q\) for a fixed \(Q\in S^{2}\)._

Proof.: We now consider the case where the initial data \(u_{0}\) is in \(\dot{H}^{\frac{1}{2}}\). For the solution \(u_{\varepsilon}=K_{\varepsilon}\star N(u_{\varepsilon})+K_{\varepsilon}\star u_{0}\), we first show that \(K_{\varepsilon}\star u_{0}\) is in \(C^{0}_{t}([0,\infty),\dot{H}^{\frac{1}{2}}(\mathbb{R}))\). We have \[\begin{split}&\|K_{\varepsilon}\star u_{0}(t,\cdot)\|_{L^{\infty}_{t}\dot{H}^{\frac{1}{2}}_{x}}\\ &=\||\xi|^{\frac{1}{2}}\widehat{K}_{\varepsilon}(t,\xi)\widehat{u}_{0}(\xi)\|_{L^{\infty}_{t}L^{2}_{\xi}}\\ &=\||\xi|^{\frac{1}{2}}e^{-\varepsilon|\xi|^{2}t}\widehat{u}_{0}(\xi)\|_{L^{\infty}_{t}L^{2}_{\xi}}\\ &\leq\|u_{0}\|_{\dot{H}^{\frac{1}{2}}}\end{split} \tag{4.15}\] Next, we consider the \(K_{\varepsilon}\star N(u_{\varepsilon})\) part.
We first use the interpolation theorem to show that it is in \(C^{0}_{t,loc}([0,\infty),\dot{H}^{\frac{1}{2}}(\mathbb{R}))\). We know that \[\|f\|_{\dot{H}^{\frac{1}{2}}}\leq\|f\|^{\frac{1}{2}}_{L^{2}}\|f\|^{\frac{1}{2}}_{\dot{H}^{1}},\] thus we only need to show that \(K_{\varepsilon}\star N(u_{\varepsilon})\in L^{2}_{x}\). We have \[\begin{split}&\|K_{\varepsilon}\star N(u_{\varepsilon})(t,\cdot)\|_{L^{2}_{x}}\\ &=\|\int_{0}^{t}\int_{\mathbb{R}}K_{\varepsilon}(t-s,x-y)(u_{\varepsilon}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon})(s,y)\,dyds\|_{L^{2}_{x}}\\ &\leq\int_{0}^{t}\|K_{\varepsilon}(t-s)\|_{L^{1}_{x}}\|N(u_{\varepsilon})(s)\|_{L^{2}_{x}}\ ds\\ &\leq Ct\,\|u_{\varepsilon}\|_{L^{\infty}_{t}\dot{H}^{1}_{x}}\end{split}\] Hence we know that \(K_{\varepsilon}\star N(u_{\varepsilon})(t,\cdot)\in L^{2}_{x}(\mathbb{R})\), so we conclude that \(K_{\varepsilon}\star N(u_{\varepsilon})(t,\cdot)\in\dot{H}^{\frac{1}{2}}_{x}(\mathbb{R})\). Thus we know that \(u_{\varepsilon}\) is in \(C^{0}_{t,loc}([0,\infty),\dot{H}^{\frac{1}{2}}_{x}(\mathbb{R}))\). For a solution \(u_{\varepsilon}(t)\in\dot{H}^{\frac{1}{2}}_{x}\) of (3.1), we consider the critical energy \[E_{c}(u_{\varepsilon}(t))=\frac{1}{2}\int_{\mathbb{R}}|(-\Delta)^{\frac{1}{4}}u_{\varepsilon}(t)|^{2}\,dx. \tag{4.16}\] We have \[\begin{split}&\frac{d}{dt}\frac{1}{2}\int_{\mathbb{R}}|(-\Delta)^{\frac{1}{4}}u_{\varepsilon}|^{2}\\ &=\int\partial_{t}u_{\varepsilon}\cdot(-\Delta)^{\frac{1}{2}}u_{\varepsilon}\,dx\\ &=\int\varepsilon\triangle u_{\varepsilon}\cdot(-\Delta)^{\frac{1}{2}}u_{\varepsilon}\,dx\\ &=-\varepsilon\int|(-\triangle)^{\frac{3}{4}}u_{\varepsilon}|^{2}\,dx\\ &\leq 0\end{split} \tag{4.17}\] Hence we know that \[\sup_{t\in[0,\infty)}\|u_{\varepsilon}\|_{\dot{H}^{\frac{1}{2}}}\leq\|u_{0}\|_{\dot{H}^{\frac{1}{2}}}. \tag{4.18}\] Hence the local solution \(u_{\varepsilon}\) is also a global solution of (3.1) in \(C^{0}_{t}([0,\infty),\dot{H}^{\frac{1}{2}}_{x}(\mathbb{R}))\). Furthermore, we also know that \(u_{\varepsilon}(t,\cdot)\in L^{2}_{x,loc}(\mathbb{R})\) by the following proposition.

**4.5 Proposition** (Proposition 1.37 in [1]).: _For \(s\in(0,1)\) and \(u\in\dot{H}^{s}(\mathbb{R}^{n})\), we have \(u\in L^{2}_{loc}(\mathbb{R}^{n})\)._

## 5. Weak Solution

We know that a global solution \(u_{\varepsilon}\) of (3.1) satisfies \[\begin{split}&-\int_{0}^{\infty}\int_{\mathbb{R}}u_{\varepsilon}\cdot\varphi_{t}dxdt-\int_{\mathbb{R}}u_{0}\varphi(x)dx\\ &=\varepsilon\int_{0}^{\infty}\int_{\mathbb{R}}u_{\varepsilon}\cdot\triangle\varphi dxdt+\int_{0}^{\infty}\int_{\mathbb{R}}(-\Delta)^{\frac{1}{4}}(u_{\varepsilon}\times\varphi)\cdot(-\Delta)^{\frac{1}{4}}u_{\varepsilon}dxdt,\end{split} \tag{5.1}\] for any \(\varphi\in C^{\infty}_{c}([0,\infty)\times\mathbb{R},\mathbb{R}^{3})\). We want to show that as \(\varepsilon\to 0\) (up to a subsequence), we obtain a weak solution \(u_{\star}\in L^{2}_{t,loc}([0,\infty),\dot{H}^{\frac{1}{2}}(\mathbb{R}))\) of (1.1), i.e., \[-\int_{0}^{\infty}\int_{\mathbb{R}}u_{\star}\cdot\varphi_{t}dxdt-\int_{\mathbb{R}}u_{0}\varphi(x)dx=\int_{0}^{\infty}\int_{\mathbb{R}}(-\Delta)^{\frac{1}{4}}(u_{\star}\times\varphi)\cdot(-\Delta)^{\frac{1}{4}}u_{\star}dxdt. \tag{5.2}\] We consider the monotone sequence of time intervals \(T_{n}:=[0,n]\), with \(T_{n}\to[0,\infty)\) as \(n\to\infty\), and let \(U_{n}\subseteq\mathbb{R}\) be a sequence of bounded open intervals s.t. \(U_{n}\to\mathbb{R}\) as \(n\to\infty\).
On each local domain \(T_{n}\times U_{n}\), from Theorem 4.4, we know there exists a solution \(u_{n,\varepsilon}\in L^{\infty}_{t,x}\cap C^{0}_{t}(T_{n},\dot{H}^{\frac{1}{2}}(U_{n}))\). We will show that there exists a corresponding weak solution \(u_{n,\star}\) of (3.1) s.t. \(u_{n,\varepsilon}\rightharpoonup u_{n,\star}\) in \(L^{2}_{t}(T_{n},\dot{H}^{\frac{1}{2}}(U_{n}))\). Then we use a Cantor diagonal argument to show that there exists a \(u_{\star}\) that is a weak solution of (1.1) on \([0,\infty)\times\mathbb{R}\) as defined in (1.2).

### Local Weak Solution

For each local domain \(T_{n}\times U_{n}\), we want to show that as \(\varepsilon\to 0\) (up to a subsequence), we have \[\int_{T_{n}}\int_{U_{n}}u_{\varepsilon}\cdot\varphi_{t}dxdt\to\int_{T_{n}}\int_{U_{n}}u_{\star}\cdot\varphi_{t}dxdt, \tag{5.3}\] \[\varepsilon\int_{T_{n}}\int_{U_{n}}u_{\varepsilon}\cdot\triangle\varphi dxdt\to 0, \tag{5.4}\] and \[\int_{T_{n}}\int_{U_{n}}(-\Delta)^{\frac{1}{4}}(u_{\varepsilon}\times\varphi)\cdot(-\Delta)^{\frac{1}{4}}u_{\varepsilon}\,dxdt\to\int_{T_{n}}\int_{U_{n}}(-\Delta)^{\frac{1}{4}}(u_{\star}\times\varphi)\cdot(-\Delta)^{\frac{1}{4}}u_{\star}\,dxdt. \tag{5.5}\] We first introduce the following compactness lemma for the space \(\dot{H}^{\frac{1}{2}}(\mathbb{R})\).

**5.1 Lemma**.: _(Theorem 7.1 in [7]) Let \(s\in(0,1)\), \(p\in[1,\infty)\), and \(q\in[1,p)\), let \(\Omega\subset\mathbb{R}^{n}\) be a bounded extension domain for \(W^{s,p}\), and let \(\mathcal{F}\) be a bounded subset of \(L^{p}(\Omega)\). Suppose that_ \[\sup_{f\in\mathcal{F}}\|f\|_{\dot{W}^{s,p}}<\infty.\] _Then \(\mathcal{F}\) is pre-compact in \(L^{q}(\Omega)\)._

From the above lemma, we know that \(\{u_{\varepsilon}(t,\cdot)\}_{\varepsilon}\) is pre-compact in \(L^{2}(U_{n})\) for any \(t\in T_{n}\). Hence we know that there exists a \(u_{\star}(t,\cdot)\in\dot{H}^{\frac{1}{2}}(U_{n})\) s.t., up to a subsequence, \(u_{\varepsilon}(t,\cdot)\to u_{\star}(t,\cdot)\) in \(L^{2}(U_{n})\) for any \(t\in T_{n}\) as \(\varepsilon\to 0\). In order to show (5.3), we further split \(u_{\varepsilon}\) in the frequency domain. Given \(N>0\), we have \[u_{\varepsilon}=u_{<N,\varepsilon}+u_{\geq N,\varepsilon}.\] We consider the low-frequency part first. We will show that \(u_{<N,\varepsilon}\) has better space-time regularity, which implies the strong convergence of \(u_{<N,\varepsilon}\) in \(L^{2}_{t,x}(T_{n}\times U_{n})\).

**5.2 Lemma**.: _Let \(T>0\), let \(U\subseteq\mathbb{R}\) be a bounded domain, and let \(N>0\). For the collection of solutions \(u_{\varepsilon}\in C^{0}_{t}([0,T],\dot{H}^{\frac{1}{2}}(U))\) of equation (3.1), there exists a subsequence s.t. \(u_{<N,\varepsilon_{k}}\to u_{<N,\star}\) in \(L^{2}_{t,x}([0,T]\times U)\)._

Proof.: For \(u_{\varepsilon}\) satisfying (3.1), we have \[P_{<N}\partial_{t}u_{\varepsilon}-\varepsilon P_{<N}\triangle u_{\varepsilon}=P_{<N}(u_{\varepsilon}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon}) \tag{5.6}\] Hence, we have \[\begin{split}&\|P_{<N}\partial_{t}u_{\varepsilon}\|_{L^{2}_{t}L^{2}_{x}}\\ &\leq\varepsilon\|P_{<N}\triangle u_{\varepsilon}\|_{L^{2}_{t}L^{2}_{x}}+\|P_{<N}(u_{\varepsilon}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon})\|_{L^{2}_{t}L^{2}_{x}}\end{split} \tag{5.7}\] We know that \[\begin{split}&\|P_{<N}\triangle u_{\varepsilon}\|_{L^{2}_{t}L^{2}_{x}(T\times U)}\\ &\leq N^{2}\|u_{<N,\varepsilon}\|_{L^{2}_{t}L^{2}_{x}(T\times U)}\\ &\leq C.\end{split} \tag{5.8}\] Next, we consider the nonlinear part.
We use Bernstein's inequality to obtain \[\begin{split}&\left\|P_{<N}(u_{\varepsilon}\times(-\triangle)^{1/2}u_{\varepsilon})\right\|_{L^{2}_{x}}\\ &\leq C2^{\frac{N}{2}}\cdot\left\|P_{<N}(u_{\varepsilon}\times(-\triangle)^{1/2}u_{\varepsilon})\right\|_{L^{1}_{x}}.\end{split} \tag{5.9}\] We further split this into two terms: \[\begin{split}& P_{<N}(u_{\varepsilon}\times(-\triangle)^{1/2}u_{\varepsilon})\\ =& P_{<N}(u_{\varepsilon}\times P_{<N+10}(-\triangle)^{1/2}u_{\varepsilon})\\ &+P_{<N}(u_{\varepsilon}\times P_{\geq N+10}(-\triangle)^{1/2}u_{\varepsilon}).\end{split} \tag{5.10}\] We estimate the first term by \[\begin{split}&\|P_{<N}(u_{\varepsilon}\times P_{<N+10}(-\triangle)^{1/2}u_{\varepsilon})\|_{L^{2}_{t}L^{1}_{x}}\\ &\lesssim N2^{N}\|u_{\varepsilon}\|^{2}_{L^{2}_{t}L^{2}_{x}}.\end{split} \tag{5.11}\] For the second term, we write further \[P_{<N}(u_{\varepsilon}\times P_{\geq N+10}(-\triangle)^{1/2}u_{\varepsilon})=\sum_{k_{1}=k_{2}+O(1)\geq N}P_{<N}(P_{k_{1}}u_{\varepsilon}\times P_{k_{2}}(-\triangle)^{1/2}u_{\varepsilon}). \tag{5.12}\] For a fixed time \(t\), we have \[\begin{split}&\sum_{k_{1}=k_{2}+O(1)\geq N}\big{\|}(P_{k_{1}}u_{\varepsilon}\times P_{k_{2}}(-\triangle)^{1/2}u_{\varepsilon})\big{\|}_{L^{1}_{x}}\\ &\leq\sum_{k_{1}=k_{2}+O(1)\geq N}\big{\|}(-\triangle)^{\frac{1}{4}}u_{k_{1},\varepsilon}\big{\|}_{L^{2}_{x}}\cdot\big{\|}(-\triangle)^{\frac{1}{4}}u_{k_{2},\varepsilon}\big{\|}_{L^{2}_{x}}\\ &\leq\sum_{k\geq N}\big{\|}(-\triangle)^{1/4}u_{k,\varepsilon}\big{\|}_{L^{2}_{x}}^{2}\\ &\leq C\|u_{0}\|^{2}_{\dot{H}^{\frac{1}{2}}}.\end{split} \tag{5.13}\] Then we can conclude that \[\|P_{<N}(u_{\varepsilon}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon})\|_{L^{2}_{t}L^{2}_{x}(T\times U)}\leq C, \tag{5.14}\] where \(C\) is a constant depending on \(N\) and \(u_{0}\) but independent of \(\varepsilon\). Therefore, \(P_{<N}\partial_{t}u_{\varepsilon}\) is uniformly bounded in \(L^{2}_{t}L^{2}_{x}([0,T]\times U)\). Since the solution \(u_{\varepsilon}\) also lies in \(\dot{H}^{1}\), the space-time gradients \(\nabla_{t,x}u_{<N,\varepsilon}\) are uniformly bounded in \(L^{2}_{t,x}(T_{n}\times U_{n})\). Moreover, since \(H^{1}\) compactly embeds into \(L^{2}\) on the bounded domain \(T_{n}\times U_{n}\), we conclude that there exists a subsequence \(\{u_{<N,\varepsilon_{k}}\}\) such that \[u_{<N,\varepsilon_{k}}\to u_{<N,\star}\quad\text{in }L^{2}_{t,x}(T\times U)\text{ as }\varepsilon_{k}\to 0.\] For the large frequency part, we have \[\|u_{\geq N,\varepsilon}\|_{L^{2}_{t,x}(T_{n}\times U_{n})}\leq\frac{|T_{n}|^{\frac{1}{2}}}{N^{\frac{1}{2}}}\|u_{\varepsilon}\|_{L^{\infty}_{t}(T_{n})\dot{H}^{\frac{1}{2}}(U_{n})}\leq\frac{|T_{n}|^{\frac{1}{2}}}{N^{\frac{1}{2}}}\|u_{0}\|_{\dot{H}^{\frac{1}{2}}(U_{n})}, \tag{5.15}\] which tends to \(0\) as \(N\to\infty\), uniformly in \(\varepsilon\). The same bound holds for \(u_{\geq N,\star}\). Therefore, we can further extract a subsequence of \(\varepsilon\) depending on \(N\), for instance \(\varepsilon_{N}=\frac{1}{N}\to 0\) as \(N\to\infty\), to obtain \[\begin{split}\|u_{\varepsilon_{N}}-u_{\star}\|_{L^{2}_{t,x}} &\leq\|u_{\varepsilon_{N}}-u_{<N,\varepsilon_{N}}\|_{L^{2}_{t,x}}+\|u_{<N,\varepsilon_{N}}-u_{<N,\star}\|_{L^{2}_{t,x}}+\|u_{<N,\star}-u_{\star}\|_{L^{2}_{t,x}} \tag{5.16}\\ &\leq\|u_{\geq N,\varepsilon_{N}}\|_{L^{2}_{t,x}}+\|u_{<N,\varepsilon_{N}}-u_{<N,\star}\|_{L^{2}_{t,x}}+\|u_{\geq N,\star}\|_{L^{2}_{t,x}}\\ &\to 0,\quad\text{as }N\to\infty.\end{split}\] Thus we have \[u_{\varepsilon}\to u_{\star}\quad\text{in }L^{2}_{t,x}(T_{n}\times U_{n}). \tag{5.17}\] So (5.3) holds locally on \(T_{n}\times U_{n}\).
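The frequency splitting used above is easy to reproduce numerically. The following sketch (ours; a sharp Fourier cutoff stands in for \(P_{<N}\)) splits a mildly rough function and prints the \(L^{2}\) size of its high-frequency tail, which is controlled by \(N^{-1/2}\|f\|_{\dot{H}^{1/2}}\) exactly as in (5.15).

```python
import numpy as np

def freq_split(f, N, L=2 * np.pi):
    """Sharp frequency cutoff: return (u_{<N}, u_{>=N}), a crude stand-in
    for the Littlewood-Paley projection P_{<N} used in the text."""
    n = f.size
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    fhat = np.fft.fft(f)
    low = np.fft.ifft(np.where(np.abs(xi) < N, fhat, 0)).real
    return low, f - low

n, L = 4096, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
f = np.abs(np.sin(x)) ** 0.9                   # mildly rough test function
for N in [4, 16, 64, 256]:
    _, high = freq_split(f, N)
    tail = np.sqrt(L / n * np.sum(high ** 2))  # bounded by N^{-1/2} ||f||_{H^{1/2}}, cf. (5.15)
    print(N, tail)
```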
From the maximum principle (Theorem 4.2), we know that (5.4) holds. For the last term (5.5), we first use the pointwise cancellation \[((-\Delta)^{\frac{1}{4}}u_{\varepsilon}\times\varphi)\cdot(-\Delta)^{\frac{1}{4}}u_{\varepsilon}=0\] (a vector of the form \(a\times b\) is orthogonal to \(a\)) to reformulate (5.5) as \[\begin{split}&\int_{T_{n}}\int_{\mathbb{R}}(u_{\varepsilon}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon})\cdot\varphi dxdt\\ &=\int_{T_{n}}\int_{\mathbb{R}}(\varphi\times u_{\varepsilon})\cdot(-\Delta)^{\frac{1}{2}}u_{\varepsilon}dxdt\\ &=\int_{T_{n}}\int_{\mathbb{R}}(-\Delta)^{\frac{1}{4}}(u_{\varepsilon}\times\varphi)\cdot(-\Delta)^{\frac{1}{4}}u_{\varepsilon}\,dxdt\\ &=\int_{T_{n}}\int_{\mathbb{R}}((-\Delta)^{\frac{1}{4}}(u_{\varepsilon}\times\varphi)-((-\Delta)^{\frac{1}{4}}u_{\varepsilon}\times\varphi))\cdot(-\Delta)^{\frac{1}{4}}u_{\varepsilon}\,dxdt.\end{split} \tag{5.18}\] For a fixed \(\varphi\in C_{c}^{\infty}\), the Fourier transform \(\widehat{\varphi}(\eta)\) decays rapidly for large frequencies, say for \(|\eta|>N_{\varphi}\). We choose \(N>N_{\varphi}\), and then split (5.18) into two parts: \[\begin{split}&\int_{T_{n}}\int_{\mathbb{R}}((-\Delta)^{\frac{1}{4}}(u_{\varepsilon}\times\varphi)-((-\Delta)^{\frac{1}{4}}u_{\varepsilon}\times\varphi))\cdot(-\Delta)^{\frac{1}{4}}u_{\varepsilon}\,dxdt\\ &=\int_{T_{n}}\int_{\mathbb{R}}((-\Delta)^{\frac{1}{4}}(u_{<N,\varepsilon}\times\varphi)-((-\Delta)^{\frac{1}{4}}u_{<N,\varepsilon}\times\varphi))\cdot(-\Delta)^{\frac{1}{4}}u_{\varepsilon}\,dxdt\end{split} \tag{5.19}\] \[+\int_{T_{n}}\int_{\mathbb{R}}((-\Delta)^{\frac{1}{4}}(u_{\geq N,\varepsilon}\times\varphi)-((-\Delta)^{\frac{1}{4}}u_{\geq N,\varepsilon}\times\varphi))\cdot(-\Delta)^{\frac{1}{4}}u_{\varepsilon}\,dxdt. \tag{5.20}\] For a given \(t\in T_{n}\), we first consider the small frequency part (5.19). We further separate it into two cases: \(|\eta|\leq|\xi|<N\) and \(|\xi|\leq|\eta|<N\).
We have \[\begin{split}&\|(-\Delta)^{\frac{1}{4}}(u_{<N,\varepsilon}\times\varphi)-((-\Delta)^{\frac{1}{4}}u_{<N,\varepsilon}\times\varphi)\|_{L^{2}_{x}}\\ &\lesssim\sum_{|\eta|<|\xi|<N}\|(|\xi+\eta|^{\frac{1}{2}}-|\xi|^{\frac{1}{2}})\widehat{u}_{\varepsilon}(\xi)\star\widehat{\varphi}(\eta)\|_{L^{2}}\\ &+\sum_{|\xi|<|\eta|<N}\|(|\xi+\eta|^{\frac{1}{2}}-|\xi|^{\frac{1}{2}})\widehat{u}_{\varepsilon}(\xi)\star\widehat{\varphi}(\eta)\|_{L^{2}}.\end{split} \tag{5.21}\] We estimate the first part as \[\begin{split}&\sum_{|\eta|<|\xi|<N}\|(|\xi+\eta|^{\frac{1}{2}}-|\xi|^{\frac{1}{2}})\widehat{u}_{\varepsilon}(\xi)\star\widehat{\varphi}(\eta)\|_{L^{2}_{x}}\\ &\lesssim\sum_{|\eta|<|\xi|<N}\Big{\|}\frac{|\eta|}{|\xi|}|\xi|^{\frac{1}{2}}\widehat{u}_{\varepsilon}(\xi)\star\widehat{\varphi}(\eta)\Big{\|}_{L^{2}_{x}}\\ &\lesssim\sum_{|\eta|<|\xi|<N}\||\xi|^{\frac{1}{2}}\widehat{u}_{<N,\varepsilon}(\xi)\star\widehat{\varphi}(\eta)\|_{L^{2}}\\ &\lesssim\|u_{<N,\varepsilon}\|_{L^{2}_{x}}\|\widehat{\varphi}\|_{L^{1}}.\end{split} \tag{5.22}\] For the second part, we have \[\begin{split}&\sum_{|\xi|<|\eta|<N}\|(|\xi+\eta|^{\frac{1}{2}}-|\xi|^{\frac{1}{2}})\widehat{u}_{\varepsilon}(\xi)\star\widehat{\varphi}(\eta)\|_{L^{2}}\\ &\lesssim\sum_{|\xi|<|\eta|<N}\||\eta|^{\frac{1}{2}}\widehat{u}_{\varepsilon}(\xi)\star\widehat{\varphi}(\eta)\|_{L^{2}}\\ &\lesssim\|u_{<N,\varepsilon}\|_{L^{2}_{x}}\|\widehat{\varphi}\|_{L^{1}}.\end{split} \tag{5.23}\] Combining Lemma 5.2 with estimates (5.22) and (5.23), we have \[\begin{split}&\|(-\Delta)^{\frac{1}{4}}(u_{<N,\varepsilon}\times\varphi)-(-\Delta)^{\frac{1}{4}}u_{<N,\varepsilon}\times\varphi-\big((-\Delta)^{\frac{1}{4}}(u_{<N,\star}\times\varphi)-(-\Delta)^{\frac{1}{4}}u_{<N,\star}\times\varphi\big)\|_{L^{2}_{t,x}(T_{n}\times U_{n})}\\ &=\|(-\Delta)^{\frac{1}{4}}((u_{<N,\varepsilon}-u_{<N,\star})\times\varphi)-(-\Delta)^{\frac{1}{4}}(u_{<N,\varepsilon}-u_{<N,\star})\times\varphi\|_{L^{2}_{t,x}(T_{n}\times U_{n})}\\ &\lesssim\|u_{<N,\varepsilon}-u_{<N,\star}\|_{L^{2}_{t,x}(T_{n}\times U_{n})}\ \|\widehat{\varphi}\|_{L^{1}}\to 0\quad\text{as }\varepsilon\to 0.\end{split} \tag{5.24}\] By the a priori bound (4.18), we know that, up to a subsequence, \[(-\Delta)^{\frac{1}{4}}u_{\varepsilon}\rightharpoonup(-\Delta)^{\frac{1}{4}}u_{\star}\quad\text{in }L^{2}.\] Hence, on each compact domain \(T_{n}\times U_{n}\), we have \[\int_{T_{n}}\int_{U_{n}}(u_{<N,\varepsilon}\times(-\Delta)^{\frac{1}{2}}u_{<N,\varepsilon})\cdot\varphi dxdt\to\int_{T_{n}}\int_{U_{n}}(u_{<N,\star}\times(-\Delta)^{\frac{1}{2}}u_{<N,\star})\cdot\varphi dxdt. \tag{5.25}\] Next, we consider the large frequency part (5.20), where we have a better cancellation.
\[\begin{split}&\|(-\Delta)^{\frac{1}{4}}(u_{\geq N,\varepsilon}\times\varphi)-((-\Delta)^{\frac{1}{4}}u_{\geq N,\varepsilon}\times\varphi)\|_{L^{2}_{x}}\\ &\lesssim\sum_{|\eta|<N<|\xi|}\|(|\xi+\eta|^{\frac{1}{2}}-|\xi|^{\frac{1}{2}})\widehat{u}_{\varepsilon}(\xi)\star\widehat{\varphi}(\eta)\|_{L^{2}}\\ &\lesssim\sum_{|\eta|<N<|\xi|}\Big{\|}\frac{|\eta|}{|\xi|^{\frac{1}{2}}}\widehat{u}_{\varepsilon}(\xi)\star\widehat{\varphi}(\eta)\Big{\|}_{L^{2}}\\ &\lesssim\frac{1}{N}\ \sum_{|\eta|<N<|\xi|}\||\xi|^{\frac{1}{2}}\widehat{u}_{\varepsilon}(\xi)\star|\eta|\widehat{\varphi}(\eta)\|_{L^{2}}\\ &\lesssim\frac{1}{N}\|u_{\geq N,\varepsilon}\|_{\dot{H}^{\frac{1}{2}}}\lesssim\frac{1}{N}\|u_{0}\|_{\dot{H}^{\frac{1}{2}}}.\end{split} \tag{5.26}\] Integrating over \(T_{n}\), we have \[\begin{split}&\|(-\Delta)^{\frac{1}{4}}(u_{\geq N,\varepsilon}\times\varphi)-((-\Delta)^{\frac{1}{4}}u_{\geq N,\varepsilon}\times\varphi)\|_{L^{2}_{t,x}(T_{n}\times U_{n})}\\ &\lesssim\frac{|T_{n}|^{\frac{1}{2}}}{N}\|u_{0}\|_{\dot{H}^{\frac{1}{2}}},\end{split} \tag{5.27}\] which tends to \(0\) as \(N\to\infty\). Hence, we can further choose a subsequence \(\varepsilon_{N}\to 0\) as \(N\to\infty\), such that \[\int_{T_{n}}\int_{U_{n}}(u_{\varepsilon}\times(-\Delta)^{\frac{1}{2}}u_{\varepsilon})\cdot\varphi dxdt\to\int_{T_{n}}\int_{U_{n}}(u_{\star}\times(-\Delta)^{\frac{1}{2}}u_{\star})\cdot\varphi dxdt.\] Thus, we conclude that, up to a subsequence, (5.5) holds. Hence, on each local domain \(T_{n}\times U_{n}\), we have a weak solution \(u_{\star}\) for the half-wave map equation (1.1).

### Global Weak Solution

For the local domains \(\Omega_{n}=T_{n}\times U_{n}\), with \(\Omega_{n}\to\mathbb{R}^{1+1}\) as \(n\to\infty\), we know that for each \(\Omega_{n}\) there exists a subsequence \(\varepsilon_{n,k}\to 0\) s.t. \(u_{\varepsilon_{n,k}}\) converges weakly to some \(u_{n,\star}\) in \(L^{2}_{t}(T_{n},\dot{H}^{\frac{1}{2}}(U_{n}))\). We then use Cantor's diagonal argument to conclude that there exists a diagonal sequence \(\varepsilon_{n,n}\) s.t. \(u_{\varepsilon_{n,n}}\) converges weakly to a \(u_{\star}\) in \(L^{2}_{t,loc}([0,\infty),\dot{H}^{\frac{1}{2}}(\mathbb{R}))\). For the first domain \(\Omega_{1}\), we pick a subsequence \(\varepsilon_{1,n}\) s.t. \(u_{\varepsilon_{1,n}}\) converges weakly to \(u_{1,\star}\) in \(L^{2}_{t}(T_{1},\dot{H}^{\frac{1}{2}}(U_{1}))\). Next, we consider a subsequence \(\varepsilon_{2,n}\) of \(\varepsilon_{1,n}\) s.t. \(u_{\varepsilon_{2,n}}\) converges weakly to \(u_{2,\star}\) in \(L^{2}_{t}(T_{2},\dot{H}^{\frac{1}{2}}(U_{2}))\). We repeat this process for all \(n\in\mathbb{N}\). Finally, we pick the diagonal sequence \(\varepsilon_{n,n}\), so that \(u_{\varepsilon_{n,n}}\) converges weakly to \(u_{\star}\) in \(L^{2}_{t,loc}([0,\infty),\dot{H}^{\frac{1}{2}}(\mathbb{R}))\). Therefore, we obtain our global weak solution \(u_{\star}\). Lastly, we verify that \(u_{\star}\) maps into \(S^{2}\). We consider equation (4.2). By the heat kernel representation, we have \[v_{\varepsilon}=-\varepsilon K_{\varepsilon}\star|\nabla u_{\varepsilon}|^{2}+1. \tag{5.28}\] Since \(u_{\varepsilon}(t)\in\dot{H}^{1}_{x}\), we know that \(\nabla u_{\varepsilon}(t)\in L^{2}_{x}\) for all \(t\in[0,\infty)\). So as \(\varepsilon\to 0\), we have \(v_{\varepsilon}(t)\to 1\) for all \(t\in[0,\infty)\). From (5.17), we know that \(u_{\varepsilon}\to u_{\star}\) strongly in \(L^{2}_{t,x}(T_{n}\times U_{n})\). Hence, we have \[u_{\varepsilon}\to u_{\star}\text{ almost everywhere on }T_{n}\times U_{n}. \tag{5.29}\] With the same diagonal argument as for \(u_{\varepsilon}\) above, we obtain a diagonal subsequence \(\varepsilon_{n,n}\to 0\) s.t. \[u_{\varepsilon_{n,n}}\cdot u_{\varepsilon_{n,n}}\to u_{\star}\cdot u_{\star}=1\text{ almost everywhere on }\mathbb{R}^{1+1},\] which implies that \(u_{\star}\) maps into \(S^{2}\).
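The commutator mechanism behind (5.19)–(5.26) can also be observed numerically: against a smooth \(\varphi\) and a wave at frequency \(N\), the commutator \((-\Delta)^{1/4}(u\varphi)-((-\Delta)^{1/4}u)\varphi\) stays of size about \(N^{-1/2}\) while \(\|(-\Delta)^{1/4}u\|_{L^{2}}\) grows like \(N^{1/2}\), a relative gain of \(1/N\) as in (5.26). The sketch below is a scalar-valued toy version of ours (the cross product is dropped for simplicity).

```python
import numpy as np

def quarter_lap(f, L=2 * np.pi):
    """Apply (-Delta)^{1/4} via its Fourier multiplier |xi|^{1/2}."""
    n = f.size
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    return np.fft.ifft(np.sqrt(np.abs(xi)) * np.fft.fft(f)).real

n, L = 8192, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
phi = np.exp(np.cos(x))                            # smooth test function
for N in [8, 32, 128, 512]:
    u_high = np.cos(N * x)                         # stand-in for u_{>= N}
    comm = quarter_lap(u_high * phi) - quarter_lap(u_high) * phi
    comm_norm = np.sqrt(L / n * np.sum(comm ** 2))
    half_norm = np.sqrt(L / n * np.sum(quarter_lap(u_high) ** 2))
    print(N, comm_norm, half_norm, comm_norm / half_norm)  # ratio decays like 1/N
```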
2303.02936
UniHCP: A Unified Model for Human-Centric Perceptions
Human-centric perceptions (e.g., pose estimation, human parsing, pedestrian detection, person re-identification, etc.) play a key role in industrial applications of visual models. While specific human-centric tasks have their own relevant semantic aspect to focus on, they also share the same underlying semantic structure of the human body. However, few works have attempted to exploit such homogeneity and design a general-purpose model for human-centric tasks. In this work, we revisit a broad range of human-centric tasks and unify them in a minimalist manner. We propose UniHCP, a Unified Model for Human-Centric Perceptions, which unifies a wide range of human-centric tasks in a simplified end-to-end manner with the plain vision transformer architecture. With large-scale joint training on 33 human-centric datasets, UniHCP can outperform strong baselines on several in-domain and downstream tasks by direct evaluation. When adapted to a specific task, UniHCP achieves new SOTAs on a wide range of human-centric tasks, e.g., 69.8 mIoU on CIHP for human parsing, 86.18 mA on PA-100K for attribute prediction, 90.3 mAP on Market1501 for ReID, and 85.8 JI on CrowdHuman for pedestrian detection, performing better than specialized models tailored for each task.
Yuanzheng Ci, Yizhou Wang, Meilin Chen, Shixiang Tang, Lei Bai, Feng Zhu, Rui Zhao, Fengwei Yu, Donglian Qi, Wanli Ouyang
2023-03-06T07:10:07Z
http://arxiv.org/abs/2303.02936v4
# UniHCP: A Unified Model for Human-Centric Perceptions ###### Abstract Human-centric perceptions (e.g., pose estimation, human parsing, pedestrian detection, person re-identification, etc.) play a key role in industrial applications of visual models. While specific human-centric tasks have their own relevant semantic aspect to focus on, they also share the same underlying semantic structure of the human body. However, few works have attempted to exploit such homogeneity and design a general-propose model for human-centric tasks. In this work, we revisit a broad range of human-centric tasks and unify them in a minimalist manner. We propose UniHCP, a **Un**ified Model for **H**uman-**C**entric **P**erceptions, which unifies a wide range of human-centric tasks in a simplified end-to-end manner with the plain vision transformer architecture. With large-scale joint training on 33 human-centric datasets, UniHCP can outperform strong baselines on several in-domain and downstream tasks by direct evaluation. When adapted to a specific task, UniHCP achieves new SOTAs on a wide range of human-centric tasks, e.g., 69.8 mIoU on CIHP for human parsing, 86.18 mA on PA-100K for attribute prediction, 90.3 mAP on Market1501 for ReID, and 85.8 JJ on CrowdHuman for pedestrian detection, performing better than specialized models tailored for each task. The code and pretrained model are available at [https://github.com/OpenGVLab/UniHCP](https://github.com/OpenGVLab/UniHCP). ## 1 Introduction Research on human-centric perceptions has come a long way with tremendous advancements in recent years. Many methods have been developed to enhance the performance of pose estimation [67], human parsing [52], pedestrian detection [6], and many other human-centered tasks. These significant progress play a key role in advancing the applications of visual models in numerous fields, such as sports analysis [12], autonomous driving [105], and electronic re-tailing [31]. Although different human-centric perception tasks have their own relevant semantic information to focus on, those semantics all rely on the same basic structure of the human body and the attributes of each body part [69, 89]. In light of this, there have been some attempts trying to exploit such homogeneity and train a shared neural network jointly with distinct human-centric tasks [32, 33, 52, 54, 68, 77, 83, 95, 106]. For instance, human parsing has been trained in conjunction with human keypoint detection [52, 68, 106], pedestrian attribute recognition [95], pedestrian detection [54] or person re-identification [32] (ReID). The experimental results of these works empirically validate that some human-centric tasks may benefit each other when trained together. Motivated by these works, a natural expectation is that a more versatile all-in-one model could be a feasible solution for general human-centric perceptions, which can utilize the homogeneity of human-centric tasks for improving performance, enable fast adaption to new tasks, and decrease the burden of memory cost in large-scale multitask system deployment compared with specific models to specific tasks. Figure 1: UniHCP unifies 5 human-centric tasks under one model and is trained on a massive collection of human-centric datasets. However, unifying distinct human-centric tasks into a general model is challenging considering the data diversity and output structures. 
From the data's perspective, images in different human-centric tasks and different datasets have different resolutions and characteristics (e.g., day and night, indoor and outdoor), which calls for a robust representative network with the capability to accommodate them. From the perspective of output, the annotations and expected outputs of different human-centric tasks have distinct structures and granularities. Although this challenge can be bypassed via deploying separate output heads for each task/dataset, it is not scalable when the number of tasks and datasets is large. In this work, we aim to explore a simple, scalable formulation for unified human-centric system and, for the first time, propose a Unified model for Human-Centric Perceptions (UniHCP). As shown in Figure.1, UniHCP unifies and simultaneously handles five distinct human-centric tasks, namely, pose estimation, semantic part segmentation, pedestrian detection, ReID, and person attribute recognition. Motivated by the extraordinary capacity and flexibility of the vision transformers [49, 101], a simple yet unified encoder-decoder architecture with the plain vision transformer is employed to handle the input diversity, which works in a simple feedforward and end-to-end manner, and can be shared across all human-centric tasks and datasets to extract general human-centric knowledge. To generate the output for different tasks with the unified model, UniHCP defines Task-specific Queries, which are shared among all datasets with the same task definition and interpreted into different output units through a Task-guided Interpreter shared across different datasets and tasks. With task-specific queries and the versatile interpreter, UniHCP avoids the widely used task-specific output heads, which minimizes task-specific parameters for knowledge sharing and make backbone-encoded features reusable across tasks. Own to these designs, UniHCP is suitable and easy to perform multitask pretraining at scale. To this end, we pre-trained an UniHCP model on a massive collection of 33 labeled human-centric datasets. By harnessing the abundant supervision signals of each task, we show such a model can simultaneously handle these in-pretrain tasks well with competitive performance compared to strong baselines relying on specialized architectures. When adapted to a specific task, both in-domain and downstream, our model achieves new SOTAs on several human-centric task benchmarks. In summary, the proposed model has the following properties: * Unifying five distinct human-centric tasks and handling them simultaneously. * Shared encoder-decoder network based on plain transformer. * Simple task-specific queries identifying the outputs. * Maximum weight sharing (99.97% shared parameters) with a task-guided interpreter. * Trainable at scale and demonstrates competitive performance compared to task-specialized models. The following sections are organized as follows: Section 2 reviews the related works with focuses on Human-Centric perception and unified models. Section 3 describes the proposed model. Section 4 provides implementation details together with empirical results and ablation studies. Finally, we conclude the paper in Section 5. ## 2 Related Works ### Human-Centric Perceptions Human-centric perceptions are essential for substantial real-world applications. Depending on the targeted visual concept, the way of decoding output from image features varies across tasks. 
Specifically, pose estimation and pedestrian detection are both localization tasks that can be solved by either regression-based methods [41, 103] or heatmap-based methods [37, 38, 93]. Human parsing, as a fine-grained segmentation problem, is usually solved by per-pixel classification. While contour-based methods [70, 94] can also obtain segmentation masks, it requires instance-level mask annotations, which are not always available. PAR is treated as a multi-label classification task [116], and ReID is treated as a feature learning task [81]. Recently, several transformer-based solutions have been proposed for these human-centric tasks, with attention block designs on both backbone [23, 96, 100] and decoding network [66, 50, 60, 69, 110, 46]. However, these methods involve _different_ task-specific designs and thus cannot be integrated into one model seamlessly. Built upon the general success of these works, we take a further step and unify human-centric tasks under the _same_ architecture based on plain vision transformer. ### Unified Models A general-purpose model that can handle different tasks in a unified manner has long been a coveted alternative to models specifically tailored for different tasks. Pioneering works regarding Natural Language Processing (NLP) [71], vision-language [65], and basic vision tasks [34, 73] have shown the effectiveness of such kind of unified cross-task models. ExT5 [3] and OFA [88] further provide a degree of promise for the performance benefits of large-scale multitask co-training. Among models supporting visual tasks, UniHead [51] and UViM [35] propose a unified architecture for several vision tasks. However, they are only trained and evaluated in a single-task manner. For methods supporting multitask co-training, Uni-Perceiver [118] focuses on tasks in which the desired output is inherently language or labels, which does not fit human-centric tasks. While UniT [25], OFA [88], Unified-IO [64], and Pix2Seq v2 [8] further extend the support for detection, keypoint detection, segmentation, and many other visual tasks, they rely on _independent decoder heads_[25, 88] or _autoregressive_ modeling [8, 64]. These works do not focus on human-centric vision tasks. Differently, our work introduces a _shared decoder head_ (task-guided interpreter) in a _parallelly feedforward_ manner for human-centric vision tasks, which is simple yet maximizes the parameter sharing among different tasks. In the case of human-centric tasks, many works have shown great success by co-training a pair of human-centric tasks [32, 33, 52, 54, 68, 77, 83, 95, 106]. However, there is no work exploring a general unified model that can handle all representative human-centric tasks. Our work is the first attempt at designing, training, and evaluating a unified human-centric model with a large-scale multitask setting. ## 3 UniHCP To share the most knowledge among various human-centric tasks, we attempt to maximize weight sharing among all tasks in UniHCP. Specifically, our UniHCP, as shown in Figure 2, consists of three components: (1) A task-agnostic transformer encoder \(E\) to extract image features. (2) A transformer decoder \(D\) that attends to task-specific information according to task-specific queries \(\{\mathbf{Q}^{t}\}\), where \(t\) denotes a specific task. 
(3) A task-guided interpreter \(\mathcal{I}\) produces output units, in which we decompose the output of multiple human-centric perception tasks into sharable units of diverse granularities, _i.e.,_ feature representation, local probability map, global probability, bounding box coordinates. Since only the queries to the decoders are not shared among tasks, we can learn human-centric knowledge across different granularities by the designed interpreters and achieve maximum parameter sharing among all tasks, i.e., **99.97%** shared parameters, as shown in Table 1. The pipeline for our UniHCP is described as follows. _Step 1_: Given an image \(\mathbf{X}\) sampled from the dataset in task \(t\), extract encoded features \(\mathbf{F}\) by the task-agnostic transformer encoder \(E\) (Sec. 3.1). _Step 2_ : A transformer decoder \(D\) with task-specific queries \(\mathbf{Q}^{t}\) extracts task-specific features from encoded features \(\mathbf{F}\) (Sec. 3.2). _Step 3_: Generate output units according to the queried task, i.e., attended features \(\mathbf{Y}_{f}\), local probability map \(\mathbf{Y}_{m}\), global probability \(\mathbf{Y}_{p}\) and bounding box coordinates \(\mathbf{Y}_{bbox}\) by a task-guided interpreter \(\mathcal{I}\) (Sec. 3.3). For example, for human parsing, two units: local probability map \(\mathbf{Y}_{m}\) (for semantic part segmentation) and global probability \(\mathbf{Y}_{p}\) (for existence of body part in the image), are generated. _Step 4:_ Calculate the loss of the corresponding task for optimizing the encoder \(E\), the decoder \(D\), the task-specific queries \(\mathbf{Q}^{t}\) and task-guided interpreter \(\mathcal{I}\) by backward propagation (Sec. 3.4). \begin{table} \begin{tabular}{l c c c} \hline \hline & Layers & Dimension & Params \\ \hline Encoder & 12 & 768 & 91.1M \\ Decoder & 9 & 256 & 14.5M \\ Task-guided Interpreter & & & 3.5M \\ \hline Task-specific queries & & 256 & \textless{}0.03M \\ \hline Total & & & 109.1M \\ Task-agnostic params / total params & & & 99.97\% \\ \hline \hline \end{tabular} \end{table} Table 1: Network details of UniHCP Figure 2: UniHCP handles a massive collection of human-centric tasks uniformly by task-specific queries and a task-guided interpreter, all predictions are yielded in parallel through a simple encoder-decoder transformer architecture. ### Task-agnostic Transformer Encoder UniHCP uses a plain Vision Trasnformer [15] (ViT) as the encoder. To handle input images of different resolutions, we use a shared learnable positional embedding with the size of \(84\times 84\) and interpolate it based on the spatial size of the input image after patch projection. The encoded feature \(\mathbf{F}\) can be mathematically calculated as \[\mathbf{F}=E(\mathbf{X},\mathbf{P}_{E}), \tag{1}\] where \(\mathbf{P}_{E}\) is the positional embedding after interpolation and \(E\) denotes the task-agnostic transformer encoder. ### Decoder with Task-specific Queries To obtain the most discriminative feature for each task while maximizing knowledge sharing, we design task-specific queries to guide the transformer decoder only attending to task-relevant information. **Task-specific Queries.** Task queries for task \(t\) are denoted as \[\mathbf{Q}^{t}=[\mathbf{q}_{1}^{t},\mathbf{q}_{2}^{t},...,\mathbf{q}_{N^{t}}^ {t}], \tag{2}\] where \(N^{t}\) denotes the number of queries representing \(N^{t}\) different semantic meanings in task \(t\). 
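In code, such task-specific queries amount to nothing more than per-task learnable embedding tables; the PyTorch-style sketch below is illustrative only (sizes and names are ours, not the released UniHCP implementation), with the per-task choices of \(N^{t}\) spelled out next.

```python
import torch
import torch.nn as nn

class TaskQueries(nn.Module):
    """Minimal sketch: one learnable content embedding Q^t and one learnable
    positional embedding Q_p^t per task (hypothetical, not the UniHCP code)."""
    def __init__(self, num_queries, dim=256):
        super().__init__()
        self.content = nn.ParameterDict(
            {t: nn.Parameter(torch.randn(n, dim) * 0.02) for t, n in num_queries.items()})
        self.pos = nn.ParameterDict(
            {t: nn.Parameter(torch.randn(n, dim) * 0.02) for t, n in num_queries.items()})

    def forward(self, task):
        # content queries are refined through the decoder blocks, while the
        # positional queries are shared across blocks (see Sec. 3.2)
        return self.content[task], self.pos[task]

# hypothetical N^t choices: 17 COCO keypoints, 20 parsing classes, 6 ReID tokens
queries = TaskQueries({"pose": 17, "parsing": 20, "reid": 6})
q, q_pos = queries("pose")   # both of shape (17, 256)
```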
For pedestrian attribute recognition, pose estimation, human parsing, and ReID, the number of queries respectively equals to the number of attributes, the number of pose joints, the number of semantic parsing classes, and the length of desired ReID features. For pedestrian detection, we follow the implementation in [90], with details provided in the supplementary material. We randomly initialize the task-specific query \(\mathbf{Q}^{t}\) as learnable embeddings \(\mathbf{Q}_{0}^{t}\) and refine it with the following decoder blocks. Following the common practice as in [10, 84, 90], all \(\mathbf{Q}^{t}\) are also associated with a positional embedding \(\mathbf{Q}_{p}^{t}\), which has the same dimension as \(\mathbf{Q}^{t}\) and is not shared across tasks. Different from \(\mathbf{Q}^{t}\) that will be progressively refined in the decoder blocks, \(\mathbf{Q}_{p}^{t}\) is shared across decoder blocks. For tasks other than pedestrian detection, \(\mathbf{Q}_{p}^{t}\) is simply a learnable positional embedding that is randomly initialized. For pedestrian detection, we have \[\mathbf{Q}_{p}^{t}=proj(\mathcal{A}_{\mathbf{Q}}), \tag{3}\] where \(\mathcal{A}_{\mathbf{Q}}\in\mathbb{R}^{N^{t}\times 2}\) refers to \(N^{t}\) learnable anchor points that are initialized with a uniform distribution following [90], and \(proj\) is a projection from coordinates to positional embeddings (more details about the projector are elaborated in the supplementary materials). **Decoder.** The transformer decoder aims to attend to task-specific features according to the task queries. We follow the standard design of transformer decoders [84]. In the decoder, each transformer block \(D_{l}\) for \(l=1,2,...,L\) consists of a cross-attention module, a self-attention module, and a feed-forward module (FFN), where \(L\) denotes the number of transformer blocks. We place cross-attention before self-attention as adopted by [10, 40]. For each block \(D_{l}\), we attend to task-specific information from the encoded feature by task queries, which can be formulated as \[\mathbf{Q}_{l}^{t}=D_{l}(\mathbf{Q}_{l-1}^{t},\mathbf{Q}_{p}^{t},\mathbf{F}, \mathbf{F}_{p}), \tag{4}\] \[\text{where }\mathbf{F}_{p}=proj(\mathcal{A}_{\mathbf{F}}), \tag{5}\] \(\mathcal{A}_{\mathbf{F}}\in\mathbb{R}^{H_{\mathbf{F}}W_{\mathbf{F}}\times 2}\) is the coordinates with respect to the original image for all feature tokens in \(\mathbf{F}\in R^{H_{\mathbf{F}}\times W_{\mathbf{F}}}\). For the cross-attention in the decoder \(D_{l}\), the query is \(\hat{\mathbf{Q}}_{l}^{t}=\mathbf{Q}_{l-1}^{t}+\mathbf{Q}_{p}^{t}\), the key is \(\hat{\mathbf{K}}=\mathbf{F}^{\prime}+\mathbf{F}_{p}\), and the value is \(\hat{\mathbf{V}}=\mathbf{F}^{\prime}\), where \(\mathbf{F}^{\prime}\) is linearly projected from the features of the encoder \(\mathbf{F}\) to align channel dimensions. The result of cross-attention is then passed for self-attention in \(D_{l}\). ### Task-guided Interpreter Task-guided interpreter \(\mathcal{I}\) interprets query tokens \(\mathbf{Q}^{t}\) into four output units subject to the request of a specific task. 
As shown in Figure 3, these four output units are as follows: \[\text{feature vector unit}:\mathbf{Y}_{f}\in\mathbb{R}^{N^{t} \times C} \tag{6}\] \[\text{global probability unit}:\mathbf{Y}_{p}\in\mathbb{R}^{N^{t} \times 1}\] \[\text{local probability map unit}:\mathbf{Y}_{m}\in\mathbb{R}^{N^{t} \times H^{\prime}\times W^{\prime}}\] \[\text{bounding box unit}:\mathbf{Y}_{bbox}\in\mathbb{R}^{N^{t} \times 4},\] where \(C\) is the output dimension of the decoder, \(H^{\prime}\times W^{\prime}\) denotes the desired resolution for the local probability map. Figure 3: Task-guided interpreter. \(\otimes\) denotes a dynamic convolution module [9] that takes the projected query feature as the kernel and takes the tokens \(\mathbf{F}\) from the encoder as the feature map, where \(\mathbf{F}\) is upscaled to the desired resolution \(H^{\prime}\times W^{\prime}\), \(\oplus\) denotes addition, for which the inputs are the projected query feature in the format of \([\nabla cx,\nabla cx,h,w]\) and \(\mathcal{A}_{\mathbf{Q}}\), which contains the anchor point \([cx,cy]\) (see supplementary materials for details). Given task \(t\) and output interpreter \(\mathcal{I}\), the output of the Uni-HCP is defined as follows: \[\{\mathbf{Y}_{u}|g_{u}^{\mathbf{t}_{t}}=1,u\in\{f,p,m,bbox\}\}=\mathcal{I}( \mathbf{Q}^{t},\mathbf{g}^{\mathbf{t}_{t}}), \tag{7}\] where \(\mathbf{t}_{t}\in\{reid,\dots,pose\}\) denotes the task type of task \(t\), \(\mathbf{g}^{\mathbf{t}}=\{g_{t}^{\mathbf{t}}\}\) is a set of task-specific binary gates (\(g\in\{0,1\}\)) that represents the desired output units for task type \(\mathbf{t}\). **Guidance from tasks to output units.** For human parsing, local probability map (for semantic part segmentation) and global probability (for existence of body part in the image) are activated, corresponding to \(g_{m}^{seg}=1\) and \(g_{p}^{seg}=1\) respectively. For person ReID, feature vectors are used, corresponding to \(g_{f}^{reid}=1\). For pose estimation, \(g_{m}^{pose}=1\) (for localizing key points) and \(g_{p}^{pose}=1\) (for existence of keypoints in the image). For detection, \(g_{bbox}^{det}=1\) (for bounding box prediction) and \(g_{p}^{det}=1\) (for existence of object). For pedestrian attribute prediction, \(g_{p}^{par}=1\) (for existence of attributes in the image). Therefore, the output unit of global probabilities is shared among pose estimation, human parsing, pedestrian detection, and attribute recognition. The output unit of local probability maps is shared among pose estimation and human parsing. **Discussion.** The task-guided interpreter interprets each query token independently. Previous works focused on autoregressive decoding with tokenization [64, 8] or task-specific heads [99, 25] to handle different output units required by specific tasks. In contrast, the task-guided interpreter can handle tasks involving a varying number of classes, yield all results in parallel, and do not require task-specific heads. This is achieved by two designs in our UniHCP framework: 1) Class/instance information is self-contained in queries. As mentioned in Section 3.2, a query represents a particular semantic class in pose estimation, attribute prediction, human parsing, and pedestrian detection. We only need to retrieve a scalar probability value from a query to obtain the confidence information for a particular class/human instance. 2) Outputs of the same modality share the same output unit. 
For example, the heatmap for a particular joint in pose estimation and the heatmap for a particular body part in human parsing have the same dimension. Although these outputs have different meanings, experimental results in Section 4.3 show that it is suitable to obtain them through the same output unit and fully let the task-specific queries handle the differences in preferred information to be represented. ### Objective Functions In this section, we will introduce the objective functions for training diverse human-centric tasks together and illustrate how these objectives are related to the output units defined in Eq. 6. Unless otherwise specified, we omit the GT inputs in loss functions for brevity. **Overall Objective Function.** Given a collection of datasets \(\mathcal{D}=\{\mathcal{D}|\mathbf{t}_{\mathcal{D}}\in\{reid,\dots,pose\}\}\), where \(\mathbf{t}_{\mathcal{D}}\) denotes the task type of dataset \(\mathcal{D}\), we also note \(t_{\mathcal{D}}\) as the task of dataset \(\mathcal{D}\), we have the overall loss defined as: \[\mathcal{L}=\sum_{\mathcal{D}\in\mathcal{D}}w_{\mathcal{D}}\mathcal{L}_{ \mathbf{t}_{\mathcal{D}}}(\mathcal{I}(\mathbf{Q}^{t_{\mathcal{D}}},\mathbf{g} ^{\mathbf{t}_{\mathcal{D}}})), \tag{8}\] where \(w_{\mathcal{D}}\) is the loss weight for dataset \(\mathcal{D}\), which is calculated based on the task type and batch size (calculations are elaborated in supplementary materials). ReID.Person ReID is a feature learning task for extracting identification information. Therefore, we directly supervised the features after the decoder by identity annotations. Specifically, for ReID task, the extracted feature is a simple concatenation of all feature vectors \(\mathbf{Y}_{f}=[y_{f}^{1};\dots;y_{f}^{N^{t}}]\), where \(N^{t}=6\) by default. The loss function is a combination of ID loss [114] and triplet loss [58] written as follows: \[\mathcal{L}_{reid}=\mathcal{L}_{ID}(\mathbf{Y}_{f})+\mathcal{L}_{triplet}( \mathbf{Y}_{f}). \tag{9}\] **PAR.** Pedestrian attribute recognition only predicts whether an attribute exists in the global image. Therefore, we only supervise the output unit of global probabilities \(\mathbf{Y}_{p}\) from the task-guided interpreter. Specifically, following the common practice [82, 46], we adopt the weighted binary cross-entropy loss. Given the probability predictions \(\mathbf{Y}_{p}\) associated with \(N^{t}\) attributes, we have: \[\mathcal{L}_{par} =\sum_{n=1}^{N_{t}}w_{n}(y_{n}\log(y_{p}^{n})+(1-y_{n})\log(1-y_{p }^{n})), \tag{10}\] \[w_{n} =y_{n}e^{1-\gamma_{n}}+(1-y_{n})e^{\gamma_{n}},\] where \(y_{n}\) denotes the annotation of \(n\)-th attribute and \(\gamma_{n}\) denotes the positive example ratio of \(n\)-th attribute. Human Parsing.Human parsing can be considered as semantic segmentation of human part. We view the presence of semantic classes as predictable attributes since the semantic classes are not always present in an image. Therefore, the global probability \(\mathbf{Y}_{p}\) and local probability map \(\mathbf{Y}_{m}\) are selected from the output units to describe whether a semantic part exists on the image level (global) and pixel level (local), respectively. Given a query \(\mathbf{q}_{l}\) defined in Eq. 
2 which corresponds to a semantic class in human parsing, we adopt the binary cross entropy loss as \(\mathcal{L}_{par}\) in pedestrian attribute recognition to constrain the global probability \(\mathbf{Y}_{p}\), and a combination of binary cross-entropy loss and dice loss [10] to supervised local probability map \(\mathbf{Y}_{m}\) as follows: \[\mathcal{L}_{seg}=\lambda_{par}\mathcal{L}_{par}(\mathbf{Y}_{p})+\mathcal{L}_{ bce}(\mathbf{Y}_{m})+\mathcal{L}_{dice}(\mathbf{Y}_{m}),\] where \(\lambda_{par}\) denotes the loss weight for \(\mathcal{L}_{par}(\mathbf{Y}_{p})\). Pose Estimation.We follow the common top-down setting for pose estimation, i.e., predicting keypoints based on the cropped human instances. We predict the heatmap w.r.t. the keypoints via mean-squared error. Similar to human parsing formulation, we also select the global probability \(\mathbf{Y}_{p}\) and local probability map \(\mathbf{Y}_{m}\) to predict whether a keypoint exists in the image level and pixel level, respectively. Mathematically, we have: \[\mathcal{L}_{pose}=\lambda_{par}\mathcal{L}_{par}(\mathbf{Y}_{p})+\mathcal{L}_ {mse}(\mathbf{Y}_{m}). \tag{11}\] Pedestrian Detection.Pedestrian Detection is a local prediction task but in a sparse manner. Following the widely adopted designs in end-to-end transformer-based detection [7, 110], ground-truth for \(N^{t}\) query features in \(\mathbf{Q}_{l}\) are determined by optimal bipartite matching between all \(N^{t}\) predictions and GT boxes. Given output units \(\mathbf{Y}_{p}\) and \(\mathbf{Y}_{bbox}\), we adopt the identical cost formulation and loss as in [110], \[\begin{split}\mathcal{L}_{peddet}=&\lambda_{cls} \mathcal{L}_{cls}(\mathbf{Y}_{p})+\lambda_{iou}\mathcal{L}_{iou}(\mathbf{Y}_{ bbox})+\\ &\lambda_{L1}\mathcal{L}_{L1}(\mathbf{Y}_{bbox}).\end{split} \tag{12}\] where \(\mathcal{L}_{cls},\mathcal{L}_{iou}\) and \(\mathcal{L}_{L1}\) are focal loss [56], GIoU loss [72], and \(L1\) loss, respectively. Their corresponding loss weights \(\lambda\) are also identically set as in [110]. ## 4 Experiments ### Implementation details Datasets.To enable general human-centric perceptions, we pretrain the proposed UniHCP at scale on a massive and diverse collection of human-centric datasets. Specifically, the training splits of 33 publically available datasets are gathered to form the training set for UniHCP, including nine datasets for pose estimation and six datasets for ReID, Human Parsing, Attribute Prediction, Pedestrain Detection, seraprately. For ReID, there are two different sub-tasks: general ReID and cloth-changing ReID, where the difference is whether cloth-change is considered for person ReID. We empirically found it is best to view them as different tasks and solve them with different task queries. Hence, we treat these two sub-tasks as different tasks and give them separate queries. We carefully follow the de-duplication practices as introduced in [102] to remove the samples that could appear in the evaluation datasets. We also remove images whose groundtruth labels are not given, leading to 2.3M distinct training samples in total. For evaluation, apart from the available validation or test splits of the 33 training sets, we also included several out-of-pretrain downstream datasets for each type of human-centric task. More details about dataset setups can be found in supplementary materials. Training.We use the standard ViT-B [15] as the encoder network and initialize it with the MAE pretrained [22] weights following [49, 96]. 
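For concreteness, the weighted binary cross-entropy of Eq. (10) used for attribute recognition is compact enough to sketch directly; the snippet below is a hypothetical NumPy rendering of ours (with the conventional minus sign of a loss made explicit), not the authors' code.

```python
import numpy as np

def weighted_bce(p, y, gamma):
    """Weighted BCE in the spirit of Eq. (10): p = predicted attribute
    probabilities Y_p, y = binary labels, gamma = positive-example ratio
    of each attribute; w follows w_n = y_n e^{1-gamma_n} + (1-y_n) e^{gamma_n}."""
    w = y * np.exp(1 - gamma) + (1 - y) * np.exp(gamma)
    eps = 1e-7  # numerical guard against log(0)
    return -np.sum(w * (y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))

p = np.array([0.9, 0.2, 0.6])       # global probabilities from the interpreter
y = np.array([1.0, 0.0, 1.0])       # ground-truth attribute labels
gamma = np.array([0.3, 0.1, 0.5])   # dataset statistics (illustrative values)
print(weighted_bce(p, y, gamma))
```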
For the main results, we use a batch size of 4324 in total, with the dataset-specific batch size being proportional to the size of each dataset. Unless otherwise specified, the image resolution used in pretraining is \(256\times 192\) for pose estimation and attribute prediction, \(256\times 128\) for ReID, \(480\times 480\) for human parsing, and a maximum height/width of 1333 for pedestrian detection. For computational efficiency, each GPU only runs one specific task, and each task can be evenly distributed to multiple GPUs whereas a single GPU is not capable of handling its workloads. To further save the GPU memory during the training time, we adopt the gradient checkpointing [4] in the encoder forward pass among all tasks and additionally use accumulative gradients for detection tasks. Due to the high GPU-memory demand of detection datasets, the batch size for the detection task is timed by 0.6. We use Adafactor [75] optimizer and follow the recommended modifications [101] for adopting it to ViT, we set \(\beta_{1}=0.9\), \(\beta_{2}\) clipped at \(0.999\), disables the parameter scaling and decoupled weight decay to 0.05. We linearly warm up the learning rate for the first 1500 iterations to 1e-3, after which the learning rate is decayed to 0 following a cosine decay scheduler. We also use a drop-path rate of 0.2 and layer-wise learning rate decay [49, 96] of 0.75 in the ViT-B encoder. For the main results, the whole training process takes 105k iterations which are approximately 130 epochs for detection datasets and 200 epochs for other datasets. The whole training takes 120 hours in total on 88 NVIDIA V100 GPUs. \begin{table} \begin{tabular}{l l l} \hline \hline Task Type & Datasets & Number of samples \\ \hline \multirow{2}{*}{\begin{tabular}{} \end{tabular} } & CUHK03 [47] & \multirow{2}{*}{268,002} \\ & DGMarket [113] & \\ (6 datasets) & PRCC [97] & \\ &... & \\ \hline \multirow{3}{*}{\begin{tabular}{} \end{tabular} } & COCO-Pose [57] & \multirow{3}{*}{1,261,749} \\ & AI Challenger [92] & \\ (9 datasets) & PoseTrack [1] & \\ &... & \\ \hline \multirow{3}{*}{\begin{tabular}{} \end{tabular} } & LIP [20] & \multirow{3}{*}{384,085} \\ & CHIP [19] & \\ \cline{1-1} & DeepFashion2 [18] & \\ \cline{1-1} &... & \\ \hline \multirow{3}{*}{\begin{tabular}{} \end{tabular} } & PA-100K [62] & \multirow{3}{*}{242,880} \\ & RAPv2 [39] & \\ \cline{1-1} (6 datasets) & UAV-Human [45] & \\ \cline{1-1} &... & \\ \hline \multirow{3}{*}{ \begin{tabular}{} \end{tabular} } & COCO-Person [57] & \multirow{3}{*}{170,687} \\ \cline{1-1} & CrowdHuman [74] & \\ \cline{1-1} (6 datasets) & WiderPedestrian [63] & \\ \cline{1-1} &... & \\ \hline \hline \end{tabular} \end{table} Table 2: Representative datasets used in multitask co-training. ### Main Results To demonstrate the capability of UniHCP as a unified model for human-centric perceptions, we first evaluate our UniHCP on thirteen datasets that appear in the pretraining stage (in Section 4.2.1), _e.g._, CIHP. Furthermore, we employ five datasets whose training splits are not included in the pretraining stage to evaluate the cross-datasets transferability of UniHCP (in Section 4.2.2). We also demonstrate that UniHCP has the potential to efficiently transfer to new datasets that do not appear in pretraining with only a few images (in Section 4.2.3). For detailed evaluation configuration, please refer to the supplementary. help of any additional camera information and training images during evaluation. 
For pedestrian detection, our UniHCP achieves **+1.8%** JI performance gain compared with Iter-Deformable-DETR [110] and on-par performance with the Iter-Sparse-RCNN [110] on mAP. These strong performances on diverse datasets across five tasks demonstrate the feasibility and powerfulness of the unified human-centric model and large-scale pretraining. #### 4.2.2 Cross-datasets Transfer Results As the task-guided interpreter formulates all the requests of human-centric tasks into four output units, human-centric knowledge learned behind these units can be easily transferred to unseen datasets. We conduct evaluations on another five datasets which do not appear during pretraining to evaluate the transferability of UniHCP. UniHCP is fine-tuned to adapt to new datasets except for SenseReID, on which the performance is tested by direct evaluation. As shown in Table 8, UniHCP outperforms existing SOTAs in 4 out of 5 datasets. Specifically, UniHCP achieves **+0.35%** pACC, **+11.4%** top-1, **-1.6%** heavy occluded MR\({}^{-2}(\downarrow)\), **+0.1%** PCKh, and **+1.71%** mA on ATR, SenseReID, Caltech, MPII, and PETA, respectively. On MPII, UniHCP achieves on-par performance with multi-datasets trained SOTA while improving single-dataset trained SOTA by **+0.9%** PCKh. Notably, even without finetuning, UniHCP achieves a **-8.8%** heavy occluded MR\({}^{-2}(\downarrow)\) performance gain on single-dataset trained SOTA. Consistent improvements on transfer tasks provide strong support to the decent transferability of UniHCP. #### 4.2.3 Data-Efficient Transferring As UniHCP achieves SOTAs on full-data finetuning setting, we further evaluate its potential for transferring to new datasets with extremely scarce training images, _e.g._, only one image per class for training. As summarized in Table 9, by conducting prompt tuning with one image per class, UniHCL achieves **93.65%** pACC on ATR for parsing and **83.8%** PCKh on MPII for pose estimation, respectively. For prompt tuning on ATR, we follow [61]. For prompt tuning on MPII, we only update queries and their associate position embeddings. The prompt tuning results are close to that of the full-data finetuning setting and suppress the results of finetuning the whole model with one image per class for a large margin. Moreover, UniHCP with prompt tuning shows much lower standard deviations than one-shot finetuning on human parsing and pose estimation tasks, verifying that UniHCP learns generic human-centric representation which is beneficial for data-efficient transferring with low computation cost. ### Ablation Study on Weight Sharing As UniHCP achieves desirable performance on various human-centric tasks while sharing most parameters among different tasks, one problem remains whether more task-specific parameters benefit learning. To answer the question, we ablate three weight sharing variants of UniHCP during pretraining using a 60k-iteration training schedule with 1k batch size. Results in Table 10(b) show that compared to the original UniHCP _i.e._, the _Baseline_), unifying task-guided interpreters among all tasks resulted in an average performance on par with using specific heads while reducing about **30%** of the parameters. We also note that using task-specific or task-type-specific decoders and interpreters leads to an obvious (**-6.8%** and **-2.4%**, respectively) performance drop on average when compared to the original UniHCP (see results in Table 10(b) and (c)). 
We speculate that in these ablation settings, complementary human-centric knowledge can not be properly shared among tasks, which leads to performance drops on most tasks. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{\begin{tabular}{c} Total \\ params. \\ \end{tabular} } & Shared & \multicolumn{3}{c}{Shared module} & Avg. \\ & & & Encoder & Decoder & \begin{tabular}{c} Task-guided \\ Interpret \\ \end{tabular} & \\ \hline Baseline & 109.32M & 109.08M & ✓ & ✓ & ✓ & 67.4 \\ (a) & 156.17M & 105.60M & ✓ & ✓ & & 67.4 \\ (b) & 489.67M & 91.07M & ✓ & & 60.6 \\ (c) & 170.83M & 109.08M & ✓ & by \(\mathbf{t}_{t}\) & by \(\mathbf{t}_{t}\) & 65.0 \\ \hline \hline \end{tabular} \end{table} Table 10: Comparison of different parameter-sharing schemes. We report the average scores of direct evaluation results on in-pretrain human-centric datasets. “by \(\mathbf{t}_{t}\)” denotes sharing decoder and interpreter across task types \(\mathbf{t}_{t}\). For more detailed results on each dataset, please refer to the supplementary. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{ \begin{tabular}{c} Learnable \\ params ratio \\ \end{tabular} } & \multicolumn{2}{c}{Parsing} & Pose \\ \cline{2-5} & & ATR/pACC & MPII/PCKh \\ \hline One-shot finetuning & 100\% & 90.49\(\pm\)1.22 & 70.6\(\pm\)7.53 \\ One-shot prompt tuning & \(<\)1\% & 93.65\(\pm\)0.77 & 83.8\(\pm\)5.08 \\ \hline Full-data finetuning & 100\% & 97.74 & 93.2 \\ \hline \hline \end{tabular} \end{table} Table 9: One-shot human parsing and human pose estimation transfer results under different tuning settings. Every method uses only 1 image per class to transfer. We repeat each experiment for 10 times and report the mean and standard deviation. ## 5 Conclusions In this work, we present a Unified Model for Human-Centric Perceptions (UniHCP). Based on a simple query-based task formulation, UniHCP can easily handle multiple distinctly defined human-centric tasks simultaneously. Extensive experiments on diverse datasets demonstrate that UniHCP pretrained on a massive collection of human-centric datasets delivers a competitive performance compared with task-specific models. When adapted to specific tasks, UniHCP obtains a series of SOTA performances over a wide spectrum of human-centric benchmarks. Further analysis also demonstrate the capability of UniHCP on parameter and data-efficient transfer and the benefit of weight sharing designs. We hope our work can motivate more future works on developing general human-centric models. **Acknowledgement.** This paper was supported by the Australian Research Council Grant DP200103223, Australian Medical Research Future Fund MRFAI000085, CRC-P Smart Material Recovery Facility (SMRF) - Curby Soft Plastics, and CRC-P ARIA - Bionic Visual-Spatial Prosthesis for the Blind.
2307.01710
A Scalable Arrangement Method for Aperiodic Array Antennas to Reduce Peak Sidelobe Level
Peak sidelobe level reduction (PSLR) is crucial in the application of large-scale array antenna, which directly determines the radiation performance of array antenna. We study the PSLR of subarray level aperiodic arrays and propose three array structures: dislocated subarrays with uniform elements (DSUE), uniform subarrays with random elements (USRE), dislocated subarrays with random elements (DSRE). To optimize the dislocation position of subarrays and random position of elements, the improved Bat algorithm (IBA) is applied. To draw the comparison of PSLR effect among these three array structures, we take three size of array antennas from small to large as examples to simulate and calculate the redundancy and peak sidelobe level (PSLL) of them. The results show that DSRE is the optimal array structure by analyzing the dislocation distance of subarray, scanning angle and applicable frequency. The proposed design method is a universal and scalable method, which is of great application value to the design of large-scale aperiodic array antenna.
Jiao Zhang, Hongtao Zhang, Xuelei Chen, Fengquan Wu, Yufeng Liu, Wenmei Zhang
2023-07-04T13:25:10Z
http://arxiv.org/abs/2307.01710v1
# A Scalable Arrangement Method for Aperiodic Array Antennas to Reduce Peak Sidelobe Level ###### Abstract Peak sidelobe level reduction (PSLR) is crucial in the application of large-scale array antenna, which directly determines the radiation performance of array antenna. We study the PSLR of subarray level aperiodic arrays and propose three array structures: dislocated subarrays with uniform elements (DSUE), uniform subarrays with random elements (USRE), dislocated subarrays with random elements (DSRE). To optimize the dislocation position of subarrays and random position of elements, the improved Bat algorithm (IBA) is applied. To draw the comparison of PSLR effect among these three array structures, we take three size of array antennas from small to large as examples to simulate and calculate the redundancy and peak sidelobe level (PSLL) of them. The results show that DSRE is the optimal array structure by analyzing the dislocation distance of subarray, scanning angle and applicable frequency. The proposed design method is a universal and scalable method, which is of great application value to the design of large-scale aperiodic array antenna. D Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000. Digital Object Identifier 10.1109/ACCESS 2022.Dust Number ## 1 Introduction In the design of array antenna, the reduction of PSLL is very important because it will directly affect the power characteristics and anti-interference performance of the system [1]. The PSLL, usually referring to the normalized maximum sidelobe level value, is the secondary lobe level in the radiation pattern. When the PSLL of array antenna is too high, it will affect the main lobe and cause interference and loss to the energy of the whole array antenna. At the same time, its echo will also interfere with the radar system. Therefore, in order to improve the overall performance of array antenna, it is necessary to reduce PSLL. There are several ways to reduce PSLL in aperiodic array antennas. Elements optimization is the most basic method, which is realized by optimizing each array element in a completely random way or following some certain pattern. Smolders [2] has shown a concept of random sequential rotation to provide well-controlled sidelobes in a regular periodic array grid, which has similar ability as randomly-spaced array antennas. Lucas [3] has reported a Fermat's spiral based array antenna to suppress secondary radiation lobes, which has similar sidelobe level and bandwidth as those obtained by computationally expensive non-linear optimization methods. The aperiodic weighted array antennas proposed in literature [4] have excellent bandwidth characteristics which can effectively minimize the PSLL. Haupt [5] has used genetic algorithms by encoding parameters containing location information as binary strings, and the array distribution with the lowest maximum relative sidelobe level is obtained after finite iteration optimization. Due to the outstanding characteristics of aperiodic array antenna, such as wide bandwidth, high resolution and wide scanning angle, it has been widely used in satellite communication, remote sensing [6]-[8] and medical imaging [9, 10]. Aperiodic array antennas with optimized elements can greatly reduce PSLL, but for large and medium-sized array, the array structure with high degree of freedom makes it difficult to realize wave control and power feeding. 
An improved idea is to design subarray level aperiodic array antennas [11]-[14], which is a compromise between random array and periodic array. For large scale antennas, a group of antennas can be regarded as an element, namely a subarray, and these subarrays can be placed randomly to greatly reduce the difficulty of analysis. Due to the identity of subarray structures, subarray level aperiodic array [11] can reduce the complexity of design and manufacture, and its PSLR ability is comparable to that of random array. Both literatures [11, 12] designed the aperiodic arrays with the combination of aperiodic subarray (dislocation) and uniform array elements. The subarray was integrated with multi-channel transceiver module with periodic active channel arrangement, which can greatly simplify the manufacturing technology and effectively reduce the highest sidelobe level. Literatures [13, 14] designed the aperiodic arrays, which is combined of aperiodic subarray (rotating) and non-uniform array elements. Kiersten [13] have studied an aperiodic array composed of aperiodic subarrays (rotating) and non-uniform elements, in which the bandwidth capability and the sidelobe level (SLL) of various arrays were characterized by probability, and their capability of low-cost applications was analyzed. The research shows that the array with rotating random subarrays had good antenna characteristics and the feasibility of low-cost manufacturing. Junming [14] have done some further studies on the array antenna with rotating random subarrays, such as, the number of array elements, the density of array elements and the PSLL, and have done some comparisons with those of pure random array and periodic array antenna, such as, the bandwidth, directivity and design calculation complexity, which further highlights the advantages of subarray-level aperiodic array in design and manufacture. In this paper, we consider a scalable arrangement method for large and medium-sized subarray level aperiodic array antennas, and propose three aperiodic array structures: DSUE, USRE, and DSRE. The array structure is optimized by IBA to reduce PSLL. The characteristics of different arrays are analyzed by redundancy theory. The influences of the number of elements and subarrays, the dislocation distance of subarrays, scanning characteristics and applicable frequency width on PSLL are analyzed. The simulation results show that DSRE not only has the best PSLR effect, but also has lower computational complexity. The structure of this paper is arranged as follows. Section II describes the structure of subarray level aperiodic array antennas. Section III introduces array synthesis, application details of the IBA and redundancy theory. Section IV analyzes the PSLR effect of the array in terms of the number of elements and subarrays, subarray dislocation distance, scanning characteristics and applicable frequency range. Section V contains the conclusion and discussion. ## II Subarray level aperiodic array design We propose three kinds of aperiodic rectangular grid array antennas, each of which is composed of the same subarrays. Fig. 1 is the analysis model diagram of three array structures when the number of elements in the subarray is \(M\times M\) (4\(\times\)4 in the figure) and the number of subarrays is \(N\times N\) (7\(\times\)7 in the figure). The whole array is composed of the same kind of subarrays. Three kinds of array antennas are formed, namely, DSUE, USRE and DSRE. The size of the subarray is \(L\)'. 
The subarray is placed in a square grid with side length \(L\) because of the need to leave room for the dislocation of the subarray. The whole array consists of square grids of the same size which are uniformly and closely arranged, and the distance between a side of the subarray and the same side of its square grid is \(\Delta_{s}/2\). \(\Delta_{s}\) is the longitudinal or transverse dislocation distance of a subarray. There is no strict standard for its value, which needs to be determined according to the designed array. For the arrays in this paper, whose average element spacing is greater than one wavelength, a value of \(\Delta_{s}\) slightly less than one wavelength is generally optimal. The influence of the dislocation position is briefly analyzed in Section IV.

Figure 1: Analysis model diagram of the three aperiodic array structures with the relevant Cartesian (\(x\),\(y\),\(z\)) and spherical (\(r\),\(\theta\),\(\phi\)) coordinate systems. \(mn\): the subarray in the \(m\)-th row and \(n\)-th column; \(d\): element spacing; \(L'\): the size of the subarray; \(L\): the size of the grid; \(\Delta_{s}\): the longitudinal or transverse dislocation distance of a subarray; \(\Delta_{sx}\) and \(\Delta_{sy}\): the dislocation distances of a subarray in the \(x\) and \(y\) directions, respectively.

Figure 2: Structure diagrams of full arrays with 28\(\times\)28 elements. The average array element spacing is slightly greater than one wavelength. (a) uniform subarrays and uniform array elements (b) DSUE (c) USRE (d) DSRE.

Taking 28\(\times\)28 array elements as an example, Fig. 2(a) is the periodic array structure with uniformly arranged array elements, and Fig. 2(b), (c) and (d) are the array structures of DSUE, USRE and DSRE respectively. Among them, the structure of the DSUE array has been discussed in [11]. The existing literature shows that USRE and DSRE arrays have rarely been studied. When the number of elements in the array is fixed, the number of subarrays is inversely proportional to the number of elements per subarray. The array with \(28\times 28\) elements can be divided into \(7\times 7\), \(4\times 4\) or \(2\times 2\) subarrays, with a corresponding number of elements per subarray of \(4\times 4\), \(7\times 7\) and \(14\times 14\) respectively.

The primary purpose of this paper is to compare the PSLR characteristics of these three arrays. It is very important to ensure the same resolution (related to the array aperture) for arrays with different structures. At the same time, when changing the size of the subarrays, the difference between the array element spacing and the subarray dislocation distance should not be too large. Therefore, we make them change proportionally, as shown in the following formula, so that the array apertures are the same:

\[\begin{cases}k=\dfrac{\Delta_{s,7\times 7}}{d_{4\times 4}}=\dfrac{\Delta_{s,4\times 4}}{d_{7\times 7}}=\dfrac{\Delta_{s,2\times 2}}{d_{14\times 14}}\\ L_{a,7\times 7}=L_{a,4\times 4}=L_{a,2\times 2}\end{cases} \tag{1}\]

where \(k\) is the proportionality constant, \(\Delta_{s}\) is the subarray dislocation distance with a subscript indicating the number of subarrays, \(L_{a}\) is the array aperture with the same subscript convention, and \(d\) is the element spacing, whose subscript indicates the number of elements per subarray. For the random distribution of subarrays, the subarray length \(L'\) of the different arrays is also calculated from the uniform spacing of the array elements.
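To make the three structures concrete, the following minimal Python sketch generates element coordinates for DSUE, USRE and DSRE arrays under the grid geometry of Fig. 1. It is only an illustration: the function name is ours, the uniform random perturbations stand in for the IBA-optimized positions, and the minimum-spacing constraint enforced by the optimizer in Section III is omitted.

```python
import numpy as np

def build_array(N, M, d, delta_s, dislocate=True, randomize=True, seed=0):
    """Element (x, y) positions for DSUE / USRE / DSRE arrays (sketch).

    DSUE: dislocate=True,  randomize=False
    USRE: dislocate=False, randomize=True
    DSRE: dislocate=True,  randomize=True
    Every subarray reuses one shared (possibly randomized) element layout,
    so the whole array is composed of identical subarrays.
    """
    rng = np.random.default_rng(seed)
    L = M * d + delta_s                            # square-grid side length
    # shared M x M element layout inside one subarray
    ex, ey = (g.ravel().astype(float) for g in
              np.meshgrid(np.arange(M) * d, np.arange(M) * d))
    if randomize:                                  # stand-in for IBA-found positions
        ex += rng.uniform(-d / 4, d / 4, ex.shape)
        ey += rng.uniform(-d / 4, d / 4, ey.shape)
    pts = []
    for m in range(N):
        for n in range(N):
            ox = m * L + delta_s / 2               # subarray origin inside its grid
            oy = n * L + delta_s / 2
            if dislocate:                          # -delta_s/2 <= shift <= delta_s/2
                ox += rng.uniform(-delta_s / 2, delta_s / 2)
                oy += rng.uniform(-delta_s / 2, delta_s / 2)
            pts.append(np.stack([ox + ex, oy + ey], axis=1))
    return np.vstack(pts)                          # shape ((N * M)**2, 2)
```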
## III Analysis Method

### Array Synthesis Using Subarrays

The total radiation pattern of a planar array can be expressed as [15]

\[F(\theta,\varphi)=f_{c}(\theta,\varphi)\,\mathrm{AF}_{\mathrm{array}}(\theta,\varphi) \tag{2}\]

where \(f_{c}(\theta,\varphi)\) is the element pattern and \(\mathrm{AF}_{\mathrm{array}}(\theta,\varphi)\) is the array factor of the planar array. Because the element radiation pattern changes slowly with the angle, and the PSLL reduction of the array radiation pattern is close to that of the array factor, only the array factor is analyzed in this paper. According to the analysis model of the array structure in Fig. 1, the array factor can be expanded as follows

\[\mathrm{AF}_{\mathrm{array}}(\theta,\varphi)=\sum_{m,n}\sum_{m^{\prime},n^{\prime}}I_{mn,m^{\prime}n^{\prime}}\,e^{jk\left[(s_{xm}+e_{xm^{\prime}})u_{x}+(s_{yn}+e_{yn^{\prime}})u_{y}\right]} \tag{3}\]

where the subarray positions are

\[s_{xm}=x_{m}+\Delta_{sxm},\qquad s_{yn}=y_{n}+\Delta_{syn} \tag{4}\]

the element positions within a subarray are

\[e_{xm^{\prime}}=m^{\prime}x_{0}+\Delta_{exm^{\prime}},\qquad e_{yn^{\prime}}=n^{\prime}y_{0}+\Delta_{eyn^{\prime}} \tag{5}\]

and the direction cosines are

\[\begin{cases}u_{x}=\sin\theta\cos\varphi-\sin\theta_{0}\cos\varphi_{0}\\ u_{y}=\sin\theta\sin\varphi-\sin\theta_{0}\sin\varphi_{0}\\ u=\sin\theta\cos\varphi,\quad v=\sin\theta\sin\varphi\end{cases} \tag{6}\]

Here \(I_{mn,m'n'}\) is the excitation amplitude of the \(m'n'\)-th element of the \(mn\)-th subarray, \(k=2\pi/\lambda\) is the wavenumber, \(\Delta_{sxm}\) and \(\Delta_{syn}\) are the subarray dislocations, \(\Delta_{exm'}\) and \(\Delta_{eyn'}\) are the element perturbations within a subarray, and \((\theta_{0},\varphi_{0})\) is the scan direction.

### _Improved Bat Algorithm_

Constraints such as array spacing and array aperture are considered in the optimization of the algorithm. The steps and parameters of the IBA are as follows:

_Step 1:_ Initialize the population and parameters. The initial population can be enlarged to improve the accuracy of the initial solution. On the premise that the initial position solution is reliable, the flight speed \(v\) of the bats should be appropriately reduced, to prevent them from flying too fast and leaving the range of the optimal solution. Table 1 shows the specific parameter settings. When the initial population is generated, it is necessary to check the element spacing of the initial population and to modify the positions of the array elements that do not meet the constraint conditions. The standard for judging the element spacing is the minimum element spacing \(d_{\min}=\lambda/2\), and the multiple constraint conditions are set as follows

\[\text{s.t.}\begin{cases}d_{\min}\leq d_{mn,m^{\prime}n^{\prime}}\leq d_{\max}\\ -\Delta_{s}/2\leq\Delta_{s,mn}\leq\Delta_{s}/2\\ x_{a,mn}\leq L_{a},\quad y_{a,mn}\leq L_{a},\quad d_{mn,m^{\prime}n^{\prime}}\leq\sqrt{2}\,L_{a}\end{cases} \tag{7}\]

where \(\Delta_{s,mn}\) is the dislocation distance of subarray \(mn\), \(x_{a,mn}\) and \(y_{a,mn}\) denote the position of an element within the total array in the \(x\) and \(y\) directions, \(L_{a}=N(Md+\Delta_{s})\) is the array aperture, and \(d_{mn,m'n'}\) is the spacing between any two elements of the array.
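As a numerical companion to Eqs. (2)-(6), the sketch below evaluates the normalized array factor on a \((u,v)\) grid, which is the quantity whose sidelobe maximum the IBA minimizes. The uniform-excitation assumption (\(I_{mn,m'n'}=1\)), the grid resolution and the example parameters are ours; `build_array` refers to the earlier sketch.

```python
import numpy as np

def array_factor_db(xy, lam, U, V, u0=0.0, v0=0.0):
    """Normalized array factor of Eq. (3) in dB, uniform excitation assumed.

    xy : (P, 2) element positions in metres, lam : wavelength,
    U, V : direction-cosine grids (u = sin(t)cos(p), v = sin(t)sin(p)),
    (u0, v0) : scan direction, giving u_x = u - u0 and u_y = v - v0 of Eq. (6).
    """
    k = 2.0 * np.pi / lam
    phase = k * (np.multiply.outer(U - u0, xy[:, 0])
                 + np.multiply.outer(V - v0, xy[:, 1]))
    af = np.abs(np.exp(1j * phase).sum(axis=-1))
    return 20.0 * np.log10(af / af.max())

# e.g. a small 12x12-element DSRE array at 10 GHz (lam = 3 cm); the PSLL is
# the maximum of the pattern outside the main-lobe region.
u = np.linspace(-1.0, 1.0, 201)
U, V = np.meshgrid(u, u)
xy = build_array(N=3, M=4, d=0.03, delta_s=0.026, seed=1)
afdb = array_factor_db(xy, lam=0.03, U=U, V=V)
```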
Under the condition of satisfying the array aperture and the array element spacing, the constraint on the spacing of elements within a subarray can be expressed as

\[\begin{cases}d_{mn,m^{\prime}n^{\prime}}=\left[(x_{mn}-x_{m^{\prime}n^{\prime}})^{2}+(y_{mn}-y_{m^{\prime}n^{\prime}})^{2}\right]^{\frac{1}{2}}\\ d_{\min}=\lambda/2,\qquad d_{\max}=\sqrt{2}\,L_{a}\end{cases} \tag{8}\]

where \(x_{mn}\) and \(y_{mn}\) represent the position of an element of the subarray in the \(x\) and \(y\) directions, and \(d_{mn,m'n'}\) denotes the spacing between any two elements of the subarray. When all array elements meet the constraint conditions, the initial population is generated and the initial solution is obtained.

_Step 2:_ Set the fitness function, substitute the initial population into the fitness function, and start iterating to generate a preliminary global optimal position solution. The fitness function is expressed as follows

\[f(\lambda)=\min\{\text{fitness}(\lambda)\},\qquad\text{fitness}(\lambda)=\max\left\{\frac{\mathrm{AF}_{a}(u,v)}{\mathrm{AF}_{\max}}\right\} \tag{9}\]

where \(\mathrm{AF}_{a}(u,v)\) is the sidelobe region of the radiation pattern and \(\mathrm{AF}_{\max}\) is the main-lobe peak; the objective function thus locates the PSLL over the whole visible space. If a bat finds the best foraging position \(g_{*}\), it attracts other bats to fly toward it in search of food. Each bat is associated with a velocity \(v_{i}^{t}\) and a position \(x_{i}^{t}\) at iteration \(t\). To update the velocities and positions of all bats, Equations (10)-(13) are used:

\[f_{i}=f_{\min}+(f_{\max}-f_{\min})\times\mathrm{rand}(0,1) \tag{10}\]

\[f_{i}^{t}=\frac{c+v_{i}^{t-1}}{c+v_{i}^{t}}\times f_{i}\times\left(1+C_{i}\times\frac{g_{*}-x_{i}^{t-1}}{\left|g_{*}-x_{i}^{t-1}\right|+\varepsilon}\right) \tag{11}\]

\[v_{i}^{t}=\omega\times v_{i}^{t-1}+\left(x_{i}^{t-1}-g_{*}\right)\times f_{i}^{t} \tag{12}\]

\[x_{i}^{t}=x_{i}^{t-1}+v_{i}^{t} \tag{13}\]

where \(f_{\min}\) and \(f_{\max}\) are the minimum and maximum frequency; their values are 0 and 1, respectively, depending on the specific environment. \(c=340\) m/s is the speed of sound in air, \(C_{i}\) denotes the Doppler effect compensation rate, \(\omega\) is the inertia weight, and \(\varepsilon\) is a small constant that avoids division by zero. Literature [21] has shown that an adaptive local search strategy including the Doppler effect and the Doppler effect compensation rate effectively increases population diversity; this paper also adopts it to improve the performance of the algorithm.

_Step 3:_ For the local search part, once a solution is selected among the current optimal solutions, a new solution is generated using a random walk

\[x_{\mathrm{new}}=x_{\mathrm{old}}+N[0,1]\times A^{t} \tag{14}\]

where \(A^{t}\) is the average loudness of all bats. The new solution must also meet the constraint conditions.

_Step 4:_ The fitness function is used to evaluate the new local solution. When the new solution is better than the current optimal solution, the loudness and pulse emission rate are updated by the following equations

\[A_{i}^{t+1}=\alpha A_{i}^{t} \tag{15}\]

\[r_{i}^{t+1}=r_{i}^{0}\left[1-\exp(-\gamma t)\right] \tag{16}\]

where \(\alpha\) and \(\gamma\) are the attenuation factor and the pulse rate factor. The initial loudness \(A_{i}\) can be taken in [1, 2], while the initial emission rate \(r_{i}^{0}\) can be taken in [0, 1].

_Step 5:_ Update the current global optimal solution.
If the loop reaches the maximum number of iterations, the optimal solution and the optimal fitness function value are output. If not, the algorithm returns to Step 2 and continues iterating.

### _Redundancy Theory_

The minimum redundancy array (MRA) is widely used in microwave radiometry [24, 25], radio astronomy [26], adaptive beamforming [27] and MIMO radar [28]. Any antenna pair in the array forms a baseline vector, and redundancy means that baseline vectors coincide. The search for an MRA is the search for the array structure with minimum redundancy. Theoretically, a planar array with \(N\) elements can have up to \(N(N-1)\) non-redundant baselines. However, the actual number of non-redundant baselines is usually smaller, because the positions of planar array elements can hardly be made completely random. Generally, the redundancy of an array is measured by \(R\), defined as the ratio of the ideal number of non-redundant baselines \(S_{i}=N(N-1)\) to the actual number of non-redundant baselines \(S_{a}\). The randomness of the element arrangement in an aperiodic array is closely related to its PSLR effect. Redundancy represents the uniformity of the array, and we can infer that the redundancy of an array reflects its PSLR effect to some extent. Reducing PSLL by optimizing array redundancy is not yet a mature theory, but the close mapping between redundancy and aperiodic arrays of different uniformity can be seen through the simulations in this paper.

## IV Simulation results and analysis

### _PSLR characteristic of the three kinds of arrays_

In order to simplify the analysis, we take arrays with 12\(\times\)12, 18\(\times\)18 and 28\(\times\)28 elements as examples. Three kinds of subarray divisions can be found for each case; for example, an array with 28\(\times\)28 elements can be divided into 2\(\times\)2, 4\(\times\)4 or 7\(\times\)7 subarrays. The following analysis methods and conclusions can generally be extended to other similar arrays with different numbers of array elements. For all arrays in this paper, the positions of the array elements are optimized by the IBA at a frequency of 10 GHz. The average spacing of the array elements is \(d_{av}\geq\lambda\).

Fig. 3(a) compares the PSLR effect of the three array structures with 6\(\times\)6 (12\(\times\)12 elements), 6\(\times\)6 (18\(\times\)18 elements) and 7\(\times\)7 (28\(\times\)28 elements) subarrays. The dislocation distance between these subarrays is \(\Delta_{s}=0.87\lambda\). When the number of subarrays is large, the PSLR effect of DSUE arrays is better than that of USRE arrays. This is because the dislocations of DSUE arrays introduce more non-uniformity than the randomness of USRE arrays when the number of elements per subarray is relatively small. DSRE arrays can greatly break the periodicity of the array, and their PSLR effect is superior to the other arrays. Fig. 3(b) and Fig. 3(c) illustrate the PSLR effect of the three arrays when the numbers of subarrays are moderate and small, respectively. In order to ensure the same array aperture for the same number of array elements, the relationship between dislocation distance and element spacing must satisfy equation (1). In the case of 28\(\times\)28 elements, the dislocation distances are \(\Delta_{s,4\times 4}=0.93\lambda\) and \(\Delta_{s,2\times 2}=0.99\lambda\), respectively.
When the numbers of subarrays are less than the numbers of elements in the corresponding subarrays, the PSLR effect of USRE arrays is better than that of DSUE arrays, and that of DSRE arrays is still the best. Fig. 4 shows the two-dimensional radiation patterns of the array structures corresponding to Fig. 2, in the same order. As shown in Fig. 4(a), the grating lobe (GL) has the same amplitude as the main lobe. The peak sidelobe levels (PSLLs) and their positions are also indicated in the radiation patterns. It can be clearly seen that, due to the non-uniform arrangement of the array elements, the peak sidelobe positions are distributed over the visible range of the entire space.

The corresponding relationship between PSLL and redundancy of arrays with different numbers of array elements is shown in Tables 2, 3 and 4. As can be seen from these tables, the three kinds of aperiodic arrays with different subarray divisions have different degrees of non-uniformity. An array structure with a high degree of non-uniformity has less redundancy. Redundancy can effectively reflect the non-uniformity of arrays, which helps researchers grasp the randomness of an array during the design process.

In [14], an array with rotating subarrays composed of 144 elements is optimized by a traditional genetic algorithm (GA) at a frequency of 10 GHz. To highlight the efficiency of the IBA, particle swarm optimization (PSO) is also selected to optimize the DSRE\({}_{4\times 4}\) array with 12\(\times\)12 elements. The initial population size is set to 200. When the average relative change in fitness function values over 10 generations is less than 1\(\times\)10\({}^{-4}\), the optimization process is terminated. The optimization results of the IBA are obtained with \(v\)=0.4, \(\omega\)=0.75. The PSO results are calculated with inertia constant \(\omega\)=0.7, acceleration constants \(c_{1}\)=\(c_{2}\)=1.5 and maximum particle velocity \(V_{\max}\)=0.12. As shown in Fig. 5(a), the convergence curves of these two algorithms are compared with those of the GA applied to the tiled array in [14]. It can be seen that the IBA is superior to the other two algorithms in terms of convergence speed and accuracy. In this paper, the IBA is run on an Intel Core i7 8700 processor. Compared with [14], the calculation time of the IBA versus the number of elements is shorter, as shown in Fig. 5(b), which is closely related to the number of iterations and the convergence characteristics of the algorithm.

As shown in Table 2, the PSLL of DSRE arrays with 12\(\times\)12 elements worsens as the number of subarrays increases. The array length in [14] is 8\(\lambda\), the average array spacing is less than 1\(\lambda\), and the reported PSLL is about -11 dB. In this paper, with an array length greater than 8\(\lambda\) and an average array spacing slightly greater than 1\(\lambda\), the worst PSLL is -11.62 dB, which is the PSLR result with the largest number of subarrays (4\(\times\)4). When the number of subarrays is 2\(\times\)2, the PSLL can be suppressed to -12.29 dB. This is clearly superior to the subarray rotation structure proposed in [14]. As the number of array elements increases, the optimization time does not increase linearly, which depends on the accuracy of the initial population location of the algorithm. Once the population is located near the optimal position, the fitness function converges rapidly, thus shortening the optimization time.
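For reference, here is a compact sketch of one IBA iteration built from Eqs. (10) and (12)-(16). It is the bare metaheuristic loop under our own simplifications: the Doppler-compensated frequency of Eq. (11) and the spacing constraints of Eqs. (7)-(8) are deliberately left out, and the greedy acceptance omits the loudness gate of the standard bat algorithm.

```python
import numpy as np

def iba_step(x, v, g_best, A, r, t, fitness, rng,
             f_min=0.0, f_max=1.0, w=0.75, alpha=0.9, gamma=0.9, r0=0.5):
    """One iteration of the bat-algorithm core used by the IBA (sketch).

    x, v : (P, D) bat positions and velocities; g_best : (D,) best position;
    A, r : (P,) loudness and pulse emission rate; fitness : lower is better.
    """
    P, D = x.shape
    f = f_min + (f_max - f_min) * rng.random((P, 1))        # Eq. (10)
    v = w * v + (x - g_best) * f                            # Eq. (12)
    cand = x + v                                            # Eq. (13)
    walk = cand + rng.standard_normal((P, D)) * A.mean()    # Eq. (14)
    local = rng.random(P) > r                               # exploit near best
    cand[local] = walk[local]
    old = np.apply_along_axis(fitness, 1, x)
    new = np.apply_along_axis(fitness, 1, cand)
    better = new < old                                      # greedy acceptance
    x = np.where(better[:, None], cand, x)
    A = np.where(better, alpha * A, A)                      # Eq. (15)
    r = np.where(better, r0 * (1.0 - np.exp(-gamma * t)), r)  # Eq. (16)
    return x, v, A, r
```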
According to the subarray division scheme, the optimization of the element arrangement within a subarray only needs to be calculated once, while the dislocation optimization of all subarrays needs to be considered. Thus, the number of design variables is smaller than for completely random arrays. Therefore, in terms of both optimization time and number of design variables, the DSRE array has low computational complexity.

In summary, the PSLR effect of DSUE and USRE arrays varies with the number of subarrays. Whatever the requirements, the DSRE array is the best choice for array designs with higher demands on PSLR, even with a small number of random elements per subarray. Compared with other aperiodic arrays, such as the subarray-rotating random array, the DSRE array also performs well.

TABLE II: Correspondence between PSLL and redundancy of arrays with 12\(\times\)12 elements.

| Number of subarrays | Array structure | PSLL (dB) | Non-redundant baselines | Redundancy (\(R\)) |
|---|---|---|---|---|
| 2\(\times\)2 | DSRE | -14.35 | 96938 | 1.08 |
| 2\(\times\)2 | USRE | -13.03 | 79844 | 1.31 |
| 2\(\times\)2 | DSUE | -5.47 | 12854 | 8.14 |
| 3\(\times\)3 | DSRE | -13.83 | 96848 | 1.08 |
| 3\(\times\)3 | USRE | -12.36 | 52518 | 1.99 |
| 3\(\times\)3 | DSUE | -7.35 | 13748 | 7.61 |
| 4\(\times\)4 | DSRE | -13.13 | 95222 | 1.10 |
| 4\(\times\)4 | USRE | -9.44 | 24678 | 4.24 |
| 4\(\times\)4 | DSUE | -11.30 | 40624 | 2.58 |

### Influence of Subarray Dislocation Distance on PSLR

It is very important to select the optimal dislocation distance of the subarrays to improve the performance of subarray-level aperiodic arrays. This section mainly studies the influence of the dislocation distance on the PSLR effect. The array with 28\(\times\)28 elements is still considered. From the perspective of engineering application, there are too many randomly arranged elements in the 2\(\times\)2 subarray division, so the design of a single subarray is relatively complex and of low application value; it is therefore not analyzed here. This section discusses the PSLR effect of the three array structures under different dislocation distances with 7\(\times\)7 and 4\(\times\)4 subarrays. In Fig. 6, the subarray dislocation distance is \(0.6\lambda\leq\Delta_{s}\leq 1.4\lambda\). The array aperture is the same for the same subarray dislocation distance. The average array element spacing is no longer proportional to the subarray dislocation distance, but is determined by the array aperture. According to the overall trend in Fig. 6, the PSLR effect ranks as DSRE\({}_{4\times 4}\) > DSRE\({}_{7\times 7}\) > USRE\({}_{4\times 4}\) > DSUE\({}_{7\times 7}\) > USRE\({}_{7\times 7}\) > DSUE\({}_{4\times 4}\). For the proposed structures, the PSLR effect is best when the subarray dislocation distance is slightly less than one wavelength, or than the average array element spacing. The PSLLs of the DSRE\({}_{4\times 4}\), DSRE\({}_{7\times 7}\) and USRE\({}_{4\times 4}\) arrays show a stable trend as \(\Delta_{s}\) increases. As shown in Table 4, the redundancy of these three arrays increases gradually.
From the analysis of the array structures, the non-uniformity of the DSRE\({}_{4\times 4}\) and DSRE\({}_{7\times 7}\) arrays is affected both by the subarray dislocation and by the non-uniform arrangement of the array elements, with the latter being the dominant factor. Therefore, the more non-uniformly the array elements are arranged, the stronger the PSLR ability becomes. For USRE\({}_{4\times 4}\) and DSUE\({}_{7\times 7}\), the former has 7\(\times\)7 non-uniformly arranged elements per subarray, while the latter has 7\(\times\)7 dislocated subarrays. The number of non-uniform influencing factors is the same, but the PSLR ability of the former is better than that of the latter, because the non-uniform arrangement of the array elements plays the main role in the non-uniformity of the array. The same holds for USRE\({}_{7\times 7}\) and DSUE\({}_{4\times 4}\). When the numbers of single non-uniform influencing factors differ, as for USRE\({}_{7\times 7}\) and DSUE\({}_{7\times 7}\), the former has 4\(\times\)4 non-uniformly arranged elements per subarray and the latter has 7\(\times\)7 dislocated subarrays; in this case, the larger number of dislocated subarrays dominates the non-uniformity of the array, so the PSLR effect of the latter is better than that of the former.

The PSLR effect of all array structures in Fig. 6 generally weakens as \(\Delta_{s}\) increases, especially for USRE\({}_{7\times 7}\) and DSUE\({}_{7\times 7}\). For arrays with a single non-uniformity factor, an increase of \(\Delta_{s}\) inevitably increases the subarray spacing and the element spacing, and an increase in either worsens the PSLL of the array pattern. DSUE\({}_{4\times 4}\) follows a different rule: its number of dislocated subarrays is quite small and is the only factor affecting the non-uniformity of the array, which results in an irregular variation with \(\Delta_{s}\).

### Scan of Array

The scanning characteristic is an important index to evaluate the performance of an array antenna. Phased array antennas with wide-angle scanning capability are widely used in radar, direction finding and remote sensing. This section further verifies the wide-scanning characteristic of the array structures proposed in this paper. For the array antenna with 28\(\times\)28 elements, Fig. 7 shows the full-space three-dimensional scanning patterns and plane maps of the DSRE array with 7\(\times\)7 subarrays for scanning angles of 15\({}^{\circ}\) along the \(u\) axis and 15\({}^{\circ}\) along the \(v\) axis. For a small scanning angle of 15\({}^{\circ}\), the PSLL of the array radiation pattern is below -13.5 dB, and the SLL does not increase noticeably. Fig. 8 shows the full-space three-dimensional scanning patterns and plane maps of the DSUE and USRE arrays with 7\(\times\)7 subarrays for the same scanning angles; the \(u\) and \(v\) axes are labelled as in Fig. 7. Table 5 lists the PSLL values corresponding to Figs. 7 and 8. Compared with the radiation pattern of the DSRE array, the scanning characteristics of the DSUE and USRE arrays are poor even at small scanning angles. As the scanning angle increases, the scanning characteristics become even worse, because the radiation energy is concentrated in some fixed areas, resulting in an increase of the PSLL.

Figure 6: Influence of subarray dislocation distance on PSLLs.
Figure 7: 3D scanning patterns and plane maps of the DSRE array with 7\(\times\)7 subarrays. (a) \(u\) = 15\({}^{\circ}\) (b) \(v\) = 15\({}^{\circ}\).

Figure 8: 3D scanning patterns and plane maps of the DSUE and USRE arrays with 7\(\times\)7 subarrays. (a) DSUE, \(u\) = 15\({}^{\circ}\) (b) DSUE, \(v\) = 15\({}^{\circ}\) (c) USRE, \(u\) = 15\({}^{\circ}\) (d) USRE, \(v\) = 15\({}^{\circ}\).

However, due to the high non-uniformity of its array elements, the DSRE array can effectively disperse the radiation energy and thus reduce the PSLL. For a larger scanning range, the DSRE array also maintains good radiation characteristics. Fig. 9 shows the scanning patterns and plane maps at 30\({}^{\circ}\) and 60\({}^{\circ}\) along the \(u\) and \(v\) axes, respectively, and the corresponding PSLL values are listed in Table 6. It can be seen from the figure that, at the larger scanning angles of 30\({}^{\circ}\) and 60\({}^{\circ}\), the PSLL and the average SLL do not increase significantly with the scanning angle.

TABLE V: PSLL of the three arrays for 15\({}^{\circ}\) scanning angles.

| Array structure | DSRE | DSRE | DSUE | DSUE | USRE | USRE |
|---|---|---|---|---|---|---|
| Scanning angle in \(u\) axis | 15° | 0° | 15° | 0° | 15° | 0° |
| Scanning angle in \(v\) axis | 0° | 15° | 0° | 15° | 0° | 15° |
| PSLL (dB) | -13.51 | -13.50 | -12.75 | -12.74 | -8.88 | -8.87 |

### _Frequency Characteristic_

Aperiodic array antennas have very stable patterns over a wide frequency range, making them very suitable for operation over large bandwidths. This section analyzes the PSLR effect of the three array structures at different frequencies and provides some design suggestions. Fig. 10 shows the PSLR effect varying with frequency for 28\(\times\)28 elements with 7\(\times\)7 and 4\(\times\)4 subarrays, respectively. The array structures are optimized at a frequency of 10 GHz. Tables 7 and 8 list the PSLL values at the frequencies of Fig. 10 (a) and (b).

Figure 10: The PSLR effect of the three structures with 28\(\times\)28 elements in the frequency range 6 GHz-20 GHz. (a) 7\(\times\)7 subarrays (b) 4\(\times\)4 subarrays.

TABLE VII: PSLL of the three structures with 7\(\times\)7 subarrays at different frequencies.

| Frequency (GHz) | 6.12 | 8.11 | 10.00 | 12.00 | 14.29 | 15.79 | 17.65 |
|---|---|---|---|---|---|---|---|
| DSUE PSLL (dB) | -7.17 | -12.94 | -12.76 | -12.72 | -9.74 | -9.71 | -9.82 |
| USRE PSLL (dB) | -5.24 | -10.49 | -10.55 | -8.88 | -7.66 | -5.25 | -5.14 |
| DSRE PSLL (dB) | -7.28 | -13.51 | -13.53 | -13.49 | -10.12 | -10.14 | -10.13 |

From the perspective of the PSLR effect, the PSLLs of the three arrays in Fig. 10 (a) are stable in the low frequency band and change abruptly in the high frequency band. The lobe levels of the DSUE and DSRE arrays tend to be stable at high frequency, while the lobe level of the USRE array rises sharply with increasing frequency. This is because the adaptive frequency width of the arrays is related to their non-uniformity: the higher the non-uniformity of an aperiodic array, the more stable its radiation pattern over a wide frequency range, and the smaller the PSLL mutation. The PSLLs of the three array structures in Fig. 10 (b) show frequency characteristics similar to those in Fig. 10 (a). The PSLLs of the DSUE array with 4\(\times\)4 subarrays and the USRE array with 7\(\times\)7 subarrays show a similar tendency as the frequency increases, while the curves of the USRE array with 4\(\times\)4 subarrays and the DSUE array with 7\(\times\)7 subarrays look similar. It is worth mentioning that the PSLR ability of the DSRE array with 4\(\times\)4 subarrays remains stable as the frequency increases; its PSLL only begins to increase at 27 GHz, beyond the frequency range shown in the figure. The adaptive frequency width of the DSRE array with 4\(\times\)4 subarrays is therefore very wide, and Fig. 6 has already shown that this array has the best PSLR effect. For the arrays proposed in this paper, the higher the non-uniformity of the array, the wider the adaptive frequency width.

## V Conclusion

In this paper, three aperiodic arrangement methods for subarray-level rectangular-grid array antennas are proposed. The IBA is used to optimize the array structures through the dislocation positions of the subarrays and the random positions of the elements within each subarray. The PSLR effect of the three kinds of arrays with different numbers of elements and subarrays is analyzed, and the non-uniformity of the arrays is characterized by their redundancy. The results show that the PSLR effect of DSRE arrays is better than that of DSUE and USRE arrays. The array with 28\(\times\)28 elements is taken as an example to study the PSLL in terms of the dislocation distance of the subarrays, the scanning angle and the applicable frequency. A subarray dislocation distance slightly less than one wavelength is the best choice. DSRE arrays maintain good PSLR ability at large scanning angles of up to 60\({}^{\circ}\) and also have a wider adaptive frequency width. The proposed design method of the DSRE array is universal and scalable; for example, the array with 28\(\times\)28 elements can be treated as a new subarray to further expand the scale of the aperiodic array antenna.
By introducing a rotational degree of freedom, such as subarray rotation or a Fermat spiral, the rotation, dislocation and random-element approaches can be combined to study their joint PSLR effect. This is of great value to the design and application of large-scale aperiodic array antennas.
2306.01439
Interpretable and Explainable Logical Policies via Neurally Guided Symbolic Abstraction
The limited priors required by neural networks make them the dominating choice to encode and learn policies using reinforcement learning (RL). However, they are also black-boxes, making it hard to understand the agent's behaviour, especially when working on the image level. Therefore, neuro-symbolic RL aims at creating policies that are interpretable in the first place. Unfortunately, interpretability is not explainability. To achieve both, we introduce Neurally gUided Differentiable loGic policiEs (NUDGE). NUDGE exploits trained neural network-based agents to guide the search of candidate-weighted logic rules, then uses differentiable logic to train the logic agents. Our experimental evaluation demonstrates that NUDGE agents can induce interpretable and explainable policies while outperforming purely neural ones and showing good flexibility to environments of different initial states and problem sizes.
Quentin Delfosse, Hikaru Shindo, Devendra Dhami, Kristian Kersting
2023-06-02T10:59:44Z
http://arxiv.org/abs/2306.01439v2
# Interpretable and Explainable Logical Policies via Neurally Guided Symbolic Abstraction

###### Abstract
The limited priors required by neural networks make them the dominating choice to encode and learn policies using reinforcement learning (RL). However, they are also black-boxes, making it hard to understand the agent's behaviour, especially when working on the image level. Therefore, neuro-symbolic RL aims at creating policies that are interpretable in the first place. Unfortunately, interpretability is not explainability. To achieve both, we introduce Neurally gUided Differentiable loGic policiEs (NUDGE). NUDGE exploits trained neural network-based agents to guide the search of candidate-weighted logic rules, then uses differentiable logic to train the logic agents. Our experimental evaluation demonstrates that NUDGE agents can induce interpretable and explainable policies while outperforming purely neural ones and showing good flexibility to environments of different initial states and problem sizes.

## 1 Introduction

Deep reinforcement learning (RL) agents use neural networks to take decisions from the unstructured input state space without manual engineering (Mnih et al., 2015). However, these black-box policies lack _interpretability_ (Rudin, 2019), _i.e._ the capacity to articulate the reasoning behind the action selection. They are also not robust to environmental changes (Pinto et al., 2017; Wulfmeier et al., 2017). Although performing object detection and policy optimization independently can alleviate part of the problem (Devin et al., 2018), the aforementioned issues persist as long as neural networks are employed to encode the policy.

As logic constitutes a unified symbolic language that humans use to compose the reasoning behind their behavior, logic-based policies can tackle the interpretability problems of RL. Recently proposed Neural Logic RL (NLRL) agents (Jiang and Luo, 2019) construct logic-based policies using differentiable rule learners called \(\partial\)_ILP_ (Evans and Grefenstette, 2018), which can then be integrated with gradient-based optimization methods for RL. NLRL represents the policy as a set of weighted rules and performs policy-gradient learning to solve RL tasks which require relational reasoning. It successfully produces interpretable rules, which describe each action in terms of its preconditions and outcome. However, the number of potential rules grows exponentially with the number of considered actions, entities, and their relations. NLRL is a memory-intensive approach: it generates a set of potential simple rules based on rule templates, and has only been evaluated on simple abstract environments created for the occasion. This approach can generate many newly invented predicates without a specification of their meaning (Evans and Grefenstette, 2018), making the policy challenging to interpret in complex environments. Moreover, _explainability_ is absent, _i.e._ the agent cannot explain the importance of each input for its decision. Explainable agents should adaptively produce different explanations given different input states. A question thus arises: _How can we build interpretable and explainable RL agents that are robust to environmental changes?_

To this end, we introduce Neurally gUided Differentiable loGic policiEs (NUDGE), illustrated in Fig. 1, that embody the advantages of logic: they are easily adaptable to environmental changes, composable, _interpretable_ and _explainable_ (thanks to our differentiable logic module).
Given an input state, NUDGE extracts entities and their relations, converting raw states into logic representations. These probabilistic relational states are used to deduce actions, using differentiable forward reasoning (Evans and Grefenstette, 2018; Shindo et al., 2023). NUDGE produces a policy that is both _interpretable_, _i.e._ provides the policy as a set of weighted interpretable rules that can be read out by humans, and _explainable_, _i.e._ explains which inputs are important using gradient-based attribution methods (Sundararajan et al., 2017) over the logical representations. To achieve efficient policy learning with NUDGE, we provide an algorithm to train NUDGE agents based on the PPO actor-critic framework. Moreover, we propose a novel rule-learning approach, called _Neurally-Guided Symbolic Abstraction_, where the candidate rules for the logic-based agents are obtained efficiently under the guidance of neural-based agents. NUDGE distills abstract representations of neural policies in the form of logic rules. Rules are assigned weights, and we perform gradient-based optimization using the PPO actor-critic framework. Overall, we make the following contributions:

1. We propose NUDGE2: differentiable logical policies that learn interpretable rules and produce explanations for their decisions in complex environments. NUDGE uses neurally-guided symbolic abstraction to efficiently find a promising ruleset, using pretrained neural-based agents as guidance. Footnote 2: Code publicly available: [https://github.com/k4ntz/LogicRL](https://github.com/k4ntz/LogicRL).
2. We empirically show that NUDGE agents: (i) can compete with neural-based agents, (ii) adapt to environmental changes, and (iii) are interpretable and explainable, _i.e._ produce interpretable policies as sets of weighted rules and provide explanations for their action selections.
3. We evaluate NUDGE on \(2\) classic Atari games and on \(3\) proposed object-centric, logically challenging environments, where agents need relational reasoning in dynamic game-playing scenarios.

We start off by introducing the necessary background. Then we explain NUDGE's inner workings and present our experimental evaluation. Before concluding, we touch upon related work.

## 2 Background

We now describe the necessary background before formally introducing our NUDGE method.

**Deep Reinforcement Learning**. Reinforcement learning problems are modelled as a Markov decision process \(\mathcal{M}=\langle\mathcal{S},\mathcal{A},P,R\rangle\), where, at every timestep \(t\), an agent is in a state \(s_{t}\in\mathcal{S}\), takes an action \(a_{t}\in\mathcal{A}\), receives a reward \(r_{t}=R(s_{t},a_{t})\) and transitions to the next state \(s_{t+1}\), according to the environment dynamics \(P(s_{t+1}|s_{t},a_{t})\). Deep agents attempt to learn a parametric policy, \(\pi_{\theta}(a_{t}|s_{t})\), in order to maximize the return (_i.e._ \(\sum_{t}\gamma^{t}r_{t}\)). In RL problems, the desired input-to-output (_i.e._ state-to-action) distribution is not directly accessible, as RL agents only observe returns.

Figure 1: **Overview of NUDGE.** Given a state (depicted in the image), NUDGE computes the action distribution using a relational state representation and differentiable forward reasoning. NUDGE provides _interpretable_ and _explainable_ policies, _i.e._ derives policies as sets of interpretable weighted rules, and can produce explanations using gradient-based attribution methods.

The value \(V_{\pi_{\theta}}(s_{t})\) (resp.
Q-value \(Q_{\pi_{\theta}}(s_{t},a_{t})\)) function provides the return of the state (resp. state/action pair) when following the policy \(\pi_{\theta}\). Policy-based methods directly optimize \(\pi_{\theta}\) using the noisy return signal, leading to potentially unstable learning. Value-based methods learn to approximate the value functions \(\hat{V}_{\phi}\) or \(\hat{Q}_{\phi}\), and implicitly encode the policy, _e.g._ by selecting the actions with the highest Q-value with a high probability (Mnih et al., 2015). To reduce the variance of the estimated Q-value function, one can learn the advantage function \(\hat{A}_{\phi}(s_{t},a_{t})=\hat{Q}_{\phi}(s_{t},a_{t})-\hat{V}_{\phi}(s_{t})\). An estimate of the advantage function can be computed as \(\hat{A}_{\phi}(s_{t},a_{t})=\sum_{i=0}^{k-1}\gamma^{i}r_{t+i}+\gamma^{k}\hat{V}_{\phi}(s_{t+k})-\hat{V}_{\phi}(s_{t})\) (Mnih et al., 2016). Advantage Actor-Critic (A2C) methods encode both the policy \(\pi_{\theta}\) (_i.e._ the actor) and the advantage function \(\hat{A}_{\phi}\) (_i.e._ the critic), and use the critic to provide feedback to the actor, as in (Konda and Tsitsiklis, 1999). To push \(\pi_{\theta}\) to take actions that lead to higher returns, gradient ascent can be applied to \(L^{PG}(\theta)=\hat{\mathbb{E}}[\log\pi_{\theta}(a\mid s)\hat{A}_{\phi}]\). Proximal Policy Optimization (PPO) algorithms ensure minor policy updates that avoid catastrophic drops (Schulman et al., 2017), and can be applied to actor-critic methods. To do so, the main objective constrains the policy ratio \(r(\theta)=\frac{\pi_{\theta}(a|s)}{\pi_{\theta_{\text{old}}}(a|s)}\), following \(L^{PR}(\theta)=\hat{\mathbb{E}}[\min(r(\theta)\hat{A}_{\phi},\text{clip}(r(\theta),1-\epsilon,1+\epsilon)\hat{A}_{\phi})]\), where \(\text{clip}\) constrains its input within the bounds \([1-\epsilon,1+\epsilon]\). The global objective of the PPO actor-critic algorithm is \(L(\theta,\phi)=\hat{\mathbb{E}}[L^{PR}(\theta)-c_{1}L^{VF}(\phi)]\), with \(L^{VF}(\phi)=(\hat{V}_{\phi}(s_{t})-V(s_{t}))^{2}\) being the value-function loss. An entropy term can also be added to this objective to encourage exploration.

**First-Order Logic (FOL).** A _language_ \(\mathcal{L}\) is a tuple \((\mathcal{P},\mathcal{D},\mathcal{F},\mathcal{V})\), where \(\mathcal{P}\) is a set of predicates, \(\mathcal{D}\) a set of constants, \(\mathcal{F}\) a set of function symbols (functors), and \(\mathcal{V}\) a set of variables. A _term_ is either a constant (_e.g._ obj1, agent), a variable (_e.g._ O1), or an expression built from a function symbol. An _atom_ is a formula \(\texttt{p}(\texttt{t}_{1},\ldots,\texttt{t}_{n})\), where p is a predicate symbol (_e.g._ closeby) and \(\texttt{t}_{1},\ldots,\texttt{t}_{n}\) are terms. A _ground atom_ or simply a _fact_ is an atom with no variables (_e.g._ closeby(obj1,obj2)). A _literal_ is an atom (\(A\)) or its negation (\(\neg A\)). A _clause_ is a finite disjunction (\(\vee\)) of literals. A _ground clause_ is a clause with no variables. A _definite clause_ is a clause with exactly one positive literal. If \(A,B_{1},\ldots,B_{n}\) are atoms, then \(A\vee\neg B_{1}\vee\ldots\vee\neg B_{n}\) is a definite clause. We write definite clauses in the form \(A\) :- \(B_{1},\ldots,B_{n}\). Atom \(A\) is called the _head_, and the set of negated atoms \(\{B_{1},\ldots,B_{n}\}\) is called the _body_. For simplicity, we refer to definite clauses as _rules_ in this paper.

**Differentiable Forward Reasoning** is a data-driven approach to reasoning in FOL (Russell and Norvig, 2003).
In forward reasoning, given a set of facts and a set of rules, new facts are deduced by applying the rules to the facts. Differentiable forward reasoning (Evans and Grefenstette, 2018; Shindo et al., 2023) is a differentiable implementation of forward reasoning using fuzzy operations.

## 3 Neurally Guided Logic Policies

Fig. 2 illustrates an overview of RL with NUDGE, which consists of a _policy reasoning_ module and a _policy learning_ module. NUDGE performs end-to-end differentiable policy reasoning based on forward reasoning, which computes action distributions given input states. On top of the reasoning module, policies are learned using neurally-guided symbolic abstraction and an actor-critic framework.

### Policy Reasoning: Selecting Actions using Differentiable Forward Reasoning.

To realize NUDGE, we introduce a language to describe actions and states in FOL. Using it, we introduce differentiable policy reasoning based on forward-chaining reasoning.

#### 3.1.1 Logic Programs for Actions

In RL, _states_ and _actions_ are key components, since the agent performs the fundamental iteration of observing the state and taking an action to maximize its expected return. To achieve efficient computation in first-order logic in RL settings, we introduce a simple language suitable for reasoning about states and actions. We split the predicate set \(\mathcal{P}\) into two different sets: _action predicates_ (\(\mathcal{P}_{A}\)), which define the actions, and _state predicates_ (\(\mathcal{P}_{S}\)), used for the observed states. If an atom \(A\) consists of an action predicate, \(A\) is called an _action atom_. If atom \(A\) consists of a state predicate, \(A\) is called a _state atom_.

**Definition 1**: _An action-state language is a tuple \((\mathcal{P}_{A},\mathcal{P}_{S},\mathcal{D},\mathcal{V})\), where \(\mathcal{P}_{A}\) is a set of action predicates, \(\mathcal{P}_{S}\) is a set of state predicates, \(\mathcal{D}\) is a set of constants for entities, and \(\mathcal{V}\) is a set of variables._

For example, for _getout_, illustrated in Fig. 1, we have the actual actions **left**, **right**, **jump**, and **idle**. We define action predicates \(\mathcal{P}_{A}=\{\texttt{left}^{(1)},\texttt{left}^{(2)},\texttt{right}^{(1)},\texttt{right}^{(2)},\texttt{jump}^{(1)},\texttt{idle}^{(1)}\}\) and state predicates \(\mathcal{P}_{S}=\{\texttt{type},\texttt{closeby}\}\). To encode different reasons for a given game action, we explicitly define several action predicates for it (_e.g._ \(\texttt{right}^{(1)}\) and \(\texttt{right}^{(2)}\) for **right**). Using these predicates, we can compose action atoms, _e.g._ \(\texttt{right}^{(1)}(\texttt{agent})\), and state atoms, _e.g._ \(\texttt{type}(\texttt{obj1},\texttt{agent})\). Note that an action predicate can also be a state predicate, _e.g._ in multiplayer settings. Now, we define rules that describe actions in the action-state language.

**Definition 2**: _Let \(X_{A}\) be an action atom and \(X_{S}^{(1)},\ldots,X_{S}^{(n)}\) be state atoms. An action rule is a rule written as \(X_{A}\) :- \(X_{S}^{(1)},\ldots,X_{S}^{(n)}\)._

For example, for the action **right**, we define an action rule as:

\[\texttt{right}^{(1)}(\texttt{agent})\texttt{:-type}(\texttt{O1},\texttt{agent}),\texttt{type}(\texttt{O2},\texttt{key}),\neg\texttt{has\_key}(\texttt{O1}),\texttt{on\_right}(\texttt{O2},\texttt{O1}).\]

which can be interpreted as _"The agent should go right if the agent does not have the key and the key is located on the right of the agent."_
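To fix ideas, here is one possible Python encoding of atoms and action rules; it is a hypothetical illustration rather than NUDGE's actual implementation (available in the linked repository), and folding the negated literal into a dedicated predicate is our simplification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    pred: str      # e.g. "type", "closeby", "right_1"
    terms: tuple   # constants ("obj1", "agent") or variables ("O1",)

@dataclass(frozen=True)
class ActionRule:
    head: Atom     # exactly one action atom
    body: tuple    # state atoms, here all treated as positive literals

# The getout rule from the text:
# right_1(agent) :- type(O1,agent), type(O2,key), not has_key(O1), on_right(O2,O1).
rule = ActionRule(
    head=Atom("right_1", ("agent",)),
    body=(Atom("type", ("O1", "agent")),
          Atom("type", ("O2", "key")),
          Atom("not_has_key", ("O1",)),   # negation folded into the predicate
          Atom("on_right", ("O2", "O1"))),
)
```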
Having several action predicates for an actual (in-game) action allows us to define several action rules that describe different reasons for that action.

#### 3.1.2 Differentiable Logic Policies

We denote the set of actual actions by \(\mathcal{A}\), the set of action rules by \(\mathcal{C}\), and the set of all facts by \(\mathcal{G}=\mathcal{G}_{A}\cup\mathcal{G}_{S}\), where \(\mathcal{G}_{A}\) is the set of action atoms and \(\mathcal{G}_{S}\) is the set of state atoms. \(\mathcal{G}\) contains all of the facts produced by a given FOL language. We consider ordered sets, _i.e._ each element has an index. We also denote the sizes of the sets by \(A=|\mathcal{A}|\), \(C=|\mathcal{C}|\), \(G=|\mathcal{G}|\), and \(G_{A}=|\mathcal{G}_{A}|\). We propose _Differentiable Logic Policies_, which perform differentiable forward reasoning on action rules and produce a probability distribution over actions. The policy computation consists of _three_ components: (1) the relational perception module, (2) the differentiable forward-reasoning module, and (3) the action-extraction module. To this end, the policy \(\pi_{(\mathcal{C},\mathbf{W})}\), parameterized by a set of action rules \(\mathcal{C}\) and rule weights \(\mathbf{W}\), is computed as follows:

\[\pi_{(\mathcal{C},\mathbf{W})}(s_{t})=p(a_{t}|s_{t})=f^{act}\left(f_{(\mathcal{C},\mathbf{W})}^{\mathit{reason}}\left(f_{\Theta}^{\mathit{perceive}}(s_{t})\right)\right), \tag{1}\]

with \(f_{\Theta}^{\mathit{perceive}}\colon\mathbb{R}^{N}\rightarrow[0,1]^{G}\) a perception function that maps the raw input state \(s_{t}\in\mathbb{R}^{N}\) into a set of probabilistic atoms, \(f_{(\mathcal{C},\mathbf{W})}^{\mathit{reason}}\colon[0,1]^{G}\rightarrow[0,1]^{G_{A}}\) a differentiable forward-reasoning function parameterized by the set of rules \(\mathcal{C}\) and rule weights \(\mathbf{W}\), and \(f^{\mathit{act}}\colon[0,1]^{G_{A}}\rightarrow[0,1]^{A}\) an action-selection function, which computes the probability distribution over the action space.

Figure 2: **NUDGE-RL. Policy Reasoning (bottom):** NUDGE agents incorporate end-to-end _reasoning_ architectures from raw input based on differentiable forward reasoning. In the reasoning step, **(1)** the raw input state is converted into a logical representation, _i.e._ a set of atoms with probabilities. **(2)** Differentiable forward reasoning is performed using weighted action rules. **(3)** The final distribution over actions is computed using the results of differentiable reasoning. **Policy Learning (top):** Using the guidance of a pretrained neural policy, a set of candidate action rules is found by _neurally-guided symbolic abstraction_, which produces promising action rules. Then, randomly initialized weights are assigned to the action rules and are optimized using the critic of an actor-critic agent.

**Relational Perception.** NUDGE agents take an object-centric state representation as input, obtained _e.g._ using object detection (Redmon et al., 2016) or discovery (Lin et al., 2020; Delfosse et al., 2022) methods. These models return the detected objects and their attributes (_e.g._ class and positions). They are then converted into a probabilistic logic form together with their relations, _i.e._ a set of facts with their probabilities. An input state \(s_{t}\in\mathbb{R}^{N}\) is converted to a _valuation vector_ \(\mathbf{v}\in[0,1]^{G}\), which maps each fact to a probabilistic value.
For example, let \(\mathcal{G}=\{\mathtt{type}(\mathtt{obj1},\mathtt{agent}),\mathtt{type}(\mathtt{obj2},\mathtt{enemy}),\mathtt{closeby}(\mathtt{obj1},\mathtt{obj2}),\mathtt{jump}(\mathtt{agent})\}\). A valuation vector \([0.8,0.6,0.3,0.0]^{\top}\) maps each fact to a corresponding probabilistic value. NUDGE performs differentiable forward reasoning by updating the initial valuation vector \(\mathbf{v}^{(0)}\) for \(T\) steps to \(\mathbf{v}^{(T)}\). The initial valuation vector \(\mathbf{v}^{(0)}\) is computed as follows. For each ground state atom \(\mathtt{p}(\mathtt{t}_{1},\ldots,\mathtt{t}_{n})\in\mathcal{G}_{S}\), _e.g._ \(\mathtt{closeby}(\mathtt{obj1},\mathtt{obj2})\), a differentiable function is called to compute its probability; it maps each term \(\mathtt{t}_{1},\ldots,\mathtt{t}_{n}\) to a vector representation according to the interpretation, _e.g._ \(\mathtt{obj1}\) and \(\mathtt{obj2}\) are mapped to their positions, and then performs binary classification using the distance between them. For action atoms, zero is assigned as the initial probability (_e.g._ for \(\mathtt{jump}^{(1)}(\mathtt{agent})\)).

**Differentiable Forward Reasoning.** Given a set of candidate action rules \(\mathcal{C}\), we create the reasoning function \(f_{(\mathcal{C},\mathbf{W})}^{\mathit{reason}}\colon[0,1]^{G}\rightarrow[0,1]^{G_{A}}\), which takes the initial valuation vector and induces action atoms using weighted action rules. We assign weights to the action rules of \(\mathcal{C}\) as follows: we fix the size \(M\) of the target program and select \(M\) rules out of the \(C\) candidate action rules. To do so, we introduce \(C\)-dimensional weights \(\mathbf{W}=[\mathbf{w}_{1},\ldots,\mathbf{w}_{M}]\) where \(\mathbf{w}_{i}\in\mathbb{R}^{C}\) (_cf._ Fig. 6 in the appendix). We take the _softmax_ of each weight vector \(\mathbf{w}_{i}\in\mathbf{W}\) to select \(M\) action rules in a differentiable manner. We perform \(T\)-step forward reasoning using action rules \(\mathcal{C}\) with weights \(\mathbf{W}\). We compose the differentiable forward-reasoning function following Shindo et al. (2023); it computes soft logical entailment based on efficient tensor operations. Our differentiable forward-reasoning module computes the new valuation \(\mathbf{v}^{(T)}\), including all induced atoms, given the weighted action rules \((\mathcal{C},\mathbf{W})\) and the initial valuation \(\mathbf{v}^{(0)}\). Finally, we compute the valuations of the action atoms \(\mathbf{v}_{A}\in[0,1]^{G_{A}}\) by extracting the relevant values from \(\mathbf{v}^{(T)}\). We provide details in App. E.

**Compute Action Probability.** Given the valuations of the action atoms \(\mathbf{v}_{A}\), we compute the action distribution over actual actions. Let \(\mathbf{a}_{i}\in\mathcal{A}\) be an actual action, and \(v^{\prime}_{1},\ldots,v^{\prime}_{n}\in\mathbf{v}_{A}\) be the valuations relevant to \(\mathbf{a}_{i}\) (_e.g._ the valuations of \(\mathtt{right}^{(1)}(\mathtt{agent})\) and \(\mathtt{right}^{(2)}(\mathtt{agent})\) for the action **right**). We assign a score to each action \(\mathbf{a}_{i}\) based on the _log-sum-exp_ approach of Cuturi and Blondel (2017): \(\mathit{val}(\mathbf{a}_{i})=\gamma\log\sum_{1\leq j\leq n}\exp(v^{\prime}_{j}/\gamma)\), which smoothly approximates the maximum value of \(\{v^{\prime}_{1},\ldots,v^{\prime}_{n}\}\); \(\gamma>0\) is a smoothing parameter. The action distribution is then obtained by taking the _softmax_ over the scores of all actions.
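The aggregation from action-atom valuations to an action distribution can be written in a few lines of PyTorch; the index groups, the value of \(\gamma\) and the function name below are illustrative assumptions, not NUDGE's actual code.

```python
import torch

def action_distribution(v_A, action_groups, gamma=0.01):
    """Smooth-max aggregation of action-atom valuations, then softmax (sketch).

    v_A : (G_A,) tensor of action-atom valuations in [0, 1];
    action_groups : one list of atom indices per actual action, e.g. the
    indices of right_1(agent) and right_2(agent) for the action 'right'.
    val(a_i) = gamma * logsumexp(v'_j / gamma) smoothly approximates max_j v'_j.
    """
    vals = torch.stack([gamma * torch.logsumexp(v_A[idx] / gamma, dim=0)
                        for idx in action_groups])
    return torch.softmax(vals, dim=0)

# toy example: v_A holds the valuations [right_1, right_2, jump_1, left_1]
v_A = torch.tensor([0.9, 0.2, 0.1, 0.05])
print(action_distribution(v_A, [[0, 1], [2], [3]]))  # p(right), p(jump), p(left)
```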
### Policy Learning

So far, we have assumed that the candidate rules for the policy are given, requiring human experts to handcraft potential rules. To avoid this, template-based rule generation (Evans and Grefenstette, 2018; Jiang and Luo, 2019) can be applied, but the number of generated rules increases exponentially with the number of entities and their potential relations. This technique is thus difficult to apply to complex environments where agents need to reason about many different relations between entities. To mitigate this problem, we propose an efficient learning algorithm for NUDGE that consists of _two_ steps: neurally-guided symbolic abstraction and gradient-based optimization. First, NUDGE obtains a symbolic abstract representation of a given neural policy: we select a set of candidate rules for the policy by neurally-guided top-\(k\) search, _i.e._ we generate a set of promising rules using neural policies as oracles to evaluate each rule. Then we assign randomized weights to the generated rules and perform differentiable reasoning. We optimize the rule weights based on actor-critic methods to maximize the return. We now describe each step in detail.

#### 3.2.1 Neurally Guided Symbolic Abstraction

Given a well-performing neural policy \(\pi_{\theta}\), promising action rules for an RL task entail the same actions as the neural policy. We generate such rules by performing top-\(k\) search-based abstraction, which uses the neural policy to evaluate rules efficiently. The inputs are the initial rules \(\mathcal{C}_{0}\) and the neural policy \(\pi_{\theta}\). We start with elementary action rules and refine them to generate better action rules. \(\mathcal{C}_{to\_open}\) is the set of rules to be refined, initialized as \(\mathcal{C}_{0}\). For each rule \(C_{i}\in\mathcal{C}_{to\_open}\), we generate new rules by refinement as follows. Let \(C_{i}=X_{A}\) :- \(X_{S}^{(1)},\ldots,X_{S}^{(n)}\) be an already selected general action rule. Using a randomly picked ground or non-ground state atom \(Y_{S}\) (with \(Y_{S}\neq X_{S}^{(i)}\) for all \(i\in[1,\ldots,n]\)), we refine the selected rule by adding the new state atom to its body, obtaining: \(X_{A}\) :- \(X_{S}^{(1)},\ldots,X_{S}^{(n)},Y_{S}\). We evaluate each newly generated rule to select the promising ones. We use the neural policy \(\pi_{\theta}\) as a guide for rule evaluation, _i.e._ rules that entail the same actions as the neural policy \(\pi_{\theta}\) are promising action rules. Let \(\mathcal{X}\) be a set of states. Then we evaluate rule \(R\) as

\[\mathit{eval}(R,\pi_{\theta})=\frac{1}{N(R,\mathcal{X})}\sum_{s\in\mathcal{X}}\pi_{\theta}(s)^{\top}\cdot\pi_{(\mathcal{R},\mathbf{1})}(s), \tag{2}\]

where \(N(R,\mathcal{X})\) is a normalization term, \(\pi_{(\mathcal{R},\mathbf{1})}\) is the differentiable logic policy with rules \(\mathcal{R}=\{R\}\) and rule weights \(\mathbf{1}\), a \(1\times 1\) identity matrix (for consistent notation), and \(\cdot\) is the dot product. Intuitively, \(\pi_{(\mathcal{R},\mathbf{1})}\) is the logic policy that has \(R\) as its only action rule. If \(\pi_{(\mathcal{R},\mathbf{1})}\) produces an action distribution similar to that produced by the neural policy \(\pi_{\theta}\), we regard rule \(R\) as promising. The similarity score is computed as the dot product between the two action distributions. We compute the similarity scores for each state \(s\in\mathcal{X}\) and sum them up to obtain the score for \(R\).
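In code, the scoring of Eq. (2) amounts to summing, over a batch of states, the agreement between the teacher's action distribution and that of a one-rule logic policy, then dividing by the normalization term discussed next. A minimal sketch follows; `logic_policy_of` and `normalizer` are hypothetical helpers standing in for NUDGE's reasoning module and for the term \(N(R,\mathcal{X})\).

```python
import torch

def eval_rule(rule, neural_policy, logic_policy_of, normalizer, states):
    """Score one candidate action rule against a pretrained neural policy, Eq. (2).

    neural_policy(s)      -> (A,) teacher action distribution for state s;
    logic_policy_of(R)    -> the one-rule logic policy pi_({R}, 1), also s -> (A,);
    normalizer(R, states) -> N(R, X), how often R's body atoms are activated.
    """
    pi_R = logic_policy_of(rule)
    agreement = sum(torch.dot(neural_policy(s), pi_R(s)) for s in states)
    return agreement / normalizer(rule, states)

def top_k(rules, k, **kw):
    # keep the k best-scoring refinements for the next refinement round
    return sorted(rules, key=lambda R: eval_rule(R, **kw), reverse=True)[:k]
```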
The normalization term helps NUDGE avoid scoring overly simple rules as promising. To compute the normalization term, we consider groundings of rule \(R\), _i.e._, we remove the variables of the rule by substituting constants. We consider all possible groundings for each rule. Let \(\mathcal{T}\) be the set of all substitutions for the variables that ground rule \(R\). For each \(\tau\in\mathcal{T}\), we get a ground rule \(R\tau=X_{A}\tau\gets X_{S}^{(1)}\tau,\ldots,X_{S}^{(n)}\tau\), where \(X\tau\) represents the result of applying substitution \(\tau\) to atom \(X\). Let \(\mathcal{J}=\{j_{1},\ldots,j_{n}\}\) be the indices of the ground atoms \(X_{S}^{(1)}\tau,\ldots,X_{S}^{(n)}\tau\) in the ordered set of ground atoms \(\mathcal{G}\). Then, the normalization term is computed as: \[N(R,\mathcal{X})=\sum_{\tau\in\mathcal{T}}\sum_{s\in\mathcal{X}}\prod_{j\in\mathcal{J}}\mathbf{v}_{s}^{(0)}[j], \tag{3}\] where \(\mathbf{v}_{s}^{(0)}\) is the initial valuation vector for state \(s\), _i.e._, \(f_{\Theta}^{perceive}(s)\). Eq. 3 quantifies how often the body atoms of the ground rule \(R\tau\) are activated on the given set of states \(\mathcal{X}\). Simple rules with fewer atoms in their body tend to have large values, and thus their evaluation scores in Eq. 2 tend to be small. After scoring all of the new rules, NUDGE selects the top-\(k\) rules and refines them in the next step. Finally, all of the top-\(k\) rules from each step are returned together as the candidate rule set \(\mathcal{C}\) for the policy (_cf._ App. A for more detail about our algorithm). NUDGE has thus produced candidate action rules \(\mathcal{C}\), which are associated with \(\mathbf{W}\) to form the untrained policy \(\pi_{(\mathcal{C},\mathbf{W})}\) described in Sec. 3.1.2. #### 3.2.2 Learning Rule Weights using actor-critic In the following, we consider a pretrained actor-critic agent, with \(v_{\boldsymbol{\phi}}\) its differentiable state-value function (the critic) parameterized by \(\boldsymbol{\phi}\). Given a set of action rules \(\mathcal{C}\), let \(\pi_{(\mathcal{C},\mathbf{W})}\) be a differentiable logic policy. NUDGE learns the weights of the action rules in the following steps. For each non-terminal state \(s_{t}\) of each episode, we store the actions sampled from the policy (\(a_{t}\sim\pi_{(\mathcal{C},\mathbf{W})}(s_{t})\)) and the next states \(s_{t+1}\). We update the value function and the policy as follows: \[\delta =r+\gamma v_{\boldsymbol{\phi}}(s_{t+1})-v_{\boldsymbol{\phi}}(s_{t}) \tag{4}\] \[\boldsymbol{\phi} =\boldsymbol{\phi}+\delta\nabla_{\boldsymbol{\phi}}v_{\boldsymbol{\phi}}(s_{t}) \tag{5}\] \[\mathbf{W} =\mathbf{W}+\delta\nabla_{\mathbf{W}}\ln\pi_{(\mathcal{C},\mathbf{W})}(s_{t}). \tag{6}\] The logic policy \(\pi_{(\mathcal{C},\mathbf{W})}\) thus learns to maximize the expected return, bootstrapped by the use of a pretrained neural critic. Moreover, to ease interpretability, NUDGE can prune unused action rules (_i.e._, those with low weights) by performing top-\(k\) selection on the optimized rule weights after learning.
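A minimal PyTorch-style sketch of one update (Eqs. 4–6) is given below. We write \(\ln\pi(a_{t}\,|\,s_{t})\) for the term \(\ln\pi_{(\mathcal{C},\mathbf{W})}(s_{t})\) of Eq. 6 and add explicit step sizes, which the equations above leave implicit; all names are ours for illustration and not NUDGE's actual API.

```python
import torch

def actor_critic_step(W, phi_params, value_fn, logic_policy,
                      s_t, a_t, r, s_next, gamma=0.99,
                      lr_w=1e-2, lr_phi=1e-2, done=False):
    # value_fn(s): scalar state value built from phi_params (the critic);
    # logic_policy(W, s): action distribution of the differentiable logic policy.
    with torch.no_grad():
        target = r if done else r + gamma * value_fn(s_next)
        delta = target - value_fn(s_t)            # Eq. 4: TD error

    v = value_fn(s_t)                             # Eq. 5: critic update
    v.backward()
    with torch.no_grad():
        for p in phi_params:
            p += lr_phi * delta * p.grad
            p.grad = None

    logp = torch.log(logic_policy(W, s_t)[a_t])   # Eq. 6: rule-weight update
    logp.backward()
    with torch.no_grad():
        W += lr_w * delta * W.grad
        W.grad = None
```

After training, pruning as described above simply amounts to keeping the entries of \(\mathbf{W}\) with the largest optimized weights.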
## 4 Experimental Evaluation We compare the performance of neural agents to that of NUDGE, and show that NUDGE produces _interpretable_ policies that can also report the importance of each input on their decisions, _i.e._, that are _explainable_. We use DQN agents (on Atari environments) and PPO actor-critic agents (on the logic ones) as neural baselines for comparison, and PPO agents as the pretrained agents that guide the symbolic abstraction. In our experimental evaluation, all agent types receive object-centric descriptions of the environments. For clarity, we annotate action predicates in action rules with specific names on purpose, _e.g._, right_key instead of \(\mathtt{right}^{(1)}\) when the rule describes an action **right** motivated by obtaining the key. We intend to compare agents with object-centric information bottlenecks. We thus had to extract object-centric states of the Atari environments (using information from the RAM). As Atari games do not embed logic challenges, but are rather designed to test the reflexes of human players, we also created \(3\) logic-oriented environments. To do so, we modified environments from Procgen [12] to have object-centric representations and fewer objects; they are open-sourced along with our evaluation. Our environments are easily hackable. We also provide variations of these environments to evaluate the ease of adaptation of every agent type. In **GetOut**, the goal is to obtain a key and then go to a door, while avoiding a moving enemy. **GetOut+** is a more complex variation with a larger world containing \(5\) enemies (among which \(2\) are static). In **3Fishes**, the agent controls a fish and is confronted with \(2\) other fishes, one smaller (that the agent needs to "eat", _i.e._, go to) and one bigger, that the agent needs to dodge. A variation is **3Fishes-C**, where the agent can eat green fishes and must dodge red ones. Finally, in **Loot**, the agent can open \(1\) or \(2\) chests with their corresponding (_i.e._, same-color) keys. In **Loot-C**, the chests have different colors. Further details and hyperparameters are provided in App. D. We aim to answer the following research questions: **Q1.** How does NUDGE compare with neural and logic baselines? **Q2.** Can NUDGE agents easily adapt to environmental changes? **Q3.** Are NUDGE agents interpretable and explainable? **NUDGE competes with PPO agents (Q1)**. We compare NUDGE with different baselines regarding their scores (or returns). First, we present the scores obtained by trained DQN, Random and NUDGE agents (with expert supervision) on \(2\) Atari games (_cf._ Tab. 1). Our results show that NUDGE obtains better (Asterix) or similar (Freeway) scores compared to DQN. However, as noted, Atari games are not logically challenging. We thus evaluate NUDGE on \(3\) logic environments. Fig. 3 shows the returns in GetOut, 3Fishes, and Loot, with descriptions of each baseline in the caption. NUDGE obtains better performance than the neural baselines (Neural PPO) on 3Fishes, is more stable on GetOut, _i.e._, has less variance, and achieves faster convergence on Loot. This shows that NUDGE successfully distills logic-based policies that compete with neural baselines in different complex environments. We also evaluate a baseline without symbolic abstraction, where candidate rules are generated without guidance from neural policies, _i.e._, all of the generated rules are accepted in the rule-refinement steps. This setting corresponds to the template-based approach [10], but we train the agents by the actor-critic method, while vanilla policy gradient is employed in [10]. For the no-abstraction baseline and NUDGE, we provide initial action rules with basic type information, _e.g._, \(\mathtt{jump}^{(1)}(\mathtt{agent})\):-\(\mathtt{type}(\mathtt{01},\mathtt{agent}),\mathtt{type}(\mathtt{02},\mathtt{enemy})\), for each action rule.
For this baseline, we generate \(5\) rules for GetOut, \(30\) rules for 3Fishes, and \(40\) rules for Loot in total to define all of the actual actions. NUDGE agents with small \(k\) tend to have fewer rules, _e.g._, \(5\) rules in GetOut, \(6\) rules in 3Fishes, and \(8\) rules in Loot for NUDGE (top-\(1\) rf.). In Fig. 3, the no-abstraction baselines perform worse than neural PPO and NUDGE in each environment, even though they have many more rules in 3Fishes and Loot. We thus show that NUDGE composes efficient logic-based policies using neurally-guided symbolic abstraction. Figure 3: **NUDGE outperforms neural and logic baselines.** Returns (avg.\(\pm\)std.) obtained by NUDGE, neural PPO and logic-based agents without abstraction over the course of training. **NUDGE (Top-\(k\) rf.)**, with \(k\in\{1,3,10\}\), uses neurally-guided symbolic abstraction repeatedly until it obtains \(k\) rules for each action predicate. **NUDGE (with E.S.)** uses a rule set \(\mathcal{C}\) supervised by an expert. **Neural Logic RL** composes logic-based policies by generating all possible rules without neurally-guided symbolic abstraction [10]. Random and human baselines are also provided. In App. B.1, we visualize the transition of the distribution of the rule weights in the GetOut environment. **NUDGE agents adapt to environmental changes (Q2).** We used the agents trained on the basic environments for this experimental evaluation, with no retraining or finetuning. For 3Fishes-C, we simply exchange the atom is_bigger with the atom same_color. This easy modification is not applicable to the black-box networks of neural PPO agents. For GetOut+ and Loot-C, we do not apply any modification to the agents. Our results are summarized in Tab. 1 (right). Note that on 3Fishes-C the agent obtains better performance than in 3Fishes, as it is easier to dodge a (small) red fish than a big one. For GetOut+, NUDGE's performance decreases, as avoiding \(5\) enemies drastically increases the difficulty of the game. On Loot-C, the performance is similar to that obtained in the original game. Our experiments show that NUDGE logic agents can easily adapt to environmental changes. **NUDGE agents are interpretable _and_ explainable (Q3).** We show that NUDGE agents are interpretable and explainable by showing that (1) NUDGE produces an interpretable policy as a set of weighted rules, and (2) NUDGE can show the importance of each atom, explaining its action choices. The efficient neurally-guided learning of NUDGE enables the system to learn rules without inventing predicates with no specific interpretation, which are unavoidable in template-based approaches (Evans and Grefenstette, 2018; Jiang and Luo, 2019). Thus, the policy can easily be read out by extracting the action rules with high weights. Fig. 4 shows some action rules discovered by NUDGE in GetOut. The first rule says: _"The agent should jump when the enemy is close to the agent (to avoid the enemy)."_ The produced NUDGE policy is _interpretable_: a set of weighted rules over interpretable predicates. For each state, we can also look at the valuation of each atom and the selected rule. Moreover, since NUDGE realizes differentiable logic-based policies, we can compute _attribution values_ over logical representations using their gradients (a minimal sketch is given at the end of this section). We compute the action gradients w.r.t. the input atoms, _i.e._, \(\partial\mathbf{v}_{A}/\partial\mathbf{v}^{(0)}\), as shown in Fig. 5; these represent the relevance scores of the probabilistic input atoms \(\mathbf{v}^{(0)}\) for the actions given a specific state.
The explanation is computed on the state shown in Fig. 1, where the agent takes **right** as its action. Important atoms receive large gradients, _e.g._, \(\neg\)have_key(agent) and on_right(obj2,obj1). By extracting relevant atoms with large gradients, NUDGE can produce clear explanations for the action selection. For example, by extracting the atoms wrapped in orange in Fig. 5, NUDGE can explain the motivation: _"The agent decides to go right because it does not have the key and the key is located on the right side of it."_ NUDGE is _interpretable_ and _explainable_: each action predicate is defined by interpretable rules, and explanations for the action selections can be produced. \begin{table} \begin{tabular}{l|c c c} Score (\(\uparrow\)) & Random & DQN & NUDGE \\ \hline Asterix & 235 \(\pm 134\) & 124.5 & **6259**\(\pm 1150\) \\ Freeway & 0.0 \(\pm 0\) & **25.8** & 21.4 \(\pm 0.8\) \\ \end{tabular} \begin{tabular}{l|c c c} Score (\(\uparrow\)) & Random & Neural PPO & NUDGE \\ \hline 3Fishes-C & -0.64 \(\pm 0.17\) & -0.37 \(\pm 0.10\) & **3.26**\(\pm 0.20\) \\ GetOut+ & -22.5 \(\pm 0.41\) & -20.88 \(\pm 0.57\) & **3.60**\(\pm 2.93\) \\ Loot-C & 0.56 \(\pm 0.29\) & 0.83 \(\pm 0.49\) & **5.63**\(\pm 0.33\) \\ \end{tabular} \end{table} Table 1: **Left: NUDGE agents can learn successful policies.** Scores (avg. \(\pm\) std.) of trained NUDGE agents (with expert supervision) on \(2\) ALE games. Random and DQN (from van Hasselt et al. [2016]) are also provided. **Right: NUDGE agents adapt to environmental changes.** Returns obtained by NUDGE, neural PPO and random agents on our \(3\) modified environments. Figure 4: **NUDGE produces an interpretable policy as a set of weighted rules.** A subset of the weighted action rules discovered by NUDGE in the GetOut environment. Full policies for every logic environment are provided in App. B.3. Figure 5: **Explanation using inputs’ gradients.** The action gradients w.r.t. the input atoms, _i.e._, \(\partial\mathbf{v}_{A}/\partial\mathbf{v}^{(0)}\), on the state shown in Fig. 1. **right** was selected, due to the highlighted relevant atoms (with large gradients).
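As a minimal sketch of how such attribution maps can be obtained, the snippet below differentiates one action atom's valuation with respect to the input atoms; `reasoner` stands in for the \(T\)-step forward-reasoning function, and all names are illustrative rather than NUDGE's actual API.

```python
import torch

def atom_attributions(reasoner, v0, action_index):
    # One row of the Jacobian d v_A / d v^(0): the gradient of a single
    # action atom's valuation with respect to every input atom's valuation.
    v0 = v0.clone().detach().requires_grad_(True)
    v_a = reasoner(v0)
    v_a[action_index].backward()
    return v0.grad

# Toy reasoner: one action atom computed as a soft conjunction (product)
# of input atoms 0 and 1 (illustration only).
toy_reasoner = lambda v0: torch.stack([v0[0] * v0[1]])
print(atom_attributions(toy_reasoner, torch.tensor([0.8, 0.6, 0.3]), 0))
```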
## 5 Related Work Relational RL (Dzeroski et al., 2001; Kersting et al., 2004; Kersting and Driessens, 2008; Lang et al., 2012) has been developed to tackle RL tasks in relational domains by incorporating logical representations into RL frameworks. These approaches are based on probabilistic reasoning, whereas NUDGE uses differentiable logic programming. Neural Logic Reinforcement Learning (NLRL) (Jiang and Luo, 2019) is the first framework that integrates Differentiable Inductive Logic Programming (\(\partial\)ILP) (Evans and Grefenstette, 2018) into the RL domain. \(\partial\)ILP learns generalized logic rules from examples by gradient-based optimization. NLRL adopts \(\partial\)ILP as a policy function. We extend this approach by proposing neurally-guided symbolic abstraction, embracing an extension of \(\partial\)ILP (Shindo et al., 2021) for complex programs, which allows agents to learn interpretable action rules efficiently in complex environments. GALOIS (Cao et al., 2022) is a framework that represents policies as logic programs using the _sketch_ setting (Solar-Lezama, 2008), where programs are learned to fill blanks, whereas NUDGE performs structure learning from scratch using policy gradients. KoGun (Zhang et al., 2020) integrates human knowledge as a prior for RL agents. NUDGE learns a policy as a set of weighted rules and can thus also integrate human knowledge. Neuro-Symbolic RL (NeSyRL) (Kimura et al., 2021) uses Logical Neural Networks (LNNs) (Riegel et al., 2020) for the policy computation. LNNs parameterize the soft logical operators, while NUDGE parameterizes rules with their weights. Deep Relational RL approaches (Zambaldi et al., 2018) achieve relational reasoning within a neural network, but NUDGE explicitly encodes relations in logic. Many languages for planning and RL tasks have been developed (Fikes and Nilsson, 1971; Fox and Long, 2003). Our approach is inspired by _situation calculus_ (Reiter, 2001), which is an established framework for describing states and actions in logic. Symbolic programs within RL have been investigated, _e.g._, program-guided agents (Sun et al., 2020), program synthesis (Zhu et al., 2019), PIRL (Verma et al., 2018), SDRL (Lyu et al., 2019), interpretable model-based hierarchical RL (Xu and Fekri, 2021), deep symbolic policies (Landajuela et al., 2021), and DiffSES (Zheng et al., 2021). These approaches use domain-specific languages or propositional logic, and address either interpretability or explainability of RL. In Tab. 2, we compare NUDGE with the most relevant approaches, namely those that share at least \(2\) of the following aspects: supporting first-order logic, neural guidance, interpretability, and explainability. NUDGE is the first to use neural guidance on differentiable first-order logic and to address both _interpretability_ and _explainability_. ## 6 Conclusion We proposed NUDGE, an interpretable and explainable policy reasoning and learning framework for reinforcement learning. NUDGE uses differentiable forward reasoning to obtain a set of interpretable weighted rules as its policy. NUDGE performs neurally-guided symbolic abstraction, which efficiently distills symbolic representations from neural-based policies, and performs gradient-based policy optimization using actor-critic methods. We empirically demonstrated that NUDGE (1) can compete with neural-based policies, (2) uses logical representations to produce both interpretable and explainable policies, and (3) can automatically adapt or easily be changed to tackle environmental changes. **Societal impact.** As NUDGE can explain the importance given to the inputs in its decisions, and as its rules are interpretable, it can help understand the decisions of RL agents trained in sensitive, complicated domains, as well as discover biases and misalignments of a potentially discriminative nature. **Limitation and Future Work.** NUDGE is only complete if provided with a sufficiently expressive language (in terms of predicates and entities) to approximate neural policies. For future work, NUDGE could automatically grow to (i) discover predicates, using _predicate invention_ (Muggleton et al., 2015), and (ii) augment the number of accessible entities to reason on. Explainable interactive learning (Teso and Kersting, 2019) in RL can be tackled with NUDGE, since NUDGE can produce explanations using logical representations. Causal RL (Madumal et al., 2020) and meta learning (Mishra et al., 2018) also constitute interesting future avenues for NUDGE’s development. \begin{table} \begin{tabular}{l c c c c} & FOL & N.G. & Int. & Exp. 
\\ \hline NLRL & ✓ & ✗ & ✓ & ✗ \\ NeSyRL & ✓ & ✗ & ✓ & ✗ \\ DiffSES & ✗ & ✓ & ✓ & ✗ \\ NUDGE & ✓ & ✓ & ✓ & ✓ \\ \hline \end{tabular} \end{table} Table 2: **Logic-based RL methods comparison**: First Order Logic (FOL), neurally-guided search (N.G.), interpretability (Int.), and explainability (Exp.).
2301.12384
Persistent Shadowing For Actions Of Some Finitely Generated Groups and Related Measures
In this paper, $\varphi:G\times X\to X$ is a continuous action of a finitely generated group $G$ on a compact metric space $(X, d)$ without isolated points. We introduce the notion of the persistent shadowing property for $\varphi:G\times X\to X$ and study it via measure theory. Indeed, we introduce the notion of compatibility of a Borel probability measure $\mu$ with the persistent shadowing property of $\varphi:G\times X\to X$, and denote it by $\mu\in\mathcal{M}_{PSh}(X, \varphi)$. We show that $\mu\in\mathcal{M}_{PSh}(X, \varphi)$ if and only if $supp(\mu)\subseteq PSh(\varphi)$, where $PSh(\varphi)$ is the set of all persistent shadowable points of $\varphi$. This implies that if every non-atomic Borel probability measure $\mu$ is compatible with the persistent shadowing property for $\varphi:G\times X\to X$, then $\varphi$ has the persistent shadowing property. We prove that $\overline{PSh(\varphi)}=PSh(\varphi)$ if and only if $\overline{\mathcal{M}_{PSh}(X, \varphi)}= \mathcal{M}_{PSh}(X, \varphi)$. Also, $\mu(\overline{PSh(\varphi)})=1$ if and only if $\mu\in\overline{\mathcal{M}_{PSh}(X, \varphi)}$. Finally, we show that $\overline{\mathcal{M}_{PSh}(X, \varphi)}=\mathcal{M}(X)$ if and only if $\overline{PSh(\varphi)}=X$. To study the persistent shadowing property, we introduce the notions of uniformly $\alpha$-persistent points and uniformly $\beta$-persistent points, recall the notions of the shadowing property, $\alpha$-persistence and $\beta$-persistence, and give some further results about them.
Ali Barzanouni
2023-01-29T07:41:16Z
http://arxiv.org/abs/2301.12384v1
# Persistent shadowing for actions of some finitely generated groups and related measures ###### Abstract. In this paper, \(\varphi:G\times X\to X\) is a continuous action of a finitely generated group \(G\) on a compact metric space \((X,d)\) without isolated points. We introduce the notion of the persistent shadowing property for \(\varphi:G\times X\to X\) and study it via measure theory. Indeed, we introduce the notion of compatibility of a Borel probability measure \(\mu\) with the persistent shadowing property of \(\varphi:G\times X\to X\), and denote it by \(\mu\in\mathcal{M}_{PSh}(X,\varphi)\). We show that \(\mu\in\mathcal{M}_{PSh}(X,\varphi)\) if and only if \(supp(\mu)\subseteq PSh(\varphi)\), where \(PSh(\varphi)\) is the set of all persistent shadowable points of \(\varphi\). This implies that if every non-atomic Borel probability measure \(\mu\) is compatible with the persistent shadowing property for \(\varphi:G\times X\to X\), then \(\varphi\) has the persistent shadowing property. We prove that \(\overline{PSh(\varphi)}=PSh(\varphi)\) if and only if \(\overline{\mathcal{M}_{PSh}(X,\varphi)}=\mathcal{M}_{PSh}(X,\varphi)\). Also, \(\mu(\overline{PSh(\varphi)})=1\) if and only if \(\mu\in\overline{\mathcal{M}_{PSh}(X,\varphi)}\). Finally, we show that \(\overline{\mathcal{M}_{PSh}(X,\varphi)}=\mathcal{M}(X)\) if and only if \(\overline{PSh(\varphi)}=X\). To study the persistent shadowing property, we introduce the notions of uniformly \(\alpha\)-persistent points and uniformly \(\beta\)-persistent points, recall the notions of the shadowing property, \(\alpha\)-persistence and \(\beta\)-persistence, and give some further results about them. Key words and phrases: Shadowing, Persistent, Borel Measure 2010 Mathematics Subject Classification: Primary: 37C85; Secondary: 37B25, 37B05 ## 1. Introduction A continuous action is a triple \((X,G,\varphi)\) where \(X\) is a compact metric space with a metric \(d\) and \(G\) is a finitely generated group with the discrete topology which acts on \(X\), such that the action \(\varphi\) is continuous. We denote a continuous action \((X,G,\varphi)\) by \(\varphi:G\times X\to X\); we also denote by \(Act(G;X)\) the set of all continuous actions \(\varphi\) of \(G\) on \(X\). Let \(S\) be a finite generating set of \(G\). We consider a metric \(d_{S}\) on \(Act(G;X)\) by \[d_{S}(\varphi,\psi)=\sup\{d(\varphi(s,x),\psi(s,x)):x\in X,s\in S\}\] for \(\varphi,\psi\in Act(G;X)\). A map \(f:G\to X\) is called a \(\delta\)-pseudo orbit for a continuous action \(\varphi:G\times X\to X\) (with respect to \(S\)) if \(d(f(sg),\varphi(s,f(g)))<\delta\) for all \(s\in S\) and all \(g\in G\). A \(\delta\)-pseudo orbit \(f:G\to X\) (with respect to \(S\)) is \(\epsilon\)-shadowed by the \(\varphi\)-orbit of \(x\in X\) if \(d(f(g),\varphi(g,x))<\epsilon\) for all \(g\in G\). A continuous action \(\varphi:G\times X\to X\) has the shadowing property (with respect to \(S\)) if for every \(\epsilon>0\) there is a \(\delta>0\) such that every \(\delta\)-pseudo orbit \(f:G\to X\) for \(\varphi\) can be \(\epsilon\)-shadowed by the \(\varphi\)-orbit of a point \(p\in X\); this means that \(d(f(g),\varphi(g,p))<\epsilon\) for all \(g\in G\). The notion of the shadowing property for actions of finitely generated groups was introduced by Osipov and Tikhomirov in [10]. They showed that the shadowing property for actions of finitely generated groups depends on both the hyperbolic properties of the actions of its elements and the group structure. 
For example, if \(G\) is a finitely generated nilpotent group and the action of one element of \(G\) is hyperbolic, then the group action has the shadowing property, while this cannot be directly generalized to the case of solvable groups. The notion of topological stability for an action of a finitely generated group on a compact metric space was introduced by Chung and Lee in [4], and they gave a group action version of Walters' stability theorem. Indeed, a continuous action \(\varphi:G\times X\to X\) is topologically stable (with respect to \(S\)) if for every \(\epsilon>0\) there is \(\delta>0\) such that for every continuous action \(\psi:G\times X\to X\) with \(d_{S}(\varphi,\psi)<\delta\), there is a continuous map \(f:X\to X\) such that \(d_{C^{0}}(f,id)<\epsilon\) and \(\varphi_{g}f=f\psi_{g}\) for all \(g\in G\). Moreover, \(\varphi\) is called \(s\)-topologically stable when there exists a surjective continuous map \(f:X\to X\) that satisfies the mentioned properties. If \(\varphi:G\times X\to X\) is topologically stable, then for every \(\epsilon>0\) there is \(\delta>0\) such that for every \(x\in X\) and every continuous action \(\psi:G\times X\to X\) with \(d_{S}(\varphi,\psi)<\delta\), we have \(d(\varphi(g,f(x)),\psi(g,x))<\epsilon\) for all \(g\in G\). An action with this property is said to be \(\alpha\)-persistent. When \(\varphi\) is \(s\)-topologically stable, for every \(\epsilon>0\) there is \(\delta>0\) such that for every \(x\in X\) and every continuous action \(\psi:G\times X\to X\) with \(d_{S}(\varphi,\psi)<\delta\), we can say that if \(f(y)=x\), then \(d(\varphi(g,x),\psi(g,y))<\epsilon\) for all \(g\in G\). In this case, \(\varphi\) is called \(\beta\)-persistent. In other words, a dynamical system is \(\beta\)-persistent if its trajectories can be seen in every small perturbation of it. Although \(s\)-topological stability implies \(\beta\)-persistence, topological stability does not imply \(\beta\)-persistence. For example, Sakai and Kobayashi [13] observed that the full shift on two symbols is not \(\beta\)-persistent while it is topologically stable. Recently, the authors of [6] introduced a new tracing property for a homeomorphism \(f:X\to X\), referred to as the persistent shadowing property, and proved that a homeomorphism has the persistent shadowing property if and only if it has the shadowing property and is \(\beta\)-persistent. This implies that a homeomorphism has the persistent shadowing property if and only if it is pointwise persistent shadowable. In this paper, we extend the notion of the persistent shadowing property to a continuous action \(\varphi:G\times X\to X\) of a finitely generated group \(G\) on a metric space \((X,d)\). The persistent shadowing property is stronger than the shadowing property and \(\beta\)-persistence, but for equicontinuous actions, the shadowing and persistent shadowing properties are equivalent. This implies that every equicontinuous action on the Cantor space \(X\) has the persistent shadowing property. The notion of the persistent shadowing property does not depend on the choice of a symmetric finitely generating set, and it does not depend on the choice of the metric on \(X\) if \(X\) is a compact metric space. But Example 2.2 shows that compactness is essential. Assume that \(H\) is a subgroup of \(G\). It may happen that \(\varphi:H\times X\to X\) has the persistent shadowing property while \(\varphi:G\times X\to X\) does not have it. 
But in Proposition 2.4, we show that if \(H\) is a syndetic subgroup of \(G\), then the situation is different. We also study the relation between the persistent shadowing property of \(\varphi:G\times X\to X\) and that of \(\varphi_{g}:X\to X\). There is a system \(\varphi:G\times X\to X\) with the persistent shadowing property such that \(\varphi_{g}:X\to X\) does not have the persistent shadowing property. If \(G\) is a free group, then the situation is different. Indeed, in Proposition 2.6, we show that if \(F_{2}=\langle a,b\rangle\) is a free group and \(\varphi:F_{2}\times X\to X\) has the persistent shadowing property, then \(\varphi_{a^{-1}b}:X\to X\) has the persistent shadowing property. Also, one can check that these results hold for the notions of the shadowing property, \(\alpha\)-persistence and \(\beta\)-persistence; see Remark 2.5 and Remark 2.7. Recently, in [2], we introduced the notion of compatibility of a measure with \(\alpha\)-persistence. We extend this notion to the persistent shadowing property, and the set of measures compatible with the persistent shadowing property for \(\varphi\) is denoted by \(\mathcal{M}_{PSh}(X,\varphi)\), see Subsection 2.3. We show that \(\mathcal{M}_{PSh}(X,\varphi)\) is an \(F_{\sigma\delta}\) subset of \(\mathcal{M}(X)\), and that for \(\mu\in\mathcal{M}_{PSh}(X,\varphi)\) and a homeomorphism \(f:X\to Y\), we have \(f_{*}(\mu)\in\mathcal{M}_{PSh}(Y,f\circ\varphi\circ f^{-1})\), where \(f\circ\varphi\circ f^{-1}:G\times Y\to Y\) is defined by \(f\circ\varphi\circ f^{-1}(g,x)=f\circ\varphi_{g}\circ f^{-1}(x)\), see Proposition 2.10. In Proposition 2.11, we show that if a measure \(\mu\) is compatible with the persistent shadowing property for the continuous action \(\varphi\), then \(\varphi\) has the persistent shadowing property on \(supp(\mu)\). This implies that if every non-atomic probability measure is compatible with the persistent shadowing property for the continuous action \(\varphi\), then \(\varphi\) has the persistent shadowing property. Also, we introduce compatibility of a measure with the shadowing property, \(\alpha\)-persistence and \(\beta\)-persistence for a continuous action \(\varphi:G\times X\to X\), and denote these by \(\mathcal{M}_{Sh}(X,\varphi)\), \(\mathcal{M}_{\alpha}(X,\varphi)\) and \(\mathcal{M}_{\beta}(X,\varphi)\), respectively. The results of Proposition 2.11 can be obtained for compatibility of a measure in the case of the shadowing property and \(\beta\)-persistence, see Remark 2.13. In Section 3, we introduce the notions of persistent shadowable points, uniformly \(\alpha\)-persistent points and uniformly \(\beta\)-persistent points, and denote the corresponding sets by \(PSh(\varphi)\), \(UPersis_{\alpha}(\varphi)\) and \(UPersis_{\beta}(\varphi)\), respectively. Also, we recall the notions of shadowable points, \(\alpha\)-persistent points and \(\beta\)-persistent points for a continuous action \(\varphi:G\times X\to X\) and denote the corresponding sets by \(Sh(\varphi)\), \(Persis_{\alpha}(\varphi)\) and \(Persis_{\beta}(\varphi)\), respectively. Although \(Sh(\varphi)\subseteq UPersis_{\alpha}(\varphi)\subseteq Persis_{\alpha}(\varphi)\), Example 3.2 shows that \(Sh(\varphi)\neq UPersis_{\alpha}(\varphi)\) and \(UPersis_{\alpha}(\varphi)\neq Persis_{\alpha}(\varphi)\) in general. For an equicontinuous action \(\varphi:G\times X\to X\), we have \(UPersis_{\beta}(\varphi)=Persis_{\beta}(\varphi)\) and \(Persis_{\alpha}(\varphi)\subseteq Persis_{\beta}(\varphi)\). Moreover, if \(X\) is a generalized homogeneous compact metric space, then \(Sh(\varphi)=UPersis_{\alpha}(\varphi)=Persis_{\alpha}(\varphi)\), see Proposition 3.4. 
In Subsection 3.2, we study persistent shadowable points for a group action, and in item 3 of Proposition 3.6 we show that a continuous action \(\varphi:G\times X\to X\) has the persistent shadowing property if and only if it is pointwise persistent shadowable. Also, in item 4 of Proposition 3.6, we prove that \(PSh(\varphi)=UPersis_{\beta}(\varphi)\cap Sh(\varphi)\). This implies that a continuous action \(\varphi:G\times X\to X\) has the persistent shadowing property if and only if it is \(\beta\)-persistent and has the shadowing property. In Subsection 3.3, we study the various kinds of shadowable points via measure theory. Indeed, for a continuous action \(\varphi:G\times X\to X\) on a compact metric space \((X,d)\) and a Borel probability measure \(\mu\), we show that \(\mu\in\mathcal{M}_{PSh}(X,\varphi)\Leftrightarrow supp(\mu)\subseteq PSh(\varphi)\), see Proposition 3.9. In Proposition 3.11 we show that \(\mu(\overline{PSh(\varphi)})=1\Leftrightarrow\mu\in\overline{\mathcal{M}_{PSh}(X,\varphi)}\), and in Proposition 3.12 we show that \(\overline{PSh(\varphi)}=PSh(\varphi)\) if and only if \(\overline{\mathcal{M}_{PSh}(X,\varphi)}=\mathcal{M}_{PSh}(X,\varphi)\). Note that the results of this paragraph also hold for the other types of shadowing property, see Proposition 3.13. For an equicontinuous action \(\varphi:G\times X\to X\), \(Persis_{\beta}(\varphi)=UPersis_{\beta}(\varphi)\) is a closed set in \(X\); hence we can say that \(\overline{\mathcal{M}_{\beta}(X,\varphi)}=\mathcal{M}_{\beta}(X,\varphi)\) if \(\varphi:G\times X\to X\) is an equicontinuous action. This implies that \(\mu(Persis_{\beta}(\varphi))=1\) if and only if \(\mu\in\mathcal{M}_{\beta}(X,\varphi)\), whenever \(\varphi:G\times X\to X\) is an equicontinuous action. Finally, in Proposition 3.16, we show that \(\overline{\mathcal{M}_{PSh}(X,\varphi)}=\mathcal{M}(X)\Leftrightarrow\overline{PSh(\varphi)}=X\), \(\overline{\mathcal{M}_{Sh}(X,\varphi)}=\mathcal{M}(X)\Leftrightarrow\overline{Sh(\varphi)}=X\), \(\overline{\mathcal{M}_{\beta}(X,\varphi)}=\mathcal{M}(X)\Leftrightarrow\overline{Persis_{\beta}(\varphi)}=X\) and \(\overline{\mathcal{M}_{\alpha}(X,\varphi)}=\mathcal{M}(X)\Leftrightarrow\overline{Persis_{\alpha}(\varphi)}=X\). ## 2. Persistent shadowing property In this section, we first extend the notion of the persistent shadowing property from [6] to group actions and study it. **Definition 2.1**.: A continuous action \(\varphi:G\times X\to X\) has the persistent shadowing property (with respect to \(S\)) if for every \(\epsilon>0\) there is \(\delta>0\) such that every \(\delta\)-pseudo orbit \(f:G\to X\) for \(\psi:G\times X\to X\) with \(d_{S}(\varphi,\psi)<\delta\) can be \((\psi,\epsilon)\)-shadowed by a point \(p\in X\). It is not hard to see that the notion of the persistent shadowing property does not depend on the choice of a symmetric finitely generating set. Also, one can check that this notion does not depend on the choice of the metric on \(X\) if \(X\) is a compact metric space. The following example shows that compactness is essential. **Example 2.2**.: Let \(T:\mathbb{R}\to S^{1}\setminus\{(0,1)\}\) be the map given by \[T(t)=(\frac{2t}{1+t^{2}},\frac{t^{2}-1}{t^{2}+1}),\quad\text{for all }t\in\mathbb{R},\] and let \(X=T(\mathbb{Z})\); write \(a_{i}=T(i)\) for \(i\in\mathbb{Z}\). Let \(d^{\prime}\) be the metric on \(X\) induced by the Riemannian metric on \(S^{1}\), and let \(d\) be a discrete metric on \(X\). It is clear that \(d\) and \(d^{\prime}\) induce the same topology on \(X\). Let \(g_{1}:X\to X\) be the homeomorphism defined by \(g_{1}(a_{i})=a_{i+1}\) and let \(g_{2}:X\to X\) be defined by \(g_{2}(a_{i})=a_{i+2}\). 
Consider the action \(\varphi:G\times X\to X\) generated by \(g_{1},g_{2}:X\to X\). Since the metric \(d\) is discrete, one can see that \(\varphi\) has the persistent shadowing property. By contradiction, assume that \(\varphi\) has the persistent shadowing property with respect to \(d^{\prime}\). In particular, it has the shadowing property with respect to \(d^{\prime}\). For \(\epsilon=\frac{1}{2}\), let \(\delta>0\) be an \(\epsilon\)-modulus of the shadowing property of the continuous action \(\varphi\). Choose \(k\in\mathbb{N}\) satisfying \(d^{\prime}(a_{k},a_{-k})<\frac{\delta}{2}\), and consider homeomorphisms \(f_{i}:X\to X\), \(i=1,2\), given by \[f_{1}(a_{i})=\left\{\begin{array}{cl}a_{i+1},&i\in\{-k,\ldots,k-1\},\\ a_{-k},&i=k,\\ a_{i},&\text{otherwise},\end{array}\right.\] and \[f_{2}(a_{i})=\left\{\begin{array}{cl}a_{i+2},&i\in\{-k,\ldots,k-2\},\\ a_{-k+1},&i=k-1,\\ a_{-k},&i=k,\\ a_{i},&\text{otherwise}.\end{array}\right.\] Since \(d^{\prime}(f_{i}(x),g_{i}(x))<\delta\) for all \(x\in X\), if \(\psi:G\times X\to X\) is the action generated by \(f_{1},f_{2}\), then \(d_{S}(\varphi,\psi)<\delta\), and for \(x\in X\) there is \(z\in X\) such that \(d^{\prime}(\varphi(g,z),\psi(g,x))<\epsilon\) for all \(g\in G\); this implies that \(d^{\prime}(g_{i}^{n}(z),f_{i}^{n}(x))<\epsilon\) for all \(n\in\mathbb{Z}\). Since \(\{g_{1}^{n}(z):n\in\mathbb{Z}\}=X\), we can find an integer \(n\in\mathbb{Z}\) such that \(d^{\prime}(g_{1}^{n}(z),f_{1}^{n}(x))\geq\epsilon\), which is a contradiction. Therefore \(\varphi\) has the persistent shadowing property with respect to \(d\), but it does not have the persistent shadowing property with respect to \(d^{\prime}\). ### The Action of Syndetic Subgroups Let \(BS(1,n)=\langle a,b:ba=a^{n}b\rangle\) and let \(\varphi:G\times\mathbb{R}^{2}\to\mathbb{R}^{2}\) be generated by \(f_{a}(x)=Ax\) and \(f_{b}(x)=Bx\), where \[A=\left(\begin{array}{cc}1&0\\ 1&1\end{array}\right)\ \&\ B=\left(\begin{array}{cc}\lambda&0\\ 0&n\lambda\end{array}\right) \tag{2.1}\] (note that indeed \(BA=A^{n}B\), matching the defining relation \(ba=a^{n}b\)). Then for \(1<\lambda\leq n\) and \(n>1\), \(f_{b}\) has the persistent shadowing property. In [6], it is shown that a homeomorphism \(f:X\to X\) has the persistent shadowing property if and only if it has the shadowing property and is \(\beta\)-persistent. Hence if \(H=\langle b\rangle\) is a subgroup of \(BS(1,n)\), then \(\varphi|H:H\times\mathbb{R}^{2}\to\mathbb{R}^{2}\) has the persistent shadowing property, while by [10, Theorem 4.4(1)], \(\varphi:G\times\mathbb{R}^{2}\to\mathbb{R}^{2}\) does not have the shadowing property. In this subsection, we show that if \(H\) is a syndetic subgroup of \(G\), then the situation is different. Let \(||g||_{S}\) denote the length of the shortest representation of the element \(g\) in terms of elements from \(S\). For a continuous action \(\varphi:G\times X\to X\) on a compact metric space \((X,d)\), \(\epsilon>0\) and \(k\in\mathbb{N}\), there is \(\delta>0\) such that for all \(g\in G\) with \(||g||_{S}<k\), \[d(x,y)<\delta\Rightarrow d(\varphi(g,x),\varphi(g,y))<\frac{\epsilon}{k}. \tag{2.2}\] By the triangle inequality, one can see that the following lemma holds. **Lemma 2.3**.: _Let \(S\) be a finitely generating set of \(G\) and \(\varphi:G\times X\to X\) be a continuous action on a compact metric space \((X,d)\). For \(\epsilon>0\) and \(N\in\mathbb{N}\), there is \(\delta>0\) such that if \(f:G\to X\) is a \(\delta\)-pseudo orbit, then for every \(h\in G\) with \(||h||_{S}<N\) and every \(g\in G\) we have \(d(f(hg),\varphi(h,f(g)))<\epsilon\)._
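For completeness, here is a sketch of the estimate behind Lemma 2.3, with \(\delta\) chosen for \(\epsilon\) and \(N\) as in Relation 2.2. Write \(h=s_{k}\cdots s_{1}\) with \(k=||h||_{S}<N\), and set \(h_{j}=s_{j}\cdots s_{1}\), \(h_{0}=e\). Telescoping,
\[d(f(hg),\varphi(h,f(g)))\leq\sum_{j=1}^{k}d\big(\varphi(s_{k}\cdots s_{j+1},f(h_{j}g)),\,\varphi(s_{k}\cdots s_{j+1},\varphi(s_{j},f(h_{j-1}g)))\big),\]
where the word \(s_{k}\cdots s_{j+1}\) is empty for \(j=k\). Since \(d(f(s_{j}h_{j-1}g),\varphi(s_{j},f(h_{j-1}g)))<\delta\) by the pseudo orbit condition, Relation 2.2 bounds each summand by \(\epsilon/N\), so the sum is less than \(k\epsilon/N<\epsilon\).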
A subset \(H\subseteq G\) is syndetic if there is a finite set \(F\subseteq G\) such that \(G=FH\). Hence a subgroup \(H\) is syndetic in \(G\) if and only if it is a finite index subgroup of \(G\), i.e., there is a finite set \(\{g_{i}\}_{i=1}^{n}\) such that \(G=\bigcup_{i=1}^{n}g_{i}H\). **Proposition 2.4**.: _Let \(H\) be a finite index subgroup of \(G\). Then the continuous action \(\varphi:G\times X\to X\) has the persistent shadowing property if \(\varphi:H\times X\to X\) has the persistent shadowing property._ Proof.: Let \(H\) be a finite index subgroup of \(G\) and let \(A\) be a symmetric finitely generating set of \(H\). We can add more elements to \(A\) to get a symmetric finitely generating set \(S\) of \(G\). Also, let \(G=\bigcup_{i=1}^{n}g_{i}H\) and \(N=\max\{||g_{i}||_{S}:1\leq i\leq n\}\). Suppose that \(\varphi:H\times X\to X\) has the persistent shadowing property; we show that \(\varphi:G\times X\to X\) has the persistent shadowing property. Let \(\epsilon>0\) be given. Choose \(\delta>0\) such that for all \(g\in G\) with \(||g||_{S}<N\), \[d(x,y)<\delta\Rightarrow d(\varphi(g,x),\varphi(g,y))<\frac{\epsilon}{N}. \tag{2.3}\] By the triangle inequality, it is easy to see that for \(\epsilon>0\) and \(N\) as above, there is \(\eta>0\) such that if \(d_{S}(\varphi,\psi)<\eta\) and \(d(a,b)<\eta\), then \[d(\varphi(g,a),\psi(g,b))<\frac{\epsilon}{N},\forall g\in G\text{ with }||g||_{S}<N, \tag{2.4}\] and \[d(\psi(g,a),\psi(g,b))<\frac{\epsilon}{N},\text{ for all }||g||_{S}<N. \tag{2.5}\] Shrinking \(\eta\) if necessary, we may assume that \(\eta\) corresponds to \(\delta\) and \(N\) as in Lemma 2.3, and that \(\eta\) corresponds to \(\frac{\delta}{2}\) in the definition of the persistent shadowing property of \(\varphi|H\). We show that every \(\eta\)-pseudo orbit \(F:G\to X\) for a continuous action \(\psi:G\times X\to X\) with \(d_{S}(\varphi,\psi)<\eta\) can be \(\epsilon\)-shadowed with respect to \(\psi\) by a point of \(X\). The map \(F:G\to X\) is a \(2\eta\)-pseudo orbit for \(\varphi:G\times X\to X\), hence \[d(F(gh),\varphi(g,F(h)))<\frac{\epsilon}{N},\text{ for all }g\in G\text{ with }||g||_{S}<N. \tag{2.6}\] Since \(F:H\to X\) is an \(\eta\)-pseudo orbit for \(\psi:H\times X\to X\) with \(d_{A}(\varphi,\psi)<\eta\), by the persistent shadowing property of \(\varphi|H\) there is \(p\in X\) such that \(d(F(h),\psi(h,p))<\frac{\delta}{2}\) for all \(h\in H\). Also, by Relation 2.4, we have \(d(\varphi(g_{i},F(h)),\psi(g_{i},\psi(h,p)))<\frac{\epsilon}{N}\). Hence, by Relation 2.6, we have \(d(F(g_{i}h),\psi(g_{i}h,p))<\epsilon\), i.e., \(d(F(g),\psi(g,p))<\epsilon\) for all \(g\in G\). _Remark 2.5_.: Let \(P\) be one of the following properties: \((a)\) the shadowing property, \((b)\) \(\alpha\)-persistence, \((c)\) \(\beta\)-persistence. Similarly to the proof of Proposition 2.4, if \(H\leq G\) is a syndetic subgroup of \(G\) and the continuous action \(\varphi:H\times X\to X\) has property \(P\), then \(\varphi:G\times X\to X\) has property \(P\). ### The Action Of Free Groups The group \(BS(1,n)\) is solvable, hence it is not free. In the case of actions of finitely generated free groups \(G\), by [10, Theorem 4.9], if \(\varphi:G\times X\to X\) has the shadowing property, then \(\varphi_{g}:X\to X\) has the shadowing property for all \(g\in G\). In the following, we extend this to the case of the persistent shadowing property. **Proposition 2.6**.: _Let \(\varphi:G\times X\to X\) be a continuous action of the free group \(F_{2}=\langle a,b\rangle\) on a compact metric space \((X,d)\). 
If \(\varphi:F_{2}\times X\to X\) has the persistent shadowing property, then \(\varphi_{a^{-1}b}:X\to X\) has the persistent shadowing property._ Proof.: Let \(\epsilon>0\) be given. Choose \(\epsilon_{0}>0\) corresponding to \(\epsilon>0\) by the persistent shadowing property of \(\varphi:F_{2}\times X\to X\). For \(\epsilon_{0}>0\) there is \(\delta>0\) such that for every continuous action \(\psi:F_{2}\times X\to X\) that is \(\delta\)-close to \(\varphi:F_{2}\times X\to X\), we have \[d(x,y)<\delta\Rightarrow d(\psi(g,x),\psi(g,y))<\epsilon_{0},\quad|g|_{S}\leq 2.\] Let \(\{x_{n}\}_{n\in\mathbb{Z}}\) be a \(\delta\)-pseudo orbit of a homeomorphism \(f:X\to X\) with \(d(f,\varphi_{a^{-1}}\varphi_{b})<\delta\). It is easy to see that if \(\psi:F_{2}\times X\to X\) is generated by \(\psi_{a}=\varphi_{a}\) and \(\psi_{b}=\varphi_{a}\circ f\), then \(\psi:F_{2}\times X\to X\) is \(\epsilon_{0}\)-close to \(\varphi:F_{2}\times X\to X\). Define \(K:F_{2}\to X\) by \(K(t)=\psi(v,x_{k})\), where \(v\in F_{2}\) is an element of minimal length such that \(t=v(a^{-1}b)^{k}\) for some \(k\in\mathbb{Z}\). It is not hard to see that \(K:F_{2}\to X\) is an \(\epsilon_{0}\)-pseudo orbit of the continuous action \(\psi:F_{2}\times X\to X\) with \(d_{S}(\varphi,\psi)<\epsilon_{0}\). By the persistent shadowing property, there is \(y\in X\) such that \(d(K(g),\psi(g,y))<\epsilon\) for all \(g\in F_{2}\). Since \(K((a^{-1}b)^{k})=x_{k}\) and \(\psi((a^{-1}b)^{k},y)=f^{k}(y)\), we have \(d(f^{k}(y),x_{k})<\epsilon\) for all \(k\in\mathbb{Z}\). _Remark 2.7_.: Let \(P\) be one of the following properties: \((a)\) the shadowing property, \((b)\) \(\alpha\)-persistence, \((c)\) \(\beta\)-persistence. Similarly to the proof of Proposition 2.6, one can show that if a continuous action \(\varphi:F_{2}\times X\to X\) of the free group \(F_{2}=\langle a,b\rangle\) on a compact metric space \((X,d)\) has property \(P\), then \(\varphi_{a^{-1}b}:X\to X\) has property \(P\). ### Persistent shadowing property and related measures For a continuous action \(\varphi:G\times X\to X\), \(\epsilon>0\), \(\delta>0\) and a generating set \(S\), we denote by \(PSh_{\varphi,S}(\delta,\epsilon)\) the set of all \(x\in X\) such that every \(\delta\)-pseudo orbit \(f:G\to X\) (with respect to the generating set \(S\)) of a continuous action \(\psi\) with \(d_{S}(\varphi,\psi)<\delta\) and \(f(e)=x\) can be \((\epsilon,\psi)\)-shadowed by a point in \(X\). It is clear that: 1. Let \(S,S^{\prime}\) be generating sets for \(G\) and let \(\epsilon>0\) be given. Then for every \(\delta>0\) there is \(\eta>0\) such that \(PSh_{\varphi,S}(\eta,\epsilon)\subseteq PSh_{\varphi,S^{\prime}}(\delta,\epsilon)\). 2. The continuous action \(\varphi:G\times X\to X\) has the persistent shadowing property with respect to the generating set \(S\) if and only if for every \(\epsilon>0\) there is \(\delta>0\) such that \(PSh_{\varphi,S}(\delta,\epsilon)=X\). 3. If the continuous action \(\varphi:G\times X\to X\) has the persistent shadowing property on a compact set \(K\subseteq X\), then for every \(\epsilon>0\) there exist a neighborhood \(U\) of \(K\) and \(\delta>0\) such that \(U\subseteq PSh_{\varphi,S}(\delta,\epsilon)\). 4. \(PSh_{\varphi,S}(\delta,\epsilon)\) is a closed subset of \(X\). The Borel \(\sigma\)-algebra of \(X\) is the \(\sigma\)-algebra \(\mathcal{B}(X)\) generated by the open subsets of \(X\). A Borel probability measure is a \(\sigma\)-additive measure \(\mu\) defined on \(\mathcal{B}(X)\) such that \(\mu(X)=1\). We denote by \(\mathcal{M}(X)\) the set of all Borel probability measures on \(X\). 
This set is convex, and it is compact and metrizable when endowed with the weak\({}^{*}\) topology: the one ruled by the convergence \(\mu_{n}\to\mu\) if and only if \(\int fd\mu_{n}\to\int fd\mu\) for every continuous map \(f:X\to\mathbb{R}\). **Definition 2.8**.: A measure \(\mu\in\mathcal{M}(X)\) is compatible with the persistent shadowing property for the continuous action \(\varphi:G\times X\to X\), written \(\mu\in\mathcal{M}_{PSh}(X,\varphi)\), if for every \(\epsilon>0\) there is \(\delta>0\) such that if \(\mu(A)>0\), then \[A\cap Sh_{\psi}(\delta,\epsilon)\neq\emptyset\] for every continuous action \(\psi:G\times X\to X\) with \(d_{S}(\varphi,\psi)<\delta\). **Example 2.9**.: Suppose that the continuous action \(\varphi:G\times X\to X\) admits a \(\varphi\)-invariant measure and that \(\varphi\) has the persistent shadowing property on the non-wandering set \(\Omega(\varphi)\). Then every \(\varphi\)-invariant Borel probability measure \(\mu\) on \(X\) is compatible with the persistent shadowing property. Let \(\mu\in\mathcal{M}_{PSh}(X,\varphi)\), \(\epsilon>0\), and let \(h:(X,d)\to(Y,\rho)\) be a homeomorphism. We will show that there is \(\delta>0\) such that \[h_{*}(\mu)(B)>0\Rightarrow B\cap PSh_{h\circ\varphi\circ h^{-1}}(\delta,\epsilon)\neq\emptyset. \tag{2.7}\] For \(\epsilon>0\) there is \(\epsilon_{0}>0\) such that \[d(a,b)<\epsilon_{0}\Rightarrow\rho(h(a),h(b))<\epsilon. \tag{2.8}\] For \(\epsilon_{0}>0\) there is \(\epsilon_{1}>0\) as in the definition of \(\mu\in\mathcal{M}_{PSh}(X,\varphi)\). If \(h_{*}(\mu)(B)>0\), then \(\mu(h^{-1}(B))>0\), hence \(h^{-1}(B)\cap PSh_{\varphi}(\epsilon_{1},\epsilon_{0})\neq\emptyset\). For \(\epsilon_{1}>0\) there is \(\delta>0\) such that \[\rho(c,d)<\delta\Rightarrow d(h^{-1}(c),h^{-1}(d))<\epsilon_{1}. \tag{2.9}\] Fix \(x\in h^{-1}(B)\cap PSh_{\varphi}(\epsilon_{1},\epsilon_{0})\). One can check that if \(F^{\prime}:G\to Y\) is a \(\delta\)-pseudo orbit of a continuous action \(\psi^{\prime}:G\times Y\to Y\) with \(\rho_{S}(h\circ\varphi\circ h^{-1},\psi^{\prime})<\delta\) and \(F^{\prime}(e)=h(x)\), then \(F:G\to X\) defined by \(F(g)=h^{-1}(F^{\prime}(g))\) is an \(\epsilon_{1}\)-pseudo orbit of the continuous action \(h^{-1}\circ\psi^{\prime}\circ h\) with \(d_{S}(\varphi,h^{-1}\circ\psi^{\prime}\circ h)<\epsilon_{1}\) and \(F(e)=x\). Since \(x\in h^{-1}(B)\cap PSh_{\varphi}(\epsilon_{1},\epsilon_{0})\), there is \(p\in X\) such that \(d(F(g),h^{-1}\circ\psi^{\prime}_{g}\circ h(p))<\epsilon_{0}\) for all \(g\in G\). By Relation 2.8, we have \(\rho(h\circ F(g),\psi^{\prime}_{g}(h(p)))<\epsilon\), i.e., \(\rho(F^{\prime}(g),\psi^{\prime}_{g}(h(p)))<\epsilon\). This means that \(h(x)\in B\cap PSh_{h\circ\varphi\circ h^{-1}}(\delta,\epsilon)\). * If \(h:(X,d)\to(Y,\rho)\) is a homeomorphism and \(\mu\in\mathcal{M}_{PSh}(X,\varphi)\), then \(h_{*}(\mu)\in\mathcal{M}_{PSh}(Y,h\circ\varphi\circ h^{-1})\). * If \(G\) is an abelian group, then \(\mathcal{M}_{PSh}(X,\varphi)\) is \((\varphi_{g})_{*}\)-invariant for all \(g\in G\). It is known that every continuous action of a countable abelian group on a compact metric space admits a \(\varphi\)-invariant measure. Also, one can check that \(\mathcal{M}_{PSh}(X,\varphi)\) is a convex subset of \(\mathcal{M}(X)\); hence by Proposition 2.12 in [2], the following holds. * If \(G\) is an abelian group, then \(\overline{\mathcal{M}_{PSh}(X,\varphi)}\) contains a \(\varphi\)-invariant measure. 
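As a quick sanity check behind these items (using only the definitions above): conjugation by a homeomorphism \(h:X\to Y\) indeed yields a continuous action, since for all \(g,g^{\prime}\in G\),
\[(h\circ\varphi_{g}\circ h^{-1})\circ(h\circ\varphi_{g^{\prime}}\circ h^{-1})=h\circ\varphi_{g}\circ\varphi_{g^{\prime}}\circ h^{-1}=h\circ\varphi_{gg^{\prime}}\circ h^{-1},\]
and the pushforward appearing above is the Borel probability measure defined by \(h_{*}(\mu)(B)=\mu(h^{-1}(B))\) for every Borel set \(B\subseteq Y\).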
It is known that a group action \(\varphi:G\times X\to X\) on a compact metric space \(X\) has a \(\varphi\)-invariant Borel probability measure on \(X\) if and only if \(G\) is amenable, see [7]. Hence the following group action, where \(G=SL(2,\mathbb{Z})\) and \(X=\mathbb{R}\cup\{\infty\}\), does not admit any invariant measure: \[\varphi(\left(\begin{array}{cc}a&b\\ c&d\end{array}\right),z)=\frac{az+b}{cz+d}.\] Take \[\mathcal{C}_{PSh(\varphi)}(\delta,\epsilon)=\{\mu\in\mathcal{M}(X):\mu(PSh_{\varphi}(\delta,\epsilon))=1\}.\] It is easy to see that \(\mathcal{C}_{PSh(\varphi)}(\delta,\epsilon)\) is a convex and closed subset of \(\mathcal{M}(X)\) for every \(\epsilon>0\) and every \(\delta>0\). **Proposition 2.10**.: _Let \(\varphi:G\times X\to X\) be a continuous action. Then_ 1. \(\mathcal{M}_{PSh}(X,\varphi)=\bigcap_{n\in\mathbb{N}}\bigcup_{m\in\mathbb{N}}\bigcap_{l\in\mathbb{N}}\mathcal{C}_{PSh(\varphi)}(m^{-1},n^{-1}+l^{-1})\)__ 2. _The subset_ \(\mathcal{M}_{PSh}(X,\varphi)\) _is an_ \(F_{\sigma\delta}\) _subset of_ \(\mathcal{M}(X)\)_._ Proof.: 1. Fix \(\mu\in\mathcal{M}_{PSh}(X,\varphi)\) and \(n\in\mathbb{N}\). Choose \(\delta>0\) such that if \(\mu(A)>0\) then \(A\cap PSh_{\varphi}(\delta,\frac{1}{n})\neq\emptyset\). Choose \(m\in\mathbb{N}\) such that \(m^{-1}<\delta\). Note that if \(A\cap PSh_{\varphi}(\delta,\frac{1}{n})\neq\emptyset\), then \(A\cap PSh_{\varphi}(m^{-1},\frac{1}{n})\neq\emptyset\). Since \(PSh_{\varphi}(m^{-1},n^{-1})\) is closed and meets every set of positive measure, \(\mu(PSh_{\varphi}(m^{-1},n^{-1}))=1\); as \(PSh_{\varphi}(m^{-1},n^{-1})\subseteq PSh_{\varphi}(m^{-1},n^{-1}+l^{-1})\) for every \(l\in\mathbb{N}\), this implies that \(\mu\in\bigcup_{m\in\mathbb{N}}\bigcap_{l\in\mathbb{N}}\mathcal{C}_{PSh(\varphi)}(m^{-1},n^{-1}+l^{-1})\). Conversely, choose \(\mu\in\bigcap_{n\in\mathbb{N}}\bigcup_{m\in\mathbb{N}}\bigcap_{l\in\mathbb{N}}\mathcal{C}_{PSh(\varphi)}(m^{-1},n^{-1}+l^{-1})\). Thus, for every \(n\in\mathbb{N}\), there is \(k\in\mathbb{N}\) such that \(\mu\in\bigcap_{l\in\mathbb{N}}\mathcal{C}_{PSh(\varphi)}(\frac{1}{k},\frac{1}{n}+\frac{1}{l})\). This implies that for every \(\epsilon>0\) there exist \(N,K,L\in\mathbb{N}\) such that \(\frac{1}{N}+\frac{1}{L}<\epsilon\) and \(\mu\in\mathcal{C}_{PSh(\varphi)}(\frac{1}{K},\frac{1}{N}+\frac{1}{L})\). Therefore, for every \(\epsilon>0\), choose \(\delta=\frac{1}{K}\) to conclude that \(\mu\in\mathcal{M}_{PSh}(X,\varphi)\). 2. Since each \(\mathcal{C}_{PSh(\varphi)}(m^{-1},n^{-1}+l^{-1})\) is a closed subset of \(\mathcal{M}(X)\) and a countable intersection of closed sets is closed, \(\bigcap_{l\in\mathbb{N}}\mathcal{C}_{PSh(\varphi)}(m^{-1},n^{-1}+l^{-1})\) is a closed subset of \(\mathcal{M}(X)\) for every pair \(m,n\in\mathbb{N}\). Therefore \(\bigcup_{m\in\mathbb{N}}\bigcap_{l\in\mathbb{N}}\mathcal{C}_{PSh(\varphi)}(m^{-1},n^{-1}+l^{-1})\) is an \(F_{\sigma}\) subset of \(\mathcal{M}(X)\) for every \(n\in\mathbb{N}\), and hence, by item (1), \(\mathcal{M}_{PSh}(X,\varphi)\) is an \(F_{\sigma\delta}\) subset of \(\mathcal{M}(X)\). **Proposition 2.11**.: _Let \(\varphi:G\times X\to X\) be a continuous action on a compact metric space \((X,d)\) and \(\mu\in\mathcal{M}_{PSh}(X,\varphi)\). Then \(\varphi\) has the persistent shadowing property on \(supp(\mu)\)._ Proof.: Let \(\epsilon>0\) be given. 
Choose \(0<\delta<\frac{\epsilon}{2}\) such that for every continuous action \(\psi:G\times X\to X\) with \(d_{S}(\varphi,\psi)<\delta\) we have \[\mu(A)>0\Rightarrow A\cap Sh_{\delta,\frac{\epsilon}{2}}(\psi)\neq\emptyset.\] For \(\delta>0\) there is \(0<\eta<\frac{\delta}{2}\) such that for every continuous action \(\psi:G\times X\to X\) with \(d_{S}(\varphi,\psi)<\eta\) we have \[d(a,b)<\eta\Rightarrow d(\psi_{s}(a),\psi_{s}(b))<\frac{\delta}{2},\forall s\in S.\] We claim that if \(d_{S}(\varphi,\psi)<\delta\) and \(F:G\to X\) is an \(\eta\)-pseudo orbit for \(\psi:G\times X\to X\) with \(F(e)=p\in supp(\mu)\), then there is \(y\in X\) such that \(d(F(g),\psi(g,y))<\epsilon\) for all \(g\in G\). Since \(p\in supp(\mu)\), we have \(\mu(B_{\eta}(p))>0\); this implies that \(B_{\eta}(p)\cap Sh_{\delta,\frac{\epsilon}{2}}(\psi)\neq\emptyset\). Take \(q\in B_{\eta}(p)\cap Sh_{\delta,\frac{\epsilon}{2}}(\psi)\) and define \(f:G\to X\) by \(f(e)=q\) and \(f(g)=F(g)\) for all \(g\neq e\). It is easy to see that \(f:G\to X\) is a \(\delta\)-pseudo orbit of \(\psi\). Since \(f(e)\in Sh_{\delta,\frac{\epsilon}{2}}(\psi)\), there is \(y\in X\) with \(d(f(g),\psi(g,y))<\frac{\epsilon}{2}\) for all \(g\in G\). This implies that \(d(F(g),\psi(g,y))<\epsilon\) for all \(g\in G\). It is known that the set of Borel probability measures on a compact metric space \(X\) with support equal to \(X\) is a dense \(G_{\delta}\) subset of \(\mathcal{M}(X)\), see [3, Lemma 3.6]; also, if \(X\) has no isolated point, then the set of non-atomic Borel probability measures is a dense \(G_{\delta}\) subset of \(\mathcal{M}(X)\), see [11, Corollary 8.2]. Thus, if \(X\) is a compact space without isolated points, then the set of non-atomic Borel probability measures with support equal to \(X\) is dense in \(\mathcal{M}(X)\). Hence, by Proposition 2.11, we have: **Corollary 2.12**.: _Let \(\varphi:G\times X\to X\) be a continuous action on a compact metric space \(X\) without isolated points. If every non-atomic Borel probability measure \(\mu\) is compatible with the persistent shadowing property for \(\varphi:G\times X\to X\), then \(\varphi\) has the persistent shadowing property._ For continuous actions \(\varphi,\psi:G\times X\to X\) and \(x\in X\), we denote \[\Gamma_{\epsilon}^{\varphi,\psi}(x)=\bigcap_{g\in G}\varphi(g^{-1},B[\psi(g,x),\epsilon])=\{y\in X:d(\varphi(g,y),\psi(g,x))\leq\epsilon\text{ for every }g\in G\}\] and \[B(\epsilon,\varphi,\psi)=\{x\in X:\Gamma_{\epsilon}^{\varphi,\psi}(x)\neq\emptyset\}.\] It is easy to see that \(B(\epsilon,\varphi,\psi)\) is a compact set in \(X\). We say that: 1. A measure \(\mu\in\mathcal{M}(X)\) is compatible with the shadowing property for the continuous action \(\varphi:G\times X\to X\), written \(\mu\in\mathcal{M}_{Sh}(X,\varphi)\), if for every \(\epsilon>0\) there is \(\delta>0\) such that if \(\mu(A)>0\), then \[A\cap Sh_{\varphi}(\delta,\epsilon)\neq\emptyset.\] 2. ([2]) A measure \(\mu\in\mathcal{M}(X)\) is compatible with \(\alpha\)-persistence for the continuous action \(\varphi:G\times X\to X\), written \(\mu\in\mathcal{M}_{\alpha}(X,\varphi)\), if for every \(\epsilon>0\) there is \(\delta>0\) such that if \(\mu(A)>0\), then \[A\cap B(\epsilon,\varphi,\psi)\neq\emptyset\] for every continuous action \(\psi:G\times X\to X\) with \(d_{S}(\varphi,\psi)<\delta\). 3. 
A measure \(\mu\in\mathcal{M}(X)\) is compatible with \(\beta\)-persistence for the continuous action \(\varphi:G\times X\to X\), written \(\mu\in\mathcal{M}_{\beta}(X,\varphi)\), if for every \(\epsilon>0\) there is \(\delta>0\) such that if \(\mu(A)>0\), then \[A\cap B(\epsilon,\psi,\varphi)\neq\emptyset\] for every continuous action \(\psi:G\times X\to X\) with \(d_{S}(\varphi,\psi)<\delta\). _Remark 2.13_.: With a proof similar to that of Proposition 2.10, one can check that \(\mathcal{M}_{Sh}(X,\varphi)\), \(\mathcal{M}_{\alpha}(X,\varphi)\) and \(\mathcal{M}_{\beta}(X,\varphi)\) are \(F_{\sigma\delta}\) subsets of \(\mathcal{M}(X)\). Also, for a homeomorphism \(h:(X,d)\to(Y,\rho)\), if \(\mu\in\mathcal{M}_{Sh}(X,\varphi)\), \(\mu\in\mathcal{M}_{\alpha}(X,\varphi)\) or \(\mu\in\mathcal{M}_{\beta}(X,\varphi)\), then \(h_{*}(\mu)\in\mathcal{M}_{Sh}(Y,h\circ\varphi\circ h^{-1})\), \(h_{*}(\mu)\in\mathcal{M}_{\alpha}(Y,h\circ\varphi\circ h^{-1})\) or \(h_{*}(\mu)\in\mathcal{M}_{\beta}(Y,h\circ\varphi\circ h^{-1})\), respectively. With techniques similar to those of Proposition 2.11, we can show that if \(\mu\in\mathcal{M}_{Sh}(X,\varphi)\), \(\mu\in\mathcal{M}_{\alpha}(X,\varphi)\) or \(\mu\in\mathcal{M}_{\beta}(X,\varphi)\), then \(\varphi:G\times X\to X\) has the shadowing property, \(\alpha\)-persistence or \(\beta\)-persistence on \(supp(\mu)\), respectively. ## 3. Pointwise dynamics In this section we introduce persistent shadowable points, uniformly \(\alpha\)-persistent points and uniformly \(\beta\)-persistent points for a continuous action \(\varphi\). Also, we recall the notions of shadowable points, \(\alpha\)-persistent points and \(\beta\)-persistent points for a continuous action \(\varphi:G\times X\to X\). This section consists of three subsections. In Subsection 3.1, we study the relations between the various kinds of shadowable points. In Subsection 3.2, we study the set of persistent shadowable points and give some of its properties. Finally, in Subsection 3.3, we study the relation between compatibility of a measure with a property \(P\) and the measure of the set of points in \(X\) with property \(P\), where \(P\) can be the persistent shadowing property, the shadowing property, \(\alpha\)-persistence or \(\beta\)-persistence. ### Relation between various shadowable points **Definition 3.1**.: Let \(S\) be a finitely generating set of \(G\) and \(\varphi:G\times X\to X\) be a continuous action. 1. ([7]) A point \(x\in X\) is called a shadowable point for the \(G\)-action \(\varphi:G\times X\to X\) if for every \(\epsilon>0\) there is \(\delta=\delta(\epsilon,x)>0\) such that for every \(\delta\)-pseudo orbit \(f:G\to X\) with \(f(e)=x\) there is \(p\in X\) such that \(d(f(g),\varphi(g,p))<\epsilon\) for all \(g\in G\). 2. A point \(x\in X\) is \(\alpha\)-persistent (uniformly \(\alpha\)-persistent) for a continuous action \(\varphi:G\times X\to X\) if for every \(\epsilon>0\) there is \(\delta=\delta_{x}>0\) such that for every continuous action \(\psi:G\times X\to X\) with \(d_{S}(\varphi,\psi)<\delta\) (and every \(x^{\prime}\in B_{\delta}(x)\)) there is \(y\in X\) such that \(d(\varphi(g,y),\psi(g,x))<\epsilon\) (resp. \(d(\varphi(g,y),\psi(g,x^{\prime}))<\epsilon\)) for all \(g\in G\). Hereafter, \(Persis_{\alpha}(\varphi)\) and \(UPersis_{\alpha}(\varphi)\) will denote the set of all \(\alpha\)-persistent points and uniformly \(\alpha\)-persistent points of \(\varphi\), respectively. 3. 
A point \(x\in X\) is \(\beta\)-persistent (resp. uniformly \(\beta\)-persistent) for a continuous action \(\varphi:G\times X\to X\) if for every \(\epsilon>0\) there is \(\delta_{x}>0\) such that for every continuous action \(\psi:G\times X\to X\) with \(d_{S}(\varphi,\psi)<\delta_{x}\) (and every \(x^{\prime}\in B_{\delta_{x}}(x)\)) there is \(y\in X\) such that \(d(\varphi(g,x),\psi(g,y))<\epsilon\) (resp. \(d(\varphi(g,x^{\prime}),\psi(g,y))<\epsilon\)) for all \(g\in G\). Hereafter \(Persis_{\beta}(\varphi)\) and \(UPersis_{\beta}(\varphi)\) will denote the set of all \(\beta\)-persistent points and uniformly \(\beta\)-persistent points of \(\varphi\), respectively. 4. A point \(x\in X\) is called a persistent shadowable point for \(\varphi:G\times X\to X\), written \(x\in PSh(\varphi)\), if for every \(\epsilon>0\) there is \(\delta>0\) such that every \(\delta\)-pseudo orbit \(f:G\to X\) for \(\psi:G\times X\to X\) with \(d_{S}(\varphi,\psi)<\delta\) and \(f(e)=x\) can be \((\psi,\epsilon)\)-shadowed by a point. It is easy to see that \(PSh(\varphi)\subseteq Sh(\varphi)\subseteq UPersis_{\alpha}(\varphi)\subseteq Persis_{\alpha}(\varphi)\) and \(PSh(\varphi)\subseteq UPersis_{\beta}(\varphi)\subseteq Persis_{\beta}(\varphi)\). The following example shows that the converse inclusions need not hold. **Example 3.2**.: 1. \(Persis_{\alpha}(\varphi)\neq UPersis_{\alpha}(\varphi)\). Let \(X=\mathbb{S}^{1}\cup\{(x,0):-1\leq x\leq 1\}\). Since \((-1,0)\) is a fixed point of every continuous action, we have \((1,0)\in Persis_{\alpha}(\varphi)\), where \(\varphi:F_{2}\times X\to X\) is defined by \(\varphi(g,x)=x\). We claim that \((1,0)\in Persis_{\alpha}(\varphi)-UPersis_{\alpha}(\varphi)\). By contradiction, assume that \((1,0)\in UPersis_{\alpha}(\varphi)\). Given \(\epsilon>0\), there is \(\delta>0\) provided by \((1,0)\in UPersis_{\alpha}(\varphi)\). Let \(s_{1}:[-1,1]\to\mathbb{R}:x\mapsto\frac{\delta}{4}(1-|x|)\) and \(s_{2}:[-1,1]\to\mathbb{R}:x\mapsto\frac{\delta}{6}(1-|x|)\); then \(-1\leq x-s_{i}(x)\leq x+s_{i}(x)\leq 1\) for any \(x\in[-1,1]\), and \(s_{i}(-1)=s_{i}(1)=0\), for \(i=1,2\). Define \[g_{i}:X\to X:\langle x,y\rangle\mapsto\begin{cases}\langle x-s_{i}(x),0\rangle,&\text{if }y=0\\ \langle x,y\rangle\,,&\text{otherwise}\end{cases}\] For any \((x,y)\in\{(x,0):-1<x<1\}\), the first coordinate of \(g_{i}((x,y))\) is less than \(x\). Assume that \(\psi:F_{2}\times X\to X\) is the action generated by \(\psi_{a}=g_{1}\) and \(\psi_{b}=g_{2}\). Then \(d_{S}(\varphi,\psi)<\delta\); hence for \(y\in(-1,1)\times\{0\}\) with \(d(y,(1,0))<\delta\), there is \(p\in X\) with \(d(p,\psi(g,y))<\epsilon\) for all \(g\in G\), which is a contradiction, because \(g_{i}^{k}(y)\to(-1,0)\) as \(k\to\infty\). 2. By [1, Remark 4.4], there is a system \((X,f)\) such that \(f:X\to X\) is \(\alpha\)-persistent while it does not have the shadowing property. Hence \(UPersis_{\alpha}(f)=X\) but \(Sh(f)\neq X\). A point \(x\in X\) is an equicontinuous point for a continuous action \(\varphi:G\times X\to X\) if for every \(\epsilon>0\) there is \(\delta_{x}>0\) such that \[d(x,y)<\delta_{x}\Rightarrow d(\varphi(g,x),\varphi(g,y))<\epsilon,\quad\forall g\in G. \tag{3.1}\] The set of equicontinuous points for \(\varphi:G\times X\to X\) is denoted by \(Eq(\varphi)\). It is easy to see that \[Persis_{\beta}(\varphi)\cap Eq(\varphi)\subseteq UPersis_{\beta}(\varphi) \tag{3.2}\] and \[Persis_{\alpha}(\varphi)\cap Eq(\varphi)\subseteq Persis_{\beta}(\varphi). \tag{3.3}\] Let \(\varphi:G\times X\to X\) be the continuous action of Example 3.2. 
Then \(\varphi\) is an equicontinuous action and \((1,0)\in Persis_{\beta}(\varphi)=UPersis_{\beta}(\varphi)\), while by Example 3.2, \((1,0)\notin UPersis_{\alpha}(\varphi)\). This implies that \(UPersis_{\beta}(\varphi)\neq UPersis_{\alpha}(\varphi)\) and that the converse of Relation (3.3) does not hold. In the following, we show that on a compact manifold \(M\) without boundary with \(dim(M)\geq 2\), the notions of shadowable point, uniformly \(\alpha\)-persistent point and \(\alpha\)-persistent point are equivalent. It is not hard to see that \(x\) is a shadowable point if and only if it is a finite shadowable point. We say that \(x\in X\) is a finite shadowable point if for every \(\epsilon>0\) there is \(\delta>0\) such that for every \(n\in\mathbb{N}\), every \(\delta\)-\(n\)-pseudo orbit \(f:G_{n}\to X\) with \(f(e)=x\) can be \(\epsilon\)-shadowed by a point \(p\in X\). Note that \(G_{n}=\{g\in G:|g|_{S}\leq n\}\). **Definition 3.3**.: We say that the space \(X\) is generalized homogeneous if for every \(\epsilon>0\) there exists \(\delta>0\) such that if \(\{(x_{1},y_{1}),\ldots,(x_{n},y_{n})\}\) is a finite set of points in \(X\times X\) satisfying: 1. for every \(i=1,\ldots,n\), \(d(x_{i},y_{i})<\delta\), 2. if \(i\neq j\) then \(x_{i}\neq x_{j}\) and \(y_{i}\neq y_{j}\), then there is a homeomorphism \(h:X\to X\) with \(d_{0}(h,id)<\epsilon\) and \(h(x_{i})=y_{i}\) for \(i=1,\ldots,n\). For example, a topological manifold \(X\) without boundary (\(dim(X)\geq 2\)), a Cartesian product of a countably infinite number of manifolds with nonempty boundary, and a Cantor set are generalized homogeneous [12]. **Proposition 3.4**.: _Let \(X\) be a generalized homogeneous compact metric space and \(\varphi:G\times X\to X\) be a continuous action. Then \(Sh(\varphi)=Persis_{\alpha}(\varphi)\)._ Proof.: It is clear that \(Sh(\varphi)\subseteq Persis_{\alpha}(\varphi)\). Let \(x\in Persis_{\alpha}(\varphi)\) and let \(\epsilon>0\) be given. We claim that there is \(\delta>0\) such that for every \(n\in\mathbb{N}\), every \(\delta\)-\(n\)-pseudo orbit \(f:G_{n}\to X\) with \(f(e)=x\) can be \(\epsilon\)-shadowed by a point \(p\in X\). Choose \(0<\epsilon_{0}<\frac{\epsilon}{2}\) corresponding to \(\frac{\epsilon}{2}>0\) by \(x\in Persis_{\alpha}(\varphi)\). Choose \(0<\delta_{0}<\frac{\epsilon_{0}}{2}\) corresponding to \(\epsilon_{0}>0\) by Definition 3.3. For \(\delta_{0}>0\) there is \(0<\delta<\frac{\delta_{0}}{2}\) such that \[d(a,b)<\delta\Rightarrow d(\varphi(g,a),\varphi(g,b))<\delta_{0},\quad\forall|g|_{S}\leq 2.\] Let \(f:G_{n}\to X\) be a \(\delta\)-\(n\)-pseudo orbit with \(f(e)=x\). By an argument similar to the proof of Lemma 2.1.2 in [12], we can construct a \(\delta_{0}\)-pseudo orbit \(F:G_{n}\to X\) with the following properties: * \(F(e)=x\), * \(\varphi(s,F(g))\neq F(sg),\forall s\in S,\forall|g|_{S}<n\), * \(d(F(g),f(g))<\delta_{0},\forall g\in G_{n}\). By Definition 3.3, for \(\{(\varphi(s,F(g)),F(sg)):g\in G_{n}\}\), there is a homeomorphism \(h:X\to X\) with \(d_{0}(h,id)<\epsilon_{0}\) such that \(h(\varphi(s,F(g)))=F(sg)\). Let \(\psi:G\times X\to X\) be the action generated by \(h\circ\varphi_{s}\) for \(s\in S\). Then \(\psi:G\times X\to X\) is \(\epsilon_{0}\)-close to \(\varphi:G\times X\to X\). This implies that there is \(p\in X\) with \(d(\psi(g,x),\varphi(g,p))<\frac{\epsilon}{2}\) for all \(g\in G\). But \(\psi(g,x)=F(g)\); hence \(d(f(g),\varphi(g,p))\leq d(f(g),F(g))+d(F(g),\varphi(g,p))<\delta_{0}+\frac{\epsilon}{2}<\epsilon\) for all \(g\in G_{n}\). 
By Theorem 3.5 in [7], a continuous action \(\varphi:G\times X\to X\) on a compact metric space \((X,d)\) has the shadowing property if and only if it is pointwise shadowable. Hence, by Proposition 3.4, we have **Corollary 3.5**.: _If \(\varphi:G\times X\to X\) is a continuous action of a finitely generated group \(G\) on a generalized homogeneous compact metric space \((X,d)\), then the following are equivalent:_ 1. \(\varphi\) _has the shadowing property,_ 2. \(\varphi\) _is pointwise shadowable,_ 3. \(\varphi\) _is pointwise_ \(\alpha\)_-persistent,_ 4. \(\varphi\) _is_ \(\alpha\)_-persistent._ Let \(X\) be a compact metric space. It is easy to see that the following properties hold. 1. A continuous action \(\varphi:G\times X\to X\) is \(\alpha\)-persistent if and only if \(UPersis_{\alpha}(\varphi)=X\). 2. A continuous action \(\varphi:G\times X\to X\) is \(\beta\)-persistent if and only if \(UPersis_{\beta}(\varphi)=X\). 3. An equicontinuous action \(\varphi:G\times X\to X\) is \(\beta\)-persistent if and only if \(Persis_{\beta}(\varphi)=X\). ### Some properties of persistent shadowable points In the following, we give some properties of persistent shadowable points. **Theorem 3.6**.: _Let \(S\) be a finite generating set of \(G\) and \(\varphi:G\times X\to X\) be a continuous action on a compact metric space \((X,d)\)._ 1. _The point_ \(x=p\) _is persistent shadowable if and only if for every_ \(\epsilon>0\) _there is_ \(\delta>0\) _such that every_ \(\delta\)_-pseudo orbit through_ \(B[p,\delta]\) _of a continuous action_ \(\psi:G\times X\to X\) _with_ \(d_{S}(\varphi,\psi)<\delta\) _can be shadowed by a_ \(\psi\)_-orbit._ 2. _A continuous action_ \(\varphi:G\times X\to X\) _has the persistent shadowing property on a compact set_ \(K\) _if and only if_ \(K\subseteq PSh(\varphi)\)_._ 3. _A continuous action_ \(\varphi:G\times X\to X\) _has the persistent shadowing property if and only if it is pointwise persistent shadowable._ 4. \(PSh(\varphi)=UPersis_{\beta}(\varphi)\cap Sh(\varphi)\)_._ 5. _A continuous action_ \(\varphi:G\times X\to X\) _has the persistent shadowing property if and only if it is_ \(\beta\)_-persistent and has the shadowing property._ Proof.: 1. Suppose by contradiction that \(x=p\) is a persistent shadowable point but that there are \(\epsilon>0\), a sequence of continuous actions \(\psi_{k}:G\times X\to X\) with \(d_{S}(\varphi,\psi_{k})\leq\frac{1}{k}\), and a sequence of \(\frac{1}{k}\)-pseudo orbits \(f^{k}:G\to X\) of \(\psi_{k}:G\times X\to X\) with \(d(f^{k}(e),x)\leq\frac{1}{k}\) such that \(f^{k}:G\to X\) cannot be \(2\epsilon\)-shadowed by any \(\psi_{k}\)-orbit, for every \(k\in\mathbb{N}\). For this \(\epsilon\), let \(\delta\) be given by the persistent shadowableness of \(x\). We may assume that \(\delta<\epsilon\). 
On the one hand, \[d(\psi_{k}(s,p),\psi_{k}(s,f^{k}(e)))\leq d(\psi_{k}(s,p),\varphi(s,p))+d(\varphi(s,p),\varphi(s,f^{k}(e)))+d(\varphi(s,f^{k}(e)),\psi_{k}(s,f^{k}(e)))\leq 2d_{S}(\varphi,\psi_{k})+d(\varphi(s,p),\varphi(s,f^{k}(e))).\] We can choose \(k\) large enough that (3.4) \[\max\{d(\psi_{k}(s,p),\psi_{k}(s,f^{k}(e))),\frac{1}{k}\}<\frac{\delta}{2},\quad\forall s\in S.\] Let us define \(F^{k}:G\to X\) by \[F^{k}(g)=\left\{\begin{array}{cc}f^{k}(g),&g\neq e,\\ p,&g=e.\end{array}\right.\] Then \[d(\psi_{k}(s,F^{k}(g)),F^{k}(sg))=\left\{\begin{array}{cc}d(\psi_{k}(s,f^{k}(g)),f^{k}(sg)),&\text{ for }g\notin\{e,s^{-1}:s\in S\},\\ d(\psi_{k}(s,f^{k}(g)),p),&\text{ for }g=s^{-1},\\ d(\psi_{k}(s,p),f^{k}(s)),&\text{ for }g=e.\end{array}\right.\] Since \(f^{k}:G\to X\) is a \(\frac{1}{k}\)-pseudo orbit of \(\psi_{k}:G\times X\to X\), we have (3.5) \[d(\psi_{k}(s,F^{k}(g)),F^{k}(sg))<\frac{1}{k}<\delta,\text{ for }g\notin\{e,s^{-1}:s\in S\}.\] Also, the inequality \[d(\psi_{k}(s,f^{k}(s^{-1})),p)\leq d(\psi_{k}(s,f^{k}(s^{-1})),f^{k}(e))+d(f^{k}(e),p)\] implies that (3.6) \[d(\psi_{k}(s,F^{k}(g)),F^{k}(sg))<\delta,\text{ for }g=s^{-1}.\] By Relation (3.4), the pseudo orbit property of \(f^{k}\) and the inequality \[d(\psi_{k}(s,p),f^{k}(s))\leq d(\psi_{k}(s,p),\psi_{k}(s,f^{k}(e)))+d(\psi_{k}(s,f^{k}(e)),f^{k}(s)),\] we have (3.7) \[d(\psi_{k}(s,F^{k}(g)),F^{k}(sg))<\delta,\text{ for }g=e.\] Hence, by Relations (3.5), (3.6) and (3.7), we get that \(F^{k}:G\to X\) is a \(\delta\)-pseudo orbit of \(\psi_{k}:G\times X\to X\). But \(d_{S}(\varphi,\psi_{k})<\delta\); hence, since \(x=p\) is a persistent shadowable point, for the \(\delta\)-pseudo orbit \(F^{k}:G\to X\) with \(F^{k}(e)=p\) of the continuous action \(\psi_{k}\) with \(d_{S}(\varphi,\psi_{k})<\delta\), there is \(z\in X\) such that \(d(F^{k}(g),\psi_{k}(g,z))<\epsilon\) for all \(g\in G\). For \(g\neq e\) one has \(d(f^{k}(g),\psi_{k}(g,z))=d(F^{k}(g),\psi_{k}(g,z))<\epsilon\), and for \(g=e\), \[d(z,f^{k}(e))\leq d(z,p)+d(p,f^{k}(e))<\epsilon+\frac{1}{k}<2\epsilon.\] Hence \(f^{k}:G\to X\) can be \(2\epsilon\)-shadowed by the \(\psi_{k}\)-orbit of \(z\in X\). That is a contradiction. 2. It is sufficient to show that if \(K\subseteq PSh(\varphi)\), then \(\varphi\) has the persistent shadowing property on \(K\). Let \(\epsilon>0\) be given. For every \(x\in K\) there is \(\delta_{x}>0\) corresponding to \(\epsilon>0\) by item (1). Since \(K\) is compact, the cover \(\{B[x,\delta_{x}]:x\in K\}\) has a finite subcover \(\{B[x_{i},\delta_{x_{i}}]:i=1,2,\ldots,n\}\). Take \(\delta=\min\{\delta_{x_{i}}:i\in\{1,2,\ldots,n\}\}\) and let \(F:G\to X\) be a \(\delta\)-pseudo orbit of a continuous action \(\psi:G\times X\to X\) with \(d_{S}(\varphi,\psi)<\delta\) and \(F(e)\in K\). Clearly \(F(e)\in B[x_{i},\delta_{x_{i}}]\) for some \(1\leq i\leq n\). This implies that \(F:G\to X\) is a \(\delta_{x_{i}}\)-pseudo orbit through \(B[x_{i},\delta_{x_{i}}]\). Then \(F:G\to X\) can be \(\epsilon\)-shadowed by some \(\psi\)-orbit. 3. Taking \(K=X\) in item (2), we have that \(\varphi\) has the persistent shadowing property if and only if \(PSh(\varphi)=X\). 4. Firstly, we show that \(PSh(\varphi)\subseteq UPersis_{\beta}(\varphi)\cap Sh(\varphi)\). Take \(x\in PSh(\varphi)\) and \(\epsilon>0\). Choose \(\delta_{0}>0\) corresponding to \(\frac{\epsilon}{2}>0\) by \(x\in PSh(\varphi)\). Choose \(\delta<\frac{\delta_{0}}{2}\) such that \[d(a,b)<\delta\Rightarrow d(\varphi(s,a),\varphi(s,b))<\frac{\delta_{0}}{2}.\] Fix a continuous action \(\psi:G\times X\to X\) with \(d_{S}(\varphi,\psi)<\delta\). 
For \(y\in B_{\delta}(x)\), define \(F:G\to X\) by \(F(g)=\varphi(g,y)\) if \(g\neq e\) and \(F(e)=x\). Then \(F:G\to X\) is a \(\delta_{0}\)-pseudo orbit of the continuous action \(\psi:G\times X\to X\) with \(F(e)=x\). By \(x\in PSh(\varphi)\), there is \(p\in X\) such that \(d(F(g),\psi(g,p))<\frac{\epsilon}{2}\) for every \(g\in G\). This implies that \(d(\varphi(g,y),\psi(g,p))<\epsilon\) for all \(g\in G\). It follows that \(x\in UPersis_{\beta}(\varphi)\). Since \(PSh(\varphi)\subseteq Sh(\varphi)\), we get \(x\in UPersis_{\beta}(\varphi)\cap Sh(\varphi)\). Therefore \(PSh(\varphi)\subseteq UPersis_{\beta}(\varphi)\cap Sh(\varphi)\). Now we show that \(UPersis_{\beta}(\varphi)\cap Sh(\varphi)\subseteq PSh(\varphi)\). Suppose that \(x\in UPersis_{\beta}(\varphi)\cap Sh(\varphi)\) and let \(\epsilon>0\) be given. We show that there is \(\delta>0\) such that for every \(\delta\)-pseudo orbit \(f:G\to X\) with \(f(e)=x\) of a continuous action \(\psi:G\times X\to X\) with \(d_{S}(\varphi,\psi)<\delta\), there is \(p\in X\) with \(d(f(g),\psi(g,p))<\epsilon\) for all \(g\in G\). Choose \(\epsilon_{0}<\frac{\epsilon}{4}\) corresponding to \(\frac{\epsilon}{2}\) by \(x\in UPersis_{\beta}(\varphi)\). There is \(\eta>0\) corresponding to \(\frac{\epsilon_{0}}{2}\) by \(x\in Sh(\varphi)\). Take \(0<\delta<\min\{\frac{\eta}{2},\epsilon_{0}\}\). If \(f:G\to X\) is a \(\delta\)-pseudo orbit of \(\psi\) with \(f(e)=x\) for a continuous action \(\psi:G\times X\to X\) with \(d_{S}(\varphi,\psi)<\delta\), then \(f:G\to X\) is an \(\eta\)-pseudo orbit of \(\varphi\) with \(f(e)=x\). By \(x\in Sh(\varphi)\) and \(f(e)=x\), there is \(y\in X\) such that \(d(f(g),\varphi(g,y))<\frac{\epsilon_{0}}{2}<\frac{\epsilon}{2}\) for all \(g\in G\); in particular, \(y\in B_{\epsilon_{0}}(x)\). Also, by \(x\in UPersis_{\beta}(\varphi)\) and \(y\in B_{\epsilon_{0}}(x)\), there is \(p\in X\) such that \(d(\varphi(g,y),\psi(g,p))<\frac{\epsilon}{2}\) for all \(g\in G\). This implies that \(d(f(g),\psi(g,p))<\epsilon\) for all \(g\in G\). 5. It is clear that the persistent shadowing property implies \(\beta\)-persistence and the shadowing property. For the converse, suppose that \(\varphi\) is \(\beta\)-persistent and has the shadowing property. Then, by item (4), \(PSh(\varphi)=UPersis_{\beta}(\varphi)\cap Sh(\varphi)=X\); hence \(\varphi\) is pointwise persistent shadowable, and by item (3), \(\varphi\) has the persistent shadowing property. ### Various shadowable points and related measures Item (1) of Theorem 3.6 implies that if a continuous action \(\varphi:G\times X\to X\) has the persistent shadowing property on a compact set \(K\subseteq X\), then for every \(\epsilon>0\) there exist a neighborhood \(U\) of \(K\) and \(\delta>0\) such that \(U\subseteq PSh_{\varphi}(\delta,\epsilon)\). Also, by item (2) of Theorem 3.6, \(K\subseteq PSh(\varphi)\) implies that the continuous action \(\varphi:G\times X\to X\) has the persistent shadowing property on the compact set \(K\subseteq X\). Hence we have the following proposition. **Proposition 3.7**.: _Let \(\varphi:G\times X\to X\) be a continuous action on a compact metric space \((X,d)\) and let \(K\subseteq PSh(\varphi)\) be a compact subset. Then for every \(\epsilon>0\) there exist a neighborhood \(U\) of \(K\) and \(\delta>0\) such that \(U\subseteq PSh_{\varphi}(\delta,\epsilon)\)._ One can check that Proposition 3.7 remains true in the case of the shadowing property, \(\alpha\)-persistence and \(\beta\)-persistence. **Proposition 3.8**.: _Let \(\mathcal{X}\in\{Sh(\varphi),UPersis_{\alpha}(\varphi),UPersis_{\beta}(\varphi)\}\) and let \(K\subseteq\mathcal{X}\) be a compact set. 
Then for every \(\epsilon>0\) there exist a neighborhood \(U\) of \(K\) and \(\delta>0\) such that \(U\subseteq\mathcal{X}_{\varphi}(\delta,\epsilon)\)._ Assume that \(supp(\mu)\subseteq PSh(\varphi)\) and that \(X\) is a compact metric space. Since \(supp(\mu)\) is a compact set, by Proposition 3.7, for every \(\epsilon>0\) there is \(\delta>0\) such that \[A\cap supp(\mu)\neq\emptyset\Rightarrow A\cap PSh_{\varphi}(\delta,\epsilon)\neq\emptyset. \tag{3.8}\] Hence we have the following relation: \[supp(\mu)\subseteq PSh(\varphi)\Rightarrow\mu\in\mathcal{M}_{PSh}(X,\varphi). \tag{3.9}\] By Proposition 2.11 and Remark 2.13, together with an argument similar to that of Relation (3.9), we have the following proposition. **Proposition 3.9**.: _Let \(\varphi:G\times X\to X\) be a continuous action of a finitely generated group on a compact metric space \((X,d)\). Then_ 1. \(\mu\in M_{PSh}(X,\varphi)\Leftrightarrow supp(\mu)\subseteq PSh(\varphi)\)_,_ 2. \(\mu\in M_{Sh}(X,\varphi)\Leftrightarrow supp(\mu)\subseteq Sh(\varphi)\)_,_ 3. \(\mu\in M_{\alpha}(X,\varphi)\Leftrightarrow supp(\mu)\subseteq UPersis_{\alpha}(\varphi)\)_,_ 4. \(\mu\in M_{\beta}(X,\varphi)\Leftrightarrow supp(\mu)\subseteq UPersis_{\beta}(\varphi)\)_._ By Proposition 2.10, the set of persistent shadowable points is measurable. With a similar proof, one can check that \(Sh(\varphi)\), \(UPersis_{\alpha}(\varphi)\) and \(UPersis_{\beta}(\varphi)\) are measurable sets. Assume that \(supp(\mu)\subseteq\overline{PSh(\varphi)}\). Then, by Lemma 2.8 in [14], if \(X\) is a compact metric space, there is a sequence \(\mu_{n}\in\mathcal{M}(X)\) with \(supp(\mu_{n})\subseteq PSh(\varphi)\) converging to \(\mu\) with respect to the \(weak^{*}\) topology. By Proposition 3.9, \(\mu_{n}\in\mathcal{M}_{PSh}(X,\varphi)\). This implies that \(\mu\in\overline{\mathcal{M}_{PSh}(X,\varphi)}\). Hence we have the following relation: \[\mu(\overline{PSh(\varphi)})=1\Rightarrow\mu\in\overline{\mathcal{M}_{PSh}(X,\varphi)}. \tag{3.10}\] Conversely, let \(\mu\in\overline{\mathcal{M}_{PSh}(X,\varphi)}\). Choose \(\mu_{n}\in\mathcal{M}_{PSh}(X,\varphi)\) such that \(\mu_{n}\to\mu\). By the inequality \[\limsup_{n\to\infty}\mu_{n}(\overline{PSh(\varphi)})\leq\mu(\overline{PSh(\varphi)})\] and \(\mu_{n}(\overline{PSh(\varphi)})=1\), we have \(\mu(\overline{PSh(\varphi)})=1\). Hence we have the following relation: \[\mu\in\overline{\mathcal{M}_{PSh}(X,\varphi)}\Rightarrow\mu(\overline{PSh(\varphi)})=1. \tag{3.11}\] By Relation (3.10) and Relation (3.11), we have the following proposition. **Proposition 3.10**.: _Let \(\varphi:G\times X\to X\) be a continuous action on a compact metric space \((X,d)\). Then \(\mu(\overline{PSh(\varphi)})=1\) if and only if \(\mu\in\overline{\mathcal{M}_{PSh}(X,\varphi)}\)._ Similarly, we have the following proposition. **Proposition 3.11**.: _Let \(\varphi:G\times X\to X\) be a continuous action on a compact metric space \((X,d)\). Then_ 1. \(\mu(\overline{PSh(\varphi)})=1\Leftrightarrow\mu\in\overline{\mathcal{M}_{PSh}(X,\varphi)}.\) 2. \(\mu(\overline{Sh(\varphi)})=1\Leftrightarrow\mu\in\overline{\mathcal{M}_{Sh}(X,\varphi)}.\) 3. \(\mu(\overline{UPersis_{\alpha}(\varphi)})=1\Leftrightarrow\mu\in\overline{\mathcal{M}_{\alpha}(X,\varphi)}.\) 4. \(\mu(\overline{UPersis_{\beta}(\varphi)})=1\Leftrightarrow\mu\in\overline{\mathcal{M}_{\beta}(X,\varphi)}.\) We claim that if \(PSh(\varphi)\) is a closed set in \(X\), then \(\overline{\mathcal{M}_{PSh}(X,\varphi)}=\mathcal{M}_{PSh}(X,\varphi)\). 
Take \(\mu_{n}\in\mathcal{M}_{PSh}(X,\varphi)\) with \(\mu_{n}\to\mu\). Then \(\limsup_{n\to\infty}\mu_{n}(PSh(\varphi))\leq\mu(PSh(\varphi))\) implies that \(\mu(PSh(\varphi))=1\). Hence \(supp(\mu)\subseteq PSh(\varphi)\). By Proposition 3.9, \(\mu\in\mathcal{M}_{PSh}(X,\varphi)\); that is, \[\overline{PSh(\varphi)}=PSh(\varphi)\Rightarrow\overline{\mathcal{M}_{PSh}(X,\varphi)}=\mathcal{M}_{PSh}(X,\varphi). \tag{3.12}\] Conversely, let \(\overline{\mathcal{M}_{PSh}(X,\varphi)}=\mathcal{M}_{PSh}(X,\varphi)\); we claim that \(\overline{PSh(\varphi)}=PSh(\varphi)\). If this is not true, then there exists \(\{x_{n}\}\subseteq PSh(\varphi)\) with \(x_{n}\to x\) such that \(x\notin PSh(\varphi)\). By \(x_{n}\to x\), we have \(m_{x_{n}}\to m_{x}\), where \(m_{t}\) is the Dirac measure supported at \(t\in X\); indeed, \(m_{t}(A)=0\) or \(1\) according to whether \(t\notin A\) or \(t\in A\). It is easy to see that \[PSh(\varphi)=\{t\in X:m_{t}\in\mathcal{M}_{PSh}(X,\varphi)\}.\] By \(\{x_{n}\}\subseteq PSh(\varphi)\), we have \(m_{x_{n}}\in\mathcal{M}_{PSh}(X,\varphi)\). Also, by \(m_{x_{n}}\to m_{x}\) and \(\overline{\mathcal{M}_{PSh}(X,\varphi)}=\mathcal{M}_{PSh}(X,\varphi)\), we have \(x\in PSh(\varphi)\), which is a contradiction. This implies the following relation: \[\overline{\mathcal{M}_{PSh}(X,\varphi)}=\mathcal{M}_{PSh}(X,\varphi)\Rightarrow\overline{PSh(\varphi)}=PSh(\varphi). \tag{3.13}\] By Relation (3.12) and Relation (3.13), we have the following proposition. **Proposition 3.12**.: _Let \(\varphi:G\times X\to X\) be a continuous action on a compact metric space \((X,d)\). Then_ \[\overline{\mathcal{M}_{PSh}(X,\varphi)}=\mathcal{M}_{PSh}(X,\varphi)\Leftrightarrow\overline{PSh(\varphi)}=PSh(\varphi).\] One can check that the result of Proposition 3.12 can be obtained for the other types of shadowing; indeed, we have the following proposition. **Proposition 3.13**.: _Let \(\varphi:G\times X\to X\) be a continuous action on a compact metric space \((X,d)\). Then_ 1. \(\overline{\mathcal{M}_{PSh}(X,\varphi)}=\mathcal{M}_{PSh}(X,\varphi)\Leftrightarrow\overline{PSh(\varphi)}=PSh(\varphi).\) 2. \(\overline{\mathcal{M}_{Sh}(X,\varphi)}=\mathcal{M}_{Sh}(X,\varphi)\Leftrightarrow\overline{Sh(\varphi)}=Sh(\varphi).\) 3. \(\overline{\mathcal{M}_{\alpha}(X,\varphi)}=\mathcal{M}_{\alpha}(X,\varphi)\Leftrightarrow\overline{UPersis_{\alpha}(\varphi)}=UPersis_{\alpha}(\varphi).\) 4. \(\overline{\mathcal{M}_{\beta}(X,\varphi)}=\mathcal{M}_{\beta}(X,\varphi)\Leftrightarrow\overline{UPersis_{\beta}(\varphi)}=UPersis_{\beta}(\varphi).\) If \(\varphi:G\times X\to X\) is an equicontinuous action, then \(UPersis_{\beta}(\varphi)=Persis_{\beta}(\varphi)\) is a closed subset of \(X\). This implies the following proposition. **Proposition 3.14**.: _Let \(\varphi:G\times X\to X\) be an equicontinuous action of a finitely generated group \(G\) on a compact metric space. Then_ 1. \(\overline{\mathcal{M}_{\beta}(X,\varphi)}=\mathcal{M}_{\beta}(X,\varphi)\)_,_ 2. \(\mu(Persis_{\beta}(\varphi))=1\) _if and only if_ \(\mu\in\mathcal{M}_{\beta}(X,\varphi).\) The proof of the following proposition is clear. **Proposition 3.15**.: _Let \(\varphi:G\times X\to X\) be a continuous action._ 1. \(PSh(\varphi)=\{x\in X:m_{x}\in\mathcal{M}_{PSh}(X,\varphi)\}.\) 2. \(Sh(\varphi)=\{x\in X:m_{x}\in\mathcal{M}_{Sh}(X,\varphi)\}.\) 3. \(UPersis_{\beta}(\varphi)=\{x\in X:m_{x}\in\mathcal{M}_{\beta}(X,\varphi)\}.\) 4. \(UPersis_{\alpha}(\varphi)=\{x\in X:m_{x}\in\mathcal{M}_{\alpha}(X,\varphi)\}.\) Assume that \(\overline{PSh(\varphi)}=X\). 
By Theorem 6.3 in [11], if \(X\) is a separable metric space, then the set of all measures whose supports are finite subsets of \(PSh(\varphi)\) is dense in \(\mathcal{M}(X)\). Also, by Lemma 2.7 in [14], every measure with finite support contained in \(PSh(\varphi)\) is a finite convex combination of Dirac measures supported at points of \(PSh(\varphi)\). By Proposition 3.15, such Dirac measures are compatible with the persistent shadowing property. Moreover, a finite convex combination of measures in \(\mathcal{M}_{PSh}(X,\varphi)\) is compatible with the persistent shadowing property. This implies that if \(\overline{PSh(\varphi)}=X\), then the set of finite convex combinations of measures in \(\mathcal{M}_{PSh}(X,\varphi)\) is dense in \(\mathcal{M}(X)\). This means that if \(\overline{PSh(\varphi)}=X\), then \(\overline{\mathcal{M}_{PSh}(X,\varphi)}=\mathcal{M}(X)\). Conversely, assume that \(\overline{\mathcal{M}_{PSh}(X,\varphi)}=\mathcal{M}(X)\). We claim that \(\overline{PSh(\varphi)}=X\). If this is not true, then there is \(x\in X\) with \(x\notin\overline{PSh(\varphi)}\). Choose an open set \(U\) with \(x\in U\subseteq X-\overline{PSh(\varphi)}\). Since \(m_{x}\in\overline{\mathcal{M}_{PSh}(X,\varphi)}\), there is a sequence \(\mu_{n}\in\mathcal{M}_{PSh}(X,\varphi)\) such that \(\mu_{n}\to m_{x}\). By Proposition 3.9, we have \(supp(\mu_{n})\subseteq PSh(\varphi)\). By \(U\subseteq X-\overline{PSh(\varphi)}\), we have \(\mu_{n}(U)=0\) for all \(n\in\mathbb{N}\). Therefore \(0=\liminf_{n\rightarrow\infty}\mu_{n}(U)\geq m_{x}(U)=1\), which is a contradiction. With similar techniques we can prove the following proposition. **Proposition 3.16**.: _Let \(\varphi:G\times X\to X\) be a continuous action of a finitely generated group \(G\) on a compact metric space \((X,d)\). Then the following conditions hold._ 1. \(\overline{\mathcal{M}_{PSh}(X,\varphi)}=\mathcal{M}(X)\Leftrightarrow\overline{PSh(\varphi)}=X\)_._ 2. \(\overline{\mathcal{M}_{Sh}(X,\varphi)}=\mathcal{M}(X)\Leftrightarrow\overline{Sh(\varphi)}=X\)_._ 3. \(\overline{\mathcal{M}_{\beta}(X,\varphi)}=\mathcal{M}(X)\Leftrightarrow\overline{UPersis_{\beta}(\varphi)}=X\)_._ 4. \(\overline{\mathcal{M}_{\alpha}(X,\varphi)}=\mathcal{M}(X)\Leftrightarrow\overline{UPersis_{\alpha}(\varphi)}=X\)_._ ## Acknowledgments The author wishes to thank Professor Morales for his idea about Theorem 3.6 given in [9].
2305.00277
Interacting tachyonic scalar field II
The existence of dark energy is essential to explain the cosmic accelerated expansion. We consider a homogeneous interacting tachyonic scalar field as a possible candidate for the dynamical dark energy. The interaction between the tachyonic field and matter can be gauged to be linear in the energy density of matter (or the tachyonic field) and Hubble's parameter. We estimate the rate of expansion, the age of the universe, the evolution of energy density of matter and tachyonic field, and the coupling strength of the interaction for a spatially flat ($k=0$) universe. We observed that the upper limit of coupling strength is 1, and it is the same whether the interaction term depends on the energy density of matter or the energy density of tachyonic scalar field.
V K Ojha, Adithya A Rao, S D Pathak
2023-04-29T15:26:56Z
http://arxiv.org/abs/2305.00277v1
# Interacting tachyonic scalar field II ###### Abstract The existence of dark energy is essential to explain the cosmic accelerated expansion. We consider a homogeneous interacting tachyonic scalar field as a possible candidate for the dynamical dark energy. The interaction between the tachyonic field and matter can be gauged to be linear in the energy density of matter (or the tachyonic field) and Hubble's parameter. We estimate the rate of expansion, the age of the universe, the evolution of energy density of matter and tachyonic field, and the coupling strength of the interaction for a spatially flat (\(k=0\)) universe. We observed that the upper limit of coupling strength is 1, and it is the same whether the interaction term depends on the energy density of matter or the energy density of tachyonic scalar field. Introduction Observations of Type Ia supernovae [1; 2] confirm the universe's accelerated expansion. This observation demands the existence of a medium with negative pressure, usually called dark energy. Different scalar fields are observed to satisfy this negative-pressure property and have been used extensively to study dark energy [3; 4; 5; 6]. Some popular choices of scalar fields are phantom [7; 8; 9; 10; 11; 12], quintessence [13; 14; 15; 16], and tachyon [17; 18; 19; 20; 21; 22]. All these scalar fields exhibit negative pressure, making each a possible candidate for dark energy. In fact, it has been shown in [23] that these three scalar fields are indistinguishable under the slow-roll approximation. In this article, we consider the tachyonic scalar field as a candidate for dark energy, making it the source of the accelerated expansion of the universe. The tachyonic scalar field was first defined in string theory [24; 25; 26] and later used as a possible candidate for dark energy [4; 20]. The equation of state for a tachyonic scalar field, \(p=-\rho\), satisfies the negative-pressure condition necessary for a component of the universe to be termed dark energy. Without interactions, the tachyonic scalar field exhibits behavior similar to the cosmological constant. But a static model of the universe, where the tachyonic field does not interact with other components of the universe, suffers from two major problems: the coincidence problem and the cosmological constant problem. An interacting model of dark energy resolves these problems [27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48]. In such a model, the dark energy field is considered dynamic, with energy exchange between the matter and the universe's dark energy content. A major challenge in considering the interacting model is fixing the functional form of the interaction. Different forms of interactions have been proposed by several authors based on dimensional and phenomenological arguments over the years [27; 28; 29; 30; 31; 32; 33; 34; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50]. We consider an interacting model of dark energy, in which the dark energy is modeled by a tachyonic scalar field that interacts with the universe's matter content. Specifically, we consider that the interaction between the tachyonic field and matter depends linearly on the energy density of matter. Using this interacting model, we investigate the behavior of the energy density, the scale factor, the age of the universe, and the possible values of the coupling strength. 
Previously, we had considered the interaction to be linearly dependent on the tachyonic field and found that the coupling strength of the interaction cannot exceed the value 1 [22]. The article is organized as follows. We start with some theoretical background in Section II, and then briefly discuss the interacting tachyonic scalar field model in Section III. Sections III.1 and III.2 consist of the derivation and analysis of the evolution of the energy density of matter and of the field. The evolution of the scale factor and the age of the universe are discussed in Sections IV and V, respectively. A comparison of the coupling strengths when the interaction depends linearly on the matter density or on the tachyonic field is presented in Section VI. Finally, we conclude in Section VII. ## II Theoretical background The FLRW metric in natural units is given by \[g_{\mu\nu}=\text{diag}\left(-1,\ \frac{a^{2}}{1-\frac{\kappa r^{2}}{R_{0}^{2}}},\ a^{2}r^{2},\ a^{2}r^{2}\sin^{2}\theta\right) \tag{1}\] where \(a\) is the time-dependent scale factor, and \(\kappa\) is the global curvature of the universe. This metric follows from the cosmological principle, which implies that for our universe the stress-energy tensor takes on a relatively simple form \[T^{\mu\nu}=\text{diag}(\rho(t),\ p(t),\ p(t),\ p(t)) \tag{2}\] For a universe with the FLRW metric and the stress-energy tensor given above, the Friedmann equation follows from the Einstein field equations and is given as \[\left(\frac{\dot{a}(t)}{a(t)}\right)^{2}=\frac{8\pi G}{3c^{2}}\rho(t)-\frac{\kappa c^{2}}{R_{0}^{2}\,a(t)^{2}} \tag{3}\] For a flat universe, \(\kappa=0\), and the Friedmann equation takes on a simpler form \[\left(\frac{\dot{a}(t)}{a(t)}\right)^{2}=\frac{8\pi G}{3c^{2}}\rho(t) \tag{4}\] The principle of conservation of energy, when applied to the energy component of the universe, gives the fluid equation \[\dot{\rho}(t)+3\frac{\dot{a}(t)}{a(t)}\left(\rho(t)+p(t)\right)=0 \tag{5}\] The last two equations, along with the equation of state, \(p(t)=\omega\rho(t)\), specify the dynamics of the universe. ## III Interacting tachyonic scalar field We consider the universe with two dominant components: matter and dark energy, with dark energy modeled by a spatially homogeneous tachyonic scalar field (TSF). The Lagrangian density for the TSF is [24; 25] \[\mathcal{L}=-V(\phi)\sqrt{1-\partial^{\mu}\phi\,\partial_{\mu}\phi}. \tag{6}\] For such a TSF, the stress-energy tensor is \[T^{\mu\nu}=\frac{\partial\mathcal{L}}{\partial(\partial_{\mu}\phi)}\partial^{\nu}\phi-g^{\mu\nu}\mathcal{L}, \tag{7}\] and the energy and pressure densities follow directly from the stress-energy tensor as \[\rho=\frac{V(\phi)}{\sqrt{1-\partial^{\mu}\phi\,\partial_{\mu}\phi}}, \tag{8}\] and \[p=-V(\phi)\sqrt{1-\partial^{\mu}\phi\,\partial_{\mu}\phi}. \tag{9}\] Since the TSF is spatially homogeneous, the spatial derivatives vanish, while the time derivative survives. Thus \[p=-(1-\dot{\phi}^{2})\rho \tag{10}\] giving \(\omega_{\phi}=-(1-\dot{\phi}^{2})\). We consider the interaction of the TSF with matter via the transfer of energy. The two components exchange energy and hence violate individual energy conservation. But overall, the total energy of the universe is conserved.
Thus, for such a model, the continuity equations for the energy density of the TSF (\(\rho_{\phi}\)) and of matter (\(\rho_{m}\)) get modified as \[\dot{\rho_{\phi}}+3\frac{\dot{a}}{a}(1+\omega_{\phi})\rho_{\phi}=-Q, \tag{11}\] \[\dot{\rho_{m}}+3\frac{\dot{a}}{a}(1+\omega_{m})\rho_{m}=Q, \tag{12}\] where \(\omega_{\phi}=p_{\phi}/\rho_{\phi}\) and \(\omega_{m}=p_{m}/\rho_{m}\). Based on phenomenological arguments, the functional form of the interaction term can be guessed to be linear in the energy density of the matter or of the field [51; 52]. Previously, we investigated the case where \(Q\) depends linearly on the energy density of the TSF (\(\rho_{\phi}\)) [22]. In this work, we investigate the case where \(Q\) depends linearly on the energy density of matter (\(\rho_{m}\)). In particular, we choose \(Q\propto\rho_{m}\), with the proportionality factor being \(3\beta\frac{\dot{a}}{a}\), where \(\beta\) is a dimensionless coupling constant specifying the strength of the interaction. Thus the interaction term takes on the form \[Q=3\beta\frac{\dot{a}}{a}\rho_{m} \tag{13}\] ### Evolution of Energy Densities With this interaction term, the continuity equations read \[\dot{\rho_{\phi}}+3\frac{\dot{a}}{a}(1+\omega_{\phi})\rho_{\phi}=-3\beta\frac{\dot{a}}{a}\rho_{m}, \tag{14}\] and \[\dot{\rho_{m}}+3\frac{\dot{a}}{a}(1+\omega_{m})\rho_{m}=3\beta\frac{\dot{a}}{a}\rho_{m}. \tag{15}\] These can be solved to get the equations governing the evolution of \(\rho_{\phi}\) and \(\rho_{m}\) with the scale factor \(a\). \(\bullet\)**Solving for \(\rho_{m}\)** \[\dot{\rho_{m}}+3\frac{\dot{a}}{a}(1+\omega_{m})\rho_{m}=3\beta\frac{\dot{a}}{a}\rho_{m},\] \[\implies\dot{\rho_{m}}+3\frac{\dot{a}}{a}(1+\omega_{m}-\beta)\rho_{m}=0 \tag{16}\] This differential equation has a simple solution, \[\frac{\rho_{m}}{\rho_{m}^{0}}=\left(\frac{a}{a^{0}}\right)^{-3(1+\omega_{m}-\beta)}=\left(\frac{a}{a^{0}}\right)^{-\gamma} \tag{17}\] where \(\rho_{m}^{0}\) is the energy density of matter at the present time, and \(a^{0}\) is the scale factor at the present time. \(\bullet\)**Solving for \(\rho_{\phi}\)** \[\dot{\rho_{\phi}}+3\frac{\dot{a}}{a}(1+\omega_{\phi})\rho_{\phi}=-3\beta\frac{\dot{a}}{a}\rho_{m}^{0}\left(\frac{a}{a^{0}}\right)^{-\gamma} \tag{18}\] Changing variables, \(a=xa^{0},\ \rho_{\phi}=R\rho_{\phi}^{0}\implies\dot{a}=\dot{x}a^{0},\ \dot{\rho_{\phi}}=\dot{R}\rho_{\phi}^{0}\), \[\dot{R}+3\frac{\dot{x}}{x}(1+\omega_{\phi})R=-3\beta\dot{x}\frac{\rho_{m}^{0}}{\rho_{\phi}^{0}}x^{-\gamma-1} \tag{19}\] The equation can be rewritten as \[\frac{dR}{dx}+\frac{3(1+\omega_{\phi})}{x}R=-3\beta\frac{\rho_{m}^{0}}{\rho_{\phi}^{0}}x^{-\gamma-1} \tag{20}\] Solving, we get \[R=-x^{-3(1+\omega_{\phi})}\left(3\beta\frac{\rho_{m}^{0}}{\rho_{\phi}^{0}}\frac{x^{3(1+\omega_{\phi})-\gamma}}{3(1+\omega_{\phi})-\gamma}-1\right)-3\beta\frac{\rho_{m}^{0}}{\rho_{\phi}^{0}}\frac{-x^{-3(1+\omega_{\phi})}}{3(1+\omega_{\phi})-\gamma} \tag{21}\] For the TSF to mimic the cosmological constant, \(\omega_{\phi}=-1\) (implying that the TSF is also constant over time, i.e. \(\dot{\phi}=0\)), and the above equation becomes \[\frac{\rho_{\phi}}{\rho_{\phi}^{0}}=3\beta\frac{\rho_{m}^{0}}{\rho_{\phi}^{0}}\frac{1}{\gamma}\left(\left(\frac{a}{a^{0}}\right)^{-\gamma}-1\right)+1 \tag{22}\] The variation of the energy densities of matter and dark energy with the scale factor for different values of \(\beta\) is plotted in Fig.(1). It is worth noting that \(\beta\) determines the coupling strength between matter and the scalar field \(\phi\).
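As a quick numerical check, the closed-form solutions (17) and (22) can be evaluated directly. The following is a minimal sketch (our own illustration, not the authors' code), assuming NumPy and Matplotlib, with the present-day density parameters \(\Omega_{m}^{0}=0.3\) and \(\Omega_{\phi}^{0}=0.7\) used later in Section III.2 and \(a^{0}=1\):

```python
import numpy as np
import matplotlib.pyplot as plt

omega_m = 0.0               # dust; equation-of-state parameter of matter
Om0, Oph0 = 0.3, 0.7        # present-day densities in units of the critical density

def gamma(beta):
    """Exponent gamma = 3(1 + omega_m - beta) from Eq. (17)."""
    return 3.0 * (1.0 + omega_m - beta)

def rho_m(a, beta):
    """Eq. (17), expressed in units of rho_c^0 (i.e., scaled by Om0); a^0 = 1."""
    return Om0 * a ** (-gamma(beta))

def rho_phi(a, beta):
    """Eq. (22) for omega_phi = -1, in units of rho_c^0."""
    g = gamma(beta)
    return Oph0 * (3.0 * beta * (Om0 / Oph0) / g * (a ** (-g) - 1.0) + 1.0)

a = np.linspace(0.2, 3.0, 300)
for beta in (-0.3, 0.0, 0.3):
    plt.plot(a, rho_m(a, beta), label=f"matter, beta={beta}")
    plt.plot(a, rho_phi(a, beta), "--", label=f"TSF, beta={beta}")
plt.xlabel("a / a^0")
plt.ylabel("energy density / rho_c^0")
plt.legend()
plt.show()
```

Running this reproduces the qualitative behavior of Fig.(1): for \(\beta=0\) the TSF curve is flat, while nonzero \(\beta\) makes it track the matter density.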
For \(\beta>0\), dark energy decreases as matter density increases. On the other hand, for \(\beta<0\), dark energy increases as matter density increases. When \(\beta=0\), the interaction vanishes, and the two sectors evolve independently, i.e., the cosmological constant scenario. ### Evolution of \(\Omega\) with \(\ln(a)\) The equations of cosmology can be written in terms of another variable, the density parameter, defined as the ratio of the energy density to the critical density \(\rho_{c}\), \[\Omega=\frac{\rho}{\rho_{c}}=\frac{8\pi Ga^{2}}{3\dot{a}^{2}}\rho=\frac{8\pi G}{3H^{2}}\rho,\quad\text{with }H=\frac{\dot{a}}{a}. \tag{23}\] The numerical value of \(\Omega\) indicates the nature of the universe locally. For \(\Omega=1\), the universe is spatially flat; for \(\Omega<1\), it is negatively curved; and for \(\Omega>1\), it is positively curved. Since, for any function \(f\), \[\frac{df}{d\ln(a)}=\frac{df}{da}\Big/\frac{d\ln(a)}{da}=a\left(\frac{df}{dt}\Big/\frac{da}{dt}\right),\] we have, for \(H=\frac{\dot{a}}{a}\), \[\frac{df}{d\mathrm{ln}(a)}=\frac{1}{H}\frac{df}{dt}. \tag{24}\] Defining \(\Omega_{m}=\frac{\rho_{m}}{\rho_{c}}\), \(\Omega_{\phi}=\frac{\rho_{\phi}}{\rho_{c}}\), we can write \[\frac{d\Omega_{m}}{d\mathrm{ln}(a)}=\frac{8\pi G}{3}\frac{1}{H}\left(\frac{\dot{\rho_{m}}}{H^{2}}-2\frac{\rho_{m}}{H^{3}}\dot{H}\right) \tag{25}\] and \[\frac{d\Omega_{\phi}}{d\mathrm{ln}(a)}=\frac{8\pi G}{3}\frac{1}{H}\left(\frac{\dot{\rho_{\phi}}}{H^{2}}-2\frac{\rho_{\phi}}{H^{3}}\dot{H}\right) \tag{26}\]
Figure 1: Plot of the energy densities of matter and TSF against the scale factor for different values of coupling constant \(\beta\).
From the continuity equations (Eqs. (14) and (15)), we get \[\dot{\rho}_{\phi}=-3\beta H\rho_{m}\ \ \&\ \ \dot{\rho}_{m}=-3H\rho_{m}+3\beta H\rho_{m} \tag{27}\] and from the Friedmann equation, we get \[2H\dot{H}=\frac{8\pi G}{3}(\dot{\rho}_{m}+\dot{\rho}_{\phi})=\frac{8\pi G}{3}(-3H\rho_{m})\] \[\implies\dot{H}=\frac{-8\pi G\rho_{m}}{2} \tag{28}\] Using Eqs. (27) and (28), and the fact that for a spatially flat universe such as ours \(\Omega_{m}+\Omega_{\phi}=1\), the differential Eqs. (25) and (26) become \[\frac{d\Omega_{m}}{d\mbox{ln}(a)}=3\beta\Omega_{m}+3\Omega_{m}^{2}-3\Omega_{m}, \tag{29}\] and \[\frac{d\Omega_{\phi}}{d\mbox{ln}(a)}=3(1-\Omega_{\phi})\Omega_{\phi}-3\beta(1-\Omega_{\phi}). \tag{30}\] These equations are solved, using the conditions \(\Omega_{m}|_{a=1}=0.3\) and \(\Omega_{\phi}|_{a=1}=0.7\), for different values of \(\beta\), and the evolution of \(\Omega_{m}\) and \(\Omega_{\phi}\) with \(\ln(a)\) is shown in Fig.(2). For \(\beta=0\) we recover the result for our universe: the matter density is dominant initially, and in the future the dark energy density will be dominant. We can also see that we are at the epoch of equality of \(\Omega_{m}\) and \(\Omega_{\phi}\). For negative \(\beta\), \(\Omega_{\phi}\) goes to negative values, as expected from the behavior of \(\rho_{\phi}\). For universes with positive \(\beta\), the initial difference between \(\Omega_{m}\) and \(\Omega_{\phi}\) is reduced as \(\beta\) is increased, and for \(\beta\gtrapprox 0.5\), dark energy becomes the dominant constituent at all times. ## IV Evolution of scale factor Another important factor to consider is how the scale factor varies with time in the universe. This not only quantifies the expansion of the universe but also allows for the conversion between functions of \(a\) and \(t\).
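As a brief numerical aside before turning to \(a(t)\): the coupled system (29) and (30) of the previous subsection, used to produce Fig.(2), can be integrated directly in \(\ln(a)\). The following is a minimal sketch (our own illustration, assuming SciPy; the initial data are \(\Omega_{m}|_{a=1}=0.3\) and \(\Omega_{\phi}|_{a=1}=0.7\) as stated above):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(ln_a, y, beta):
    """Right-hand sides of Eqs. (29) and (30); y = (Omega_m, Omega_phi)."""
    om, oph = y
    d_om = 3.0 * beta * om + 3.0 * om ** 2 - 3.0 * om            # Eq. (29)
    d_oph = 3.0 * (1.0 - oph) * oph - 3.0 * beta * (1.0 - oph)   # Eq. (30)
    return [d_om, d_oph]

y0 = [0.3, 0.7]  # Omega_m and Omega_phi at a = 1, i.e. ln(a) = 0
for beta in (-0.3, 0.0, 0.3):
    # integrate backwards and forwards from the present epoch ln(a) = 0
    past = solve_ivp(rhs, (0.0, -5.0), y0, args=(beta,), max_step=0.01)
    future = solve_ivp(rhs, (0.0, 5.0), y0, args=(beta,), max_step=0.01)
    print(f"beta={beta:+.1f}:  Omega_m(ln a=-5)={past.y[0, -1]:.3f},"
          f"  Omega_m(ln a=+5)={future.y[0, -1]:.3f}")
```

Because the universe is spatially flat, \(\Omega_{\phi}=1-\Omega_{m}\) holds identically, so either equation alone determines the full evolution; integrating both serves as a consistency check.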
The Friedmann equation gives a relation between \(a\) and \(\rho\), and this can be used to obtain the form of the scale factor as a function of time. The Friedmann equation can be written as \[\frac{\dot{a}}{a}=\sqrt{\frac{8\pi G}{3c^{2}}}\ \sqrt{\rho_{\phi}+\rho_{m}}. \tag{31}\] Using the equations for the evolution of the energy densities (Eqs. (17) and (22)), and \(a=xa^{0}\), we get \[\frac{\dot{x}}{x}=\sqrt{\frac{8\pi G}{3c^{2}}}\sqrt{\rho_{m}^{0}x^{-\gamma}+\rho_{\phi}^{0}\left(3\beta\frac{\rho_{m}^{0}}{\rho_{\phi}^{0}}\frac{1}{\gamma}\left(x^{-\gamma}-1\right)+1\right)},\] which on further simplification becomes \[\frac{\dot{x}}{x}=\sqrt{\frac{8\pi G\rho_{m}^{0}}{3c^{2}}}\sqrt{x^{-\gamma}+3\beta\frac{1}{\gamma}\left(x^{-\gamma}-1\right)+\frac{\rho_{\phi}^{0}}{\rho_{m}^{0}}}. \tag{32}\]
Figure 2: Plot of the evolution of density parameter \(\Omega\) for matter and TSF as a function of \(\ln(a)\) for different values of coupling constant \(\beta\).
From this, we can write \[t=\sqrt{\frac{3c^{2}}{8\pi G\rho_{m}^{0}}}\int\frac{dx}{x\sqrt{\frac{1}{x^{\gamma}}\left(1+\frac{3\beta}{\gamma}\right)+\left(\frac{\rho_{\phi}^{0}}{\rho_{m}^{0}}-\frac{3\beta}{\gamma}\right)}}, \tag{33}\] and using \(\omega_{m}=0\), \(\rho_{c}^{0}=8.7\times 10^{-27}\) kg/m\({}^{3}\), \(\rho_{m}^{0}=\Omega_{m}^{0}\rho_{c}^{0}\), \(\rho_{\phi}^{0}=\Omega_{\phi}^{0}\rho_{c}^{0}\), integrating over \(x\) gives \[H^{0}t=\frac{6.28701\sinh^{-1}\left(\frac{1}{3}\sqrt{21-30\beta}\sqrt{x^{3-3\beta}}\right)}{\sqrt{21-30\beta}\sqrt{3-3\beta}},\] implying that the scale factor \(x\) evolves as \[x=\frac{a}{a^{0}}=\left(3\frac{\sinh\left(0.159\sqrt{21-30\beta}\sqrt{3-3\beta}\ tH^{0}\right)}{\sqrt{21-30\beta}}\right)^{\frac{2}{3-3\beta}}. \tag{34}\] The expansion of the universe, i.e., the evolution of the scale factor with time for different values of the coupling constant \(\beta\), is plotted in FIG. 3. For our universe, with \(\beta=0\), the plot shows that the universe's expansion is decelerating in the initial stage and then accelerating at the later stage. For negative \(\beta\), the initial expansion rate of the universe is magnified, making the deceleration curve more prominent. In contrast, for positive \(\beta\), the initial expansion rate is very small, and the universe's deceleration does not occur. ## V Age of the Universe The Age of the Universe (AOU) is the difference between the times when the scale factor is 1 (i.e., the present day) and when the scale factor was 0 (the Big Bang). This time difference, which gives the AOU, can be directly calculated as the definite integral of Eq. (33) from \(x=0\) to \(x=1\), \[t_{AOU}=(H^{0})^{-1}\,H^{0}\sqrt{\frac{3c^{2}}{8\pi G\rho_{m}^{0}}}\int_{0}^{1}\frac{dx}{x\sqrt{\frac{1}{x^{\gamma}}\left(1+\frac{3\beta}{\gamma}\right)+\left(\frac{\rho_{\phi}^{0}}{\rho_{m}^{0}}-\frac{3\beta}{\gamma}\right)}}.\]
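Before quoting the closed-form result, this integral can be evaluated by direct numerical quadrature. The following is a minimal sketch (our own illustration, assuming SciPy); it uses \(\omega_{m}=0\) and \(\Omega_{m}^{0}=0.3\), together with the identity \(H^{0}\sqrt{3c^{2}/8\pi G\rho_{m}^{0}}=1/\sqrt{\Omega_{m}^{0}}\), which follows from Eq. (4):

```python
import numpy as np
from scipy.integrate import quad

Om0, Oph0, omega_m = 0.3, 0.7, 0.0

def age(beta):
    """H^0 * t_AOU by numerical quadrature of the integral above (requires beta < 1)."""
    g = 3.0 * (1.0 + omega_m - beta)        # gamma
    A = 1.0 + 3.0 * beta / g                # coefficient of x^(-gamma)
    B = Oph0 / Om0 - 3.0 * beta / g         # constant term
    integrand = lambda x: 1.0 / (x * np.sqrt(A * x ** (-g) + B))
    val, _ = quad(integrand, 0.0, 1.0)      # the singularity at x = 0 is integrable
    return val / np.sqrt(Om0)               # prefactor H^0 sqrt(3c^2/8 pi G rho_m^0)

for b in (-1.0, 0.0, 0.5, 0.9):
    print(f"beta={b:+.1f}:  H0*t_AOU = {age(b):.4f}")
# beta = 0 reproduces the familiar value H0*t_AOU of roughly 0.96 for Omega_m = 0.3
```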
Figure 3: Plot of the evolution of scale factor \(a\) of the universe for different values of coupling constant \(\beta\).
Using the values of Hubble's constant \(H^{0}\) and the matter and dark energy densities from cosmological observations, and integrating the equation over \(x\), we obtain the age of the universe as a function of the coupling constant \(\beta\), \[t_{AOU}(\beta)=(H^{0})^{-1}\frac{1.21\,_{2}F_{1}(0.5,0.5;1.5;3.33\beta-2.33)}{\sqrt{1-\beta}}, \tag{35}\] for \(\beta<1\). This equation is plotted in Fig.(4) to visualize the variation of the age of the universe with the coupling constant \(\beta\). For \(\beta\geq 1\) the definite integral does not converge, and hence the age of the universe becomes indeterminate. Therefore such a model cannot represent any real universe, and hence we obtain an upper limit on the coupling constant \(\beta\) for a real universe, i.e. \(\beta<1\). There does not exist a lower limit on the coupling constant, as there are no breaks or discontinuities in the age of the universe as a function of \(\beta\). The \(t_{AOU}\) is a smooth and continuous function of \(\beta\), with a lower limit of \(-\infty\) and an upper limit of 1 on the parameter \(\beta\). The age of the universe converges to 0 as the coupling constant goes to \(-\infty\); i.e., for a universe with coupling constant \(\beta\rightarrow-\infty\), the evolution of the universe from \(a=0\) to \(a=1\) takes place instantaneously. As \(\beta\to 1\), the AOU tends to \(\infty\), which implies that the dynamics of such a universe are very slow on cosmic scales. This behavior is also visible in the evolution of the scale factor, where the scale factor tends to 0 even at large \(t\) as \(\beta\to 1\). The numerical values of the Age of the Universe in terms of the inverse Hubble constant for different values of \(\beta\) have also been tabulated in Table 1. ## VI Comparison of coupling strength: \(3\beta\rho_{m}\) vs \(\alpha\rho_{\phi}\) The interaction term \(Q\) can have two possible forms: \(Q=\alpha\frac{\dot{a}}{a}\rho_{\phi}\) or \(Q=3\beta\frac{\dot{a}}{a}\rho_{m}\). The complete analysis for the former form has been done in [22], with the significant conclusion that even though the coupling strength has no lower bound, the upper bound on the coupling constant \(\alpha\) must be 1. In this article, we did the entire analysis by taking the latter form of the interaction. We obtain a similar conclusion for the coupling strength \(\beta\): it has no lower bound, but the upper bound must be 1. As the possible ranges of both \(\alpha\) and \(\beta\) are the same, \((-\infty,1)\), the question arises: are they different, or are they just the same thing with different notations? Further investigations along this line could be interesting and significant if the two coupling constants can be replaced with just one. The age of the universe as a function of the coupling constant obtained from both forms of interaction is compared in Fig.(5). Both coupling constants give rise to universes whose age increases with the value of the coupling constant and goes to infinity as it approaches 1. Both are continuously increasing functions, and for the interaction-less universe, both converge to the same value: our universe's age. ## VII Conclusion We have investigated the dynamics of a universe with an interacting tachyonic scalar field as a possible source of dark energy.
The interaction between the matter and dark energy components is modeled as an energy exchange, with the energy exchange depending linearly on the energy density of matter. We also presented the evolution of the energy densities with respect to the scale factor, as well as the evolution of the density parameter (\(\Omega\)) with the natural logarithm of the scale factor, and derived the Age of the Universe as a function of the coupling constant \(\beta\). Our analysis reveals that the case \(\beta=0\) (no interaction between matter and TSF) corresponds to the dynamics of our universe, as expected. We have also found that for \(\beta\geq 1\), the age of the universe does not converge and hence becomes indeterminate. This puts an upper bound on the possible values of the coupling constant \(\beta\) for real universes. This result is similar to the case when the interaction term depends linearly on the energy density of the TSF [22]. A comparison with the work [22] shows that the constraints on the coupling constant are the same in both cases, irrespective of whether the interaction term depends on the energy density of matter or on the energy density of dark energy (the tachyonic field).
Figure 5: The age of the universe as a function of the coupling constant in the two different models considered. Here \(\zeta\) is used as a common coupling constant in place of \(\alpha\) and \(\beta\).
Since the bounds on both \(\alpha\) and \(\beta\) are the same, i.e. \((-\infty,1)\), it would be interesting to further investigate whether the two can be unified and replaced by a single coupling constant.
2304.08502
CyFormer: Accurate State-of-Health Prediction of Lithium-Ion Batteries via Cyclic Attention
Predicting the State-of-Health (SoH) of lithium-ion batteries is a fundamental task of battery management systems on electric vehicles. It aims at estimating future SoH based on historical aging data. Most existing deep learning methods rely on filter-based feature extractors (e.g., CNN or Kalman filters) and recurrent time sequence models. Though efficient, they generally ignore cyclic features and the domain gap between training and testing batteries. To address this problem, we present CyFormer, a transformer-based cyclic time sequence model for SoH prediction. Instead of the conventional CNN-RNN structure, we adopt an encoder-decoder architecture. In the encoder, row-wise and column-wise attention blocks effectively capture intra-cycle and inter-cycle connections and extract cyclic features. In the decoder, the SoH queries cross-attend to these features to form the final predictions. We further utilize a transfer learning strategy to narrow the domain gap between the training and testing set. To be specific, we use fine-tuning to shift the model to a target working condition. Finally, we made our model more efficient by pruning. The experiment shows that our method attains an MAE of 0.75\% with only 10\% data for fine-tuning on a testing battery, surpassing prior methods by a large margin. Effective and robust, our method provides a potential solution for all cyclic time sequence prediction tasks.
Zhiqiang Nie, Jiankun Zhao, Qicheng Li, Yong Qin
2023-04-17T02:16:40Z
http://arxiv.org/abs/2304.08502v1
# CyFormer: Accurate State-of-Health Prediction of Lithium-Ion Batteries via Cyclic Attention ###### Abstract Predicting the State-of-Health (SoH) of lithium-ion batteries is a fundamental task of battery management systems on electric vehicles. It aims at estimating future SoH based on historical aging data. Most existing deep learning methods rely on filter-based feature extractors (e.g., CNN or Kalman filters) and recurrent time sequence models. Though efficient, they generally ignore cyclic features and the domain gap between training and testing batteries. To address this problem, we present CyFormer, a transformer-based cyclic time sequence model for SoH prediction. Instead of the conventional CNN-RNN structure, we adopt an encoder-decoder architecture. In the encoder, row-wise and column-wise attention blocks effectively capture intra-cycle and inter-cycle connections and extract cyclic features. In the decoder, the SoH queries cross-attend to these features to form the final predictions. We further utilize a transfer learning strategy to narrow the domain gap between the training and testing set. To be specific, we use fine-tuning to shift the model to a target working condition. Finally, we made our model more efficient by pruning. The experiment shows that our method attains an MAE of 0.75% with only 10% data for fine-tuning on a testing battery, surpassing prior methods by a large margin. Effective and robust, our method provides a potential solution for all cyclic time sequence prediction tasks. SoH, time sequence, transformer, cyclic attention, transfer learning ## I Introduction Research on battery management systems (BMS) has received increasing attention with the rapid commercialization of electric vehicles (EVs) [1]. One of the core tasks of a BMS is to predict the State-of-Health (SoH) of Li-ion batteries. SoH is defined as the ratio of the current releasable battery charge to its rated capacity. It gradually decreases after charging and discharging for a number of cycles, indicating a shrinkage in capacity and maximum power. However, this vital indicator cannot be measured directly due to the complex dynamic behavior and time-varying conditions of Li-ion batteries [2]. The task of predicting SoH is to estimate the SoH of future charging-discharging cycles given aging data (current, voltage, temperature, etc.) within every historical cycle (see Fig. 1). Accuracy guarantees safety, and safety protects life. A BMS needs accurate SoH predictions to optimize energy consumption, prevent over-charging and over-discharging, and extend battery life. In contrast, inaccurate estimations of SoH may lead EVs to spontaneous combustion or anchoring. Challenges in accurate SoH prediction can be summarized in the following three points: First, SoH is a highly complicated non-linear function of current, voltage, temperature and other parameters of historical cycles. Theoretically, a deep neural network is a perfect choice to fit this function and learn the aging trend from historical data. But practically, it is hard for existing time sequence models to both learn long-term patterns among different cycles and extract battery features within each cycle. Second, different from many application scenarios of artificial intelligence, the aging data of Li-ion batteries is scarce. This usually causes under-fitting on lightweight models and over-fitting on larger models. Third, the charging and discharging behavior of one battery may significantly differ from another, even if they are of the same type.
A domain gap exists between batteries working in different conditions. Previous works on SoH estimation generally utilize models based on RNN or LSTM [3, 4]. Though more efficient, these models suffer severely from forgetting long-term patterns [5]. More importantly, they quickly forget patterns learnt from the training set when being fine-tuned on a test battery, and thus perform poorly when data for fine-tuning is scarce. Recently, several transformer-based methods have been proposed [6, 7, 8]. In order to convert the initial data into a transformer-style input, most of them use a CNN-based feature extractor to extract intra-cycle features. However, the input data within each cycle does not have a hierarchical waveform structure. Therefore, it is hard for convolution filters to capture intra-cycle features effectively. Additionally, this pipeline compresses all sample points within a cycle into one single dimension, which might induce feature loss.
Fig. 1: Illustration of the SoH prediction task and the row-wise and column-wise attention mechanism. The model takes in a series of historical physical quantities, and outputs SoH values of future cycles. In the experiment, the physical quantities we use are measured current (Cm), measured voltage (Vm), load current (Cl), load voltage (Vl) and temperature (T). Row-wise attention captures intra-cycle connections, while column-wise attention captures inter-cycle connections.
In this work, we present CyFormer, a novel generalized cyclic time sequence model, to address the aforementioned problems. This CNN-free model follows the typical encoder-decoder pipeline of the transformer. The encoder first extracts cyclic features from historical data and transmits them to the decoder. Then the SoH queries cross-attend to these features in the decoder and form the final predictions. At the core of our model lie the cyclic attention (i.e., row-wise and column-wise attention) blocks. The row-wise attention block aims at extracting intra-cycle connections, whereas the column-wise attention block aims at extracting inter-cycle connections (Fig. 1). Compared with a CNN, these two blocks enable the encoder to capture connections between sample points in different cycles, and preserve intra-cycle features at the same time. Extensive experiments demonstrate both the effectiveness and the robustness of CyFormer on SoH prediction. Compared with previous works, we adopt a more challenging criterion for testing to fully demonstrate the transfer learning performance of our model. More specifically, we fine-tune the model with only 10% of the SoH data at the beginning, predict the SoH of the remaining 90% hidden cycles, and calculate the prediction error against the ground truth of these hidden cycles. With this testing method, our model achieved an MAE of 0.75% and an MAPE of 0.90%, surpassing baseline methods by a large margin. Industries may acquire the first few cycles of SoH data in the quality control process of every new battery. Our result means that industries can use this SoH data to train a model which could give highly accurate predictions of SoH on a BMS. Our contributions can be summarized as follows: * We proposed CyFormer, a generalized cyclic time sequence model with a row-wise and column-wise attention mechanism. With CyFormer, we gained highly accurate SoH predictions and achieved a new SOTA in SoH estimation. * We adopt a transfer learning style testing criterion, which is closer to the real application scenario. Experiments showed that our model maintains accuracy under this criterion. 
* We design a lightweight version of CyFormer for BMS by pruning unnecessary modules. It is significantly more efficient than the initial version, with only a tiny loss in accuracy.

## II Related Work

_SoH Prediction._ Modern SoH prediction methods can be generally divided into three categories: direct measurement methods, adaptive algorithms and data-driven methods [6]. Direct measurement methods [9, 10] analyze the aging behavior through numerous laboratory tests. These off-line methods require specialized sensors in laboratories. Adaptive algorithms use traditional mathematical models and numerical filters. Lim et al. [11] proposed the Fading Kalman filter (FKF), which avoids the large estimation errors of the conventional Kalman filter. This method incurs large computational costs [5], and is not efficient enough to be deployed on BMS. Data-driven methods can be further divided into machine learning methods and deep learning methods. Machine learning methods typically utilize support vector machines (SVM) [12, 13, 14] or Gaussian process regression (GPR) [15, 16, 17]. Most existing SOTAs adopt deep learning methods, such as CNN-LSTM [3], ViT [7] or DynaFormer [8]. Fan et al. [4] proposed a hybrid neural network, which extracts local information with CNN and captures time dependencies with GRU. To better capture global representations, Gu et al. [6] proposed a CNN-Transformer framework that replaces recurrent modules with a transformer encoder and decoder. To reduce oscillations, Shen et al. [5] introduced an Immersion and Invariance (I&I) adaptive observer into the transformer-based pipeline.

_Time Series Analysis._ In time series forecasting, one of the most prominent models is ARIMA [18]. Flunkert et al. [19] first integrated auto-regression with RNN and proposed DeepAR, a probabilistic forecasting network. Bai et al. [20] discovered that a simple convolutional architecture outperforms canonical recurrent networks (e.g., RNN, LSTM) on a wide spectrum of tasks and datasets. Li et al. [21] proposed the LogSparse Transformer with only \(O(L(\log L)^{2})\) memory cost. They also utilized convolutional self-attention so that local context can be better incorporated into the attention mechanism.

_Transfer Learning._ In many deep learning tasks, a domain gap exists between training and testing datasets. Therefore, many generalized transfer learning methods have been proposed. Kumar et al. [22] suggested LP-FT, a two-step strategy that first trains a linear probing module and then fine-tunes the entire model. Similar strategies have been adopted in SoH prediction. To boost performance on batteries in different working conditions, Fu et al. [7] conducted fine-tuning with the SoH data of the first few cycles to shift the model to the testing battery.

## III Task Statement

As illustrated in Fig. 1, consider a battery working at the end of the \(t\)-th charge-discharge cycle. Given the size of the prediction window \(n_{out}\), the task of our model is to predict the SoH values of cycles \(t+1\) to \(t+n_{out}\), based on the aging data of cycles \(1\) to \(t\). To be specific, assume that \(l_{sample}\) is the number of sample points within each cycle, and \(c\) is the number of physical quantities measured at each sample point (i.e., the input channel size).
The input of this task can be organized as the following \(t\times l_{sample}\) matrix,

\[input=\left[\begin{array}{cccc}X_{11},&X_{12},&\cdots,&X_{1l_{sample}}\\ X_{21},&X_{22},&\cdots,&X_{2l_{sample}}\\ \vdots&\vdots&&\vdots\\ X_{t1},&X_{t2},&\cdots,&X_{tl_{sample}}\end{array}\right] \tag{1}\]

where each \(X_{ij}\) is a vector of size \(c\). It is composed of physical quantities like current, voltage and temperature. The output of this task is a sequence of predicted SoH values,

\[output=\{SoH_{i}\,|\,i=t+1,\cdots,t+n_{out}\} \tag{2}\]

where \(SoH_{i}\) is a number within \([0,1]\). There are two special cases worth considering. The first one is named just-in-time (JIT) prediction, where we set \(n_{out}\) to 1 and only predict the SoH of the current cycle. The second one is named Remaining Useful Life (RUL) prediction, where we set \(n_{out}\) to a pre-defined maximum and predict the cycle number at which SoH decreases to a certain threshold. This threshold represents the scrapping point of batteries, and the number of remaining cycles indicates the remaining life of the battery. In this work, we focus on JIT prediction.

## IV Method

An overview of our model is depicted in Fig. 2. As described in Section III, our model directly processes two-dimensional cyclic data, rather than the one-dimensional input sequence of the classical transformer architecture. The encoder first encodes the two-dimensional input into a one-dimensional feature sequence, and then feeds it into the decoder. In the decoder, the randomly-initialized SoH queries cross-attend to these features to form prediction values for \(n_{out}\) future cycles. In the following sections, we discuss the detailed structure of the encoder and the decoder respectively.

### _Encoder_

The encoder is composed of four main parts, namely the input embedding module, the 2D positional encoding module, a stack of encoder layers and the output head. Both the input embedding and the output head are linear layers. The input embedding module extends the channel size of each input token to \(d_{encoder}\). It applies the following affine transformation to each vector in the input matrix (1):

\[X_{ij}^{\prime}=W^{\top}X_{ij}+b \tag{3}\]

where \(W\) is a \(c\times d_{encoder}\) weight matrix and \(b\) is a bias vector of size \(d_{encoder}\). As the output of the embedding module, \(X_{ij}^{\prime}\) is then fed into the first encoder layer. The output head fully connects all sample points in each input cycle to form \(n_{in}\) feature vectors with a channel size of \(d_{decoder}\), as in (4):

\[F_{i}=\sum_{m=1}^{l_{sample}}W_{m}^{\top}X_{im}+b \tag{4}\]

where \(F_{i}\) is the \(i\)-th vector in the cyclic feature sequence, each \(W_{m}\) is a \(d_{encoder}\times d_{decoder}\) weight matrix, and \(b\) is a bias vector of size \(d_{decoder}\). The 2D positional encoding of each input token is defined as

\[PE2D_{(cycle,sample)}=PE1D_{x}+PE1D_{y} \tag{5}\]

where \(PE1D_{x}\) and \(PE1D_{y}\) are both 1D sinusoidal positional encodings calculated as in [23]:

\[PE1D_{(pos,2i)}=\sin(pos/10000^{2i/d_{model}}) \tag{6}\]

\[PE1D_{(pos,2i+1)}=\cos(pos/10000^{2i/d_{model}}) \tag{7}\]

Each encoder layer consists of a row-wise attention block, a column-wise attention block and a 3-layer MLP. Each of these blocks is followed by a residual connection and layer normalization. Both row-wise and column-wise attention blocks derive from self-attention blocks (Alg. 1). They are designed to capture intra-cycle and inter-cycle connections respectively.
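To make (5)-(7) concrete, the following is a minimal PyTorch sketch of the 2D sinusoidal positional encoding. It assumes an even \(d_{model}\); the function names and shapes are ours, not part of the original implementation.

```python
import math
import torch

def sinusoidal_1d(length: int, d_model: int) -> torch.Tensor:
    """1D sinusoidal encoding of shape (length, d_model), as in (6)-(7).
    Assumes d_model is even."""
    pos = torch.arange(length, dtype=torch.float32).unsqueeze(1)   # (length, 1)
    i = torch.arange(0, d_model, 2, dtype=torch.float32)           # 2i = 0, 2, ...
    div = torch.exp(-i * math.log(10000.0) / d_model)              # 1 / 10000^(2i/d)
    pe = torch.zeros(length, d_model)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

def positional_encoding_2d(n_cycles: int, l_sample: int, d_model: int) -> torch.Tensor:
    """2D encoding as in (5): the sum of a cycle-wise and a sample-wise 1D encoding."""
    pe_cycle = sinusoidal_1d(n_cycles, d_model).unsqueeze(1)   # (n_cycles, 1, d)
    pe_sample = sinusoidal_1d(l_sample, d_model).unsqueeze(0)  # (1, l_sample, d)
    return pe_cycle + pe_sample                                # (n_cycles, l_sample, d)
```

With the hyperparameters of Table I, `positional_encoding_2d(16, 32, 16)` would produce the encoding added to the embedded input.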
_Row-wise Attention._ The upper part of Fig. 3 illustrates the structure of a row-wise attention block. Row-wise attention aims at capturing connections between data sampled at different times within a single cycle. To this end, we first split the two-dimensional input into single rows. Each row contains all sample points within a particular cycle. Then each row is regarded as an individual input sequence and goes through a self-attention block with shared weights. To be specific, each row generates its own queries, keys and values. Following the typical multi-head attention mechanism, we first calculate dot-product affinities between queries and keys to form attention weights. Then we multiply the values with their corresponding weights, and feed the result into an output linear layer. After going through the self-attention blocks, the outputs of all rows are concatenated to form a final output with the same shape as the original input.

_Column-wise Attention._ The lower part of Fig. 3 illustrates the structure of a column-wise attention block. Column-wise attention aims at capturing connections between data sampled at the same time but in different cycles. Similar to row-wise attention, we achieve this goal by slicing the input data into individual columns, feeding them into self-attention blocks with shared weights, and concatenating them into an output matrix. It should be noticed that the inputs and outputs of row-wise and column-wise blocks all have the same shape, which renders these blocks compatible with the sequential transformer architecture. Additionally, our method preserves intra-cycle features since the second dimension is never squeezed until the end of the encoder.

Fig. 3: An illustration of the cyclic attention mechanism. The upper part and the lower part show the structure of row-wise and column-wise attention blocks, respectively. Each square in rounded rectangles represents a vector of size \(d_{encoder}\).

### _Decoder_

The decoder is composed of three main parts, namely the positional encoding module, a stack of decoder layers, and a linear output head. A sequence of randomly-initialized SoH queries goes through these parts to form the final SoH predictions. The query sequence consists of \(n_{out}\) vectors with a channel size of \(d_{decoder}\):

\[query=\left[\begin{array}{cccc}Q_{1},&Q_{2},&\cdots,&Q_{n_{out}}\end{array}\right] \tag{8}\]

The output head is a linear layer. It fully connects all channels of each query vector and outputs \(n_{out}\) SoH values:

\[SoH_{t+i}=W^{\top}Q_{i}+b \tag{9}\]

In the decoder, we use the 1D sinusoidal positional encoding defined in (6) and (7). Following the typical transformer decoder architecture, each decoder layer consists of a self-attention block, a cross-attention block and a 3-layer MLP. Similar to the encoder, each of these blocks is followed by a residual connection and layer normalization. The cross-attention block informs the queries of the historical features \(\{F_{i}\}\), while the self-attention block keeps the historical trend among all the predictions.

Fig. 2: An overview of our model. Components of the encoder are colored green, whereas components of the decoder are colored orange. Each cube in the input or square in the output represents a number, while each square in rounded rectangles represents a vector of size \(d_{decoder}\).
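Before turning to experiments, the cyclic attention mechanism can be summarized in a short PyTorch sketch. This is our reading of the description above, not the authors' code: rows (or columns) are folded into the batch dimension so that a single weight-shared self-attention block processes every slice; layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class CyclicAttention(nn.Module):
    """Row-wise or column-wise self-attention over a (batch, cycles, samples, d) tensor.
    A sketch of the mechanism in Sec. IV-A; d_model must be divisible by n_heads."""
    def __init__(self, d_model: int, n_heads: int, axis: str):
        super().__init__()
        assert axis in ("row", "col")
        self.axis = axis
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, s, d = x.shape                     # t cycles, s sample points
        if self.axis == "row":                   # attend within each cycle
            seq = x.reshape(b * t, s, d)
        else:                                    # attend across cycles, per sample slot
            seq = x.transpose(1, 2).reshape(b * s, t, d)
        out, _ = self.attn(seq, seq, seq)        # shared weights for every slice
        seq = self.norm(seq + out)               # residual connection + layer norm
        if self.axis == "row":
            return seq.reshape(b, t, s, d)
        return seq.reshape(b, s, t, d).transpose(1, 2)
```

An encoder layer as described in Section IV-A would then chain a row-wise block, a column-wise block and a 3-layer MLP; note that both branches preserve the input shape, which is what makes the blocks stackable.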
## V Experiment

In this section, we first introduce the dataset and the data pre-processing procedure (Sec. V-A). Then we introduce the evaluation metrics (Sec. V-B) and implementation details (Sec. V-C). We conduct a comparative experiment with CNN-LSTM and CNN-Transformer (Sec. V-D) to demonstrate the accuracy of CyFormer. To better validate each component of our model, we provide detailed ablation studies (Sec. V-E). Finally, we present a lightweight version of CyFormer, striking a balance between accuracy and computational costs (Sec. V-F).

### _Dataset_

We carry out the experiment with the Battery Data Set provided by the NASA Ames Prognostics Center of Excellence [24, 25]. We removed batteries that are extremely inconsistent with the common aging patterns of batteries. Fig. 4 shows the aging curves of the nineteen batteries we used. These battery cells worked at different ambient temperatures (4 °C, 24 °C, 43 °C). In each cycle, they were first charged through a constant current - constant voltage (CC-CV) procedure with the upper voltage at 4.2 V until the current decayed to 20 mA. Then they were discharged with constant or pulse current waveforms until each of the cells reached its cut-off voltage. We use the load voltage (Vl), load current (Cl), measured voltage (Vm), measured current (Cm) and temperature (T) curves of the discharge process (see Fig. 1). The SoH of a battery is defined as the ratio of the maximum charge to its rated capacity:

\[\mathrm{SoH}=\frac{Q_{\mathrm{max}}}{C_{r}}\times 100\% \tag{10}\]

where \(Q_{\mathrm{max}}\) is the maximum charge available from the current battery and \(C_{r}\) is the rated capacity. To align sample points in different cycles, we linearly interpolated and re-sampled the intra-cycle data. Among all nineteen batteries, one is selected as the target battery for testing, and the others are used as the source dataset for training. The aging data of the target battery is further divided into the fine-tuning segment (10%) and the hidden segment (90%). We adopt a two-stage transfer learning strategy to narrow the domain gap between the source and the target batteries. We first train the model on the source dataset to fit general working conditions, and then fine-tune the model on the fine-tuning segment of the target battery to shift to a new working condition. Finally, the model is evaluated on the hidden segment of the target battery.

### _Evaluation Metrics_

To evaluate the performance of CyFormer, three different evaluation metrics are employed: mean absolute percentage error (MAPE), mean absolute error (MAE) and root mean square error (RMSE). MAPE represents the relative percentage error between the prediction and the actual value. MAE is the average of the absolute differences between the estimated and the actual SoH values; it measures the average magnitude of the errors of the proposed method. RMSE indicates the deviation between the estimated value and the actual value, and thus represents the quality of the estimation. MAE, MAPE and RMSE are defined as:

\[\mathrm{MAE}=\frac{1}{n}\sum_{i=1}^{n}\left|\widehat{y}_{i}-y_{i}\right| \tag{11}\]

\[\mathrm{MAPE}=\frac{1}{n}\sum_{i=1}^{n}\left|\frac{\widehat{y}_{i}-y_{i}}{y_{i}}\right| \tag{12}\]

\[\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\widehat{y}_{i}-y_{i}\right)^{2}} \tag{13}\]

where \(y_{1},y_{2},\cdots,y_{n}\) are the actual values, \(\widehat{y}_{1},\widehat{y}_{2},\cdots,\widehat{y}_{n}\) are the predicted values, and \(n\) is the number of testing samples.
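The three metrics (11)-(13) translate directly into code; a small NumPy helper, assuming prediction and ground-truth arrays of equal length, could read:

```python
import numpy as np

def soh_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """MAE, MAPE and RMSE as defined in (11)-(13)."""
    err = y_pred - y_true
    return {
        "MAE": np.mean(np.abs(err)),
        "MAPE": np.mean(np.abs(err / y_true)),   # relative error w.r.t. true SoH
        "RMSE": np.sqrt(np.mean(err ** 2)),
    }
```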
### _Implementation Details_

We choose MAE as the loss function. The network is trained with the Adam optimizer with a learning rate of 0.0001. We set \(\beta_{1}\) to 0.9 and \(\beta_{2}\) to 0.999. Grid search is used to obtain the optimal model parameters. The selected parameters are shown in Table I.

\begin{table} \begin{tabular}{l l} \hline \hline Hyperparameter & Value \\ \hline \(l_{sample}\) & 32 \\ \(d_{encoder}\) & 16 \\ \(d_{decoder}\) & 16 \\ \(n_{in}\) & 16 \\ Gamma & 0.1 \\ Batch size & 32 \\ Epochs & 1500 \\ Encoder layers & 4 \\ Decoder layers & 4 \\ Attention heads \(h\) & 8 \\ Learning rate (training) & 0.0001 \\ Learning rate (fine-tuning) & 0.0002 \\ \hline \hline \end{tabular} \end{table} TABLE I: Hyperparameter settings

Fig. 4: The SoH decay curves of the 19 batteries in the Battery Data Set. The same linestyle indicates the same group of batteries; within a group, batteries are marked with different colors.

The input window size \(n_{in}\) is defined as the number of cycles contained in the input sequence. The SoH prediction performances with different input window sizes \(n_{in}\) are shown in Table II. When \(n_{in}\) increases, the accuracy improves, but the computational cost rises as well. Additionally, ground-truth data for fine-tuning would be scarce if \(n_{in}\) were too large. Therefore, we set \(n_{in}=16\) as it strikes a balance between accuracy and efficiency. We adopt a transfer-learning-style testing criterion. The fine-tuning segment of the target battery only contains a small amount (10%) of data at the beginning of the ageing phase. The feature extraction modules are mainly trained on the source dataset. In the fine-tuning process, mainly the parameters in the decoder are modified, and the model quickly shifts to the target domain with the 10% fine-tuning segment.

### _Comparison with Other Methods_

As mentioned in Section V-A, cell #B0007 is randomly selected as the target battery. The other batteries are selected as the source dataset. The first 10%, 30%, or 70% of #B0007 is adopted for offline fine-tuning, while the remaining part (90%, 70%, or 30%) is used for online evaluation. As typical baselines, CNN-LSTM and CNN-Transformer were employed to estimate battery SoH with the same testing criteria. As shown in Tab. III, the MAEs, MAPEs and RMSEs of CyFormer are within 1%, while for CNN-LSTM and CNN-Transformer, the highest errors are about 4% and 3%, respectively. CyFormer achieves the lowest loss among all three methods under all three circumstances, demonstrating its effectiveness and robustness. It should be noticed that CyFormer achieves accurate predictions using only 10% or less fine-tuning data, while CNN-Transformer needs at least 70% to reach a comparable result. This also demonstrates the transfer learning efficiency of CyFormer. Fig. 5 (a)-(c) show the SoH prediction results on the target battery (#B0007). The SoH prediction accuracy of CyFormer surpasses the other methods by a large margin, especially when the fine-tuning proportion is 10% (Fig. 5(a)). When the fine-tuning proportion expands to 30%, the CNN-LSTM and CNN-Transformer models closely follow the battery ageing trend only in the first twenty cycles. A significant improvement in the accuracy of CNN-Transformer does not appear until the fine-tuning proportion reaches 70%. In contrast, the CNN-LSTM model still jitters significantly even when the fine-tuning proportion reaches 70%.
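For reference, the two-stage protocol of Secs. V-A and V-C can be sketched as follows. Learning rates and Adam betas follow Table I; the data loaders, and the use of the same epoch budget in both stages, are our assumptions for illustration only.

```python
import torch

def two_stage_training(model, src_loader, tgt_finetune_loader, device="cpu"):
    """Stage 1: pre-train on source batteries; stage 2: fine-tune on the first
    10% of the target battery. Loaders are hypothetical and assumed to yield
    (input, soh) pairs."""
    loss_fn = torch.nn.L1Loss()   # MAE, as chosen in Sec. V-C
    model.to(device)
    for lr, loader in [(1e-4, src_loader),           # pre-training
                       (2e-4, tgt_finetune_loader)]: # fine-tuning
        opt = torch.optim.Adam(model.parameters(), lr=lr, betas=(0.9, 0.999))
        for _ in range(1500):                        # epoch budget from Table I
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(model(x.to(device)), y.to(device))
                loss.backward()
                opt.step()
    return model
```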
\begin{table} \begin{tabular}{c c c c c c} \hline \hline \(n_{in}\) & FLOPs & Params & MAE & MAPE & RMSE \\ \hline 8 & 0.09 & 0.32 & 2.68\% & 2.95\% & 3.13\% \\ 12 & 0.13 & 0.33 & 1.56\% & 1.66\% & 1.90\% \\ 16 & 0.17 & 0.35 & 0.75\% & 0.89\% & 0.95\% \\ 32 & 0.34 & 0.41 & 0.73\% & 0.90\% & 0.95\% \\ \hline \hline \end{tabular} \end{table} TABLE II: SoH prediction results for different \(n_{in}\)

Fig. 5: The results of SoH estimation based on the 10% (a), 30% (b), and 70% (c) transfer learning datasets.

\begin{table} \begin{tabular}{c l c c c} \hline \hline Battery & Method & MAE & MAPE & RMSE \\ \hline B0007(10\%) & CNN-LSTM & 2.69\% & 2.98\% & 3.30\% \\ & CNN-Transformer & 2.01\% & 2.41\% & 2.33\% \\ & CyFormer & **0.75\%** & **0.89\%** & **0.95\%** \\ B0007(30\%) & CNN-LSTM & 1.74\% & 2.19\% & 2.17\% \\ & CNN-Transformer & 1.12\% & 1.56\% & 1.67\% \\ & CyFormer & **0.66\%** & **0.87\%** & **0.96\%** \\ B0007(70\%) & CNN-LSTM & 1.70\% & 2.11\% & 1.89\% \\ & CNN-Transformer & 0.66\% & 0.86\% & 0.82\% \\ & CyFormer & **0.38\%** & **0.52\%** & **0.49\%** \\ \hline \hline \end{tabular} \end{table} TABLE III: Comparison of estimation errors between CyFormer and other methods

### _Ablation Study_

In order to validate the effect of the row-wise and column-wise attention blocks, we conducted ablation studies under the following conditions:

* w/o Row-wise: CyFormer without the row-wise structure
* w/o Col-wise: CyFormer without the column-wise structure
* w/o Row-wise + Col-wise: CyFormer without the row-wise and column-wise structures

According to Fig. 6, cutting off the row-wise and column-wise attention blocks decreases accuracy, verifying the effectiveness of the two structures. As shown in Tab. IV, the column-wise and row-wise attention blocks together reduce MAE by 3.06%, MAPE by 3.82%, and RMSE by 3.54%.

### _Pruning_

In this section, we design a lightweight version of CyFormer by pruning. To be specific, we reduce the number of encoders (i.e., the depth), and the number of sampling points. We only change one component at a time to observe how it affects performance and efficiency.

_Depth._ The depth of the model is defined as the number of CyFormer encoders. It is highly related to the effectiveness of feature extraction. The initial depth is set to 4. To make the model more efficient, we carried out four groups of experiments that set the depth to 1-4, respectively. The RMSEs, MAEs, MAPEs and FLOPs of SoH prediction are shown in Fig. 7. It can be seen from Fig. 7 that the feature extraction ability becomes stronger as the model depth increases. Each of the first three layers improves performance significantly, while additional layers bring only minor improvements. In consideration of prediction accuracy and inference speed, we set the depth of the network to 3 in the lightweight model.

_Sampling Rate._ Fig. 8 shows the loss and FLOPs when using different numbers of sampling points. We linearly interpolated and re-sampled each cycle to form \(l_{sample}\) sample points. Initially, we set \(l_{sample}\) to 32. As \(l_{sample}\) increases, the prediction accuracy improves, but the FLOPs rise as well. Thus, we choose the elbow point 24 as \(l_{sample}\), striking a balance between performance and computational costs. The pruning results of each module are shown in Tab. V. The joint effect of pruning both the depth and the sampling rate reduces the FLOPs and the number of parameters by 41% and 26%, respectively. At the same time, the accuracy of the model is hardly affected.
\begin{table} \begin{tabular}{l c c c c c} \hline \hline Methods & FLOPs & Params & MAE & MAPE & RMSE \\ \hline Initial & 0.17 & 0.35 & 0.75\% & 0.89\% & 0.95\% \\ Depth & 0.13 & 0.28 & 0.77\% & 0.93\% & 1.10\% \\ Sampling point & 0.13 & 0.33 & 0.92\% & 1.19\% & 1.22\% \\ Overall & 0.10 & 0.26 & 0.95\% & 1.17\% & 1.26\% \\ \hline \hline \end{tabular} \end{table} TABLE V: Pruning experiment

Fig. 8: The results of the pruning study on the number of sampling points.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Col-wise. & Row-wise. & MAE & MAPE & RMSE \\ \hline & & 3.81\% & 4.72\% & 4.53\% \\ & \(\surd\) & 1.87\% & 2.18\% & 3.42\% \\ \(\surd\) & & 2.93\% & 3.58\% & 4.02\% \\ \(\surd\) & \(\surd\) & **0.75\%** & **0.89\%** & **0.95\%** \\ \hline \hline \end{tabular} \end{table} TABLE IV: Ablation study of column-wise and row-wise attention blocks

Fig. 6: Ablation study.

Fig. 7: The results of the pruning study on model depth.

## VI Conclusion

In this work, we present CyFormer, a generalized cyclic time sequence model with a row-wise and column-wise attention mechanism. Via cyclic attention, our model effectively captures inter-cycle and intra-cycle connections. To narrow the domain gap among different working conditions, we adopt a two-stage transfer learning strategy. We also design a lightweight version of CyFormer for embedded systems by pruning. Experiments show that our model produces accurate SoH predictions using only 10% of the data for fine-tuning, demonstrating the effectiveness and robustness of our model. CyFormer provides a potential solution for all cyclic time sequence prediction tasks, and we expect to see more applications of our method.

## Acknowledgment

This work was supported by the General Terminal IC Interdisciplinary Science Center of Nankai University.
2307.11503
General regularization in covariate shift adaptation
Sample reweighting is one of the most widely used methods for correcting the error of least squares learning algorithms in reproducing kernel Hilbert spaces (RKHS) that is caused by future data distributions that are different from the training data distribution. In practical situations, the sample weights are determined by values of the estimated Radon-Nikodým derivative of the future data distribution w.r.t. the training data distribution. In this work, we review known error bounds for reweighted kernel regression in RKHS and obtain, by combination, novel results. We show, under weak smoothness conditions, that the number of samples needed to achieve the same order of accuracy as in standard supervised learning without differences in data distributions is smaller than proven by state-of-the-art analyses.
Duc Hoan Nguyen, Sergei V. Pereverzyev, Werner Zellinger
2023-07-21T11:19:00Z
http://arxiv.org/abs/2307.11503v1
# General regularization in covariate shift adaptation

###### Abstract

Sample reweighting is one of the most widely used methods for correcting the error of least squares learning algorithms in reproducing kernel Hilbert spaces (RKHS) that is caused by future data distributions that are different from the training data distribution. In practical situations, the sample weights are determined by values of the estimated Radon-Nikodym derivative of the future data distribution w.r.t. the training data distribution. In this work, we review known error bounds for reweighted kernel regression in RKHS and obtain, by combination, novel results. We show, under weak smoothness conditions, that the number of samples needed to achieve the same order of accuracy as in standard supervised learning without differences in data distributions is smaller than proven by state-of-the-art analyses.

## 1 Introduction

Over the past few decades, data-based algorithms have resulted in significant advances in an extensive variety of different fields. Nevertheless, a noticeable disparity has emerged between the theoretical assumptions that form the basis of algorithmic development and the practical conditions in which these algorithms are deployed. In learning theory, one studies the relationship between the explanatory (input) variable \(x\in\mathbf{X}\subset\mathbb{R}^{d_{1}}\) and the response (output) variable \(y\in\mathbf{Y}\subset\mathbb{R}^{d_{2}}\) under the assumption that they are governed by an unknown probability measure \(p(x,y)\) on \(\mathbf{X}\times\mathbf{Y}\). This means that an input \(x\in\mathbf{X}\) does not determine uniquely an output \(y\in\mathbf{Y}\), but rather a conditional probability \(\rho(y|x)\) of \(y\) given \(x\), which is assumed to be unknown. Then one uses a training data sample \(\mathbf{z}=\{(x_{i},y_{i}),x_{i}\in\mathbf{X},y_{i}\in\mathbf{Y},i=1,2,\ldots,n\},|\mathbf{z}|=n\), drawn independently and identically (i.i.d.) from the measure \(p(x,y)\) to infer a function \(f:\mathbf{X}\rightarrow\mathbf{Y}\) which predicts the label \(y^{\prime}\in\mathbf{Y}\) of any future input \(x^{\prime}\in\mathbf{X}\)[3, 39]. This problem is an inverse problem, see e.g. [5, 35, 1]. The error of the prediction model \(f\) is quantified by the expected risk

\[\mathcal{R}_{p}(f)=\int_{\mathbf{X}\times\mathbf{Y}}\ell(f(x),y)dp(x,y) \tag{1}\]

for some _loss_ function \(\ell:\mathbf{Y}\times\mathbf{Y}\rightarrow[0,\infty)\), e.g., the squared loss \(\ell(f(x),y)=(f(x)-y)^{2}\). The choice of the expected risk \(\mathcal{R}_{p}(f)\) realizes a core assumption of learning theory: _any new example \((x^{\prime},y^{\prime})\) is drawn from the same probability measure \(p\)._ This assumption is, however, often violated in practice. For example, in medical image analysis, pre-trained learning models are applied to data from patients following distributions that are different from the training one [21]. Chemical measurement systems need to be re-calibrated for new distributions after changes in system setups [26]. To overcome this problem, unsupervised domain adaptation arises as a relevant approach when the underlying relationship between input samples \(x\in\mathbf{X}\) and their corresponding outputs \(y\in\mathbf{Y}\) is not exclusively governed by a single probability measure \(p(x,y)\), but a second probability measure, denoted \(q(x,y)\), characterizes the joint distribution over \(\mathbf{X}\times\mathbf{Y}\).
In contrast to the classical problem of learning from examples, in domain adaptation one uses the training data sample \(\mathbf{z}\) drawn independently and identically (i.i.d.) from one of the measures, say \(p(x,y)\), to reduce the expected risk of the prediction model \(f\) over the other measure \(q(x,y)\). In the context of domain adaptation, \(p(x,y)\) and \(q(x,y)\) are called, respectively, the _source_ probability and the _target_ probability.

### Covariate shift assumption

In general, the domain adaptation problem with different source and target probabilities is unsolvable, as \(p(x,y)\) and \(q(x,y)\) could be arbitrarily far apart. Therefore, in the present study, we follow [34], [11] and rely on the so-called covariate shift assumption, where only the probabilities of inputs in the source (S) and the target (T) domains (marginal probabilities) \(\rho_{S}(x)\) and \(\rho_{T}(x)\) differ, while the conditional probability \(\rho(y|x)\) is the same under both the source and the target probabilities. This means that the joint probabilities \(p(x,y)\), \(q(x,y)\) can be factorized as the following products

\[p(x,y)=\rho(y|x)\rho_{S}(x),\ q(x,y)=\rho(y|x)\rho_{T}(x). \tag{2}\]

In this work, we restrict ourselves to learning with the least squares loss, where the expected risk of the prediction of \(y\) from \(x\) by means of a function \(f:\mathbf{X}\rightarrow\mathbf{Y}\) is defined in the target domain as

\[\mathcal{R}_{q}(f):=\int\limits_{\mathbf{X}\times\mathbf{Y}}(f(x)-y)^{2}dq(x,y).\]

It is easy to check that \(\mathcal{R}_{q}(f)\) attains its minimum at the so-called regression function

\[f(x)=f_{q}(x)=\int\limits_{\mathbf{Y}}yd\rho(y|x) \tag{3}\]

see, e.g., [3, Proposition 1]. However, in the unsupervised domain adaptation setting, neither \(\mathcal{R}_{q}(f)\) nor \(f_{q}(x)\) can be computed, because the information about the underlying probability \(q(x,y)\) is only provided in the form of a set \(\mathbf{x}^{\prime}=(x_{1}^{\prime},x_{2}^{\prime},\ldots,x_{m}^{\prime}),|\mathbf{x}^{\prime}|=m\), of unlabeled examples \(x_{i}^{\prime}\) of inputs drawn i.i.d. from the target marginal probability measure \(\rho_{T}(x)\).

#### Goal

The goal of unsupervised domain adaptation is to use this information, together with the training data \(\mathbf{z}\), to approximate the ideal minimizer \(f_{q}\) by an empirical estimator \(f_{\mathbf{z}}\) in the sense of the excess risk

\[\mathcal{R}_{q}(f_{\mathbf{z}})-\mathcal{R}_{q}(f_{q})=\left\|f_{\mathbf{z}}-f_{q}\right\|_{L_{2,\rho_{T}}}^{2};\]

here \(L_{2,\rho_{T}}\) is the space of square integrable functions \(f:\mathbf{X}\rightarrow\mathbb{R}\) with respect to the marginal probability measure \(\rho_{T}\). It can be observed that under the covariate shift assumption (2), the expected risks \(\mathcal{R}_{p}(f)\) in supervised learning and \(\mathcal{R}_{q}(f)\) in domain adaptation attain their minimum at the same regression function \(f^{*}(x)=f_{p}(x)=f_{q}(x)\) given by (3). Therefore, in unsupervised domain adaptation under covariate shift the aim of approximation is the same as in standard supervised learning, and it is logical to adjust the methods developed there to the domain adaptation scenario. One natural step in this direction is the combination of sample reweighting with regularized least squares regression; a procedure generally referred to as importance weighted regularized least squares (IWRLS) [34, 37, 12].
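The identity behind sample reweighting is \(\mathcal{R}_{q}(f)=\int\beta(x)(f(x)-y)^{2}dp(x,y)\) with \(\beta=\frac{d\rho_{T}}{d\rho_{S}}\), which follows from (2). A small Monte-Carlo illustration with two Gaussian marginals (a toy setup of our choosing, not from the paper) shows the reweighted source risk matching the target risk:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_s, mu_t, sigma = 0.0, 1.0, 1.0           # source and target marginals
n = 100_000
x_s = rng.normal(mu_s, sigma, n)            # inputs drawn from rho_S
y_s = np.sin(x_s) + 0.1 * rng.normal(size=n)   # labels from a shared rho(y|x)

def beta(x):
    """Radon-Nikodym derivative d rho_T / d rho_S for two Gaussians."""
    return np.exp((x * (mu_t - mu_s) - 0.5 * (mu_t**2 - mu_s**2)) / sigma**2)

f = lambda x: 0.9 * np.sin(x)               # some fixed predictor

risk_source = np.mean((f(x_s) - y_s) ** 2)              # unweighted source risk
risk_target_iw = np.mean(beta(x_s) * (f(x_s) - y_s) ** 2)  # reweighted estimate

x_t = rng.normal(mu_t, sigma, n)            # direct Monte-Carlo check on rho_T
y_t = np.sin(x_t) + 0.1 * rng.normal(size=n)
risk_target = np.mean((f(x_t) - y_t) ** 2)
print(risk_source, risk_target_iw, risk_target)  # last two agree, first differs
```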
However, although IWRLS is one of the major approaches to unsupervised domain adaptation, its analysis is still in its early stages. Even for learning with the least-squares loss in reproducing kernel Hilbert spaces (RKHS), one of the most well understood directions in statistical learning theory, risk bounds have been developed only recently.

### Contribution

In this work, we discuss recent results for the analysis of IWRLS in RKHS under the covariate shift assumption. Our main focus is on general regularization schemes, often referred to as spectral regularization (cf. [1], [31], [18], and the references therein). As a result, we refine state-of-the-art risk bounds for IWRLS by combining known results. In particular, we show how smoothness conditions for Radon-Nikodym differentiation allow IWRLS to achieve the same order of accuracy as regularized kernel regression, but with fewer samples than in previously known situations. In Section 2, we describe the IWRLS algorithm and review the recent risk bound [9]. To the best of our knowledge, this is the first risk bound for IWRLS. This analysis is based on smoothness conditions on \(f_{q}\) and it assumes access to the values of the (unknown) Radon-Nikodym derivative \(\frac{d\rho_{T}}{d\rho_{S}}\) of \(\rho_{T}\) w.r.t. \(\rho_{S}\), which appear as the sample weights in IWRLS. In Section 3, we review recent error bounds of [23] for algorithms estimating the Radon-Nikodym derivative, which refine the study [9]. In particular, we highlight general smoothness conditions under which upper bounds on the pointwise error are smaller than upper bounds on the error in the RKHS norm. In Section 4, we embed the regularized Radon-Nikodym differentiation considered in Section 3 into the regularization schemes considered in Section 2, so that no exact values of the unknown Radon-Nikodym derivative are needed. The considered smoothness conditions provide novel situations under which IWRLS achieves the same order of accuracy as standard least squares regression in RKHS. Interestingly, the obtained order of accuracy is much higher than anticipated under the slightly weaker conditions of the state of the art [9]. In Section 5, we review a general method for the regularization parameter choice in IWRLS [9, 7]. The method is based on aggregating several regularized solutions. Again, the considered smoothness conditions allow us to refine known error bounds for this method.

## 2 Importance weighted regularized least squares

In the following, we summarize recent results from [9], which are, to the best of our knowledge, the first risk bounds for IWRLS. Later, in Section 4, we will refine these bounds. Since we have no direct access to the target probability measure \(\rho_{T}\) and to the space \(L_{2,\rho_{T}}\) in which we are going to approximate the regression function \(f^{*}=f_{q}\), some additional assumptions should be imposed on the relationship between the source probability \(\rho_{S}\) and the target probability \(\rho_{T}\). In the present study we follow [11] and assume that there is a function \(\beta:\mathbf{X}\to\mathbb{R}_{+}\) such that

\[d\rho_{T}(x)=\beta(x)d\rho_{S}(x).\]

Then \(\beta(x)\) can be viewed as the Radon-Nikodym derivative \(\frac{d\rho_{T}}{d\rho_{S}}\) of the target measure with respect to the source measure.
In this section, we assume that we have access to the values \(\beta_{i}:=\beta(x_{i})\) of the Radon-Nikodym derivative \(\beta(x)=\frac{d\rho_{T}(x)}{d\rho_{S}(x)}\) at the points \(x_{i},i=1,2,\ldots,n\), drawn i.i.d. from \(\rho_{S}(x)\). Moreover, we assume that for any \(x\in\mathbf{X}\), \(|\beta(x)|\leq b_{0}\) for some \(b_{0}>0\), as in [11]. Let \(\mathcal{H}_{K}\) be a reproducing kernel Hilbert space with a positive-definite function \(K:\mathbf{X}\times\mathbf{X}\to\mathbb{R}\) as reproducing kernel. We assume that \(K\) is a continuous and bounded function, such that for any \(x\in\mathbf{X}\)

\[\|K(\cdot,x)\|_{\mathcal{H}_{K}}=\langle K(\cdot,x),K(\cdot,x)\rangle_{\mathcal{H}_{K}}^{\frac{1}{2}}=[K(x,x)]^{\frac{1}{2}}\leq\kappa_{0}<\infty.\]

Recall that the information about the source and target marginal measures is only provided in the form of samples \(\mathbf{x}=\{x_{1},x_{2},\ldots,x_{n}\}\) and \(\mathbf{x}^{\prime}=\{x_{1}^{\prime},x_{2}^{\prime},\ldots,x_{m}^{\prime}\}\) drawn independently and identically (i.i.d.) from \(\rho_{S}\) and \(\rho_{T}\) respectively. In the sequel, we distinguish two sample operators

\[S_{\mathbf{x}^{\prime}}f=(f(x_{1}^{\prime}),f(x_{2}^{\prime}),\ldots,f(x_{m}^{\prime}))\in\mathbb{R}^{m},\]
\[S_{\mathbf{x}}f=(f(x_{1}),f(x_{2}),\ldots,f(x_{n}))\in\mathbb{R}^{n},\]

acting from \(\mathcal{H}_{K}\) to \(\mathbb{R}^{m}\) and \(\mathbb{R}^{n}\), where the norms in the latter spaces are generated by \(m^{-1}\)-times and \(n^{-1}\)-times the standard Euclidean inner products. Then the adjoint operators \(S_{\mathbf{x}^{\prime}}^{*}:\mathbb{R}^{m}\to\mathcal{H}_{K}\) and \(S_{\mathbf{x}}^{*}:\mathbb{R}^{n}\to\mathcal{H}_{K}\) are given as

\[S_{\mathbf{x}^{\prime}}^{*}u(\cdot)=\frac{1}{m}\sum_{j=1}^{m}K(\cdot,x_{j}^{\prime})u_{j},\hskip 14.226378ptu=(u_{1},u_{2},\ldots,u_{m})\in\mathbb{R}^{m},\]

\[S_{\mathbf{x}}^{*}v(\cdot)=\frac{1}{n}\sum_{i=1}^{n}K(\cdot,x_{i})v_{i},\hskip 14.226378ptv=(v_{1},v_{2},\ldots,v_{n})\in\mathbb{R}^{n}.\]

In the context of domain adaptation with covariate shift, the objective is to construct an approximation of the minimizer \(f^{*}=f_{q}\) of the target expected risk \(\mathcal{R}_{q}(f)\) utilizing the available data \(\mathbf{z}=\{(x_{i},y_{i})\}_{i=1}^{n}\) sampled from the source distribution \(p(x,y)\). To accomplish this objective, one popular approach is penalized least squares regression combined with sample reweighting, which is also called importance weighted regularized least squares (IWRLS), see, e.g., [8], [37], and [12]. If we are looking for approximations in the RKHS \(\mathcal{H}_{K}\), then within the IWRLS-approach the approximant \(f_{\mathbf{z}}=f_{\mathbf{z}}^{\lambda}\) of \(f^{*}=f_{q}\) is constructed as the minimizer of the weighted and penalized empirical risk

\[\mathcal{R}_{\mathbf{z},\lambda,\beta}(f)=\frac{1}{n}\sum_{i=1}^{n}\beta_{i}\left(f(x_{i})-y_{i}\right)^{2}+\lambda\|f\|_{\mathcal{H}_{K}}^{2}.\]

Since the \(\beta_{i}\) are assumed to be non-negative, \(\mathcal{R}_{\mathbf{z},\lambda,\beta}\) can be written in the form of the so-called Tikhonov regularization functional

\[\mathcal{R}_{\mathbf{z},\lambda,\beta}(f)=\left\|B^{\frac{1}{2}}S_{\mathbf{x}}f-B^{\frac{1}{2}}\mathbf{y}\right\|_{\mathbb{R}^{n}}^{2}+\lambda\|f\|_{\mathcal{H}_{K}}^{2},\]

where \(B^{\frac{1}{2}}=\mathrm{diag}(\sqrt{\beta_{1}},\sqrt{\beta_{2}},\ldots,\sqrt{\beta_{n}})\), and \(\mathbf{y}=(y_{1},y_{2},\ldots,y_{n})\).
The minimizer of \(\mathcal{R}_{\mathbf{z},\lambda,\beta}\) admits the following representation

\[f_{\mathbf{z}}^{\lambda}=(\lambda\mathbf{I}+S_{\mathbf{x}}^{*}BS_{\mathbf{x}})^{-1}S_{\mathbf{x}}^{*}B\mathbf{y}, \tag{4}\]

and can be seen as an approximate solution to the normal equation \(S_{\mathbf{x}}^{*}BS_{\mathbf{x}}f=S_{\mathbf{x}}^{*}B\mathbf{y}\) regularized by the perturbation \(\lambda\mathbf{I}f\). At the same time, the whole arsenal of regularization schemes can potentially be applied to that equation to construct approximations \(f_{\mathbf{z}}=f_{\mathbf{z}}^{\lambda}\) of the minimizer \(f^{*}=f_{q}\) of the target expected risk \(\mathcal{R}_{q}(f)\) from the data \(\mathbf{z}=(\mathbf{x},\mathbf{y})\) that are sampled from the source measure \(p\). In particular, we will use a general regularization scheme to construct a family of approximants as follows

\[f_{\mathbf{z}}^{\lambda}=g_{\lambda}(S_{\mathbf{x}}^{*}BS_{\mathbf{x}})S_{\mathbf{x}}^{*}B\mathbf{y}, \tag{5}\]

where \(\{g_{\lambda}\}\) is a family of operator functions parametrized by a regularization parameter \(\lambda>0\).

### General regularization scheme

Recall (see, e.g., Definition 2.2 in [19]) that regularization schemes can be indexed by parametrized functions \(g_{\lambda}:[0,c]\rightarrow\mathbb{R}\), \(\lambda>0\). The only requirements are that there are positive constants \(\gamma_{0},\gamma_{-\frac{1}{2}},\gamma_{-1}\) for which

\[\sup_{0<t\leq c}|1-tg_{\lambda}(t)|\leq\gamma_{0},\]
\[\sup_{0<t\leq c}\sqrt{t}|g_{\lambda}(t)|\leq\frac{\gamma_{-\frac{1}{2}}}{\sqrt{\lambda}},\]
\[\sup_{0<t\leq c}|g_{\lambda}(t)|<\frac{\gamma_{-1}}{\lambda}. \tag{6}\]

The qualification of the regularization scheme indexed by \(g_{\lambda}\) is the maximal \(\nu>0\) such that for any \(\lambda\in(0,c]\) it holds that

\[\sup_{0<t\leq c}t^{\nu}|1-tg_{\lambda}(t)|\leq\gamma_{\nu}\lambda^{\nu}, \tag{7}\]

where \(\gamma_{\nu}\) does not depend on \(\lambda\). Following Definition 2.3 of [19] we also say that qualification \(\nu\) covers a non-decreasing function \(\varphi:[0,c]\rightarrow\mathbb{R}\), \(\varphi(0)=0\), if the function \(t\rightarrow\frac{t^{\nu}}{\varphi(t)}\) is non-decreasing for \(t\in(0,c]\). Observe that one can use the operator functional calculus to represent the IWRLS-approximant (4) in terms of the function \(g_{\lambda}(t)=\left(\lambda+t\right)^{-1}\) indexing Tikhonov regularization. It is easy to check that for \(g_{\lambda}(t)=\left(\lambda+t\right)^{-1}\) the requirements (6) are satisfied with \(\gamma_{0}=\gamma_{-1}=1\), \(\gamma_{-\frac{1}{2}}=\frac{1}{2}\). Moreover, the qualification \(\nu\) of the Tikhonov regularization scheme is equal to \(1\), and such a small qualification is the main drawback of this scheme.
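For the Tikhonov case (4), the representer theorem reduces IWRLS to a finite linear system: writing \(f=\sum_{j}c_{j}K(\cdot,x_{j})\) and \(K=(K(x_{i},x_{j}))_{i,j}\), the coefficients solve \((BK+n\lambda I)c=B\mathbf{y}\). The following sketch assumes a Gaussian kernel and exact weights \(\beta_{i}\); it is an illustration of (4) under these assumptions, not a reference implementation.

```python
import numpy as np

def gaussian_kernel(X1, X2, gamma=1.0):
    """Gaussian kernel matrix K[i, j] = exp(-gamma * |x1_i - x2_j|^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def iwrls_fit(X, y, weights, lam, gamma=1.0):
    """Importance weighted regularized least squares, Tikhonov case (4).

    Minimizes (1/n) * sum_i beta_i (f(x_i) - y_i)^2 + lam * ||f||_H^2 over
    the span of kernel sections; the coefficients c solve
    (B K + n * lam * I) c = B y.
    """
    n = len(y)
    K = gaussian_kernel(X, X, gamma)
    B = np.diag(weights)
    c = np.linalg.solve(B @ K + n * lam * np.eye(n), weights * y)
    return lambda Xnew: gaussian_kernel(Xnew, X, gamma) @ c
```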
The qualification of the regularization can, however, be increased if one employs the so-called iterated Tikhonov regularization, according to which the IWRLS-approach needs to be repeated such that the approximation \(f_{\mathbf{z}}^{\lambda}=f_{\mathbf{z},l}^{\lambda}\) obtained in the previous \(l\)-th step plays the role of an initial guess for the next approximation \(f_{\mathbf{z}}^{\lambda}=f_{\mathbf{z},l+1}^{\lambda}\) constructed as the minimizer of the weighted and penalized empirical risk

\[\mathcal{R}_{\mathbf{z},\lambda,\beta}^{l+1}(f)=\frac{1}{n}\sum_{i=1}^{n}\beta_{i}(f(x_{i})-y_{i})^{2}+\lambda\left\|f-f_{\mathbf{z},l}^{\lambda}\right\|_{\mathcal{H}_{K}}^{2},\quad f_{\mathbf{z},0}^{\lambda}=0.\]

After \(\nu\) such iterations we obtain the approximation \(f_{\mathbf{z}}^{\lambda}=f_{\mathbf{z},\nu}^{\lambda}\) that can be represented in the form (5) with

\[g_{\lambda}(t)=g_{\lambda,\nu}(t)=\frac{1-\frac{\lambda^{\nu}}{\left(\lambda+t\right)^{\nu}}}{t}.\]

The regularization indexed by \(g_{\lambda,\nu}(t)\) has the qualification \(\nu\), which can be taken as large as desired. Moreover, for \(g_{\lambda}(t)=g_{\lambda,\nu}(t)\) the requirements (6), (7) are satisfied with \(\gamma_{0}=1,\gamma_{-\frac{1}{2}}=\nu^{\frac{1}{2}},\gamma_{-1}=\nu,\gamma_{\nu}=1\).

### General source conditions

As mentioned in the Introduction, in unsupervised domain adaptation we intend to approximate a solution of the equation arising from the minimization of the excess risk

\[\mathcal{R}_{q}(f)-\mathcal{R}_{q}(f_{q})=\left\|f-f_{q}\right\|_{L_{2,\rho_{T}}}^{2}.\]

Let \(J_{T}:\mathcal{H}_{K}\hookrightarrow L_{2,\rho_{T}}\) and \(J_{S}:\mathcal{H}_{K}\hookrightarrow L_{2,\rho_{S}}\) be the inclusion operators. Then in \(\mathcal{H}_{K}\) the above minimization can be written in terms of the inclusion operator as \(\left\|J_{T}f-f_{q}\right\|_{L_{2,\rho_{T}}}\to\min\), and it leads to the infinite-dimensional normal equation

\[J_{T}^{*}J_{T}f=J_{T}^{*}f_{q}. \tag{8}\]

Due to the compactness of the operator \(J_{T}^{*}J_{T}\), its inverse \(\left(J_{T}^{*}J_{T}\right)^{-1}\) cannot be a bounded operator in \(\mathcal{H}_{K}\), and this makes the equation (8) ill-posed. But since \(f_{q}\) is assumed to be in \(\mathcal{H}_{K}\), i.e., \(f_{q}\in Range(J_{T})\), the Moore-Penrose generalized solution \(f^{\dagger}\) of (8) coincides in \(\mathcal{H}_{K}\) with \(f_{q}\), or \(J_{T}f^{\dagger}=f_{q}\) in \(L_{2,\rho_{T}}\). Of course, the equation (8) is not accessible because neither \(q\) nor \(f_{q}\) are known, but a result [20] from regularization theory tells us that there is always a continuous, strictly increasing function \(\varphi:[0,\left\|J_{T}^{*}J_{T}\right\|_{\mathcal{H}_{K}}]\rightarrow\mathbb{R}\) that obeys \(\varphi(0)=0\) and allows the representation of \(f^{\dagger}=f_{q}\) in terms of the so-called source condition:

\[f_{q}=\varphi(J_{T}^{*}J_{T})\nu_{q},\quad\nu_{q}\in\mathcal{H}_{K}. \tag{9}\]

The function \(\varphi\) above is usually called the index function. Moreover, for every \(\epsilon>0\) one can find such \(\varphi\) that (9) holds true for \(\nu_{q}\) with

\[\left\|\nu_{q}\right\|_{\mathcal{H}_{K}}\leq(1+\epsilon)\|f_{q}\|_{\mathcal{H}_{K}}.\]

Note that since the operator \(J_{T}^{*}J_{T}\) is not accessible, there is a reason to restrict ourselves to the consideration of such index functions \(\varphi\) which allow us to control perturbations in the operators involved in the definition of source conditions.
In the context of supervised learning, a class of such index functions has been discussed in [1], and here we follow that study. Namely, we consider the class \(\mathcal{F}=\mathcal{F}(0,c)\) of index functions \(\varphi:[0,c]\rightarrow\mathbb{R}_{+}\) allowing a splitting \(\varphi(t)=\vartheta(t)\psi(t)\) into a monotone Lipschitz part \(\vartheta\), \(\vartheta(0)=0\), with Lipschitz constant equal to \(1\), and an operator monotone part \(\psi\), \(\psi(0)=0\). Recall that a function \(\psi\) is operator monotone on \([0,c]\) if for any pair of self-adjoint operators \(U,V\) with spectra in \([0,c]\) such that \(U\leq V\) (i.e. \(V-U\) is a non-negative operator) we have \(\psi(U)\leq\psi(V)\). Examples of operator monotone index functions are \(\psi(t)=t^{\nu}\), \(\psi(t)=\log^{-\nu}\left(\frac{1}{t}\right)\), \(\psi(t)=\log^{-\nu}\left(\log\frac{1}{t}\right),0<\nu\leq 1\), while an example of a function \(\varphi\) from the above defined class \(\mathcal{F}\) is \(\varphi(t)=t^{r}\log^{-\nu}\left(\frac{1}{t}\right),r>1,0<\nu\leq 1\), since it can be split into a Lipschitz part \(\vartheta(t)=t^{r}\) and an operator monotone part \(\psi(t)=\log^{-\nu}\left(\frac{1}{t}\right)\). Note that source conditions with the above index functions are traditionally considered in regularization theory.

### Risk bounds under the assumption of knowing the Radon-Nikodym derivative

Under the assumption that the source condition (9) holds true, with \(\varphi\in\mathcal{F}(0,c)\) and a sufficiently large value of \(c\), we consider the approximant \(f_{\mathbf{z}}^{\lambda}\) as specified in (5), where the regularization scheme, indexed by \(g_{\lambda}(t)\), has a qualification \(\nu\) that covers the function \(\varphi(t)\sqrt{t}\). In this context, we establish risk bounds between the approximant \(f_{\mathbf{z}}^{\lambda}\) and the target function \(f_{q}\) in the \(\mathcal{H}_{K}\)- and \(L_{2,\rho_{T}}\)-norms, as stated in the following theorem.

**Theorem 2.1** ([9]).: _Assume that the source condition (9) is satisfied with \(\varphi\in\mathcal{F}(0,c)\) and \(c\) is large enough. Consider the approximant \(f_{\mathbf{z}}^{\lambda}\) given by (5), where the regularization scheme indexed by \(g_{\lambda}(t)\) has a qualification \(\nu\) that covers the function \(\varphi(t)\sqrt{t}\). Consider also the function \(\theta(t)=\varphi(t)t\) and choose \(\lambda=\lambda_{m,n}=\theta^{-1}(m^{-\frac{1}{2}}+n^{-\frac{1}{2}})\). Then for sufficiently large \(m\) and \(n\) with probability at least \(1-\delta\) it holds_

\[\|f_{q}-f_{\mathbf{z}}^{\lambda_{m,n}}\|_{L_{2,\rho_{T}}}\leq c\log\frac{1}{\delta}\ \varphi(\theta^{-1}(m^{-\frac{1}{2}}+n^{-\frac{1}{2}}))\sqrt{\theta^{-1}(m^{-\frac{1}{2}}+n^{-\frac{1}{2}})},\]
\[\|f_{q}-f_{\mathbf{z}}^{\lambda_{m,n}}\|_{\mathcal{H}_{K}}\leq c\log\frac{1}{\delta}\ \varphi(\theta^{-1}(m^{-\frac{1}{2}}+n^{-\frac{1}{2}})).\]

_The values of the coefficients \(c\) in the above inequalities do not depend on \(\delta,m,n\)._

To the best of our knowledge, before the study [9], no error bounds were known even for the IWRLS-approach (4) that corresponds to the Tikhonov regularization scheme \(g_{\lambda}(t)=(\lambda+t)^{-1}\). On the other hand, in the standard supervised learning setting this scheme has been analysed in [35] uniformly for the whole class of RKHS \(\mathcal{H}_{K}\) under the assumption, which in our terms can be written as \(\|(J_{T}J_{T}^{*})^{-r}f_{q}\|_{L_{2,\rho_{T}}}\leq c\) with \(r>\frac{1}{2}\).
From Proposition 3.2 of [4] we know that the above assumption can be equivalently written as the source condition (9) with \(\varphi(t)=t^{r-\frac{1}{2}}\). For this index function our Theorem 2.1 gives respectively the error bounds of orders \(O\left((m^{-\frac{1}{2}}+n^{-\frac{1}{2}})^{\frac{2r}{2r+1}}\right)\) and \(O\left((m^{-\frac{1}{2}}+n^{-\frac{1}{2}})^{\frac{2r-1}{2r+1}}\right)\) in \(L_{2,\rho_{T}}\) and \(\mathcal{H}_{K}\). For a sufficiently large number \(m\geq n\) of unlabeled inputs \(x_{1}^{\prime},x_{2}^{\prime},\ldots,x_{m}^{\prime}\) sampled from the target measure \(\rho_{T}\), the above results match the orders of the bounds [35] in the standard supervised learning setting. The comparison of Theorem 2.1 with the results of [35] (for Tikhonov regularization) and [1] (for general regularization schemes) allows the conclusion that in the scenario of domain adaptation with covariate shift, one can guarantee the same order of the error as in the standard supervised learning setting, provided that the number of unlabeled target inputs is big enough, and the values of the Radon-Nikodym derivative at those inputs are known. The latter assumption is seldom satisfied in practice. Therefore, in the next sections, we discuss approximate Radon-Nikodym differentiation and its use in the context of domain adaptation.

## 3 Radon-Nikodym differentiation

Recall that our initial assumption in Section 2 has been that the values of the Radon-Nikodym derivative \(\beta(x)\) are exactly given. However, in practice, neither \(\rho_{S}\) nor \(\rho_{T}\) is known. In this section, we therefore discuss recent results of [9, 23], where the goal is to approximate the Radon-Nikodym derivative \(\beta=\frac{d\rho_{T}}{d\rho_{S}}\) by some function \(\tilde{\beta}\). We will later use the approximation \(\tilde{\beta}\) within the regularization (5), where the matrix \(B=\text{diag}(\beta(x_{1}),\beta(x_{2}),\ldots,\beta(x_{n}))\) will be substituted by a matrix \(\tilde{B}=\text{diag}(\tilde{\beta}(x_{1}),\tilde{\beta}(x_{2}),\ldots,\tilde{\beta}(x_{n}))\) of the corresponding approximate values \(\tilde{\beta}(x_{i})\approx\beta(x_{i}),i=1,2,\ldots,n\). Note, however, that the problem of estimating the Radon-Nikodym derivative appears not only in domain adaptation with covariate shift, but also in anomaly detection [36, 10], two-sample testing [16, 13], divergence estimation [24, 25], covariate shift adaptation [34, 9, 7], generative modeling [22], conditional density estimation [33], and classification from positive and unlabeled data [15]; cf. also the monograph [38].

### Error bounds in RKHS

In the literature, various RKHS-based approaches are available for Radon-Nikodym derivative estimation. Here we may refer to [14] and to the references therein. Conceptually, under the assumption that \(\beta\in\mathcal{H}_{K}\), several of the above approaches can be derived from a regularization of an integral equation, which can be written in our terms as

\[J_{S}^{*}J_{S}\beta=J_{T}^{*}J_{T}\mathbf{1} \tag{10}\]

and is ill-posed, similar to (8). Here \(\mathbf{1}\) is the constant function that takes the value \(1\) everywhere, and almost without loss of generality, we assume that \(\mathbf{1}\in\mathcal{H}_{K}\), because otherwise the kernel \(K_{1}(x,x^{\prime})=1+K(x,x^{\prime})\) can, for example, be used to generate a suitable RKHS containing all constant functions. Just as the equation (8) is inaccessible, so is the equation (10).
But in contrast to (8), the reduction of (10) to a finite-dimensional problem does not require any labels, such as \(\mathbf{y}\), that were necessary for dealing with the normal equation \(S_{\mathbf{x}}^{*}BS_{\mathbf{x}}f=S_{\mathbf{x}}^{*}B\mathbf{y}\). Since in practice the amount of unlabeled inputs is usually much greater than that of labeled ones, we assume that the sizes \(M\) and \(N\) of the i.i.d. samples \((x_{1}^{\prime},x_{2}^{\prime},\ldots,x_{M}^{\prime})\) and \((x_{1},x_{2},\ldots,x_{N})\) drawn respectively from \(\rho_{T}\) and \(\rho_{S}\) are much larger than the \(m\) and \(n\) appearing in Theorem 2.1. Then we consider two sample operators

\[S_{M,T}f=(f(x_{1}^{\prime}),f(x_{2}^{\prime}),\ldots,f(x_{M}^{\prime}))\in\mathbb{R}^{M},\]
\[S_{N,S}f=(f(x_{1}),f(x_{2}),\ldots,f(x_{N}))\in\mathbb{R}^{N},\]

and the finite-dimensional problem

\[S_{N,S}^{*}S_{N,S}\beta=S_{M,T}^{*}S_{M,T}\mathbf{1}, \tag{11}\]

which is an empirical version of the equation (10), where, similar to the above notations, the operators \(S_{N,S}^{*}:\mathbb{R}^{N}\rightarrow\mathcal{H}_{K}\), \(S_{M,T}^{*}:\mathbb{R}^{M}\rightarrow\mathcal{H}_{K}\) are given as

\[S_{N,S}^{*}v(\cdot)=\frac{1}{N}\sum_{i=1}^{N}K(\cdot,x_{i})v_{i},\hskip 14.226378ptv=(v_{1},v_{2},\ldots,v_{N})\in\mathbb{R}^{N},\]

\[S_{M,T}^{*}u(\cdot)=\frac{1}{M}\sum_{j=1}^{M}K(\cdot,x_{j}^{\prime})u_{j},\hskip 14.226378ptu=(u_{1},u_{2},\ldots,u_{M})\in\mathbb{R}^{M}.\]

A regularization of the equations (10), (11) may serve as a starting point for several approaches to estimating the Radon-Nikodym derivative \(\beta\). For example, as has been observed in [14], the known kernel mean matching (KMM) method [11] can be viewed as the regularization of (10), (11) by the method of quasi (least-squares) solutions, originally proposed by Valentin Ivanov (1963) and also known as Ivanov regularization (see, e.g., [27] and [28] for its use in the context of learning). In KMM an Ivanov-type regularization is applied to the empirical version (11) and leads to a quadratic problem. At the same time, the kernelized unconstrained least-squares importance fitting (KuLSIF) proposed in [14] allows an analytic-form solution and can be reduced to solving a linear problem with respect to the corresponding variables. From Theorem 1 of [14] it follows that in KuLSIF the approximation \(\tilde{\beta}\) of the Radon-Nikodym derivative \(\beta=\frac{d\rho_{T}}{d\rho_{S}}\) is in fact constructed by application of the Tikhonov regularization scheme to the empirical version (11) of the equation (10), that is, in KuLSIF we have

\[\tilde{\beta}=\beta_{M,N}^{\lambda}=g_{\lambda}(S_{N,S}^{*}S_{N,S})S_{M,T}^{*}S_{M,T}\mathbf{1}, \tag{12}\]

where \(g_{\lambda}(t)=(\lambda+t)^{-1}\). Though there are several studies devoted to KMM and KuLSIF, to the best of our knowledge there has been no study of the pointwise approximation error \(\beta(x)-\tilde{\beta}(x)\), which is of interest in the analysis of regularized domain adaptation methods, such as IWRLS. For example, in [14] and [30] (see the Type I setting there) the statistical consistency and accuracy of KuLSIF have been analysed in the space \(L_{2,\rho_{S}}\), where pointwise evaluations are undefined. We can also mention the study [33], where KuLSIF represented as (12) with \(g_{\lambda}(t)=\left(\lambda+t\right)^{-1}\) was discussed in a RKHS, but only the convergence of \(\tilde{\beta}\) to \(\beta\) was proved, without quantifying its rate.
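For \(g_{\lambda}(t)=(\lambda+t)^{-1}\), the estimator (12) can be computed from kernel matrices on the two samples alone: the values \(b_{i}=\beta_{M,N}^{\lambda}(x_{i})\) solve \((\lambda I+\frac{1}{N}K_{SS})b=\frac{1}{M}K_{ST}\mathbf{1}_{M}\), and the estimator extends to any \(x\) as \(\beta_{M,N}^{\lambda}(x)=\frac{1}{\lambda}\big(\frac{1}{M}\sum_{j}K(x,x_{j}^{\prime})-\frac{1}{N}\sum_{i}K(x,x_{i})b_{i}\big)\). The sketch below, reusing the `gaussian_kernel` helper from the IWRLS sketch above, follows our derivation of this closed form; for the original statement consult Theorem 1 of [14].

```python
import numpy as np

def kulsif(Xs, Xt, lam, gamma=1.0):
    """Tikhonov-regularized estimate (12) of beta = d rho_T / d rho_S (KuLSIF sketch)."""
    N = len(Xs)
    K_SS = gaussian_kernel(Xs, Xs, gamma)
    K_ST = gaussian_kernel(Xs, Xt, gamma)
    # values of the estimator on the source sample:
    b = np.linalg.solve(lam * np.eye(N) + K_SS / N, K_ST.mean(axis=1))

    def beta_hat(X):
        k_T = gaussian_kernel(X, Xt, gamma).mean(axis=1)  # (1/M) sum_j K(x, x'_j)
        k_S = gaussian_kernel(X, Xs, gamma) @ b / N       # (1/N) sum_i K(x, x_i) b_i
        return (k_T - k_S) / lam
    return beta_hat
```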
At the same time, using the concept of source conditions naturally appearing because of equation (10), we can obtain the following statement.

**Theorem 3.1** ([9]).: _Assume that \(\beta=\frac{d\rho_{T}}{d\rho_{S}}\) meets the source condition \(\beta=\phi(J_{S}^{*}J_{S})\nu_{\beta}\), where \(\phi\in\mathcal{F}(0,c)\), and \(c\) is large enough. Consider the approximant \(\beta_{M,N}^{\lambda}\) given by (12), where the regularization scheme indexed by \(g_{\lambda}(t)\) has a qualification \(\nu\) that covers the index function \(\phi(t)\). Let \(\lambda=\lambda_{M,N}=\theta_{\phi}^{-1}(M^{-\frac{1}{2}}+N^{-\frac{1}{2}})\), where \(\theta_{\phi}(t)=\phi(t)t\). Then for sufficiently large \(M\) and \(N\) with probability at least \(1-\delta\) it holds_

\[\left\|\beta-\beta_{M,N}^{\lambda_{M,N}}\right\|_{\mathcal{H}_{K}}\leq c\left(\log\frac{1}{\delta}\right)\phi\left(\theta_{\phi}^{-1}(M^{-\frac{1}{2}}+N^{-\frac{1}{2}})\right). \tag{13}\]

**Remark 1**.: _As we already mentioned, for \(g_{\lambda}(t)=\left(\lambda+t\right)^{-1}\) the convergence in probability of the approximation (12) to \(\beta\) in RKHS has been proven in [33]. Such convergence has been established in [33] on the basis of an error estimation (see Supplementary material A in [33]), which in our terms can be written as follows:_

\[\left\|\beta-\beta_{M,N}^{\lambda}\right\|_{\mathcal{H}_{K}}\leq\left\|\beta-\beta^{\lambda}\right\|_{\mathcal{H}_{K}}+c\left(\frac{N^{-a}}{\lambda^{2}}+\frac{M^{-b}}{\lambda}\right)\left(\log\frac{1}{\delta}\right), \tag{14}\]

_where \(\beta^{\lambda}=\left(\lambda I+J_{S}^{*}J_{S}\right)^{-1}J_{T}^{*}J_{T}\mathbf{1},\) and \(0<a<\frac{1}{2},0<b<\frac{1}{2}\). Note that in regularization theory, the quantities \(\left\|\beta-\beta^{\lambda}\right\|\) are sometimes called the profile functions, and Corollary 2 of [20] estimates them in terms of the source condition \(\beta=\phi(J_{S}^{*}J_{S})\nu_{\beta}\) as follows_

\[\left\|\beta-\beta^{\lambda}\right\|_{\mathcal{H}_{K}}\leq c\phi(\lambda). \tag{15}\]

_Furthermore, the bound presented in Theorem 3.1 can be expressed in the following form_

\[\left\|\beta-\beta_{M,N}^{\lambda}\right\|_{\mathcal{H}_{K}}\leq c\left(\phi(\lambda)+\frac{M^{-\frac{1}{2}}+N^{-\frac{1}{2}}}{\lambda}\right)\log\left(\frac{1}{\delta}\right). \tag{16}\]

_Comparing (14)-(16) and keeping in mind that \(\frac{M^{-\frac{1}{2}}+N^{-\frac{1}{2}}}{\lambda}\) is smaller in the sense of the order than \(\left(\frac{N^{-a}}{\lambda^{2}}+\frac{M^{-b}}{\lambda}\right)\), one can conclude that the error bound (16) obtained by our argument generalizes and refines the results of [33]._

**Remark 2**: _Recall that for our purpose we restrict ourselves to the estimation of the accuracy of (12) in RKHS. At the same time, there are studies where the accuracy of (12) has been analysed in the space \(L_{2,\rho_{S}}\).
For example, in [14] the \(L_{2,\rho_{S}}\)-convergence rate of the KuLSIF estimator, which corresponds to (12) with \(g_{\lambda}(t)=\left(\lambda+t\right)^{-1}\), has been established in terms of the order \(\gamma\) of the so-called bracketing entropy of the underlying space \(\mathcal{H}_{K}\). For a given space \(\mathcal{H}_{K}\), the \(\gamma\)-value is fixed within the interval \((0,2)\), i.e., \(0<\gamma<2\), and Theorem 2 of [14] states that if \(\beta\in\mathcal{H}_{K}\), then for arbitrary \(\epsilon\) satisfying \(1-\frac{2}{(2+\gamma)}<\epsilon<1\), and \(\lambda_{M,N,\epsilon}^{-1}=O\left(\left(N\wedge M\right)^{1-\epsilon}\right)\), with high probability it holds_

\[\left\|\beta-\beta_{M,N}^{\lambda_{M,N,\epsilon}}\right\|_{L_{2,\rho_{S}}}=O\left(\left(N\wedge M\right)^{-\frac{(1-\epsilon)}{2}}\right), \tag{17}\]

_where \(N\wedge M=\min\{N,M\}\). Note that the rate established by (17) cannot be better than \(O\left(\left(N\wedge M\right)^{-\frac{1}{2+\gamma}}\right)\), and it does not take into account the additional smoothness that any particular element \(\beta\in\mathcal{H}_{K}\) has in the underlying space \(\mathcal{H}_{K}\). Such additional smoothness can be captured in the form of a source condition because, as we already know from [20], there is always an index function \(\phi\) such that \(\beta=\phi(J_{S}^{*}J_{S})\nu_{\beta}\). Then assuming that \(\phi(t)\sqrt{t}\) is covered by the qualification of the regularization used in (12), and applying almost the same argument as in Theorem 3.1, we can obtain the bound_

\[\left\|\beta-\beta_{M,N}^{\lambda_{M,N}}\right\|_{L_{2,\rho_{S}}}\leq c\left(\left(\log\frac{1}{\delta}\right)\phi\left(\theta_{\phi}^{-1}(M^{-\frac{1}{2}}+N^{-\frac{1}{2}})\right)\right)\sqrt{\theta_{\phi}^{-1}(M^{-\frac{1}{2}}+N^{-\frac{1}{2}})}. \tag{18}\]

_To simplify a comparison of (18) with the best possible rate \(O\left(\left(N\wedge M\right)^{-\frac{1}{2+\gamma}}\right)\) of (17) we consider the same index functions \(\phi(t)=t^{r-\frac{1}{2}}\) as in Section 2.3. Then (18) gives a rate of order \(O\left(\left(M^{-\frac{1}{2}}+N^{-\frac{1}{2}}\right)^{\frac{2r}{2r+1}}\right)\), and for \(r>\frac{1}{\gamma}\) this rate is better than the one given by (17), because_

\[(N\wedge M)^{-\frac{1}{2+\gamma}}>(M^{-\frac{1}{2}}+N^{-\frac{1}{2}})^{\frac{2}{2+\gamma}}>(M^{-\frac{1}{2}}+N^{-\frac{1}{2}})^{\frac{2r}{2r+1}}.\]

_This is one more example of how the study [9] generalizes, specifies and refines previously known results._

In [23], we highlight that the convergence of algorithms for Radon-Nikodym differentiation is impacted by both the smoothness of the function being approximated and the capacity of the approximating space. However, the result in Theorem 3.1 only takes into consideration the smoothness of the approximated derivative \(\beta\). Therefore, we follow [29] and employ the concept of the so-called regularized Christoffel function, which allows a direct incorporation of the regularization parameter \(\lambda\) into the definition of a capacity measure. Consider the function

\[C_{\lambda}(x)=\left\langle K(\cdot,x),(\lambda I+J_{S}^{*}J_{S})^{-1}K(\cdot,x)\right\rangle_{\mathcal{H}_{K}}=\left\|(\lambda I+J_{S}^{*}J_{S})^{-\frac{1}{2}}K(\cdot,x)\right\|_{\mathcal{H}_{K}}^{2} \tag{19}\]

Note that in [29] the reciprocal of \(C_{\lambda}(x)\), i.e. \(\frac{1}{C_{\lambda}(x)}\), was called the regularized Christoffel function, but for the sake of simplicity, we will keep the same name also for (19).
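An empirical plug-in version of (19) can be computed by replacing \(J_{S}^{*}J_{S}\) with its sample version \(S_{N,S}^{*}S_{N,S}\); a Woodbury-type identity then gives \(C_{\lambda}(x)\approx\frac{1}{\lambda}\big(K(x,x)-\frac{1}{N}k_{x}^{\top}(\lambda I+\frac{1}{N}K)^{-1}k_{x}\big)\) with \(k_{x}=(K(x_{i},x))_{i}\). This approximation, and the sketch below, are ours and not part of [23] or [29].

```python
import numpy as np

def christoffel(x, Xs, lam, gamma=1.0):
    """Plug-in empirical version of the regularized Christoffel function (19),
    with J_S^* J_S replaced by the sample operator S_{N,S}^* S_{N,S}."""
    N = len(Xs)
    k_xx = gaussian_kernel(x[None, :], x[None, :], gamma)[0, 0]  # K(x, x)
    k_x = gaussian_kernel(Xs, x[None, :], gamma)[:, 0]           # (K(x_i, x))_i
    K = gaussian_kernel(Xs, Xs, gamma)
    w = np.linalg.solve(lam * np.eye(N) + K / N, k_x)
    return (k_xx - k_x @ w / N) / lam
```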
Note also that in the context of supervised learning, where usually only one probability measure, say \(p\), is involved, the expected value

\[\mathcal{N}(\lambda)=\int_{\mathbf{X}}C_{\lambda}(x)dp(x)\]

of \(C_{\lambda}(x)\), called the effective dimension, has been proven to be useful [2]. This function is also frequently used as a capacity measure of \(\mathcal{H}_{K}\). At the same time, if more than one measure appears in the supervised learning context, as is, for example, the case in the analysis of Nyström subsampling [32, 17], then the \(C\)-norm of the regularized Christoffel function

\[\mathcal{N}_{\infty}(\lambda):=\sup_{x\in\mathbf{X}}C_{\lambda}(x) \tag{20}\]

is used in parallel with the effective dimension \(\mathcal{N}(\lambda)\). This gives a hint that \(\mathcal{N}_{\infty}(\lambda)\) could also be a suitable capacity measure for analysing the accuracy of Radon-Nikodym numerical differentiation, since more than one measure is involved there as well. To estimate the regularized Christoffel function we slightly generalize a source condition for kernel sections \(K(\cdot,x)\) that has been used in various contexts in [17] and [6].

**Assumption 3.2**.: _(Source condition for kernel) There is an operator concave index function \(\xi:[0,\|J_{S}^{*}J_{S}\|]\to[0,\infty)\) such that \(\xi^{2}\) is covered by the qualification \(\nu=1\), and for all \(x\in\mathbf{X}\),_

\[K(\cdot,x)=\xi(J_{S}^{*}J_{S})v_{x},\quad\|v_{x}\|_{\mathcal{H}_{K}}\leq c,\]

_where \(c\) does not depend on \(x\)._

We mention the following consequence of Assumption 3.2.

**Lemma 3.3**.: _Under Assumption 3.2,_

\[\mathcal{N}_{\infty}(\lambda)\leq c\frac{\xi^{2}(\lambda)}{\lambda}.\]

By taking into account the smoothness properties of the Radon-Nikodym derivative \(\beta\) and the capacity of \(\mathcal{H}_{K}\) expressed in terms of the regularized Christoffel function, in [23] we establish a novel bound that relates \(\beta_{M,N}^{\lambda}\) to \(\beta\) in RKHS.

**Theorem 3.4** ([23]).: _Let \(K\) satisfy Assumption 3.2, and let \(\lambda\geq\lambda_{*}\) with \(\lambda_{*}\) satisfying \(\mathcal{N}(\lambda_{*})/\lambda_{*}=N\). If \(\beta=\frac{d\rho_{T}}{d\rho_{S}}\) meets the source condition (9) \(\beta=\phi(J_{S}^{*}J_{S})\nu_{\beta}\), where \(\phi=\vartheta\psi\in\mathcal{F}(0,c)\) with large enough \(c\), and the qualification of \(g_{\lambda}\) covers \(\vartheta(t)t^{\frac{3}{2}}\), then with probability at least \(1-\delta\) it holds_

\[\left\|\beta-\beta_{M,N}^{\lambda}\right\|_{\mathcal{H}_{K}}\leq c\left(\phi(\lambda)+(M^{-\frac{1}{2}}+N^{-\frac{1}{2}})\frac{\xi(\lambda)}{\lambda}\right)\left(\log\frac{2}{\delta}\right)^{2}.\]

_Consider \(\theta_{\phi,\xi}(t)=\frac{\phi(t)t}{\xi(t)}\) and \(\lambda=\lambda_{M,N}=\theta_{\phi,\xi}^{-1}(M^{-\frac{1}{2}}+N^{-\frac{1}{2}})\); then_

\[\left\|\beta-\beta_{M,N}^{\lambda}\right\|_{\mathcal{H}_{K}}\leq c\phi\left(\theta_{\phi,\xi}^{-1}(M^{-\frac{1}{2}}+N^{-\frac{1}{2}})\right)\log^{2}\frac{1}{\delta}.\]

**Remark 3**.: _In order to compare the bounds presented in Theorem 3.1 and Theorem 3.4, let us consider the case when \(\beta\) meets the source condition (9) with \(\phi(t)=t^{\eta}\). In this case the bound (13) reduces to_

\[\left\|\beta-\beta_{M,N}^{\lambda}\right\|_{\mathcal{H}_{K}}=O\left((M^{-\frac{1}{2}}+N^{-\frac{1}{2}})^{\frac{\eta}{\eta+1}}\right). \tag{21}\]

_It is noteworthy that the error bound established in Theorem 3.1 does not take into consideration the capacity of \(\mathcal{H}_{K}\)._
_Such an additional factor can be accounted for in terms of Assumption 3.2. Assume that \(K\) satisfies Assumption 3.2 with \(\xi(t)=t^{\varsigma},0<\varsigma\leq\frac{1}{2}\); then for \(\lambda=\lambda_{M,N}=\theta_{\phi,\xi}^{-1}(M^{-\frac{1}{2}}+N^{-\frac{1}{2}})\), the bound in Theorem 3.4 gives_

\[\left\|\beta-\beta_{M,N}^{\lambda}\right\|_{\mathcal{H}_{K}}=O\left((M^{-\frac{1}{2}}+N^{-\frac{1}{2}})^{\frac{\eta}{\eta+1-\varsigma}}\right),\]

_which is better than the order of accuracy given by (21). One can thus conclude that the bound in Theorem 3.4 obtained by our argument generalizes, specifies, and refines the result of Theorem 3.1._

### Error bounds for the pointwise evaluation

In the following, we discuss the error between the point values of \(\beta(x)\) and \(\beta_{M,N}^{\lambda}(x)\) for any \(x\in\mathbf{X}\). In view of the reproducing property of \(K\), together with Assumption 3.2, we have

\[\left|\beta(x)-\beta_{M,N}^{\lambda}(x)\right|=\left|\left\langle K(\cdot,x),\beta-\beta_{M,N}^{\lambda}\right\rangle_{\mathcal{H}_{K}}\right|=\left|\left\langle\xi(J_{S}^{*}J_{S})v_{x},\beta-\beta_{M,N}^{\lambda}\right\rangle_{\mathcal{H}_{K}}\right|\leq c\left\|\xi(J_{S}^{*}J_{S})(\beta-\beta_{M,N}^{\lambda})\right\|_{\mathcal{H}_{K}}.\]

Thus, the error between point values can be bounded by the approximation error in a weighted norm that is weaker than that of the underlying space. Therefore, pointwise error estimates can be smaller than the ones guaranteed by Theorem 3.4. This observation is quantified by the following theorem from [23].

**Theorem 3.5** ([23]).: _Under the assumptions of Theorem 3.4, for \(\lambda>\lambda_{*}\), with probability at least \(1-\delta\), for all \(x\in\mathbf{X}\) we have_

\[\left|\beta(x)-\beta_{M,N}^{\lambda}(x)\right|\leq c\xi(\lambda)\left(\phi(\lambda)+(M^{-\frac{1}{2}}+N^{-\frac{1}{2}})\frac{\xi(\lambda)}{\lambda}\right)\left(\log\frac{2}{\delta}\right)^{2},\]

_and for \(\lambda=\lambda_{M,N}=\theta_{\phi,\xi}^{-1}(M^{-\frac{1}{2}}+N^{-\frac{1}{2}})\),_

\[\left|\beta(x)-\beta_{M,N}^{\lambda}(x)\right|\leq c\xi(\lambda_{M,N})\phi(\lambda_{M,N})\log^{2}\frac{1}{\delta}.\]

**Remark 4**.: _Let us consider the same index functions \(\phi(t)=t^{\eta}\) and \(\xi(t)=t^{\varsigma}\) as in Remark 3, where an accuracy of order \(O\left((M^{-\frac{1}{2}}+N^{-\frac{1}{2}})^{\frac{\eta}{\eta+1-\varsigma}}\right)\) has been derived for (12). Under the same assumptions, Theorem 3.5 guarantees an accuracy of order \(O\left((M^{-\frac{1}{2}}+N^{-\frac{1}{2}})^{\frac{\eta+\varsigma}{\eta+1-\varsigma}}\right)\). This illustrates that the reconstruction of the Radon-Nikodym derivative at any particular point can be done with much higher accuracy than its reconstruction as an element of the RKHS. It should be stressed, however, that this high order of accuracy is guaranteed when the qualification \(\nu\) of the regularization scheme used is higher than that of the Tikhonov-Lavrentiev regularization employed in KuLSIF._
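A quick numeric check of the exponents appearing in Remarks 3 and 4 makes the gain explicit; this is a minimal sketch with illustrative values of \(\eta\) and \(\varsigma\) only.

```python
# Rate exponents for phi(t) = t**eta and xi(t) = t**varsigma:
#   Theorem 3.1 (RKHS, no capacity):   eta / (eta + 1)
#   Theorem 3.4 (RKHS, with capacity): eta / (eta + 1 - varsigma)
#   Theorem 3.5 (pointwise):           (eta + varsigma) / (eta + 1 - varsigma)
# A larger exponent means a faster rate in M**-0.5 + N**-0.5.
for eta in (0.5, 1.0, 2.0):
    for varsigma in (0.25, 0.5):
        r_31 = eta / (eta + 1)
        r_34 = eta / (eta + 1 - varsigma)
        r_35 = (eta + varsigma) / (eta + 1 - varsigma)
        print(f"eta={eta}, varsigma={varsigma}: "
              f"{r_31:.3f} < {r_34:.3f} < {r_35:.3f}")
```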
## 4 Embedded regularization

In this section, we derive a novel error bound based on the combination of the results discussed in the last two sections. More precisely, in Section 2 we established error bounds between the approximant \(f_{\mathbf{z}}^{\lambda}\) and the target function in the context of domain adaptation, assuming knowledge of the exact values of the Radon-Nikodym derivative \(\beta(x)\). However, this assumption is rarely satisfied in practice. Consequently, we embed the regularized Radon-Nikodym numerical differentiation considered in the previous section into the general regularization scheme for unsupervised domain adaptation, such that no exact values of \(\beta(x)\) are required. This is done by substituting the matrix \(B\) in (5) by the matrix

\[B_{M,N}=\text{diag}(\beta_{M,N}^{\lambda_{M,N}}(x_{1}),\beta_{M,N}^{\lambda_{M,N}}(x_{2}),\ldots,\beta_{M,N}^{\lambda_{M,N}}(x_{n})).\]

As a result, we obtain an embedded regularization, which produces \(f_{\mathbf{z},M,N}^{\lambda}\) instead of \(f_{\mathbf{z}}^{\lambda}\). Using the same arguments as in Section 3.2 of [9], we obtain in the theorem below the error bounds for this embedded regularization. The only difference is that, instead of utilizing the bound from Theorem 3.1 as done in [9, Section 3.2], we employ the error bounds derived in Theorem 3.5.

**Theorem 4.1**.: _Let the assumptions and conditions of Theorems 2.1 and 3.5 be satisfied. Then with probability at least \(1-\delta\), for_

\[\lambda_{\delta}=\theta^{-1}\left(m^{-\frac{1}{2}}+n^{-\frac{1}{2}}+\xi(\theta_{\phi,\xi}^{-1}(M^{-\frac{1}{2}}+N^{-\frac{1}{2}}))\phi(\theta_{\phi,\xi}^{-1}(M^{-\frac{1}{2}}+N^{-\frac{1}{2}}))\right),\]

_it holds_

\[\left\|f_{q}-f_{\mathbf{z},M,N}^{\lambda_{\delta}}\right\|_{\mathcal{H}_{K}}\leq c\left(\log^{\frac{3}{2}}\frac{1}{\delta}\right)\varphi\left(\lambda_{\delta}\right),\]

\[\left\|f_{q}-f_{\mathbf{z},M,N}^{\lambda_{\delta}}\right\|_{L_{2,\rho_{T}}}\leq c\left(\log^{\frac{3}{2}}\frac{1}{\delta}\right)\varphi\left(\lambda_{\delta}\right)\sqrt{\lambda_{\delta}}.\]

**Remark 5**.: _As has been emphasized in Remark 4 of [9], the main message of Theorem 3 in [9] is that in unsupervised domain adaptation error bounds of the same order as in standard supervised learning may potentially be guaranteed, provided that there are big enough amounts of unlabeled data sampled from both the target and source domains. To estimate how big these amounts have to be, let us consider \(\beta=\frac{d\rho_{T}}{d\rho_{S}}\) meeting the so-called Hölder type source condition \(\beta=\phi(J_{S}^{*}J_{S})\nu_{\beta}\) with \(\phi(t)=t^{a}\), where the Hölder exponent \(a\) can be arbitrarily small but positive, to guarantee the inclusion of \(\beta\) in \(\mathcal{H}_{K}\). To take into account the capacity of \(\mathcal{H}_{K}\), we assume that \(K\) satisfies Assumption 3.2 with \(\xi(t)=t^{b},0<b\leq\frac{1}{2}\); then \(\xi(\theta_{\phi,\xi}^{-1}(M^{-\frac{1}{2}}+N^{-\frac{1}{2}}))\phi(\theta_{\phi,\xi}^{-1}(M^{-\frac{1}{2}}+N^{-\frac{1}{2}}))=(M^{-\frac{1}{2}}+N^{-\frac{1}{2}})^{\frac{a+b}{a-b+1}}\). Now it is interesting to observe that if \(b=\frac{1}{2}\), then independently of the Hölder exponent \(a\), the error bounds guaranteed by our novel Theorem 4.1 will be of the same order as the ones in Theorem 2.1, provided \(M\) and \(N\) are respectively of the order of \(m\) and \(n\). This is an essential improvement compared to Remark 4 of [9], where it was required that the amount of unlabeled data be at least as big as the squared amount of labeled data to potentially allow an accuracy of the same order as in standard supervised learning._

## 5 Parameter choice

To complete our analysis of IWRLS, in the following we discuss the strategies presented in [9, 7] for choosing the regularization parameters. In particular, we obtain a novel error bound for these methods that is based on Theorem 4.1.
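When the index functions are known, the a-priori rules \(\lambda_{M,N}=\theta_{\phi}^{-1}(M^{-\frac{1}{2}}+N^{-\frac{1}{2}})\) and \(\lambda_{M,N}=\theta_{\phi,\xi}^{-1}(M^{-\frac{1}{2}}+N^{-\frac{1}{2}})\) of Theorems 3.1 and 3.4 can be implemented by numerically inverting the increasing functions \(\theta_{\phi}\) and \(\theta_{\phi,\xi}\), e.g. by bisection. A minimal sketch under the assumption of Hölder index functions follows; all exponents and names are illustrative.

```python
def invert_increasing(theta, y, lo=1e-12, hi=1.0, iters=200):
    """Bisection for theta(lam) = y, assuming theta is increasing on [lo, hi]."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if theta(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

eta, b = 1.0, 0.5                            # hypothetical Hoelder exponents
phi = lambda t: t**eta                       # source condition index function
xi = lambda t: t**b                          # kernel source condition (Assumption 3.2)
theta_phi = lambda t: phi(t) * t             # rule of Theorem 3.1
theta_phi_xi = lambda t: phi(t) * t / xi(t)  # rule of Theorem 3.4

M = N = 10_000
y = M**-0.5 + N**-0.5
lam_31 = invert_increasing(theta_phi, y)
lam_34 = invert_increasing(theta_phi_xi, y)
print(lam_31, y**(1 / (eta + 1)))      # bisection agrees with the closed form
print(lam_34, y**(1 / (eta + 1 - b)))  # likewise for the capacity-aware rule
```

In practice these index functions are unknown, which motivates the aggregation strategy discussed next.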
The regularization parameters \(\lambda_{m,n},\lambda_{M,N},\lambda_{\delta}\) used by Theorems 2.1, 3.5, and 4.1 crucially rely on knowledge of the index functions \(\varphi,\phi\) and \(\xi\), where \(\varphi,\phi\) describe the smoothness of \(f_{q}\) and \(\beta=\frac{d\rho_{T}}{d\rho_{S}}\) in terms of the corresponding source conditions, and \(\xi\) describes the capacity of \(\mathcal{H}_{K}\). Since such smoothness and capacity are usually unknown, one faces the issue of choosing the values of the regularization parameters used for constructing the approximations \(f_{\mathbf{z},M,N}^{\lambda}\). The idea in [9, 7] is to construct a linear combination

\[f_{\mathbf{z}}=\sum_{k=1}^{l}c_{k}f_{\mathbf{z},M,N}^{\lambda_{k}} \tag{22}\]

of the approximants corresponding to all tried values of the regularization parameters used in the embedded regularization \(f_{\mathbf{z},M,N}^{\lambda}\). For the sake of simplicity we label these approximants by a sequence \(\{\lambda_{k}\}_{k=1}^{l}\). It is clear that the best \(L_{2,\rho_{T}}\)-space approximation of the target regression function \(f_{q}\) by the above linear combinations \(f_{\mathbf{z}}\) corresponds to the vector \(\mathbf{c}=(c_{1},c_{2},\ldots,c_{l})\) of ideal coefficients in (22) that solves the linear system \(G\mathbf{c}=\mathbf{g}\) with the Gram matrix \(G=\left(\left\langle f_{\mathbf{z},M,N}^{\lambda_{k}},f_{\mathbf{z},M,N}^{\lambda_{u}}\right\rangle_{L_{2,\rho_{T}}}\right)_{k,u=1}^{l}\) and the right-hand side vector \(\mathbf{g}=\left(\left\langle f_{q},f_{\mathbf{z},M,N}^{\lambda_{k}}\right\rangle_{L_{2,\rho_{T}}}\right)_{k=1}^{l}\). But, of course, neither the Gram matrix \(G\) nor the vector \(\mathbf{g}\) is accessible, because there is no access to the target measure \(\rho_{T}\). To overcome this obstacle we first observe that the norms \(\left\|f_{\mathbf{z},M,N}^{\lambda_{k}}\right\|_{\mathcal{H}_{K}}\) are under our control, so that we can fix a threshold \(\gamma_{l}>0\) and consider only those approximants for which \(\left\|f_{\mathbf{z},M,N}^{\lambda_{k}}\right\|_{\mathcal{H}_{K}}\leq\gamma_{l}\). In practice the number \(l\) of elements in the set \(\{f_{\mathbf{z},M,N}^{\lambda_{k}}\}_{k=1}^{l}\) can be assumed to be negligible compared to the cardinalities \(m,n,M,N\) of the available data samples (usually no more than 10-15 approximants are computed for different values of the regularization parameters). Therefore, \(l\)-dependent coefficients do not affect the orders \(O\left(m^{-\frac{1}{2}}+n^{-\frac{1}{2}}+\xi(\theta_{\phi,\xi}^{-1}(M^{-\frac{1}{2}}+N^{-\frac{1}{2}}))\phi(\theta_{\phi,\xi}^{-1}(M^{-\frac{1}{2}}+N^{-\frac{1}{2}}))\right)\). The inaccessible Gram matrix \(G\) and vector \(\mathbf{g}\) can then be approximated by, respectively,

\[\tilde{G}=\left(\frac{1}{m}\sum_{j=1}^{m}f_{\mathbf{z},M,N}^{\lambda_{k}}(x_{j}^{\prime})f_{\mathbf{z},M,N}^{\lambda_{u}}(x_{j}^{\prime})\right)_{k,u=1}^{l},\;\tilde{g}=\left(\frac{1}{n}\sum_{i=1}^{n}\tilde{\beta}_{M,N}(x_{i})y_{i}f_{\mathbf{z},M,N}^{\lambda_{k}}(x_{i})\right)_{k=1}^{l},\]

which can be effectively computed from the data samples. Under the assumption that \(\tilde{G}^{-1}\) exists, we can approximate the function \(f_{\mathbf{z}}\) by \(\tilde{f}_{z}=\sum_{k=1}^{l}\tilde{c}_{k}f_{\mathbf{z},M,N}^{\lambda_{k}}\), where \(\tilde{c}=(\tilde{c}_{1},\tilde{c}_{2},\ldots,\tilde{c}_{l})=\tilde{G}^{-1}\tilde{g}\).
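A minimal sketch of this aggregation step is given below. The random arrays are hypothetical stand-ins for the values of the approximants \(f_{\mathbf{z},M,N}^{\lambda_{k}}\) on the \(m\) unlabeled target inputs \(x_{j}^{\prime}\) and on the \(n\) labeled source inputs \(x_{i}\); in actual use these would come from the embedded regularization of Section 4.

```python
import numpy as np

rng = np.random.default_rng(1)
l, m, n = 5, 200, 150
F_target = rng.normal(size=(l, m))          # stand-in for f^{lambda_k}(x'_j)
F_source = rng.normal(size=(l, n))          # stand-in for f^{lambda_k}(x_i)
y = rng.normal(size=n)                      # labels on the source sample
beta_tilde = rng.uniform(0.5, 2.0, size=n)  # estimated Radon-Nikodym values

# empirical Gram matrix and right-hand side, as in the displayed formulas
G_tilde = (F_target @ F_target.T) / m
g_tilde = (F_source * (beta_tilde * y)).sum(axis=1) / n

c_tilde = np.linalg.solve(G_tilde, g_tilde)  # aggregation coefficients

def f_aggregated(f_values_at_x):
    # f_values_at_x: the l approximant values at a new point x
    return c_tilde @ np.asarray(f_values_at_x)
```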
Using the same arguments as in Section 4 of [9], we establish in the following theorem a novel error bound between \(f_{q}\) and \(\tilde{f}_{z}\) in the space \(L_{2,\rho_{T}}\). The only difference to [9, Section 4] is that we use our improved bounds of Theorem 4.1 for their Eq. (24).

**Theorem 5.1**.: _Assume that the approximants are such that \(\left\|f_{\mathbf{z},M,N}^{\lambda_{k}}\right\|_{\mathcal{H}_{K}}\leq\gamma_{l},\) and that the conditions of Theorems 2.1 and 3.5 hold. Consider \(\tilde{f}_{z}=\sum_{k=1}^{l}\tilde{c}_{k}f_{\mathbf{z},M,N}^{\lambda_{k}},\) where \(\tilde{c}=(\tilde{c}_{1},\tilde{c}_{2},\ldots,\tilde{c}_{l})=\tilde{G}^{-1}\tilde{g}\). Then with probability \(1-\delta\) it holds that_

\[\begin{split}&\left\|f_{q}-\tilde{f}_{z}\right\|_{L_{2,\rho_{T}}}\leq\min_{c_{k}}\left\|f_{q}-\sum_{k=1}^{l}c_{k}f_{\mathbf{z},M,N}^{\lambda_{k}}\right\|_{L_{2,\rho_{T}}}\\&\quad+C\cdot\log\Bigl{(}\frac{1}{\delta}\Bigr{)}\left(m^{-\frac{1}{2}}+n^{-\frac{1}{2}}+\xi(\theta_{\phi,\xi}^{-1}(M^{-\frac{1}{2}}+N^{-\frac{1}{2}}))\phi(\theta_{\phi,\xi}^{-1}(M^{-\frac{1}{2}}+N^{-\frac{1}{2}}))\right),\end{split} \tag{23}\]

_for a generic constant \(C>0\) that may depend on \(l\) but not on \(m,n,M,N\)._

Assume that the sequence \(\lambda_{1},\lambda_{2},\ldots,\lambda_{l}\) is so tight that one of the values, say \(\lambda=\lambda_{\mu}\), is close enough to the value \(\lambda_{\delta}\) indicated in Theorem 4.1 that the corresponding approximant \(f_{\mathbf{z},M,N}^{\lambda_{\mu}}\) provides an accuracy of the order guaranteed by that theorem. Then, under the conditions of Theorem 5.1, the aggregate approximation \(\tilde{f}_{z}\) also guarantees an accuracy of the same order, but does not require any knowledge of the index functions \(\varphi,\phi\) describing the smoothness of \(f_{q}\) and \(\frac{d\rho_{T}}{d\rho_{S}}\), or of the index function \(\xi\) describing the capacity of \(\mathcal{H}_{K}\). This follows from the fact that the second term on the right-hand side of (23) is negligible compared to the error bounds given by Theorem 4.1, and from the obvious inequality

\[\min_{c_{k}}\left\|f_{q}-\sum_{k=1}^{l}c_{k}f_{\mathbf{z},M,N}^{\lambda_{k}}\right\|_{L_{2,\rho_{T}}}\leq\left\|f_{q}-f_{\mathbf{z},M,N}^{\lambda_{\mu}}\right\|_{L_{2,\rho_{T}}}.\]

## 6 Conclusion and outlook

In this work, we discussed recent error bounds for IWRLS in the setting of unsupervised domain adaptation with covariate shift. Such error bounds require a combined study of weighted kernel ridge regression and methods for estimating Radon-Nikodym derivatives, as the regression is applied to the output of the estimation procedures. By combining known results, we obtained novel error bounds. It turned out that a weak source condition on the reproducing kernel allows IWRLS to achieve the same order of accuracy as kernel ridge regression in standard supervised learning, with a much smaller number of samples than anticipated before.

**Acknowledgment:** The research reported in this paper has been funded by the Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology (BMK), the Federal Ministry for Digital and Economic Affairs (BMDW), and the Province of Upper Austria in the frame of the COMET-Competence Centers for Excellent Technologies Programme and the COMET Module S3AI managed by the Austrian Research Promotion Agency FFG.
2308.01195
Personalized Category Frequency prediction for Buy It Again recommendations
Buy It Again (BIA) recommendations are crucial to retailers to help improve user experience and site engagement by suggesting items that customers are likely to buy again based on their own repeat purchasing patterns. Most existing BIA studies analyze guests' personalized behavior at item granularity. A category-based model may be more appropriate in such scenarios. We propose a recommendation system called a hierarchical PCIC model that consists of a personalized category model (PC model) and a personalized item model within categories (IC model). The PC model generates a personalized list of categories that customers are likely to purchase again. The IC model ranks the items that guests are likely to consume within a category. The hierarchical PCIC model captures the general consumption rate of products using survival models. Trends in consumption are captured using time series models. Features derived from these models are used in training a category-grained neural network. We compare PCIC to twelve existing baselines on four standard open datasets. PCIC improves NDCG up to 16 percent while improving recall by around 2 percent. We were able to scale and train (over 8 hours) PCIC on a large dataset of 100M guests and 3M items, where the repeat categories of a guest outnumber their repeat items. PCIC was deployed and A/B tested on the site of a major retailer, leading to significant gains in guest engagement.
Amit Pande, Kunal Ghosh, Rankyung Park
2023-07-24T18:38:10Z
http://arxiv.org/abs/2308.01195v1
# Personalized Category Frequency prediction for Buy It Again recommendations

###### Abstract.

Buy It Again (BIA) recommendations are crucial to retailers to help improve user experience and site engagement by suggesting items that customers are likely to buy again based on their own repeat purchasing patterns. Most existing BIA studies analyze guests' personalized behaviour at item granularity. This finer level of granularity might be appropriate for small businesses or small datasets for search purposes. However, this approach can be infeasible for big retailers which have hundreds of millions of guests and tens of millions of items. For such data sets, it is more practical to have a coarse-grained model that captures customer behaviour at the item category level. In addition, customers commonly explore variants of items within the same categories, e.g., trying different brands or flavors of yogurt. A category-based model may be more appropriate in such scenarios. We propose a recommendation system called a _hierarchical PCIC model_ that consists of a _personalized category model_ (PC model) and a _personalized item model within categories_ (IC model). The PC model generates a personalized list of categories that customers are likely to purchase again. The IC model ranks items that guests are likely to reconsume within a category. The hierarchical PCIC model captures the general consumption rate of products using survival models. Trends in consumption are captured using time series models. Features derived from these models are used in training a category-grained neural network. We compare PCIC to twelve existing baselines on four standard open datasets. PCIC improves NDCG up to 16% while improving recall by around 2%. We were able to scale and train (over 8 hours) PCIC on a large dataset of 100M guests and 3M items, where the repeat categories of a guest outnumber their repeat items. PCIC was deployed and A/B tested on the site of a major retailer, leading to significant gains in guest engagement.
Personalization, Recommender Systems, E-commerce, Repeat purchases, Buy it again, Survival Models, Time-Series Models, Neural Network
Existing work in BIA recommendations has focused on modeling item repurchase probabilities by using variants of recurrent neural networks or statistical models. Large retailers handle hundreds of millions of items and guests, but the majority of repurchase transactions are on a small subset of items and guests. This can lead to underfitting for item-grained models, as the data ends up being represented sparsely in a very high dimensional space. In the worst case, training itself may become infeasible due to computational resource limitations. In this work, we emphasize the effectiveness of personalized category frequency modeling for BIA predictions. Customers will often explore variants of an item or new items within a category, for reasons such as the desire to try different brands, the need to satisfy varying taste preferences in the customer's family, or the presence of discounts on alternative items. Category-based repurchase modeling can effectively capture higher abstraction information on these item repurchase dynamics. As shown in Figure 1, the percentage of items that have high numbers of repurchases is small (Figure 1(a)), but most categories demonstrate high levels of repurchases (Figure 1(b)). The discrepancy means that models geared toward category repurchases may be more effective at satisfying guest preferences. Furthermore, due to the aforementioned sparsity, it is far more difficult to train performant BIA recommendation models on item repurchases than it is on category repurchases. In this work, we emphasize the importance of both personalized product frequency and repeat purchase prediction models in making good Buy It Again predictions. More specifically, we observe that product purchase frequency may be too sparse a signal for predicting customer repurchases, and we discuss how using personalized category frequency can be a better choice. Customers often like to explore new items within a category.
In this paper, we propose a 2-tier _PCIC model_ for BIA recommendations. The _personalized category model_ (PC model) predicts which categories customers will buy from again on their next visit, and the _personalized item within categories model_ (IC model) provides personalized ranks of items within categories. Final BIA recommendations for individual customers are generated by combining both predictions. PC is a neural network that outputs category-level likelihoods for each customer. Input features to PC are generated by an ensemble of time-series machine learning algorithms that captures personalized consumption rates for each category and predicts when customers will buy items in each category. IC is a regression model that predicts a category-agnostic item ranking. The outputs of the two models are combined to generate personalized BIA item recommendations for individual customers. We compare PCIC to twelve existing state-of-the-art baseline algorithms on four standard open datasets. PCIC improves NDCG up to 16% while improving recall by around 2%. We were able to scale and train PCIC on a large dataset of 100M guests and 3M items, where the repeat categories of a guest outnumber their repeat items. PCIC was deployed on an Apache Spark cluster, allowing us to train and score the model in around 8 hours. It was A/B tested on the site of a major retailer, leading to significant gains in guest engagement. The main contributions of this work are summarized below:

1. We propose a hierarchical PCIC model for Buy It Again recommendations which combines coarse prediction by a personalized category model (PC model) and finer-grained prediction by a personalized item within categories model (IC model). We show how the model supports our insight that customers tend to explore brands, sizes, flavors, etc. similar to a given item within a category.
2. We demonstrate that the proposed PCIC model outperforms existing baselines on public datasets. We also show that PCIC scales to large datasets.
3. We deploy PCIC in a commercial setting to provide BIA recommendations for millions of customers. We demonstrate improved guest experience on the site as evidenced by multiple A/B tests. We discuss our experiences deploying and scaling PCIC.

Figure 1. Percentage of items and categories against number of repurchases in 1.5 years. (a) Most items have a small number of repurchasing transactions. (b) Most categories have a large number of repurchasing transactions. Categories have a more sufficient amount of data for modeling than items.

## 2. Literature Review

One of the earliest reported works on Buy It Again recommendations came from Bhagat et al. (Bhagat et al., 2018) for Amazon shoppers' data back in 2018. In this work, the authors model the repeat consumption pattern of products using a modified Poisson-Gamma (mPG) model. The mPG model is built over a simpler PG model which assumes the repeat purchase of an item at a customer level to be a Poisson process with a Gamma prior for the purchase rate \(\lambda\). They also provide two simple customer-agnostic item-level models, viz. Repeat Customer Probability (RCP) and Aggregated Time Distribution (ATD), which serve as baselines for the mPG model in experiments. Another work was reported by Dey et al. (Dey et al., 2016) in 2016, but this was more towards capturing repeat purchase behavior over longer time durations, e.g., several weeks to months.
They used the PG model for capturing repeat purchases as a base and then further used a Dirichlet model to predict the purchase probabilities of items in a category. Apart from the above work, we have explored other related works in the repeat purchase domain. While there are not many, some notable works in the domain of customer purchase modeling have been done historically (starting from the 1960s), from which inspiration for modeling customer purchase events using statistical distributional assumptions can be taken. Once the mathematical expression for the unknown distributional parameters is rigorously derived, one can compute their estimates from data by calling simple math libraries, custom user-defined functions, etc. Several such works include the Negative-Binomial distribution (NBD) models discussed by Ehrenberg (Ehagag et al., 2016) and Grahn (Grahn, 2017), and the Erlang-2-Gamma model discussed by Chatfield and Goodhardt (Chattfield and Goodhardt, 2018). Later on, it was interesting to see the works of Fader and Hardie on alternate versions of the NBD model, viz. Pareto-NBD and Beta-Geometric NBD (Fader and Hardie, 2018)(Fader and Hardie, 2018). While these approaches, because of their strong foundations, may have influenced much later work based on statistical distributions (e.g., (Bhagat et al., 2018)), they were mostly useful in solving some popular marketing problems (an area often referred to as Marketing Science), such as predicting the shopping probabilities of a customer for the next n days to estimate their chances of attrition, predicting expected customer basket size, predicting customer lifetime value, etc. These problems mostly relate to a customer's journey in a generic way, and the solutions are often used to choose the right audience for whom retention policies need to be devised. When the notion of a guest's category/item behaviors comes into the picture (such as similar items, buy it again, etc.), we should not be limited to such approaches. Rather, using these approaches as signals and applying additional layers of learning with some supervision (if possible) would intuitively be a positive step to take. A large literature on recommender systems is available, with the ability to capture a customer's or user's personal taste in products. One of the older notable ones is the GroupLens project (Konstan et al., 2016) by Konstan et al. on Usenet news data in the late 90s, which used the user-based kNN (userKNN) approach of collaborative filtering to recommend personalized articles. Later on, another notable approach we came across in the NBR domain is the Factorised Personalised Markov Chain (FPMC) (Konstan et al., 2016) by Rendle et al. in 2010. This work uses a combination of two popular approaches to solve an NBR problem, viz. Matrix Factorization (MF), which captures a user's taste by factorizing the observed user-item matrix, and Markov Chains (MC), which capture the sequential behavior of a user using transition graphs to predict the next action. Other similar works include one by He et al. on sequential recommendation algorithms (He et al., 2016) in 2016 and another (He et al., 2016) in 2018 which builds on the approach of (Konstan et al., 2016). Another approach, using temporal dynamics in recommender algorithms, was taken by Koren (Koren, 2017) in 2009, which is worth mentioning in this context.
Our work also considers temporal signals important, but we take a different approach: instead of integrating them directly with state-of-the-art recommender algorithms (viz. MF or MC) as done by (Konstan et al., 2016), (He et al., 2016) or (Koren, 2017), we model them as separate signals and apply supervised learning on top of them to cater to our problem. More recently, with the popularity of neural network based applications, many other parallel and subsequent works have used recurrent neural networks, LSTMs or Transformers to more effectively capture the repeat purchase pattern. A more recent work by Hu et al. called Sets2Sets (Hu et al., 2018) has an encoder which maps the set of elements from each previous time step onto a vector, while the decoder uses a set-based attention mechanism to decode the set of elements of each subsequent time step from the vectors. This approach outperforms several state-of-the-art methods. Another work by Hu et al., called TIFUKNN (Hu et al., 2018), in 2020 proposes a simpler method which outperforms even the RNN-based approaches when it comes to NBR. It claims that personalized item frequency (PIF) provides critical signals for NBR, but that existing methods, including the RNNs, fail to capture it. Their solution is an item frequency-based kNN method. It is to be noted that we also implement inter-category product ranking where item frequency is a key signal, but our implementation depends on features derived from a guest's own purchases, while TIFUKNN depends on insights from similar guests using k-Nearest Neighbors. Another RNN approach is DREAM, developed by Yu et al. (Yu et al., 2018) in 2016, where the input layer consists of multiple basket representations followed by a pooling operation on the items in them to obtain a representation of each basket. A dynamic representation of the customer is obtained in the hidden layer, and the output layer displays the customer's scores towards all items. The approach of Ying et al., called SHAN (Ying et al., 2017), consists of 2 attention layers called sequential hierarchical attention layers. The first layer captures the customer's long-term behavior, followed by a second layer which is a composition of long- and short-term behavior. Finally, we explore the approach of Ren et al., called RepeatNet (Ren et al., 2017), developed in 2019. They capture repeat consumption by incorporating a unique repeat-explore mechanism into an RNN, which consists of an encoder and 2 decoders to learn the recommendation probability for each item in two modes, viz. repeat and explore. There has also been some work on a hazard-based approach by Kapoor et al. (Kapoor et al., 2014) in 2014 to predict a customer's return time. They proposed a framework to evaluate factors that influence customer return for web services, using Cox's proportional hazards model (Cox, 1979), which can include several covariates. Compared to baseline regression and classification methods, the hazard-based model performs better in predicting user return time and categorizing users by their predicted return time. On top of this work, they also created a semi-Markov model (Kapoor et al., 2014) that predicts when users will return to familiar items. The model takes into account latent psychological factors, such as sensitization and boredom, that occur when the same items are repeatedly consumed.
While we took learnings from the existing research in the NBR domain, to the best of our knowledge our approach is unique, and when compared against many of the above solutions as baselines, it shows promising results. Our approach captures the importance of sequence models by considering time series as a feature. It also accepts the success of hazard-based approaches and considers one to be an integral component of the solution. Also, it takes care of PIF to generate recommendations at the category-to-item level, which has been a concern for traditional RNNs. On top of that, it can capture complex (non-linear) relationships among all the signals through a simple fully connected neural network.

## 3. Model

### Category level repurchase modeling

We use category level features to predict the customers' likelihood to repurchase items. Each customer has their own features crafted from their purchase history, and the last \(m\) days of customer purchase data are used to generate labels to train a category level model. All purchase history before these \(m\) days is used to generate the features. Any category in which a customer repurchased an item in this time period is assigned label 1, while the other categories are assigned label 0. The main features considered to train the model are enumerated in the subsequent subsections.

#### 3.1.1. Survival Analysis

Survival analysis focuses on the expected duration of time until the occurrence of an event of interest. It differs from traditional regression by the fact that parts of the training data can only be partially observed, which is referred to as being censored. For these censored observations, we only know that the event time is greater than the time at the point of censoring. In the retail scenario, we consider the purchase of an item within a category as an event. For each category, repeat purchase data can then be used to construct a life table across customers, which allows us to predict repeat purchase risk as a function of time. A life table summarizes the events and censored cases across time. At time 0, all observations (reference purchases) are still at risk, which means that they have not yet repeated the purchase (event) or been censored. As events and censored cases occur, observations fall out of the risk set. Repeat purchase data can be used to compute a few useful features (a code sketch of these features follows the list below):

1. hazard (eq. 1) is the probability of the event occurring on the \(k\)th day, conditional on the event not occurring before day \(k\). It denotes an approximate probability that an event (repurchase) occurs in a given time interval, under the condition that the user remains event-free (no repurchase) up to that time. \[\texttt{hazard}_{k}=\texttt{n\_event}_{k}/\texttt{n\_risk}_{k} \tag{1}\]
2. cum_hazard (eq. 2) is the cumulative sum of the hazard over time. \[\texttt{cum\_hazard}_{k}=\sum_{j=0}^{k}\texttt{hazard}_{j} \tag{2}\]
3. survival (eq. 3) is the probability of the event occurring after day \(k\) or, equivalently, the proportion that has not yet experienced the event by day \(k\). \[\texttt{survival}_{k}=\exp(-1*\texttt{cum\_hazard}_{k}) \tag{3}\]
4. cum_survival (eq. 4) is the probability of the event occurring within \(\pm 3\) days of day \(k\). We additionally define this feature since many grocery customers shop once a week. \[\texttt{cum\_survival}_{k}=\texttt{survival}_{k-3}-\texttt{survival}_{k+3} \tag{4}\]
5. normalized_risk (eq. 5) is defined as the risk associated with the user-category pair today, as a fraction of the risk on the day of purchase. \[\texttt{norm\_risk}_{k}=\texttt{n\_risk}_{k}/\texttt{n\_risk}_{0} \tag{5}\]
6. normalized_event (eq. 6) is defined as the event probability on the given day, normalized by the event plus censored population. \[\texttt{norm\_event}_{k}=\texttt{n\_event}_{k}/(\texttt{n\_event}_{k}+\texttt{n\_censor}_{k}) \tag{6}\]

Building this model gives a population level overview of the item repurchase rate. For example, we observe that people mostly repurchase bananas every 7 days and cleaning supplies every 21 days, so the hazard function is maximized at that time duration between purchases. Based on the last date of purchase of each item by the customer, we can use survival analysis to predict the date of repurchase or the probability of repurchase after \(n\) days.

#### 3.1.2. ARIMA models

Autoregressive Integrated Moving Average (ARIMA) models are useful for short-term forecasts on non-stationary time series problems. For each customer and category, we try to characterize their purchase pattern using ARIMA and predict the next day of purchase. ARIMA models have three parameters \((p, d, q)\), where \(p\) is the order of the autoregressive model, \(d\) is the degree of differencing, and \(q\) is the order of the moving-average model. We build one ARIMA model that observes the past dates of purchases within a category to predict the next one, and a second model that considers the quantity of items purchased and predicts the current rate of consumption by the customer (say customer X uses 2 oz of shampoo daily). The latter is then used to predict the date when the customer will likely run out of the item. For each customer-category pair, we train these models and use their forecasts ARIMA(date) and ARIMA(rate) as features.

#### 3.1.3. Other features

We consider three more behavioral category level features: NumPurchases - the number of times a given customer has purchased from the category; tripsSinceLastPurchased - the number of purchases in other categories the customer has made since purchasing in this category; daysSinceLastPurchased - the number of days between today and the last date the customer made a purchase in this category.

#### 3.1.4. Model training

We take the past 1.5 years of user shopping data to train the model, to ensure we capture a yearly cadence. The last \(m\) days of data are held out to generate labels. For example, we may take the Jan 2021 - July 24, 2022 dataset to generate features for all guests. For those guests who shopped during July 25-31 (\(m=7\)), we generate labels 0 and 1 for categories not shopped and shopped, respectively. The 6 features from the survival model, the 2 predictions from the two ARIMA models, and the 3 other features mentioned earlier are generated for each user-category pair. We trained a 2-layer neural network on the category level guest purchase dataset. We wanted to keep it light because the number of input features is small (11), and we wanted it to scale well for the large number of users. The most performant neural net was composed of 2 fully connected layers (10 and 5 neurons) with sigmoid activations. The output layer is run through a softmax, and the logistic loss function is used for optimization.
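The life-table features of Section 3.1.1 can be made concrete with a short sketch. The code below is illustrative only: the toy repurchase gaps, the 60-day horizon, and all function and variable names are our own assumptions, not the production implementation; the ARIMA forecasts and the 2-layer network are omitted for brevity.

```python
import numpy as np

def life_table_features(gaps, events, k, horizon=60):
    """Features (1)-(6) for one category, evaluated k days after the last purchase.
    gaps:   days from a reference purchase to the repurchase (or to censoring)
    events: 1 if the repurchase was observed, 0 if the observation was censored
    """
    gaps, events = np.asarray(gaps), np.asarray(events)
    days = np.arange(horizon + 1)
    n_risk = np.array([(gaps >= d).sum() for d in days], dtype=float)
    n_event = np.array([((gaps == d) & (events == 1)).sum() for d in days], dtype=float)
    n_censor = np.array([((gaps == d) & (events == 0)).sum() for d in days], dtype=float)

    hazard = np.divide(n_event, n_risk, out=np.zeros_like(n_event), where=n_risk > 0)  # (1)
    cum_hazard = np.cumsum(hazard)                                                     # (2)
    survival = np.exp(-cum_hazard)                                                     # (3)
    cum_survival = survival[max(k - 3, 0)] - survival[min(k + 3, horizon)]             # (4)
    norm_risk = n_risk[k] / n_risk[0]                                                  # (5)
    denom = n_event[k] + n_censor[k]
    norm_event = n_event[k] / denom if denom > 0 else 0.0                              # (6)
    return hazard[k], cum_hazard[k], survival[k], cum_survival, norm_risk, norm_event

# toy repurchase gaps for, say, bananas: most customers come back after about a week
gaps = [6, 7, 7, 8, 7, 21, 14, 7, 9, 30]
events = [1, 1, 1, 1, 1, 1, 1, 0, 1, 0]
print(life_table_features(gaps, events, k=7))
```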
### Inter-category Product Ranking

In general, we observed that a customer is most likely to repurchase their most frequently or most recently bought items. The two main features used to rank products within a category are the frequency (Freq) and recency (Rec) of purchase. We wanted to combine them both to arrive at optimal ranks; however, recency is measured in days while frequency is a count. To bring them to a common ground, we convert both into ranks. The Item Frequency Rank (IFR) and Item Recency Rank (IRR) are obtained by ranking the frequency counts and the days since the last purchase of an item (DaysSincePurchase), respectively: \(\mathsf{IFR}=Rk(Freq),\mathsf{IRR}=Rk(DaysSincePurchase)\). We combine the ranks using a weighted average, rank again, then divide the rank by the number of times the item is bought (\(NIB\)). This insight was based on user feedback and will be discussed in later sections. Equation (7) shows how the final Item Rank (IR) is calculated:

\[\mathsf{IR}=ceil(\frac{1}{NIB}\times Rk(\alpha\times IRR+\beta\times IFR)) \tag{7}\]

where the parameters \(\alpha\) and \(\beta\) were obtained using an exhaustive grid search in the range [0,1].

### Model output

We combine the outputs of the PC and IC models to get a single aggregated list of items for recommendations. Let \(Rk_{PC}\) and \(Rk_{IC}\) represent the PC rank of an item's category and the IC rank of the item, respectively. The PCIC model outputs items in a round-robin manner, i.e., \(Rk=Rk(sortByAscending(Rk_{PC},Rk_{IC}))\).
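A minimal sketch of the within-category ranking of equation (7) and of the round-robin combination is given below; the toy frequencies, recencies and category names are hypothetical, and `scipy.stats.rankdata` stands in for the rank operator \(Rk\).

```python
import numpy as np
from scipy.stats import rankdata

def item_ranks(freq, days_since, nib, alpha=0.5, beta=0.5):
    """Eq. (7): IR = ceil(Rk(alpha*IRR + beta*IFR) / NIB) within one category."""
    ifr = rankdata(-np.asarray(freq, dtype=float))        # frequent items rank small
    irr = rankdata(np.asarray(days_since, dtype=float))   # recent items rank small
    combined = rankdata(alpha * irr + beta * ifr)
    return np.ceil(combined / np.asarray(nib, dtype=float)).astype(int)

def round_robin(category_order, items_by_category):
    """Merge per-category lists: first item of each category, then second, and so on."""
    merged, depth = [], 0
    while any(depth < len(items_by_category[c]) for c in category_order):
        for c in category_order:
            if depth < len(items_by_category[c]):
                merged.append(items_by_category[c][depth])
        depth += 1
    return merged

# toy example: an item bought twice per trip (NIB=2) floats to the top of its aisle
print(item_ranks(freq=[5, 2, 8], days_since=[3, 10, 1], nib=[1, 1, 2]))
print(round_robin(["yogurt", "paper towel"],
                  {"yogurt": ["greek", "vanilla"], "paper towel": ["brand A"]}))
```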
## 4. Experiments

In this section, we conduct experiments to answer the following questions: Q1: What is the effectiveness of the proposed method? Does it outperform state-of-the-art NBR/BIA methods? Q2: How well does this method scale up to generate recommendations for millions of users? Q3: How is model performance impacted by the input features? Q4: How do training and testing date ranges change the performance of the model?

### Experimental Settings

#### 4.1.1. Datasets

We use four publicly available datasets, shown in Table 1, to compare the performance of the proposed method with existing methods in the literature: ValuedShopper2, Instacart3, Dunnhumby4, and TaFeng5. We also evaluate using an internal dataset consisting of the sales history of users at a large retailer. There are around 100M users and 3M products in this dataset.

Footnote 2: [https://www.kaggle.com/c/acquire-valued-shoppers-challenge/overview](https://www.kaggle.com/c/acquire-valued-shoppers-challenge/overview)

Footnote 3: [https://www.kaggle.com/c/instacart-market-basket-analysis](https://www.kaggle.com/c/instacart-market-basket-analysis)

Footnote 4: [https://www.dunnhumby.com/careers/engineering/sourcefiles](https://www.dunnhumby.com/careers/engineering/sourcefiles)

Footnote 5: [https://www.kaggle.com/chiranjivdas09/ta-feng-grocery-dataset](https://www.kaggle.com/chiranjivdas09/ta-feng-grocery-dataset)

#### 4.1.2. Evaluation Protocol

We use recall (@K) and NDCG (@K) metrics to evaluate and compare our methods. The first metric evaluates the fraction of ground truth items, which customers bought in the last trip, that have been rightly ranked within the top-K items over all testing sessions. NDCG is a ranking-based measure which takes into account the order of purchased items in the recommendations and generates a score between 0 and 1. We use the past baskets of a given customer to predict their last basket. We use 80% of the customers' data to train the model and the remaining to test, using 5-fold cross validation. We reserve 10% of the training data as a validation dataset for hyper-parameter tuning in all the methods.

#### 4.1.3. Baselines

1. TopSell: It uses the most frequent items that are purchased by users as the recommendations to all users.
2. FBought: It uses the most frequent items that are purchased by a user as the recommendation to that user.
3. userKNN (Kumar et al., 2017): It uses classical collaborative filtering based on kNN. All the items in the historical baskets of a user are merged as a set of items.
4. RepeatNet (Kumar et al., 2018): An RNN-based model for session-based recommendation which captures the repeated purchase behavior of users. It uses GRUs and attention. To apply this method, user baskets are translated to a sequence of items.
5. FPMC (Kumar et al., 2017): Matrix Factorization uses all data to learn the general taste of the user, whereas Markov Chains capture sequence effects in time. FPMC combines both for the Next Basket Recommendation problem.
6. DREAM (Kumar et al., 2018): The Dynamic REcurrent bAsket Model (DREAM) learns a dynamic representation of a user but also captures global sequential features among baskets.
7. SHAN (Kumar et al., 2018): A deep model based on hierarchical attention networks. It partitions the historical baskets into long-term and short-term parts to learn the long-term preference and

\begin{table} \begin{tabular}{|c|c c c c c|} \hline \hline & Num Items & Num Users & Basket Size & Baskets/User & Items/User \\ \hline tafeng & 12062 & 13949 & 6.27 & 5.69 & 6.397 \\ \hline dunnhumby & 4997 & 36241 & 7.33 & 7.99 & 22.56 \\ \hline shoppers & 7907 & 10000 & 8.71 & 56.85 & 24.934 \\ \hline instacart & 8000 & 19935 & 8.97 & 7.97 & 33.271 \\ \hline Internal & \(\sim\)3M & \(\sim\)100M & \(\sim\)10 & \(\sim\)25 & \(\sim\)200 \\ \hline \hline \end{tabular} \end{table} Table 1. Some characteristics of the datasets considered for evaluation

short-term preference based on the corresponding items attentively.
8. Sets2Sets (Kang et al., 2017): The state-of-the-art end-to-end method for multiple following-baskets prediction based on RNNs. The repeated purchase pattern is also integrated into the method.
9. RCP (Kang et al., 2017): Repeat Customer Probability (RCP) finds the repeat probability of an item and identifies repeat items based on that.
10. ATD (Kang et al., 2017): The Aggregate Time Distribution model fits a time distribution to model the probability distribution and time characteristics of repeat items.
11. PG (Kang et al., 2017): A Poisson-Gamma distribution fitted to predict aggregate purchasing behavior.
12. MPG (Kang et al., 2017): A modified PG distribution that makes the results time dependent and integrates the repeat customer probability.

We use grid search to tune the hyper-parameters of the compared methods. For userKNN, the number of nearest neighbors is searched in the range (100, 1300). For FPMC, the dimension of the factorization is searched over a grid of candidate values, and for RepeatNet, DREAM, SHAN, and Sets2Sets, the embedding size is likewise searched over a grid of candidate values. For the PCIC model, the ARIMA models were auto-fitted in the range (3, 3, 0).

### Performance Comparison (Q1)

Table 2 gives the performance comparison of the PCIC model with the existing baselines. Several observations can be made from the table. First, we observe that the PCIC model has the highest recall and NDCG values in most cases on the ValuedShopper, Instacart and Dunnhumby datasets. Surprisingly, the RCP model performs well on the TaFeng dataset. We saw only marginal differences in the NDCG and recall metrics of TIFUKNN and Sets2Sets against PCIC. As a result, we did not put effort into scaling either algorithm.
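For reference, binary-relevance versions of the metrics used in these comparisons can be computed as below; the exact gain and discount conventions are not stated in the evaluation protocol, so this is one common choice, and the toy lists are hypothetical.

```python
import math

def recall_at_k(recommended, ground_truth, k):
    # fraction of ground-truth items recovered in the top-K recommendations
    hits = len(set(recommended[:k]) & set(ground_truth))
    return hits / max(len(ground_truth), 1)

def ndcg_at_k(recommended, ground_truth, k):
    # binary gains with logarithmic position discount, normalized by the ideal DCG
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(recommended[:k]) if item in ground_truth)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(min(len(ground_truth), k)))
    return dcg / idcg if idcg > 0 else 0.0

print(recall_at_k(["milk", "eggs", "tofu"], {"eggs", "bread"}, k=3))  # 0.5
print(ndcg_at_k(["milk", "eggs", "tofu"], {"eggs", "bread"}, k=3))    # ~0.39
```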
### Scalability (Q2)

PCIC was implemented on a distributed Hadoop cluster using Apache Spark and takes around 6-8 hours to train and test the model for 100M users. The main time-consuming part is figuring out the ARIMA hyper-parameters for each user-category pair and generating those features. FBought is straightforward to implement and takes a few minutes of run time. We also implemented the MPG model on the distributed cluster using the math described in the paper. Table 3 shows the performance comparison of the FBought, MPG and PCIC models. Although PCIC performs well in terms of NDCG, its recall is slightly lower than that of MPG. Next, we calculated the MPG parameters at the category level instead of the original item level and input them as part of the features to PC. The integrated PCIC(+MPG) outperforms both PCIC and MPG.

### Feature Importance (Q3)

To obtain the feature importance, we replaced the original neural layer with a Gradient Boosting Tree classifier. The values are plotted in Figure 2. We can observe that the ARIMA forecasts have a very high impact on the output of the model, particularly the model that tries to predict the next purchase based on the rate of individual consumption of the item by the user. The survival features have a smaller impact on the prediction quality, meaning that other users' purchases play a smaller role in a user's repurchase than the user's own characteristics. This can be one of the reasons why approaches like itemKNN or TIFUKNN, which focus on collaborative user behavior, do not perform as well as PCIC. MPG does capture the rate of consumption with a statistical model, and it comes close to PCIC. Features such as the number of days since the last purchase and the explicit category frequency (num purchases) also have high feature importance. Considering the top 3 features, we can say that we can predict whether a user will purchase an item today based on how many times they have purchased it before, how many days have passed since their last purchase with us, how much they purchased last time, and how long it will last.

### Impact of train and test data selection (Q4)

We held out one week of the most recent customer purchases from this dataset for testing and used one year of purchases made prior to that week for training. A customer and their product purchase were considered a repeat purchase in the test period only if the customer purchased a product in the training period (\(y\) years before the test period, \(y=1.5\)) and also purchased the same product sometime in the test period. The (user, category) pairs purchased in this duration were labeled 1, and the categories the user did not purchase in this duration were labeled 0. As the pandemic caused increased adoption of the app and website, users started shopping online more frequently. Based on the initial feedback, we observed that the BIA list was not updating, particularly for the highly engaged users. We hypothesized that this can be because of the following reasons: (1) a model trained on all users may not be able to exactly capture the signals and behavior of highly engaged users; (2) the labels are captured based on the last 1 week of purchases, but highly engaged users shop much more often, hence their labels are not very accurate. We experimented with scoring the model daily on 1 day of user purchases. We also experimented with training the model only on the most engaged users, defined as users who have made purchases in more than 25 categories.
Table 4 shows the improvement in the NDCG metric for the PC model with the changes in test time frame and with training only on the most engaged users. Reducing the test time frame significantly improved the performance of the model. The most engaged users had a lower NDCG performance than all users when the test window was 7 days. We also observed that training the model only on the most engaged users improves NDCG for all users too, while also leading to savings in training time: the time taken to generate the features and train the model on all users is 2.5x the time taken for the highly engaged users.

## 5. Deployment journey

In this section, we discuss several user-facing questions we addressed as well as our experience in deploying PCIC.

### Deployment and Online Experience

While offline metrics are informative and help us build competent models, the true test of a recommendations model is online, where we can measure its impact on user behavior. We deployed PCIC to a production environment where recommendations are generated daily in our compute cluster on an Apache Spark ecosystem and exported to the cloud for real-time serving. When a user visits the site, these recommendations are then served to them, filtered on item availability based on inventory and the shipment options selected by the user.

\begin{table} \begin{tabular}{|l|l|l|l|} \hline \multirow{2}{*}{Trained on} & \multirow{2}{*}{Test Timeframe} & \multicolumn{2}{|c|}{NDCG (Test)} \\ \cline{3-4} & & Most Engaged & All \\ \hline All & 7 days & 0.2009 & 0.2325 \\ \hline All & 1 day & 0.3501 & 0.3583 \\ \hline Most Engaged & 1 day & 0.3602 & 0.3589 \\ \hline \end{tabular} \end{table} Table 4. Modifications in performance of the PC model with changes in training data selection and testing timeframe.

\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline & Recall@3 & NDCG@3 & Recall@5 & NDCG@5 \\ \hline \hline FBought & 0.0202 & 0.0832 & 0.0305 & 0.1212 \\ \hline MPG & **0.0307** & 0.1036 & **0.0433** & 0.1328 \\ \hline PCIC & 0.0267 & **0.1071** & 0.0377 & **0.1368** \\ \hline \hline PCIC(+MPG) & 0.0317 & **0.1091** & 0.0447 & **0.1408** \\ \hline \end{tabular} \end{table} Table 3. Performance comparison on the internal dataset

Figure 2. Relative importance of input features to the PC model

### Human-in-the-loop feedback

We first rolled out the results to a pool of internal team members for testing. This gave us feedback about having an exclusion list of categories which users may not be very comfortable looking at in their app (with friends and family or otherwise). Based on the feedback, we built an exclusion list of categories which is applied on top of the recommendations as a filter. Secondly, we found that users were sometimes recommended an item they'd recently purchased (e.g., a new flavor of yogurt) from a category where they repurchase, but not one they'd like to repurchase. We used a two-step approach to filter out such items from the recommendations. First, apart from the category being a repurchase category, we ensured that the item was bought by the guest at least twice in the past \(n\) months (\(n=6\)). This helps the customer identify the items in the buy it again list as items they have repeat purchased. Second, we identified items with low repurchase rates (similar to the repurchase rate threshold in RCP (Bradner et al., 2015)) and removed them. Several users also noted that they typically buy more than one item from a specific category (e.g., two or more flavors of yogurt) in a single trip.
In the backend, we have a ranked list of categories and a ranked list of items within each category. Originally, the PCIC model would round-robin among these lists to merge them into a new list that has the first item of each category, followed by the second item of each category, and so on until the lists are exhausted. For a user who purchases more than one item per category every trip, this may be inconvenient. To resolve this, we calculate a variable \(NIB\), which denotes the number of times the item is purchased by the user per trip. We tweaked the math used to combine the two lists by dividing each item's rank by NIB and then taking the ceiling to create new item ranks (new rank \(=\lceil\mathrm{rank}/\mathrm{NIB}\rceil\)), so that items bought in multiples surface earlier.

### Metrics

To quantify the impact of the proposed PCIC algorithm, we performed A/B tests against existing online baselines. Each test was run for more than two weeks and stopped after ensuring that the results were statistically significant. The metrics considered for the tests are defined as follows:

* CTR or Click Through Rate: Percentage of recommendation displays which were clicked by the guest.
* Conversion Rate: Percentage of clicked recommendations which were purchased by the guest the same day.
* Units: The total number of units purchased by the users who were part of the treatment.

### Testing against baseline

When we introduced Buy It Again recommendation lists to the guest shopping experience, we A/B tested PCIC against a baseline of FBought. The results are given in Table 5. We can see that there is a significant lift across all three metrics: 6% in CTR, 8.5% in conversion, and 27.5% in units purchased.

### Testing Buy it Again on web search

We tested adding a Buy It Again recommendation list to the search results of all users. For this, we filtered the Buy It Again results using the search query context, so if someone searched for paper towels, the BIA recommendation list would be filtered to show only paper towels. As a sizable fraction of user searches do not pertain to items the user has already purchased, most of the time this recommendation list would not be shown to the guest. We found that user interaction with this recommendation list was significantly higher than with existing search results (by over 20%). As we were testing a recommendation list against a non-existing one, we used visit-level metrics to evaluate BIA. It was observed that add-to-carts, average order values, and units per order went up by 0-2%. We also observed that guests were able to directly add the items to cart from the recommendation list and subsequently browsed fewer items despite having higher add-to-carts. These metrics reinforce our belief that showing Buy It Again items helps the guest in their shopping experience.

### Building virtual aisles

Research studies and our internal surveys indicate that the online grocery shopping experience for users is significantly different from the typical in-store experience (Bradner et al., 2015). Shopping basket variety is significantly lower for online shopping trips, as measured by the number of unique categories and items purchased. Online grocery shopping environments may accelerate consumer inertia, leading to repurchase of essentials and a reduction in purchases of fresh vegetables, impulse purchases such as candy or bakery desserts, and discretionary spending. After identifying these opportunities in the online shopping experience, in 2021 we rolled out BIA to guests by filtering recommendations by categories (Milk, Yogurt, Beauty, etc.) to create a virtual aisles experience for online users.
We use the personalized list of categories for each guest from the PC model. For each category, we present a list of recommended items from the IC model to form a virtual aisle. In each aisle, we first showed the BIA items of the guest followed by other relevant items (generated using other algorithms and personalized to each guest, not discussed here for the sake of brevity). We rolled out the experience directly to users and report the lift for guests who interacted with the experience against those who did not. Users who interacted with these recommendations had a significant increase in units per order (25-50%) and average order value (7-35%). Since the Buy It Again essentials are lower-ticket items, they have a smaller dollar impact on order value than on units per order. We saw higher engagement of guests with the virtual aisles experience for high-frequency categories in the app than on the site.

\begin{table} \begin{tabular}{|l|l|} \hline & Lift (\%) \\ \hline CTR & 6 \\ \hline Conversion & 8.5 \\ \hline Units & 27.5 \\ \hline \end{tabular} \end{table} Table 5. Measuring impact of BIA against FBought.

## 6. Future Directions

Buy It Again recommendations help users quickly complete their shopping missions. Traditional approaches tend to model guest personalized behavior at item granularity. In this paper, we present the case for a coarse-grained model which captures customer behavior at the item-category level. The proposed Personalized Category (PC) model combined with the Items-within-Category (IC) model outperforms existing BIA and NBR models on standard public datasets. The PCIC model also scales well for large retailers with product catalogs of millions of items and millions of active guests. The A/B tests on the site show a significant improvement in the guest shopping experience and guest spend by using the model. In the future, we would recommend that retailers explore models that combine the insights from Personalized Category features with Personalized Item features. Moreover, we would recommend considering mutual excitation among items and categories, as simultaneous consumption has some inherent relationship with repeat consumption.
2305.10064
FlashBench: A lightning nowcasting framework based on the hybrid deep learning and physics-based dynamical models
Lightning strikes are a well-known danger and a leading cause of accidental fatality worldwide. Unfortunately, lightning hazards seldom make headlines in international media coverage because of their infrequency and the low number of casualties per incident. According to readings from the TRMM LIS lightning sensor, thunderstorms are more common in the tropics while being extremely rare in the polar regions. To improve the precision of lightning forecasts, we develop a technique similar to LightNet's, with one key modification. We did not just base our model on the results of preliminary numerical simulations; we also factored in the time-dependent development of the observed fields. The effectiveness of the lightning forecast rose dramatically once this adjustment was made. The model was tested in a case study during a thunderstorm, and we compared against fields simulated using a lightning parameterization in the WRF model. As the first of its type, this research has the potential to set the bar for how regional lightning predictions are conducted in the future because of its data-driven approach. In addition, we have built a cloud-based lightning forecast system based on Google Earth Engine. With this setup, lightning forecasts over West India may be made in real time, giving critically important information for the area.
Manmeet Singh, Vaisakh S. B., Dipjyoti Mudiar, Deewakar Chakraborty, V. Gopalakrishnan, Bhupendra Bahadur Singh, Shikha Singh, Rakesh Ghosh, Rajib Chattopadhyay, Bipin Kumar, S. D. Pawar, S. A. Rao
2023-05-17T09:09:26Z
http://arxiv.org/abs/2305.10064v1
FlashBench: A lightning nowcasting framework based on the hybrid deep learning and physics-based dynamical models ###### Abstract Lightning strikes are a well-known danger and a leading cause of accidental fatality worldwide. Unfortunately, lightning hazards seldom make headlines in international media coverage because of their infrequency and the low number of casualties per incident. According to readings from the TRMM LIS lightning sensor, thunderstorms are more common in the tropics while being extremely rare in the polar regions. To improve the precision of lightning forecasts, we develop a technique similar to LightNet's, with one key modification. We did not just base our model on the results of preliminary numerical simulations; we also factored in the time-dependent development of the observed fields. The effectiveness of the lightning forecast rose dramatically once this adjustment was made. The model was tested in a case study during a thunderstorm, and we compared against fields simulated using a lightning parameterization in the WRF model. As the first of its type, this research has the potential to set the bar for how regional lightning predictions are conducted in the future because of its data-driven approach. In addition, we have built a cloud-based lightning forecast system based on Google Earth Engine. With this setup, lightning forecasts over West India may be made in real time, giving critically important information for the area.

Lightning prediction, deep learning, WRF lightning parameterization, hybrid modelling

## 1 Introduction

Lightning discharge is a highly localized natural phenomenon produced by deep convective storm clouds, dust storms, volcanic eruptions, or other turbulent atmospheric conditions, discharging to the Earth through a conductor. Lightning is known for its devastating direct and indirect consequences. Lightning strikes are difficult to prevent since they are created inside the cloud. Lightning strikes the Earth about eight times a second around the globe [1]. The peak discharge current in each stroke ranges from several thousand amperes to 200,000 amperes or more, and its passage is very harmful for humans, livestock, trees, electrical infrastructure, and other living and non-living things. Lightning is the only geophysical phenomenon which is constant and ubiquitous enough to account for wildfire on Earth. A rapid increase in atmospheric pressure and the consequent formation of a forceful shock wave, perceived as thunder, result from lightning striking the air [2].

### Motivation

Lightning is a danger in many African and South American countries, as well as over several places in Asia [3, 11]. The Lightning Imaging Sensor (LIS) on board the Tropical Rainfall Measuring Mission (TRMM) satellite has created global maps of lightning frequency [4]. It shows that all of the high lightning places are concentrated in tropical land areas, especially in high elevation terrains, while the polar regions have essentially no lightning and the oceans have just 0.1 to 1 strike/km2/yr [4]. Studies in different parts of the world have emphasized the media's under-reporting of lightning events and have reaffirmed the difficulties in obtaining accurate lightning datasets [5, 6, 7]. The reporting of lightning fatalities and injuries is inconsistent and sometimes not mandatory under different jurisdictions.
Because of this, lightning-related occurrences go unreported, and data from medical sources are untrustworthy [8]. Scant media stories are therefore used as a substitute [9]. In the Indian context, the National Crime Records Bureau (NCRB), the information technology section of the Ministry of Home Affairs, Government of India, releases a list of unnatural deaths in India every year. The various natural causes of death listed in the database are cold and exposure, avalanche, starvation/thirst, cyclone/tornado, epidemic, earthquake, heat stroke, flood, lightning, landslide, torrential rains, forest fire and other natural causes. Around one-tenth of all unnatural fatalities have been estimated to be caused by lightning [10]. In the Maharashtra state of India alone, 72 casualties have been reported to occur annually [12]. Besides certain purely empirical techniques [19], the Indian subcontinent lacks a systematic framework offering an accurate lightning warning and prediction system. An understanding of the components that govern lightning generation in India is critical to developing a system for predicting it. A recent study [13] tested a number of existing lightning parameterizations in the Weather Research and Forecasting (WRF) dynamical model based on storm features. Still, high model accuracy has not been reported in the literature. Deep learning has shown great promise in simulating the physics of the climate and can be used to develop a hybrid framework for lightning prediction. This study's aim is to eventually lead to the development of physics-inspired deep learning models for lightning prediction. Conditional on improved lightning prediction by explainable artificial intelligence (AI) models, a hybrid framework incorporating deep learning and a dynamical model would be highly valuable.

### Deep learning for lightning prediction

During the last decade, deep learning has emerged as a viable method to address complex, challenging problems by unravelling the nonlinearities in the different layers of a deep neural network (see also Singh et al. 2021) [14]. The introduction of open-source libraries (TensorFlow, PyTorch, Theano, and others) has led to faster adoption of deep learning for various applications. Nonlinear operators that first gained importance in the computer vision field have found several applications in weather and climate science, including the challenge of producing accurate precipitation forecasts in numerical weather prediction models.

#### 1.2.1 Related work and critical analysis

Based on air pressure at station level, air temperature, relative humidity, and wind speed, [15] created a four-parameter data-driven model for lightning prediction. The model considered lead times of up to 30 minutes. They compared their machine learning model with empirical methodologies to show the high fidelity of the data-driven approach. Lin et al. 2019 [16] introduced an attention-based dual-source spatiotemporal neural network for lightning forecasts with lead times of up to 12 hours. They adopted an RNN encoder-decoder structure, integrating recent lightning observations and numerical simulations, to increase forecast accuracy. In addition, a channel-wise attention mechanism is used to enhance the useful information contained in the simulation data during forecasting. The lightning forecasting model LightNet was developed by [17] and is based on deep neural networks.
LightNet uses dual encoders to extract spatiotemporal features from WRF simulation data and recent lightning observation data. These features are combined by a Fusion Module, which helps overcome the simulation's errors and increases the accuracy of the forecast. They used real-world lightning data from North China for testing. Pakdaman et al. 2020 [18] used decision trees and neural networks to forecast lightning over Mashhad, Neyshabour, and Quchan in the Khorasan Razavi province of Iran. They found that the decision tree outperformed neural networks when unbalanced datasets are taken into consideration. Ming et al. 2019 [20] describe the Chinese Academy of Meteorological Sciences Lightning Nowcasting and Warning System (CAMS_LNWS), which predicts lightning activity potential and provides warning products. They linked numerical simulations from an electrification and discharge model with several remote sensing datasets using a decision tree in the lightning prediction system. LightNet+, a data-driven lightning forecasting system based on deep neural networks and a lightning scenario, is proposed by [21]. It uses complementary information extracted from several data sources, which may be diverse in the spatial and temporal domains. According to their findings, LightNet+ delivers much better forecasts, and feeding more data sources as input into LightNet+ further enhances its forecasting quality.

### Our Contributions

Our approach is similar to LightNet; the distinction of this study is that we found that incorporating the temporal development of observed fields, rather than only numerical-modelling antecedents, enhances the performance of the lightning forecast. A comparison is made against a thunderstorm case simulated with the WRF model employing a lightning parameterization. Further, a cloud-computing-enabled, Google Earth Engine-based lightning prediction system is built to provide real-time prediction of lightning over West India. This is the first study to provide a data-driven lightning forecast for India and may be used as a standard to enhance lightning forecasts over the area.
\begin{table}
\begin{tabular}{|l|l|l|}
\hline
**Study** & **Lead time (M)** & **AI/ML model (N)** \\
\hline
Mostajabi et al. 2019 [15] & 0-0.5 hrs & xgboost \\
Lin et al. 2019 [16] & 0-12 hrs & ADSNet \\
Geng et al. 2019 [17] & 0-6 hrs & LightNet \\
Pakdaman et al. 2020 [18] & 0-1 hrs & Decision Tree \\
Geng et al. 2021 [21] & 0-6 hrs & LightNet+ \\
This study & 0-6 hrs & FlashBench \\
\hline
\end{tabular}
_The studies were additionally compared on scope and predictors: local single-site vs. gridded forecasts; surface air temperature, relative humidity, wind speed, surface pressure, mixing ratio of ice, snow and graupel, maximum vertical wind speed, precipitation and CAPE as predictors; comparison with a physics-based dynamical model; and coverage of the South Asian domain._
\end{table} Table 1: Comparison of machine learning / deep learning studies for lightning prediction

## 2 Data and methodology

Lightning Detection Network (LDN) data from West India is utilized to construct a deep learning-based model for predicting lightning strikes. 20 Earth Network Lightning Sensors (ENLS) make up this network. Both cloud-to-ground (CG) and intra-cloud (IC) flashes can be seen in ENLS observations across West India. The network is employed for long-range detection of CG discharges at low frequency (1 kHz). The intermediate frequencies (1 kHz to 1 MHz) and the highest frequencies (1 MHz to 12 MHz) are used to find return strokes and locate in-cloud pulses. CG flashes are defined as having at least one return stroke, with strokes within 700 milliseconds of each other and within a radius of 10 kilometers grouped into one flash. ENLS CG flashes have a 90% detection rate, whereas IC flashes have a 50% detection rate (Greeshma et al 2019). In the present study, to simulate lightning events we have used the Advanced Research WRF (ARW) model, version 3.9.1, developed by the National Center for Atmospheric Research (NCAR). WRF is a non-hydrostatic, fully compressible, terrain-following 3D cloud-resolving model. The lightning simulations are carried out using four nested domains (d01, d02, d03, d04) with horizontal grid spacings of 27 km, 9 km, 3 km and 1 km, respectively. As the region of interest in the present study is the state of Maharashtra, India, the innermost domain (d04) is centered over this region. In the present simulation, the initial and boundary conditions are provided by the 6-hourly National Centers for Environmental Prediction (NCEP) Final operational global analysis data with 1\({}^{\circ}\times\) 1\({}^{\circ}\) horizontal resolution.
For longwave radiation, the Rapid Radiative Transfer Model (RRTM) has been used (Mlawer et al., 1997), while the Dudhia scheme (Dudhia, 1989) has been used for shortwave radiation. The model is integrated up to 24 hours. The first 6 hours of the model integration are considered model spin-up time. The Kain-Fritsch (KF) cumulus scheme is used for the outer two domains (d01 & d02). The cloud-resolving 3rd and 4th domains are treated with explicit convection. For microphysical parameterization, the Morrison double-moment scheme with five classes of cloud hydrometeors (Morrison et al., 2005) has been used. This scheme predicts the mass mixing ratio and number concentration of five hydrometeor species: cloud droplets, cloud ice, snow, rain, and graupel. To simulate lightning dynamically, we have used the PR92 (Price and Rind, 1992) parameterization scheme. Greeshma et al. (2021) also used this scheme over the same geographical region and found it to be very skillful for lightning prediction. In this scheme, the formulation of flash rates differs over land and ocean, reflecting the distinct cloud dynamical features. Over the continent, the flash rate is parameterized as follows:

\[F_{c}=3.44\times\ 10^{-5}H^{4.9} \tag{1}\]

Here, \(F_{c}\) is the flash rate (flashes/min), and \(H\) is the storm height. Over the continents, the storm height \(H\) and the maximum updraft velocity \(W_{max}\) are related by

\[W_{max}=1.49\times H^{1.09} \tag{2}\]

Hence the flash rate \(F_{c}\) can be expressed in terms of the maximum updraft velocity \(W_{max}\) as

\[F_{c}=5\times\ 10^{-6}W_{max}^{4.54} \tag{3}\]

For marine clouds, the flash rate \(F_{m}\) is formulated following Michalon et al. (1999) as

\[F_{m}=6.57\times 10^{-6}H^{4.9} \tag{4}\]

## 3 Results and discussion

It can be seen that the observed lightning event is poorly simulated by the WRF model. Our model does quite well and captures the spatial patterns. Patches of large flash rates seen in the observations are also seen in the model. This shows the value of adopting a hybrid approach for lightning forecasts, which otherwise remain difficult for state-of-the-art models to capture realistically.
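Equations (1)-(4) are simple power laws and can be evaluated directly. A minimal sketch follows (our own illustrative code); it assumes storm height in km and updraft speed in m/s, the units of the original PR92 formulation, which the text above does not state explicitly:

```python
def flash_rate_continental(storm_height_km):
    """Continental flash rate (flashes/min) from storm height, Eq. (1)."""
    return 3.44e-5 * storm_height_km ** 4.9

def flash_rate_from_updraft(w_max_ms):
    """Continental flash rate recast in terms of the maximum
    updraft velocity via Eq. (2), i.e. Eq. (3)."""
    return 5e-6 * w_max_ms ** 4.54

def flash_rate_marine(storm_height_km):
    """Marine flash rate following Michalon et al. (1999), Eq. (4)."""
    return 6.57e-6 * storm_height_km ** 4.9

# A 12 km deep continental storm yields roughly 6.7 flashes per minute.
print(flash_rate_continental(12.0))
```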
Figure 2: Comparison of lightning flash rates from the lightning observation network, WRF model simulations and FlashBench for the period spanning 6 hrs starting 11:00 on the 7\({}^{\rm th}\) April 2014

\begin{table}
\begin{tabular}{|l|l|l|l|l|}
\hline
 & Name & Description & Formula & Range \\
\hline
H & Hit (true positive) & Number of observed lightning-active samples correctly identified by the classifier & & \\
\hline
M & Miss (false negative) & Number of observed lightning-active samples missed by the classifier & & \\
\hline
FA & False alarm (false positive) & Number of observed lightning-inactive samples falsely classified as lightning-active by the classifier & & \\
\hline
C & Correct rejection (true negative) & Number of observed lightning-inactive samples correctly identified by the classifier & & \\
\hline
R & Random forecasts & The expected number of hit lightnings in random forecasts & & \\
\hline
POD & Probability of Detection & Proportion of observed lightning-active samples correctly identified by the classifier & \(\frac{H}{H+M}\) & [0,1] \\
\hline
FAR & False Alarm Ratio & Proportion of lightning-active forecasts that were observed to be lightning-inactive & \(\frac{FA}{H+FA}\) & [0,1] \\
\hline
ETS & Equitable Threat Score & The ratio of the number of hit lightnings to the number of events except for the correct rejections, with the contribution from hits by chance in random forecasts removed & \(\frac{H-R}{H+FA+M-R}\) & [-1/3,1] \\
\hline
\end{tabular}
\end{table} Table 4: Metrics used for the evaluation of results from the FlashBench lightning prediction system over West India

\begin{table}
\begin{tabular}{|l|l|l|l|}
\hline
 & POD & FAR & ETS \\
\hline
Physical Model & 0.14925 & 0.72972 & 0.08105 \\
\hline
ML model & 0.73134 & 0.25757 & 0.55907 \\
\hline
\end{tabular}
\end{table} Table 5: Comparison between the WRF dynamical model with lightning parameterization and the FlashBench model for the period spanning 6 hrs starting 11:00 on 7\({}^{\text{th}}\) April 2014 over West India

\begin{table}
\begin{tabular}{|l|l|l|l|}
\hline
 & POD & FAR & ETS \\
\hline
First hour cumulative score & 0.59643 & 0.44660 & 0.38547 \\
\hline
First three hours cumulative score & 0.50492 & 0.55271 & 0.29203 \\
\hline
Six hour cumulative score & 0.43695 & 0.58705 & 0.25074 \\
\hline
\end{tabular}
\end{table} Table 6: Performance of FlashBench for the entire test period corresponding to the year 2014

Both a dynamical model (WRF with lightning parameterization) and FlashBench, a machine learning-based model, were tested to see how well they could forecast lightning over Western India. A model's predictive accuracy is quantified by the Probability of Detection (POD), in this case for a lightning strike: a high POD score indicates that the model can accurately predict future occurrences. The proportion of forecasted but unrealized incidents is the False Alarm Ratio (FAR); the FAR score should be as low as possible, as this means fewer false positives. The ETS (Equitable Threat Score) takes into account both hits and misses, and it corrects for the possibility of hits occurring by chance; a better model is identified by a higher ETS score. The dynamical model attained a POD of 0.14925, a FAR of 0.72972, and an ETS of 0.08105. These scores show that the model has trouble predicting lightning strikes (low POD), produces a lot of false positives (high FAR), and is not very reliable (low ETS). In comparison, the ML model FlashBench performed better, with a POD of 0.73134, a FAR of 0.25757, and an ETS of 0.55907.
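For reference, a minimal sketch of how the POD, FAR and ETS of Table 4 can be computed from a 2x2 contingency table (our own illustrative code; the random-hit term R uses the standard expectation (H+M)(H+FA)/N, consistent with its description in Table 4):

```python
def forecast_scores(hits, misses, false_alarms, correct_rejections):
    """POD, FAR and ETS from the categorical counts of Table 4."""
    n = hits + misses + false_alarms + correct_rejections
    # Expected number of hits in a random forecast with the same marginals.
    r = (hits + misses) * (hits + false_alarms) / n
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    ets = (hits - r) / (hits + false_alarms + misses - r)
    return pod, far, ets

# Illustrative numbers only, not the actual verification counts.
print(forecast_scores(40, 20, 15, 925))
```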
These results indicate that FlashBench outperforms the dynamical model in forecasting when lightning will strike (high POD), in its false alarm rate (low FAR), and in overall accuracy and dependability (high ETS). We also compared the accumulated results after 1, 3, and 6 hours. FlashBench beats the dynamical model across all time intervals, with greater POD and ETS values and a lower FAR. In summary, compared to the conventional dynamical model, the machine learning-based FlashBench model provides a more accurate and dependable forecast of lightning incidents over Western India.

## 4 Conclusions and future work

Our research followed a methodology similar to that of LightNet, a well-known system for predicting lightning. However, our approach includes a new, crucial distinction. We opted to incorporate the temporal evolution of observed fields into our model rather than relying simply on numerical modelling precursors, which are essentially statistical or mathematical representations of the atmospheric circumstances preceding a lightning occurrence. Changes in atmospheric pressure, temperature, humidity, wind speed and direction, and other variables are all examples of temporal evolution in meteorology. We hoped that by including these real-time adjustments, we might better represent the underlying variability of the natural weather system and improve the accuracy of our lightning prediction model. Our findings clearly showed that this method was superior. Our hybrid lightning forecasting model, which is based on machine learning, regularly surpassed the state-of-the-art algorithms. Because of its capacity to adapt to new data, machine learning can analyse massive volumes of information and spot intricate patterns that conventional models would overlook. Our hybrid model is able to provide more precise and timely forecasts of lightning incidents when this capacity is supplemented with real-time meteorological data. Overall, we improved the strength, accuracy, and dependability of the lightning forecast model by combining numerical modelling with the evolution of observed fields and the efficacy of machine learning. In terms of public safety and disaster management, this novel technique has the potential to greatly improve our capacity to forecast and respond to lightning risks.

## Acknowledgements

The authors acknowledge the use of high-performance computational resources at IITM, particularly the NVIDIA Tesla P100 GPU, without which this work would not have been possible. The authors thank Prof Auroop Ganguly, Northeastern University, USA for discussions in the early stages of this work.
2301.02197
Virtual Node Graph Neural Network for Full Phonon Prediction
The structure-property relationship plays a central role in materials science. Understanding the structure-property relationship in solid-state materials is crucial for structure design with optimized properties. The past few years witnessed remarkable progress in correlating structures with properties in crystalline materials, such as machine learning methods and particularly graph neural networks as a natural representation of crystal structures. However, significant challenges remain, including predicting properties with complex unit cells as input and material-dependent, variable-length output. Here we present the virtual node graph neural network to address the challenges. By developing three types of virtual node approaches - the vector, matrix, and momentum-dependent matrix virtual nodes, we achieve direct prediction of $\Gamma$-phonon spectra and full dispersion only using atomic coordinates as input. We validate the phonon bandstructures on various alloy systems, and further build a $\Gamma$-phonon database containing over 146,000 materials in the Materials Project. Our work provides an avenue for rapid and high-quality prediction of phonon spectra and bandstructures in complex materials, and enables materials design with superior phonon properties for energy applications. The virtual node augmentation of graph neural networks also sheds light on designing other functional properties with a new level of flexibility.
Ryotaro Okabe, Abhijatmedhi Chotrattanapituk, Artittaya Boonkird, Nina Andrejevic, Xiang Fu, Tommi S. Jaakkola, Qichen Song, Thanh Nguyen, Nathan Drucker, Sai Mu, Bolin Liao, Yongqiang Cheng, Mingda Li
2023-01-05T17:59:57Z
http://arxiv.org/abs/2301.02197v1
# Virtual Node Graph Neural Network for Full Phonon Prediction ###### Abstract The structure-property relationship plays a central role in materials science. Understanding the structure-property relationship in solid-state materials is crucial for structure design with optimized properties. The past few years witnessed remarkable progress in correlating structures with properties in crystalline materials, such as machine learning methods and particularly graph neural networks as a natural representation of crystal structures. However, significant challenges remain, including predicting properties with complex unit cells as input and material-dependent, variable-length output. Here we present the virtual node graph neural network to address the challenges. By developing three types of virtual node approaches - the vector, matrix, and momentum-dependent matrix virtual nodes, we achieve direct prediction of \(\Gamma\)-phonon spectra and full dispersion only using atomic coordinates as input. We validate the phonon bandstructures on various alloy systems, and further build a \(\Gamma\)-phonon database containing over 146,000 materials in the Materials Project. Our work provides an avenue for rapid and high-quality prediction of phonon spectra and bandstructures in complex materials, and enables materials design with superior phonon properties for energy applications. The virtual node augmentation of graph neural networks also sheds light on designing other functional properties with a new level of flexibility.

## Introduction

The structure-property relationship defines one of the most fundamental questions in materials science [16, 21]. The ubiquitous presence of structure-property relationships profoundly influences almost all branches of materials sciences, such as structural materials [3], energy harvesting and conversion and energy storage materials [17, 5, 19], catalysts [37] and polymers [13], and quantum materials [15]. However, despite its central importance to materials design, building an informative structure-property relationship can be nontrivial. On the one hand, the number of stable structures grows exponentially with unit cell size [22], and structure design efforts have been largely limited to crystalline solids with relatively small unit cells. On the other hand, certain material properties are challenging to acquire due to experimental or computational complexities. In the past few years, data-driven and machine-learning methods have played an increasingly important role in materials science and significantly boosted the research on building structure-property relationships [24, 6, 38]. Complex structures such as porous materials [1, 27], nanoalloys [10, 36], and grain boundaries [34] are becoming more feasible to handle, and properties ranging from mechanical strength to quantum ordering can be learned with increased confidence [9, 29]. One particularly powerful approach is graph neural networks (GNNs) [7]. By representing atoms as graph nodes and interatomic bonds as graph edges, GNNs
2301.05820
Quantum entanglement generation on magnons assisted with microwave cavities coupled to a superconducting qubit
We present protocols to generate quantum entanglement on nonlocal magnons in hybrid systems composed of yttrium iron garnet (YIG) spheres, microwave cavities and a superconducting (SC) qubit. In the schemes, the YIGs are coupled to their respective microwave cavities in a resonant way, and the SC qubit is placed at the center of the cavities, where it interacts with the cavities simultaneously. By exchanging virtual photons, the cavities can interact indirectly in the far-detuning regime. Detailed protocols are presented to establish entanglement for two, three and arbitrary $N$ magnons with reasonable fidelities.
Jiu-Ming Li, Shao-Ming Fei
2023-01-14T05:29:14Z
http://arxiv.org/abs/2301.05820v1
Quantum entanglement generation on magnons assisted with microwave cavities coupled to a superconducting qubit ###### Abstract We present protocols to generate quantum entanglement on nonlocal magnons in hybrid systems composed of yttrium iron garnet (YIG) spheres, microwave cavities and a superconducting (SC) qubit. In the schemes, the YIGs are coupled to their respective microwave cavities in a resonant way, and the SC qubit is placed at the center of the cavities, where it interacts with the cavities simultaneously. By exchanging virtual photons, the cavities can interact indirectly in the far-detuning regime. Detailed protocols are presented to establish entanglement for two, three and arbitrary \(N\) magnons with reasonable fidelities.

magnon, superconducting qubit, quantum electrodynamics, quantum entanglement, indirect interaction

## I Introduction

Quantum entanglement is one of the most important features in quantum mechanics. Quantum entangled states [1; 2; 3; 4] are significant ingredients in quantum information processing. Over the past decades, various theoretical and experimental proposals have been presented for processing quantum information by using various systems such as atoms [5; 6; 7; 8; 9; 10; 11; 12; 13; 14], spins [15; 16; 17; 18; 19; 20; 21], ions [22; 23; 24; 25; 26; 27; 28; 29], photons [5; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39], phonons [40; 41; 42], and so on. With the development of technologies, quantum entanglement has been established not only in microscopic systems, but also in macroscopic systems such as superconducting circuits [43; 44; 45; 46; 47; 48] and magnon systems [49; 50; 51; 52; 53; 54]. Hybrid systems exploit the advantages of different quantum systems in achieving certain quantum tasks, such as creating quantum entanglement and carrying out quantum logic gates. Many works have been presented so far on quantum information processing in hybrid systems [55; 56; 57; 58]. For instance, as an important quantum technology [59], hybrid quantum circuits combine superconducting systems with other physical systems which can be fabricated on a chip. The superconducting (SC) qubit circuits [60; 61], based on Josephson junctions, can exhibit quantum behaviors even at the macroscopic scale. Generally, the interaction between SC qubits and the environment, e.g., systems in the strong or even ultrastrong coupling regime via quantized electromagnetic fields, results in short coherence times. Thus many studies on circuit quantum electrodynamics (QED) [62] have been presented with respect to SC qubits, superconducting coplanar waveguide resonators, \(LC\) resonators and so on. Circuit QED focuses on studies of the light-matter interaction using microwave photons, and has become a relatively independent research field originating from cavity QED. Hybrid systems composed of collective spins (magnons) in ferrimagnetic systems and other systems are able to constitute magnon-photon [63; 64], magnon-phonon [65; 66; 67], magnon-photon-phonon [49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67] systems and so on, giving rise to new and interesting applications. Ferrimagnetic systems such as yttrium iron garnet (YIG) spheres have attracted considerable attention in recent years, as they provide new platforms for investigating macroscopic quantum phenomena.
Such systems are able to achieve strong and even ultrastrong couplings [69] between the magnons and the microwave photons, as a result of the high density of the collective spins in YIG and the low dissipation. YIG has unique dielectric microwave properties with a very low microwave magnetic loss parameter. Meanwhile, some important works have been presented on the magnon Kerr effect [70; 71], quantum transduction [72], magnon squeezing [73; 74], the magnon Fock state [75] and the entanglement of magnons. For example, in 2018 Li _et al._[49] proposed a system consisting of magnons, microwave photons and phonons for establishing tripartite entangled states based on the magnetostrictive interaction, and showed that the entangled state in the magnon-photon-phonon system is robust. In 2019 Li _et al._[50] constructed the entangled state of two magnon modes in a cavity magnomechanical system by applying a strong red-detuned microwave field on a magnon mode to activate the nonlinear magnetostrictive interaction. In 2021 Kong _et al._[52] used the indirect coherent interaction to accomplish two-magnon entanglement and squeezing via virtual photons in a ferromagnetic-superconducting system.

In this work, we first present a hybrid system composed of two YIG spheres, two identical microwave cavities and a SC qubit to establish quantum entanglement on two nonlocal magnons. In this system, the two YIGs are coupled to respective microwave cavities that cross each other, and a SC qubit is placed at the center of the crossing of the two identical cavities; that is, the SC qubit interacts with the two cavities simultaneously. The magnons in the YIGs can be coupled to the microwave cavities in the resonant way, since the frequencies of the two magnons can be tuned by their respective bias magnetic fields. Compared with other works, the SC qubit is coupled to the two microwave cavities in the far-detuning regime, meaning that the two identical cavities interact indirectly with each other by exchanging virtual photons. Then, we give the effective Hamiltonian of the subsystem composed of the SC qubit and the two cavities, and present the protocol of entanglement establishment. In Sec. III, we consider the case of three magnons. In the hybrid system shown in Fig.3, the three identical microwave cavities can interact indirectly via virtual photons, and each magnon is resonant with its respective cavity by tuning the frequency of the magnon. At last, we get the isoprobability entanglement on three nonlocal magnons. Moreover, the hybrid system composed of \(N\) magnons, \(N\) identical microwave cavities and a SC qubit is treated in Sec. IV. We summarize in Sec. V.

## II Quantum entanglement on two nonlocal magnons

### Hamiltonian of the hybrid system

We consider a hybrid system, see Fig.1, in which two microwave cavities cross each other and two yttrium iron garnet (YIG) spheres are coupled to the microwave cavities, respectively. A superconducting (SC) qubit, represented by the black spot in Fig.1, is placed at the center of the crossing in order to interact with the two microwave cavities simultaneously. The YIG spheres are placed at the antinodes of the two microwave magnetic fields, respectively, and a static magnetic field is locally biased on each YIG sphere. In our model, the SC qubit is a two-level system with ground state \(|g\rangle_{q}\) and excited state \(|e\rangle_{q}\).
The magnetostatic modes in YIG can be excited when the magnetic component of the microwave cavity field is perpendicular to the biased magnetic field. We only consider the Kittel mode [76] in the hybrid system, namely, the uniform magnon mode excited in YIG. The frequency of the magnon is in the gigahertz range, so the magnon generally interacts with the microwave photon via the magnetic dipole interaction. The frequency of the magnon is given by \(\omega_{m}=\gamma H\), where \(H\) is the biased magnetic field and \(\gamma/2\pi=28\) GHz/T is the gyromagnetic ratio. In recent years, some experiments have already realized strong and ultrastrong magnon-magnon coupling [77; 78; 79] as well as the magnon-qubit interaction [80; 81], which means that in the hybrid system shown in Fig.1 the magnon is coupled both to the SC qubit and to the other magnon. However, we mainly consider that the magnons, whose frequencies are tuned by the locally biased static magnetic fields, can be resonant with the cavities. In the meantime, the two cavity modes interact indirectly in the far-detuning regime by exchanging photons. The entanglement of two nonlocal magnons can be constructed by using the two cavities and the SC qubit. Given that there are magnon-magnon and magnon-qubit interactions, the magnon can be detuned from the qubit and the other magnon in order to neglect their interactions. In the rotating wave approximation the Hamiltonian of the hybrid system is (\(\hbar=1\) hereafter) [82]

\[\begin{aligned} H^{\rm(S)} &= H_{0}+H_{\rm int},\\ H_{0} &= \omega_{m_{1}}m_{1}^{\dagger}m_{1}+\omega_{m_{2}}m_{2}^{\dagger}m_{2}+\frac{1}{2}\omega_{q}\sigma_{z}+\omega_{a_{1}}a_{1}^{\dagger}a_{1}+\omega_{a_{2}}a_{2}^{\dagger}a_{2},\\ H_{\rm int} &= \lambda_{m_{1}}(a_{1}m_{1}^{\dagger}+a_{1}^{\dagger}m_{1})+\lambda_{m_{2}}(a_{2}m_{2}^{\dagger}+a_{2}^{\dagger}m_{2})+\lambda_{q_{1}}(a_{1}\sigma^{+}+a_{1}^{\dagger}\sigma)+\lambda_{q_{2}}(a_{2}\sigma^{+}+a_{2}^{\dagger}\sigma).\end{aligned} \tag{1}\]

Here, \(H_{0}\) is the free Hamiltonian of the two cavities, the two magnons and the SC qubit, and \(H_{\rm int}\) is the interaction Hamiltonian among the cavities, magnons and SC qubit. \(\omega_{m_{1}}\) and \(\omega_{m_{2}}\) are the frequencies of the two magnons, which are tunable under the respective biased magnetic fields. \(\omega_{a_{1}}\) and \(\omega_{a_{2}}\) are the frequencies of the two cavities, and \(\omega_{q}\) is the transition frequency between \(|g\rangle_{q}\leftrightarrow|e\rangle_{q}\) of the SC qubit. In the Kittel mode, the collective spins in the YIGs can be expressed by boson operators. \(m_{1}\) (\(m_{2}\)) and \(m_{1}^{\dagger}\) (\(m_{2}^{\dagger}\)) are the annihilation and creation operators of magnon mode 1 (2). \(a_{1}\) (\(a_{2}\)) and \(a_{1}^{\dagger}\) (\(a_{2}^{\dagger}\)) denote the annihilation and creation operators of cavity mode 1 (2), respectively. They satisfy the commutation relations \([O,O^{\dagger}]=1\) for \(O=a_{1},a_{2},m_{1},m_{2}\). \(\sigma_{z}=|e\rangle_{q}\langle e|-|g\rangle_{q}\langle g|\), and \(\sigma=|g\rangle_{q}\langle e|\) and \(\sigma^{+}=|e\rangle_{q}\langle g|\) are the lowering and raising operators of the SC qubit. \(\lambda_{q_{1}}\) (\(\lambda_{q_{2}}\)) is the coupling strength between the SC qubit and cavity mode 1 (2). \(\lambda_{m_{1}}\) (\(\lambda_{m_{2}}\)) is the coupling between magnon mode 1 (2) and cavity mode 1 (2). As mentioned above, the two microwave cavities are identical ones with the same frequency \(\omega_{a_{1}}=\omega_{a_{2}}=\omega_{a}\).
Figure 1: (Color online) Schematic of the hybrid system composed of two yttrium iron garnet spheres coupled to respective microwave cavities. Two cavities cross each other, and a superconducting qubit (black spot) is placed at the center of the crossing.

Meanwhile, one can assume that \(\lambda_{q_{1}}=\lambda_{q_{2}}=\lambda_{q}\). In the interaction picture with respect to \(e^{-{\rm i}H_{0}t}\), the Hamiltonian is expressed as

\[H^{\rm(I)} = \lambda_{m_{1}}a_{1}m_{1}^{\dagger}e^{{\rm i}\delta_{1}t}+\lambda_{m_{2}}a_{2}m_{2}^{\dagger}e^{{\rm i}\delta_{2}t}+\lambda_{q}a_{1}\sigma^{+}e^{{\rm i}\Delta_{1}t}+\lambda_{q}a_{2}\sigma^{+}e^{{\rm i}\Delta_{2}t}+H.c., \tag{2}\]

where \(\delta_{1}=\omega_{m_{1}}-\omega_{a}\), \(\delta_{2}=\omega_{m_{2}}-\omega_{a}\), \(\Delta_{1}=\omega_{q}-\omega_{a}\) and \(\Delta_{2}=\omega_{q}-\omega_{a}\). The SC qubit is coupled to the two cavities simultaneously. Owing to \(\Delta_{1}=\Delta_{2}=\Delta_{0}\neq 0\) and \(\Delta_{0}\gg\lambda_{q}\), the two identical microwave cavities interact indirectly in the far-detuning regime. Therefore, the effective Hamiltonian of the subsystem composed of the two microwave cavities and the SC qubit in the far-detuning regime is given by [83]

\[H_{\rm eff} = \widetilde{\lambda}_{q}\Big[\sigma_{z}(a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}+a_{1}^{\dagger}a_{2}+a_{1}a_{2}^{\dagger})+2|e\rangle_{q}\langle e|\Big], \tag{3}\]

where \(\widetilde{\lambda}_{q}=\lambda_{q}^{2}/\Delta_{0}\).

### Entangled state generation on two nonlocal magnons

We now give the protocol of quantum entanglement generation on two nonlocal magnons. Generally, a magnon can be excited by a drive magnetic field. For convenience the state of magnon 1 is prepared as \(|1\rangle_{m_{1}}\) via such a field. The initial state of the hybrid system is \(|\varphi\rangle_{0}=|1\rangle_{m_{1}}|0\rangle_{m_{2}}|0\rangle_{a_{1}}|0\rangle_{a_{2}}|g\rangle_{q}\), in which the two cavities are both in the vacuum state, magnon 2 is in the state \(|0\rangle_{m_{2}}\), and the SC qubit is in the state \(|g\rangle_{q}\), which remains unaltered all the time due to the indirect interaction between the two cavities.

_step 1_: The frequency of magnon 1 is tuned to \(\omega_{m_{1}}=\omega_{a_{1}}\) so that cavity 1 resonates with it. Therefore, magnon 1 and cavity 1 are in a superposed state after the time \(T_{1}=\pi/(4\lambda_{m_{1}})\). The local evolution is \(|1\rangle_{m_{1}}|0\rangle_{a_{1}}\rightarrow\frac{1}{\sqrt{2}}(|1\rangle_{m_{1}}|0\rangle_{a_{1}}-{\rm i}|0\rangle_{m_{1}}|1\rangle_{a_{1}})\); the states of the SC qubit, magnon 2 and cavity 2 are unchanged due to the decoupling between the SC qubit and the two cavities, while magnon 2 is far detuned from cavity 2. The state evolves to

\[|\varphi\rangle_{1} = \frac{1}{\sqrt{2}}(|1\rangle_{m_{1}}|0\rangle_{a_{1}}-{\rm i}|0\rangle_{m_{1}}|1\rangle_{a_{1}})\otimes|0\rangle_{m_{2}}\otimes|0\rangle_{a_{2}}\otimes|g\rangle_{q}. \tag{4}\]

_step 2_: The magnons are tuned to be far detuned from their respective cavities. From Eq. (3), the evolution of the subsystem composed of the two microwave cavities and the SC qubit is given by

\[|\chi(t)\rangle_{\rm sub} = e^{{\rm i}\widetilde{\lambda}_{q}t}\big[\cos(\widetilde{\lambda}_{q}t)|1\rangle_{a_{1}}|0\rangle_{a_{2}}+{\rm i}\sin(\widetilde{\lambda}_{q}t)|0\rangle_{a_{1}}|1\rangle_{a_{2}}\big]\otimes|g\rangle_{q} \tag{5}\]

under the condition \(\Delta_{0}\gg\lambda_{q}\).
After the time \(T_{2}=\pi/(2\widetilde{\lambda}_{q})\), the evolution between the two cavities is \(|1\rangle_{a_{1}}|0\rangle_{a_{2}}\rightarrow-|0\rangle_{a_{1}}|1\rangle_{a_{2}}\), which indicates that the photon can be indirectly transmitted between the two cavities, with the state of the SC qubit unchanged. Therefore, the state after this step changes to

\[|\varphi\rangle_{2} = \frac{1}{\sqrt{2}}(|1\rangle_{m_{1}}|0\rangle_{a_{1}}|0\rangle_{a_{2}}+{\rm i}|0\rangle_{m_{1}}|0\rangle_{a_{1}}|1\rangle_{a_{2}})\otimes|0\rangle_{m_{2}}\otimes|g\rangle_{q}. \tag{6}\]

_step 3_: The frequency of magnon 2 is tuned to \(\omega_{m_{2}}=\omega_{a_{2}}\) to resonate with cavity 2. In the meantime the cavities are decoupled from the SC qubit and magnon 1 is far detuned from cavity 1. After the time \(T_{3}=\pi/(2\lambda_{m_{2}})\), the local evolution \(|0\rangle_{m_{2}}|1\rangle_{a_{2}}\rightarrow-{\rm i}|1\rangle_{m_{2}}|0\rangle_{a_{2}}\) is attained. The final state is

\[|\varphi\rangle_{3} = \frac{1}{\sqrt{2}}(|1\rangle_{m_{1}}|0\rangle_{m_{2}}+|0\rangle_{m_{1}}|1\rangle_{m_{2}})\otimes|0\rangle_{a_{1}}\otimes|0\rangle_{a_{2}}\otimes|g\rangle_{q}, \tag{7}\]

which is just the single-excitation Bell state on two nonlocal magnons.

In the whole process, we mainly consider the interactions between the magnons and the cavities, and between the cavities and the SC qubit. However, the SC qubit can also couple to the magnons. In terms of Ref. [80], the interactions between the magnons and the SC qubit are described by \(H_{qm,1}=\lambda_{qm,1}(\sigma^{+}m_{1}+H.c.)\) and \(H_{qm,2}=\lambda_{qm,2}(\sigma^{+}m_{2}+H.c.)\), where \(\lambda_{qm,1}=\lambda_{q}\lambda_{m_{1}}/\Delta_{0}\) and \(\lambda_{qm,2}=\lambda_{q}\lambda_{m_{2}}/\Delta_{0}\), when the conditions \(\omega_{q}=\omega_{m_{1}}\) and \(\omega_{q}=\omega_{m_{2}}\) are met. In the meantime, the two magnons can interact with each other via the SC qubit. Generally, the frequencies of the two magnon modes are tuned by the locally biased magnetic fields. Therefore, each magnon can be detuned from the SC qubit and from the other magnon in order to neglect these interactions.

### Numerical result

We here simulate [84] the fidelity of the Bell state on two nonlocal magnons by considering the dissipations of all constituents of the hybrid system. The realistic evolution of the hybrid system composed of the magnons, the microwave cavities and the SC qubit is governed by the master equation

\[\dot{\rho} = -{\rm i}[H^{(I)},\rho]+\kappa_{m_{1}}D[m_{1}]\rho+\kappa_{m_{2}}D[m_{2}]\rho+\kappa_{a_{1}}D[a_{1}]\rho+\kappa_{a_{2}}D[a_{2}]\rho+\gamma_{q}D[\sigma]\rho. \tag{8}\]

Here, \(\rho\) is the density operator of the hybrid system, \(\kappa_{m_{1}}\) and \(\kappa_{m_{2}}\) are the dissipation rates of magnons 1 and 2, \(\kappa_{a_{1}}\) and \(\kappa_{a_{2}}\) denote the dissipation rates of the two microwave cavities 1 and 2, \(\gamma_{q}\) is the dissipation rate of the SC qubit, and \(D[X]\rho=(2X\rho X^{\dagger}-X^{\dagger}X\rho-\rho X^{\dagger}X)/2\) for \(X=m_{1},m_{2},a_{1},a_{2},\sigma\). The fidelity of the entangled state of two nonlocal magnons is defined by \(F={}_{3}\langle\varphi|\rho|\varphi\rangle_{3}\).
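Equation (8) can be integrated numerically with standard open-quantum-system tools. The following QuTiP sketch (our own illustrative code, not the simulation of Ref. [84]) propagates step 1 of the protocol with the corresponding dissipators, truncating each bosonic mode to two levels, which is exact in the single-excitation sector; the coupling and dissipation values are those quoted in the next paragraph:

```python
import numpy as np
from qutip import basis, destroy, qeye, tensor, mesolve, fidelity

# Step 1 involves only magnon 1 and cavity 1.
Nf = 2
m1 = tensor(destroy(Nf), qeye(Nf))        # magnon-1 annihilation operator
a1 = tensor(qeye(Nf), destroy(Nf))        # cavity-1 annihilation operator

lam_m1 = 2 * np.pi * 15.3e6               # magnon-cavity coupling (rad/s)
kap_m = 2 * np.pi * 1.06e6                # magnon dissipation rate
kap_a = 2 * np.pi * 1.35e6                # cavity dissipation rate

H1 = lam_m1 * (a1 * m1.dag() + a1.dag() * m1)    # resonant exchange
c_ops = [np.sqrt(kap_m) * m1, np.sqrt(kap_a) * a1]

psi0 = tensor(basis(Nf, 1), basis(Nf, 0))        # |1>_{m1} |0>_{a1}
T1 = np.pi / (4 * lam_m1)
result = mesolve(H1, psi0, np.linspace(0.0, T1, 201), c_ops)

target = (tensor(basis(Nf, 1), basis(Nf, 0))
          - 1j * tensor(basis(Nf, 0), basis(Nf, 1))).unit()
# qutip.fidelity returns sqrt(<psi|rho|psi>); square it to match F.
print("step-1 fidelity:", fidelity(result.states[-1], target) ** 2)
```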
The related parameters are chosen as \(\omega_{q}/2\pi=7.92\) GHz, \(\omega_{a}/2\pi=6.98\) GHz, \(\lambda_{q}/2\pi=83.2\) MHz, \(\lambda_{m_{1}}/2\pi=15.3\) MHz, \(\lambda_{m_{2}}/2\pi=15.3\) MHz [81], \(\kappa_{m_{1}}/2\pi=\kappa_{m_{2}}/2\pi=\kappa_{m}/2\pi=1.06\) MHz, \(\kappa_{a_{1}}/2\pi=\kappa_{a_{2}}/2\pi=\kappa_{a}/2\pi=1.35\) MHz [69], \(\gamma_{q}/2\pi=1.2\) MHz [80]. The fidelity of the entanglement between two nonlocal magnons can reach 92.9%. The influence of imperfect relationships among the parameters is discussed next. Fig.2(a) shows the fidelity as influenced by the coupling strength between the microwave cavities and the SC qubit. Since \(\widetilde{\lambda}_{q}=\lambda_{q}^{2}/\Delta_{0}\) in Eq.(3), the fidelity curve resembles a parabola. In Fig.2(b)-(d), we give the fidelity as a function of the dissipations of the cavities, the magnons, and the SC qubit. As a result of the virtual-photon mediation, the fidelity is almost unaffected by the SC qubit, as shown in Fig.2(d).

## III Entanglement generation for three nonlocal magnons

### Entangled state of three nonlocal magnons

Similar to the protocol of entangled state generation for two nonlocal magnons in two microwave cavities, we consider the protocol for the entanglement of three nonlocal magnons. As shown in Fig.3, similar to the hybrid system composed of two magnons coupled to respective microwave cavities and a SC qubit in Fig.1, there are three magnons in three YIGs coupled to respective microwave cavities and a SC qubit placed at the center of the three identical cavities (\(\omega_{a_{1}}=\omega_{a_{2}}=\omega_{a_{3}}=\omega_{a}\)). Each magnon is in a biased static magnetic field and is located at the antinode of the microwave magnetic field. In the interaction picture, the Hamiltonian of the hybrid system depicted in Fig.3 is

\[H_{3}^{(\mathrm{I})}= \lambda_{m_{1}}a_{1}m_{1}^{\dagger}e^{\mathrm{i}\delta_{1}t}+\lambda_{m_{2}}a_{2}m_{2}^{\dagger}e^{\mathrm{i}\delta_{2}t}+\lambda_{m_{3}}a_{3}m_{3}^{\dagger}e^{\mathrm{i}\delta_{3}t}+\lambda_{q}a_{1}\sigma^{+}e^{\mathrm{i}\Delta_{1}t}+\lambda_{q}a_{2}\sigma^{+}e^{\mathrm{i}\Delta_{2}t}+\lambda_{q}a_{3}\sigma^{+}e^{\mathrm{i}\Delta_{3}t}+H.c., \tag{9}\]

where \(\lambda_{m_{3}}\) is the coupling strength between magnon 3 and microwave cavity 3, \(a_{3}\) and \(m_{3}^{\dagger}\) are the annihilation operator of cavity 3 and the creation operator of magnon 3, respectively, \(\lambda_{q}\) is the coupling between the SC qubit and the three cavities, and \(\delta_{3}=\omega_{m_{3}}-\omega_{a}\). The frequency \(\omega_{m_{3}}\) can be tuned by the biased magnetic field in microwave cavity 3, and \(\Delta_{1}=\Delta_{2}=\Delta_{3}=\omega_{q}-\omega_{a}=\Delta_{0}\). At the beginning we have the initial state \(|\psi\rangle_{0}^{(3)}=|\psi\rangle_{m}^{(3)}\otimes|\psi\rangle_{a}^{(3)}\otimes|g\rangle_{q}\), where \(|\psi\rangle_{m}^{(3)}=|1\rangle_{m_{1}}|0\rangle_{m_{2}}|0\rangle_{m_{3}}=|100\rangle_{m}\) and \(|\psi\rangle_{a}^{(3)}=|000\rangle_{a}\).
Proceeding in full analogy with the three steps of Sec. II (exciting magnon 1, redistributing the photon among the three cavities via the SC qubit in the far-detuning regime, and finally mapping each cavity back onto its magnon resonantly), one obtains a three-magnon state of the form \(|\psi\rangle_{3}^{(3)}=-\big[C_{1,t}^{(3)}|100\rangle_{m}+C_{2,t}^{(3)}|010\rangle_{m}+C_{3,t}^{(3)}|001\rangle_{m}\big]\otimes|000\rangle_{a}\otimes|g\rangle_{q}\), whose coefficients are the \(N=3\) case of Eq. (23) below; choosing the interaction time such that \(|C_{1,t}^{(3)}|^{2}=|C_{2,t}^{(3)}|^{2}=|C_{3,t}^{(3)}|^{2}=1/3\) yields the isoprobability entanglement. The realistic evolution of the hybrid system is governed by the master equation

\[\dot{\rho}^{(3)} = -{\rm i}[H_{3}^{({\rm I})},\rho^{(3)}]+\sum_{n=1}^{3}\kappa_{m_{n}}D[m_{n}]\rho^{(3)}+\sum_{n=1}^{3}\kappa_{a_{n}}D[a_{n}]\rho^{(3)}+\gamma_{q}D[\sigma]\rho^{(3)}, \tag{17}\]

where \(\rho^{(3)}\) is the density operator of the realistic evolution of the hybrid system, \(\kappa_{m_{3}}\) is the dissipation rate of magnon 3 with \(\kappa_{m_{3}}/2\pi=\kappa_{m}/2\pi=1.06\) MHz [69], \(\kappa_{a_{3}}\) denotes the dissipation rate of microwave cavity 3 with \(\kappa_{a_{3}}/2\pi=\kappa_{a}/2\pi=1.35\) MHz [69], and \(D[X]\rho^{(3)}=(2X\rho^{(3)}X^{\dagger}-X^{\dagger}X\rho^{(3)}-\rho^{(3)}X^{\dagger}X)/2\) for any \(X=m_{1},m_{2},m_{3},a_{1},a_{2},a_{3},\sigma\). The entanglement fidelity for three nonlocal magnons is defined by \(F^{(3)}={}_{3}^{(3)}\langle\psi|\rho^{(3)}|\psi\rangle_{3}^{(3)}\), which can reach 84.9%. The fidelity with respect to the parameters is shown in Fig.5.

## IV \(N\) magnons situation

In Sec. II and Sec. III, the entanglement of two and three nonlocal magnons has been established. In this section we consider the case of \(N\) magnons. In the hybrid system shown in Fig.6, the SC qubit is coupled to \(N\) cavity modes that have the same frequency \(\omega_{a}\). A magnon is coupled to the cavity mode in each cavity. Each magnon is placed at the antinode of the microwave magnetic field of its respective cavity and in a biased static magnetic field. In the interaction picture the Hamiltonian of the whole system shown in Fig.6 can be expressed as

\[H^{(\mathrm{I})}_{N}=\sum_{n}\bigg[\lambda_{m_{n}}(a_{n}m_{n}^{\dagger}e^{\mathrm{i}\delta_{n}t}+H.c.)+\lambda_{q}(a_{n}\sigma^{+}e^{\mathrm{i}\Delta_{n}t}+H.c.)\bigg], \tag{18}\]

where \(a_{n}\) and \(m_{n}^{\dagger}\) (\(n=1,2,3,\cdots,N\)) are the annihilation operator of the _n_th cavity mode and the creation operator of the _n_th magnon, \(\lambda_{m_{n}}\) is the coupling between the _n_th magnon and the _n_th cavity mode, \(\lambda_{q}\) denotes the coupling strength between the SC qubit and the _n_th cavity mode, \(\delta_{n}=\omega_{m_{n}}-\omega_{a}\), \(\omega_{m_{n}}\) is the frequency of the _n_th magnon, and \(\Delta_{n}=\Delta_{0}=\omega_{q}-\omega_{a}\).
The initial state is prepared as \[|\psi\rangle^{(N)}_{0}=|\psi\rangle^{(N)}_{m}\otimes|\psi\rangle^{(N)}_{a}\otimes|g\rangle_{q}, \tag{19}\] \[|\psi\rangle^{(N)}_{m}=|1\rangle_{m_{1}}|0\rangle_{m_{2}}|0\rangle_{m_{3}}\cdots|0\rangle_{m_{N}}=|100\cdots 0\rangle_{m},\] \[|\psi\rangle^{(N)}_{a}=|0\rangle_{a_{1}}|0\rangle_{a_{2}}|0\rangle_{a_{3}}\cdots|0\rangle_{a_{N}}=|000\cdots 0\rangle_{a}.\]

Figure 5: (a)-(c) The fidelity of the entanglement of three nonlocal magnons versus the dissipations of the cavities, the magnons, and the SC qubit.

Figure 6: (Color online) Schematic of the hybrid system composed of \(N\) yttrium iron garnet spheres coupled to respective microwave cavities. A superconducting qubit is placed at the center of the \(N\) identical microwave cavities.

At first, we tune the frequency of magnon 1 to satisfy the condition \(\delta_{1}=0\). Magnon 1 is then resonant with cavity 1, which means that the single excitation is transferred to cavity 1 as a photon, while the SC qubit is decoupled from all the cavities. The state evolves to \[|\psi\rangle_{1}^{(N)}=-\mathrm{i}|000\cdots 0\rangle_{m}|100\cdots 0\rangle_{a}|g\rangle_{q} \tag{20}\] after time \(T_{1}^{(N)}=\pi/2\lambda_{m_{1}}\). Next, the magnons are tuned out of resonance with their respective cavities, and the SC qubit is coupled to the \(N\) microwave cavities at the same time in the far-detuning regime \(\Delta_{0}\gg\lambda_{q}\). Under the condition \(\Delta_{n}=\Delta_{0}\), the effective Hamiltonian of the subsystem composed of the SC qubit and the \(N\) microwave cavities is of the form [83] \[H_{\mathrm{eff}}^{(N)}=\sum_{n}\widetilde{\lambda}_{q}\bigg{[}\sigma_{z}a_{n}^{\dagger}a_{n}+|e\rangle_{q}\langle e|\bigg{]}+\sum_{l<n}\widetilde{\lambda}_{q}\bigg{[}\sigma_{z}(a_{l}a_{n}^{\dagger}+H.c.)\bigg{]}. \tag{21}\] Consequently, the evolution of the hybrid system is given by \[|\psi\rangle_{2}^{(N)}=\bigg{[}C_{1,t}^{(N)}|100\cdots 0\rangle_{a}+C_{2,t}^{(N)}|010\cdots 0\rangle_{a}+C_{3,t}^{(N)}|001\cdots 0\rangle_{a}+\cdots+C_{N,t}^{(N)}|000\cdots 1\rangle_{a}\bigg{]}\otimes(-\mathrm{i})|000\cdots 0\rangle_{m}\otimes|g\rangle_{q}, \tag{22}\] where \[C_{1,t}^{(N)}=\frac{e^{\mathrm{i}N\widetilde{\lambda}_{q}t}+(N-1)}{N},\qquad C_{2,t}^{(N)}=C_{3,t}^{(N)}=\cdots=C_{N,t}^{(N)}=\frac{e^{\mathrm{i}N\widetilde{\lambda}_{q}t}-1}{N}. \tag{23}\] In addition, we have the relation \[\sum_{n}|C_{n,t}^{(N)}|^{2}=|C_{1,t}^{(N)}|^{2}+|C_{2,t}^{(N)}|^{2}+|C_{3,t}^{(N)}|^{2}+\cdots+|C_{N,t}^{(N)}|^{2}=1 \tag{24}\] by straightforward calculation. At last, the SC qubit is decoupled from the cavities, and each magnon is brought into resonance with its respective cavity. Thus, after the time \(T_{3n}^{(N)}=\pi/2\lambda_{m_{n}}\), the final state is given by \[|\psi\rangle_{3}^{(N)}=-\bigg{[}C_{1,t}^{(N)}|100\cdots 0\rangle_{m}+C_{2,t}^{(N)}|010\cdots 0\rangle_{m}+C_{3,t}^{(N)}|001\cdots 0\rangle_{m}+\cdots+C_{N,t}^{(N)}|000\cdots 1\rangle_{m}\bigg{]}\otimes|000\cdots 0\rangle_{a}\otimes|g\rangle_{q}. \tag{25}\] In the whole process, the state of the SC qubit remains unchanged. _[Remark]_ Concerning the coefficients in Eq.
(23), the probabilities with respect to the states \(|100\cdots 0\rangle_{m}|000\cdots 0\rangle_{a}|g\rangle_{q}\), \(|010\cdots 0\rangle_{m}|000\cdots 0\rangle_{a}|g\rangle_{q}\), \(|001\cdots 0\rangle_{m}|000\cdots 0\rangle_{a}|g\rangle_{q}\), \(\cdots\), \(|000\cdots 1\rangle_{m}|000\cdots 0\rangle_{a}|g\rangle_{q}\) are \(p_{1}^{(N)}=|C_{1,t}^{(N)}|^{2}\), \(p_{2}^{(N)}=|C_{2,t}^{(N)}|^{2}\), \(p_{3}^{(N)}=|C_{3,t}^{(N)}|^{2}\), \(\cdots\), \(p_{N}^{(N)}=|C_{N,t}^{(N)}|^{2}\), respectively, with \(p_{2}^{(N)}=p_{3}^{(N)}=\cdots=p_{N}^{(N)}\). If the condition \(p_{1}^{(N)}=p_{2}^{(N)}\) can be attained, the isoprobability entanglement is obtained. For instance, for \(N=4\), the entangled state of the four nonlocal magnons is given by \[|\psi\rangle_{3}^{(4)}=-\frac{1}{2}\bigg{[}|1000\rangle_{m}\!-\!|0100\rangle_{m}\!-\!|0010\rangle_{m}\!-\!|0001\rangle_{m}\bigg{]}\otimes|0000\rangle_{a}\otimes|g\rangle_{q}. \tag{26}\] However, if \(N\geqslant 5\), the isoprobability entanglement does not exist as a result of \(p_{1}^{(N)}\neq p_{2}^{(N)}\); see the illustration in Figs. 7(b) and 7(c).

## V Summary and discussion

We have presented protocols for establishing entanglement between magnons in hybrid systems composed of YIGs, microwave cavities and a SC qubit. By exploiting the virtual photon, the microwave cavities can interact indirectly in the far-detuning regime, and the frequencies of the magnons can be tuned by the biased magnetic fields, which leads to the resonant interaction between the magnons and the respective microwave cavities. We have constructed single-excitation entangled states of two and three nonlocal magnons, respectively, and the entanglement of \(N\) magnons has also been derived by extending the protocol for three magnons. By analyzing the coefficients in Eq. (23), the isoprobability entanglement has also been constructed for the cases \(N=2\), \(N=3\) and \(N=4\). In particular, such isoprobability entanglement no longer exists for \(N\geqslant 5\). In the protocol for the case of two magnons discussed in Sec. II, we first constructed the superposition of magnon 1 and microwave cavity 1. Then the photon could be transmitted between the two cavities, \(|1\rangle_{a_{1}}|0\rangle_{a_{2}}\rightarrow-|0\rangle_{a_{1}}|1\rangle_{a_{2}}\). Finally, the single-excitation Bell state is constructed in a resonant way. As for \(N\geqslant 3\), however, such a method is no longer applicable because the transfer \(|100\cdots 0\rangle_{a}\rightarrow\alpha_{2}|010\cdots 0\rangle_{a}+\alpha_{3}|001\cdots 0\rangle_{a}+\cdots+\alpha_{N}|000\cdots 1\rangle_{a}\) cannot be realized completely, namely, \(p_{1}^{(N)}\neq 0\).

## Acknowledgements

This work is supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 12075159 and 12171044, the Beijing Natural Science Foundation (Grant No. Z190005), and the Academician Innovation Platform of Hainan Province.
2308.06822
Approximate and Weighted Data Reconstruction Attack in Federated Learning
Federated Learning (FL) is a distributed learning paradigm that enables multiple clients to collaborate on building a machine learning model without sharing their private data. Although FL is considered privacy-preserved by design, recent data reconstruction attacks demonstrate that an attacker can recover clients' training data based on the parameters shared in FL. However, most existing methods fail to attack the most widely used horizontal Federated Averaging (FedAvg) scenario, where clients share model parameters after multiple local training steps. To tackle this issue, we propose an interpolation-based approximation method, which makes attacking FedAvg scenarios feasible by generating the intermediate model updates of the clients' local training processes. Then, we design a layer-wise weighted loss function to improve the data quality of reconstruction. We assign different weights to model updates in different layers concerning the neural network structure, with the weights tuned by Bayesian optimization. Finally, experimental results validate the superiority of our proposed approximate and weighted attack (AWA) method over the other state-of-the-art methods, as demonstrated by the substantial improvement in different evaluation metrics for image data reconstructions.
Yongcun Song, Ziqi Wang, Enrique Zuazua
2023-08-13T17:40:56Z
http://arxiv.org/abs/2308.06822v2
# Approximate and Weighted Data Reconstruction Attack in Federated Learning

###### Abstract

Federated Learning (FL) is a distributed learning paradigm that enables multiple clients to collaborate on building a machine learning model without sharing their private data. Although FL is considered privacy-preserved by design, recent data reconstruction attacks demonstrate that an attacker can recover clients' training data based on the parameters shared in FL. However, most existing methods fail to attack the most widely used horizontal Federated Averaging (FedAvg) scenario, where clients share model parameters after multiple local training steps. To tackle this issue, we propose an interpolation-based approximation method, which makes attacking FedAvg scenarios feasible by generating the intermediate model updates of the clients' local training processes. Then, we design a layer-wise weighted loss function to improve the data quality of reconstruction. We assign different weights to model updates in different layers concerning the neural network structure, with the weights tuned by Bayesian optimization. Finally, experimental results validate the superiority of our proposed approximate and weighted attack (AWA) method over the other state-of-the-art methods, as demonstrated by the substantial improvement in different evaluation metrics for image data reconstructions.

## 1 Introduction

With the increasing volume of data generated by distributed personal electronic devices and organizations, traditional centralized approaches for training machine learning models face challenges in terms of data collection, privacy concerns, and scalability. To address them, Federated Learning (FL) [13; 15] has emerged as a promising paradigm and has gained significant attention in recent years. One prominent feature of FL is its ability to facilitate model training on distributed data sources owned by individual clients while keeping the data localized and exchanging only model updates. For example, in the most commonly used Federated Averaging (FedAvg) [15] algorithm, each client trains its local model with its private data and sends the updated model parameters to a server, where the model parameters are aggregated and used to update the global model. In other words, FL enables multiple participants to build a common and robust machine learning model without sharing data, thus addressing critical issues such as data privacy, data security, and data access rights. FL has proven particularly valuable in applications that combine growing amounts of data with increasing privacy concerns, such as healthcare [1; 22] and learning a controller model across several autonomous vehicles without sharing their historical trajectories [25]. Although it was widely believed that model updates in FL are safe to share, recent studies [6, 23, 24, 27] have shown that clients' sensitive training data can be compromised through _data reconstruction attacks_ [27]. In these attacks, the adversary randomly initializes dummy samples and labels, and executes forward and backward propagation to obtain dummy model updates. Through an iterative process of minimizing the discrepancy between the dummy model updates and the shared ones, the dummy samples and labels are updated simultaneously.
In the literature, some work has already been done to improve the reconstruction performance; we refer to the label inference techniques in [24, 26], the new distance functions and optimizers in [5, 19], and the regularization methods in [5, 9, 21]. By inferring labels in advance, the joint optimization of both samples and labels can be avoided, thus reducing the complexity of the optimization problem. It was first discovered in [26] that the label information of a single sample can be analytically extracted based on the model updates of the last fully connected layer. Later, [24] extended the single sample label extraction to batch samples, under the limiting assumption of non-repeating labels in the batch. The above limitation is further addressed by the batch label inference approach proposed in [6]. To measure the discrepancy between the dummy and shared gradients, the Euclidean distance is commonly used in the loss function for the attack, see [24, 26, 27]. Moreover, the angle-based cosine similarity was suggested in [5, 23] since the high-dimensional direction of the gradients carries more information than the magnitude. In [19], a Gaussian kernel of gradient differences was proposed to measure the discrepancy, allowing the scaling factor in the kernel to be adapted to the distribution of gradients in each attack. As for the optimizers employed in the attacks, the L-BFGS [14] and the Adam [11] are the most commonly used ones, see e.g., [5, 9, 19, 21, 24, 26, 27]. In particular, the reconstruction performance of the above two optimizers was compared in [5, 19, 21]. It has been shown in [21] that the L-BFGS requires fewer attack iterations to achieve high reconstruction quality compared to the Adam when attacking the LFW dataset [8]. On the other hand, in [5], the L-BFGS performs worse than the Adam for the CIFAR-10 dataset [12]. Although there is no definitive analysis guiding the selection of cost functions and optimizers, it is evident that appropriate choices can enhance the effectiveness of attacks in specific scenarios. Another important way to improve the reconstruction performance is to add auxiliary regularization terms to the loss function for the attack based on some prior knowledge of the data. In [21], a label regularizer was proposed to match the dummy samples and labels when both of them are optimized simultaneously. Moreover, some image prior information can be employed for image reconstruction attacks, such as the total variation regularization [5] to reduce the noise in images, and the group registration regularization [24] to center the position of the main object in images. In [24], a prior term is proposed by using the values of the mean and variance of a batch of samples in batch normalization layers. A generative model pre-trained on the raw data distribution was used in [9] to improve the reconstruction. However, in most distributed learning systems, batch normalization information and raw data distribution are not necessarily shared, which makes these methods less practical. Despite remarkable progress, limited attention has been paid to attacking the FedAvg with multiple-step model updates, where clients share accumulated local model parameters after training for multiple epochs, each executed over multiple mini-batches. In this regard, an approximate and weighted method, named AGIC, is proposed in [23].
The AGIC method employs a weighted cosine similarity loss function by assigning linearly increasing weights to different convolutional and fully connected layers. This method assumes that a combined batch consisting of all the mini-batches used in the client's local training process can approximate the received model update in a single local update step. However, such an approximation method works only for scenarios with small local learning rates. Moreover, the weights are chosen empirically rather than systematically. To address the above issues, we propose a novel approximate and weighted data reconstruction attack method against FL systems utilizing FedAvg algorithms. First, we present an interpolation-based approximation method that generates intermediate model updates of clients' local training processes. As a result, attacking the FedAvg with multiple-step model updates becomes feasible. Then, we propose a layer-wise weighted loss function to enhance the reconstruction quality. Different weights are assigned to model updates in each layer based on the neural network structure. The selection of the weights is optimized using the Bayesian optimization method. Overall, our main contributions are as follows:

1. To attack the FedAvg with multiple-step model updates, we propose an interpolation-based approximation method. The model update corresponding to each epoch is approximated by interpolating the received combined model updates. The proposed approximation method makes attacks against FedAvg scenarios feasible and effective, as demonstrated by numerical experiments.
2. To further improve the attack performance after approximation, we employ a layer-wise weighted loss function for the attack. Different weights are assigned to different layers, and these weights are determined by Bayesian optimization [4]. Additionally, we enhance the weights of layers with relatively larger errors, improving the attack's adaptability and performance.
3. Our method generalizes across settings: it is compatible with various neural network architectures, such as Convolutional Neural Networks (CNNs) and Residual Neural Networks (ResNets). Furthermore, it is capable of reconstructing training data based on the model updates leaked at different stages of the training process.

The rest of the paper is organized as follows. In Section 2, we provide a comprehensive background on FL and data reconstruction attacks. In Section 3, various attack scenarios are analyzed to identify the most challenging one. Section 4 presents our proposed method, including the approximation method and the layer-wise weighted loss function. The experimental setup and simulation results are presented in Section 5. Finally, we conclude the paper in Section 6.

## 2 Preliminaries

In this section, we first provide a detailed description of the mathematical formulation and training process of FL. Then, we introduce the formulation and setup of data reconstruction attacks.

### Problem Statement of FL

FL aims to learn a machine learning model \(h:\mathbb{R}^{d_{x}}\times\mathbb{R}^{d_{b}}\rightarrow\mathbb{R}^{d_{y}}\) parameterized by \(\theta\in\mathbb{R}^{d_{b}}\) such that, given any data \(x\in\mathbb{R}^{d_{x}}\), the value \(h(x;\theta)\) offers an accurate prediction about the label \(y\in\mathbb{R}^{d_{y}}\).
A crucial constraint in FL is that the training data and labels are stored across \(C\) distributed clients, and each client's data and labels can only be accessed and processed locally during the training process. Mathematically, the training process of FL with \(C\) clients can be formulated as the following minimization problem [13]: \[\min_{\theta}\sum_{k=1}^{C}p_{k}F_{k}(\theta), \tag{1}\] where \(F_{k}:\mathbb{R}^{d_{b}}\rightarrow\mathbb{R}\) is the local loss function for client \(k\), and \(p_{k}\geq 0\) with \(\sum_{k=1}^{C}p_{k}=1\) specifies the relative impact of client \(k\). In practice, \(F_{k}\) is typically defined as the empirical risk over client \(k\)'s local dataset \(\{(x_{i}^{(k)},y_{i}^{(k)})\}_{i=1}^{N^{(k)}}\), i.e., \(F_{k}(\theta)=1/N^{(k)}\sum_{i=1}^{N^{(k)}}\ell(h(x_{i}^{(k)};\theta),y_{i}^{(k)})\), where \(\ell\): \(\mathbb{R}^{d_{y}}\times\mathbb{R}^{d_{y}}\rightarrow\mathbb{R}\) is a prescribed loss function. Common choices of \(\ell\) include the \(l_{2}\) and the cross-entropy loss functions, see [2] for more options. The relative impact \(p_{k}\) is often chosen as \(p_{k}=N^{(k)}/N_{C}\), where \(N_{C}=\sum_{k=1}^{C}N^{(k)}\) is the total size of all the clients' datasets.

### FedAvg Algorithm

For solving (1), FL algorithms normally combine local model update processes performed by each client with model aggregation steps performed by a central server. To fix ideas, we focus on the FedAvg [15], which is the most commonly used algorithm in FL. The FedAvg presented in Algorithm 1 involves a series of global rounds, in which the server first dispatches the latest global model parameters to a group of selected clients. Then, the selected clients compute model updates to the current global model with their private data and send the updated model parameters back to the server. Finally, the server aggregates the received model parameters to update the global model parameters, which serve as the initializer for the next round. Next, we elaborate on the implementation of Algorithm 1. At each round \(t=1,2,\ldots,T\), the server selects a set of clients \(\mathcal{K}\subseteq\{k\}_{k=1}^{C}\) to participate the training and sends them the current global model parameters \(\theta_{t}\). Then, each selected client \(k\in\mathcal{K}\) sets its local model parameters \(\theta_{t}^{(k)}=\theta_{t}\) and updates \(\theta_{t}^{(k)}\) for \(E\) epochs, each consisting of \(B^{(k)}\) mini-batches. In each epoch \(e=1,2,\ldots,E\), client \(k\) first shuffles its dataset \(\mathcal{D}^{(k)}\) and partitions it into \(B^{(k)}=N^{(k)}/M\) mini-batches (without loss of generality, we assume that the dataset is divisible into \(B^{(k)}\) mini-batches of size \(M\)): \[\mathcal{D}^{(k)}=\{(X^{(k)},Y^{(k)})\}=\{(X^{(k)}_{t,e,b},Y^{(k)}_{t,e,b})\}_{b=1}^{B^{(k)}}, \tag{2}\] where \(X^{(k)}=\{x^{(k)}_{i}\}_{i=1}^{N^{(k)}}\) is the set of the training data, \(Y^{(k)}=\{y^{(k)}_{i}\}_{i=1}^{N^{(k)}}\) is the set of the labels, and \(\{(X^{(k)}_{t,e,b},Y^{(k)}_{t,e,b})\}\) represents the training set for the \(b\)-th mini-batch at round \(t\), epoch \(e\). If not otherwise stated, the subscripts \(t\), \(e\), and \(b\) in the following discussions indicate the index of the round, the epoch, and the mini-batch, respectively.
For each mini-batch \(b=1,2,\ldots,B^{(k)}\), client \(k\) updates its local model parameters \(\theta_{t,e,b}^{(k)}:=\theta_{t}^{(k)}\) using mini-batch gradient descent: \[\theta_{t,e,b}^{(k)}\leftarrow\theta_{t,e,b}^{(k)}-\eta\nabla_{\theta_{t,e,b}^{(k)}}\ell\left(X^{(k)}_{t,e,b},Y^{(k)}_{t,e,b}\right), \tag{3}\] where \(\eta>0\) is the local learning rate, and we use \(\ell(X,Y):=1/M\sum_{i=1}^{M}\ell(h(x_{i};\theta),y_{i})\) to represent the averaged loss of a mini-batch of size \(M\) for simplicity. After training for \(E\) epochs (each epoch consists of \(B^{(k)}\) mini-batches), client \(k\)'s model parameters update \(\Delta\theta_{t}^{(k)}\) can be obtained as \[\Delta\theta_{t}^{(k)}=-\eta\sum_{e=1}^{E}\sum_{b=1}^{B^{(k)}}\nabla_{\theta_{t,e,b}^{(k)}}\ell\left(X^{(k)}_{t,e,b},Y^{(k)}_{t,e,b}\right). \tag{4}\] As a result, client \(k\)'s local model parameters \(\theta_{t+1}^{(k)}\) become \[\theta_{t+1}^{(k)}=\theta_{t}^{(k)}+\Delta\theta_{t}^{(k)}. \tag{5}\] Then, client \(k\) sends its updated local model parameters \(\theta_{t+1}^{(k)}\) back to the server for averaging. After receiving updated local model parameters \(\{\theta_{t+1}^{(k)}\}_{k\in\mathcal{K}}\) from the clients, the server performs a weighted averaging of the model parameters as follows: \[\theta_{t+1}=\sum_{k\in\mathcal{K}}\frac{N^{(k)}}{N_{K}}\theta_{t+1}^{(k)}, \tag{6}\] where \(N_{K}=\sum_{k\in\mathcal{K}}N^{(k)}\) is the total size of the \(K\) participating clients' datasets. Finally, the aggregated global model parameters \(\theta_{t+1}\) are used as the initializer for the next round.

### Data Reconstruction Attack

Despite the fact that the clients only share the updated model parameters with the server, their private training data are still vulnerable to data reconstruction attacks [5; 24; 27]. In this subsection, we introduce the formulation and the general procedure of a data reconstruction attack. As shown in (4), during the local training process at round \(t\), client \(k\)'s model parameters update \(\Delta\theta_{t}^{(k)}\) consists of the averaged gradients computed on \(E\times B^{(k)}\) mini-batches. Let \(G_{t}^{(k)}\) be a mapping from the training data \(\{(X^{(k)},Y^{(k)})\}\) defined in (2) to the model parameters update \(\Delta\theta_{t}^{(k)}\); then we can rewrite (4) in a compact manner as \[\Delta\theta_{t}^{(k)}=G_{t}^{(k)}\left(X^{(k)},Y^{(k)}\right). \tag{7}\] For an attacker with access to \(\Delta\theta_{t}^{(k)}\), reconstructing \(\{(X^{(k)},Y^{(k)})\}\) is essentially an inverse problem. In particular, if \([G_{t}^{(k)}]^{-1}\) exists and is known analytically, the attacker can recover \(\{(X^{(k)},Y^{(k)})\}\) directly as follows: \[\left(X^{(k)},Y^{(k)}\right)=[G_{t}^{(k)}]^{-1}\left(\Delta\theta_{t}^{(k)}\right). \tag{8}\] Notice that the attacker can independently attack any client \(k\in\mathcal{K}\) that participated in the training at round \(t\). However, since neural networks are highly nonlinear and the model updates are aggregated over multiple mini-batches, it is generally difficult to identify \([G_{t}^{(k)}]^{-1}\). To address this issue, we introduce a numerical approach for solving (7). In the sequel we omit the superscript \(k\) to simplify the notation. Assuming that an attacker has access to the client's training process and hence knows \(G_{t}\), problem (7) can be solved by the numerical approach elaborated below.
First, to launch the attack, the attacker randomly initializes a dummy dataset \((\hat{X},\hat{Y})\) with the same dimension as that of the client's original dataset \((X,Y)\). The attacker uses \(G_{t}\) to calculate the dummy model update \(\Delta\hat{\theta}_{t}\) given by \[\Delta\hat{\theta}_{t}=G_{t}(\hat{X},\hat{Y}). \tag{9}\] Proceeding as in [27], the attacker can reconstruct the client's dataset by matching the dummy model update \(\Delta\hat{\theta}_{t}\) with the ground-truth model update \(\Delta\theta_{t}\), minimizing a model update matching loss function \(\ell_{m}\) given by \[\ell_{m}(\hat{X},\hat{Y})=\|\Delta\hat{\theta}_{t}-\Delta\theta_{t}\|^{2}=\left\|G_{t}(\hat{X},\hat{Y})-G_{t}(X,Y)\right\|^{2}. \tag{10}\] In practice, one can also use other loss functions like the cosine similarity loss [5] to evaluate the distance between \(\Delta\hat{\theta}_{t}\) and \(\Delta\theta_{t}\). Therefore, the data reconstruction attack can be conducted by solving the following optimization problem: \[(\hat{X}^{*},\hat{Y}^{*})=\arg\min_{\hat{X},\hat{Y}}\ell_{m}(\hat{X},\hat{Y}). \tag{11}\] This can be done using the gradient descent method with the learning rate \(\hat{\eta}\): \[\hat{X}\leftarrow\hat{X}-\hat{\eta}\nabla_{\hat{X}}\ell_{m}(\hat{X},\hat{Y}),\quad\hat{Y}\leftarrow\hat{Y}-\hat{\eta}\nabla_{\hat{Y}}\ell_{m}(\hat{X},\hat{Y}).\]

**Remark 2.1**.: _In FL, the central server holds a substantial amount of information about the training process. Data reconstruction attacks can typically be developed by an honest-but-curious server [5], acting as attacker, who has access to the following information, and in particular \(G_{t}\) in (9)._

1. _Model architecture of \(h\): Normally the server decides the architecture of the neural network that is shared among all the clients._
2. _Initial model parameters \(\theta_{t}\): For each client, its initial local model parameter \(\theta_{t}\) is dispatched from the server._
3. _Model parameters update \(\Delta\theta_{t}\): Each client sends the updated model parameters \(\theta_{t+1}\) obtained in (5) back to the server. Thus, \(\Delta\theta_{t}=\theta_{t+1}-\theta_{t}\) can be obtained by the server easily._
4. _Loss function \(\ell\): Similar to \(h\), it is common that the server decides the form of the loss function that is shared among all the clients. The choice made is kept unchanged during the training process._
5. _Dataset size \(N\): This information is shared with the server for weighted aggregation as shown in (6)._
6. _Client's local training hyperparameters: As shown in (4) and (7), the knowledge of \(G_{t}\) depends on the hyperparameters listed below. In general cases, the server can assign these hyperparameters to each client._
   1. _Number of epochs_ \(E\)
   2. _Number of mini-batches_ \(B\)
   3. _Learning rate_ \(\eta\)

## 3 Analysis of Attack Scenarios: Different \(E\) and \(B\)

As shown in (9), a data reconstruction attack requires the knowledge of \(G_{t}\) to calculate the dummy model update \(\Delta\hat{\theta}_{t}\). In this section, we discuss the difficulty of knowing \(G_{t}\) for four FedAvg scenarios in terms of different values of \(E\) and \(B\) and, in particular, when \(E>1\) and \(B>1\).

Scenario 1: \(E=1,B=1\). As we shall see, the attacker can get access to \(G_{t}\) and replicate the client's training process.
Indeed, since \(E=1\) and \(B=1\), the client uses the whole dataset for one epoch as follows: \[G_{t}(X,Y)=-\eta\nabla_{\theta_{t}}\ell\left(X,Y\right)=-\eta\frac{1}{N}\sum_{i=1}^{N}\nabla_{\theta_{t}}\ell\left(x_{i},y_{i}\right). \tag{12}\] Thus, the attacker can replicate the client's training process by replacing \((X,Y)\) with the dummy dataset \((\hat{X},\hat{Y})=\{(\hat{x}_{i},\hat{y}_{i})\}_{i=1}^{N}\) in the following way: \[G_{t}(\hat{X},\hat{Y})=-\eta\nabla_{\theta_{t}}\ell\left(\hat{X},\hat{Y}\right)=-\eta\frac{1}{N}\sum_{i=1}^{N}\nabla_{\theta_{t}}\ell\left(\hat{x}_{i},\hat{y}_{i}\right). \tag{13}\]

Scenario 2: \(E>1,B=1\). The attacker can also obtain the knowledge of \(G_{t}\) in this scenario. To be concrete, the client uses the batch gradient descent for \(E\) epochs: \[G_{t}(X,Y)=-\eta\sum_{e=1}^{E}\nabla_{\theta_{t,e}}\ell\left(X,Y\right). \tag{14}\] In each epoch, since \(B=1\), the gradients are computed on the \(N\) samples, as in Scenario 1. In this case, the attacker can replicate the client's training process by replacing \((X,Y)\) with the dummy dataset \((\hat{X},\hat{Y})\), and train the model for \(E\) epochs as follows: \[G_{t}(\hat{X},\hat{Y})=-\eta\sum_{e=1}^{E}\nabla_{\theta_{t,e}}\ell\left(\hat{X},\hat{Y}\right). \tag{15}\]

Scenario 3: \(E=1,B>1\). In this scenario, since \(B>1\), the client uses the mini-batch gradient descent for one epoch in the following way: \[G_{t}(X,Y)=-\eta\sum_{b=1}^{B}\nabla_{\theta_{t,b}}\ell\left(X_{t,b},Y_{t,b}\right) \tag{16}\] with the gradients computed on \(B\) mini-batches. At each epoch of the mini-batch gradient descent method, the client randomly shuffles the dataset and separates it into \(B\) mini-batches. In this scenario, since \(E=1\), the dataset is only shuffled once. The attacker can first separate its dummy dataset \((\hat{X},\hat{Y})\) into \(B\) mini-batches \(\{(\hat{X}_{t,b},\hat{Y}_{t,b})\}_{b=1}^{B}\). Then, by replacing \((X_{t,b},Y_{t,b})\) in (16) with \((\hat{X}_{t,b},\hat{Y}_{t,b})\), it can replicate the client's training process as below: \[G_{t}(\hat{X},\hat{Y})=-\eta\sum_{b=1}^{B}\nabla_{\theta_{t,b}}\ell\left(\hat{X}_{t,b},\hat{Y}_{t,b}\right). \tag{17}\] The only difference is that the reconstructed \((\hat{X},\hat{Y})\) are in the same order as the shuffled client's dataset. Hence, the attacker can still obtain \(G_{t}\) and replicate the local training process in this scenario.

Scenario 4: \(E>1,B>1\). In this case, the attacker cannot gain the needed knowledge of \(G_{t}\) to calculate the dummy model update \(\Delta\hat{\theta}_{t}\) defined in (9). Indeed, the client uses the mini-batch gradient descent for \(E\) epochs: \[G_{t}(X,Y)=-\eta\sum_{e=1}^{E}\sum_{b=1}^{B}\nabla_{\theta_{t,e,b}}\ell\left(X_{t,e,b},Y_{t,e,b}\right). \tag{18}\] In each epoch, the client first shuffles its dataset and then separates it into \(B\) mini-batches. As a result, the attacker cannot replicate the client's mini-batch separation when \(E>1\) due to the randomness of the shuffling process. Most existing attack methods in the literature are only applicable to Scenarios 1-3 and limited attention has been given to the more challenging Scenario 4. To address it, we propose an interpolation-based approximation method. By interpolating the received model updates, we can approximate the model update corresponding to each epoch, thereby reducing the problem from Scenario 4 to Scenario 3. The details of the proposed method are presented in Section 4.1.
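Before turning to the method, the baseline matching attack (9)-(11) in the replicable Scenario 3 is easy to state in code. The sketch below is our own toy illustration (PyTorch, with a linear model standing in for the neural network; all names are ours, not the authors' implementation), where \(G_{t}\) is the one-epoch mini-batch SGD of (17); the comment in the middle previews the interpolation (19)-(20) developed in Section 4.1. For such a tiny model the recovery is not unique; the point is only the structure of the optimization.

```
# Toy sketch of the matching attack (9)-(11) in Scenario 3 (E=1, B>1).
# Illustrative assumptions: a linear model replaces the network; names
# are ours. Requires PyTorch.
import torch

def G(X, Y, theta0, B, eta):
    """Replicate one epoch of the client's mini-batch SGD, Eq. (17)."""
    theta = theta0.clone().requires_grad_(True)
    start = theta
    M = len(X) // B
    for b in range(B):
        Xb, Yb = X[b * M:(b + 1) * M], Y[b * M:(b + 1) * M]
        loss = 0.5 * ((Xb @ theta - Yb) ** 2).mean()
        g, = torch.autograd.grad(loss, theta, create_graph=True)
        theta = theta - eta * g                 # differentiable SGD step
    return theta - start                        # the model update

torch.manual_seed(0)
d, N, B, eta = 4, 8, 2, 0.1
theta0 = torch.zeros(d)
X, Y = torch.randn(N, d), torch.randn(N)
delta = G(X, Y, theta0, B, eta).detach()        # "intercepted" update

# Scenario 4 -> Scenario 3 via Eqs. (19)-(20): with E local epochs, each
# approximate per-epoch target update is simply  delta_epoch = delta_total / E.

X_hat = torch.randn(N, d, requires_grad=True)
Y_hat = torch.randn(N, requires_grad=True)
opt = torch.optim.Adam([X_hat, Y_hat], lr=0.1)
for _ in range(500):
    opt.zero_grad()
    match = ((G(X_hat, Y_hat, theta0, B, eta) - delta) ** 2).sum()  # Eq. (10)
    match.backward()
    opt.step()
```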
## 4 Approximate and Weighted Data Reconstruction Method

In this section, we propose an approximate and weighted data reconstruction attack method for solving (7).

### 4.1 Approximation of the intermediate model update \(\{\Delta\theta_{t,e}\}_{e=1}^{E}\) when \(E>1\) and \(B>1\)

As discussed in Section 3, the shuffling of the dataset in each epoch makes it difficult for the attacker to know \(G_{t}\). However, if we know the intermediate model update \(\Delta\theta_{t,e},e\in\{1,2,\ldots,E\}\) of each epoch, the problem can be reduced from Scenario 4 (\(E>1\) and \(B>1\)) to Scenario 3 (\(E=1\) and \(B>1\)) by attacking each epoch separately. To this end, we interpolate between \(\theta_{t+1}\) and \(\theta_{t}\) to approximate the intermediate model parameters \(\{\theta_{t,e}\}_{e=1}^{E}\) corresponding to each epoch. Particularly, for a client who trains its model for \(E\) epochs, the approximate intermediate model parameters \(\{\tilde{\theta}_{t,e}\}_{e=1}^{E}\) can be obtained as \[\tilde{\theta}_{t,e}=\frac{\theta_{t+1}-\theta_{t}}{E}e+\theta_{t}\quad\text{for }e=1,2,\ldots,E. \tag{19}\] Then, the approximate model updates \(\{\Delta\tilde{\theta}_{t,e}\}_{e=1}^{E}\) corresponding to each epoch can be obtained as \[\Delta\tilde{\theta}_{t,e}=\tilde{\theta}_{t,e}-\tilde{\theta}_{t,e-1}\quad\text{for }e=1,2,\ldots,E, \tag{20}\] where \(\tilde{\theta}_{t,0}=\theta_{t}\) is the initial model parameter at round \(t\). After obtaining the approximate \(\{\Delta\tilde{\theta}_{t,e}\}_{e=1}^{E}\), one can use each \(\Delta\tilde{\theta}_{t,e}\) as the target model update of the corresponding epoch \(e\in\{1,2,\ldots,E\}\). In other words, the problem is reduced from Scenario 4 (\(E>1\) and \(B>1\)) to Scenario 3 (\(E=1\) and \(B>1\)). Let \(G_{t,e}\) denote the client's training process of epoch \(e\) at round \(t\). Then, the dummy model update \(\Delta\hat{\theta}_{t,e}\) can be obtained based on (17), given as \[\Delta\hat{\theta}_{t,e}=G_{t,e}(\hat{X},\hat{Y})=-\eta\sum_{b=1}^{B}\nabla_{\theta_{t,e,b}}\ell\left(\hat{X}_{t,e,b},\hat{Y}_{t,e,b}\right). \tag{21}\] Finally, the attack can be conducted following the procedures of Scenario 3.

### 4.2 Improved Weighted Loss Function for the Data Reconstruction Attack

The standard loss function (10) for the data reconstruction attack treats different components of the model update \(\Delta\theta_{t}\) equally. However, as observed in [3], different layers in a neural network provide different contributions to boosting the performance. Inspired by this fact, we propose to assign different weights to the model updates in different layers to facilitate the reconstruction. The implementation of our method is elaborated in this section. Consider a neural network with \(L\) layers. Then the model update \(\Delta\theta_{t}\) is the collection of the layer-wise updates \(\Delta\theta_{t}^{(l)}\): \[\Delta\theta_{t}=\{\Delta\theta_{t}^{(l)}\}_{l=1}^{L}. \tag{22}\] The same applies to the dummy model updates \(\Delta\hat{\theta}_{t}=\{\Delta\hat{\theta}_{t}^{(l)}\}_{l=1}^{L}\). By assigning the weight \(q^{(l)}>0\) to the model updates at layer \(l\), the loss function (10) for the attack becomes \[\ell_{Q}(\hat{X},\hat{Y})=\sum_{l=1}^{L}q^{(l)}\left\|\Delta\hat{\theta}_{t}^{(l)}-\Delta\theta_{t}^{(l)}\right\|^{2}. \tag{23}\] Then, the crucial question is how to choose layer weights \(q^{(l)}\) to improve the reconstruction.

#### 4.2.1 Design of Layer Weights

Increasing weights layer by layer. We consider linearly increasing weight functions for different types of layers.
To expose our ideas clearly, we focus on the commonly used ResNet architecture [7], which contains convolutional, batch normalization, and fully connected layers. We design the following weight functions for each kind of layer: \[q_{cv}^{(l)} =\begin{cases}\frac{q_{cv}-1}{L_{cv}-1}(l-1)+1,&\text{if }l=1,2,...,L_{cv}>1,\\ q_{cv},&\text{if }l=L_{cv}=1,\end{cases} \tag{24a}\] \[q_{bn}^{(l)} =\begin{cases}\frac{q_{bn}-1}{L_{bn}-1}(l-1)+1,&\text{if }l=1,2,...,L_{bn}>1,\\ q_{bn},&\text{if }l=L_{bn}=1,\end{cases}\] (24b) \[q_{fc}^{(l)} =\begin{cases}\frac{q_{fc}-1}{L_{fc}-1}(l-1)+1,&\text{if }l=1,2,...,L_{fc}>1,\\ q_{fc},&\text{if }l=L_{fc}=1,\end{cases} \tag{24c}\] where \(L_{cv}\), \(L_{bn}\), and \(L_{fc}\) are the numbers of the convolutional, batch normalization, and fully connected layers, respectively, and \(L=L_{cv}+L_{bn}+L_{fc}\). The values of \(q_{cv}>1\), \(q_{bn}>1\) and \(q_{fc}>1\) are the largest weights, assigned to the last layer of each type. For a given neural network with a fixed number of layers (\(L_{cv}\), \(L_{bn}\) and \(L_{fc}\)), the values of \(q_{cv}\), \(q_{bn}\) and \(q_{fc}\) determine the slope of the linearly increasing weight functions in (24).

Enhancing the weights of layers with larger errors. Adding linearly increasing weights determined by (24) to the loss function (23) may overly emphasize the importance of some layers in the neural network and lead to a biased reconstruction. To strike a balance between adding linearly increasing weights and avoiding biased reconstructions, we propose to modify the weights of layers with larger errors by exploiting statistical information such as the mean \(\mu(\cdot)\) and the variance \(\sigma^{2}(\cdot)\) of the layer-wise model updates \(\{\Delta\theta_{t}^{(l)}\}_{l=1}^{L}\) in (22). The procedure of our enhancing method is elaborated below. First, we calculate the relative errors \(e_{mean}^{(l)}\) and \(e_{var}^{(l)}\) of the dummy model update \(\Delta\hat{\theta}_{t}^{(l)}\) and the real model update \(\Delta\theta_{t}^{(l)}\) at each layer \(l=1,2,...,L\) as follows: \[e_{mean}^{(l)}=\frac{|\mu(\Delta\hat{\theta}_{t}^{(l)})-\mu(\Delta\theta_{t}^{(l)})|}{|\mu(\Delta\theta_{t}^{(l)})|},\quad l=1,2,...,L, \tag{25}\] \[e_{var}^{(l)}=\frac{|\sigma^{2}(\Delta\hat{\theta}_{t}^{(l)})-\sigma^{2}(\Delta\theta_{t}^{(l)})|}{|\sigma^{2}(\Delta\theta_{t}^{(l)})|},\quad l=1,2,...,L. \tag{26}\] Next, we select a subset \(\mathcal{P}\subseteq\mathcal{L}=\{l\}_{l=1}^{L}\) of layers with the largest relative errors in terms of \(\{e_{mean}^{(l)}\}_{l=1}^{L}\) and \(\{e_{var}^{(l)}\}_{l=1}^{L}\), and set their layer weights \(q^{(l)}\) to \(q_{en}\in\mathbb{R}\), i.e., \[q^{(l)}=q_{en},\quad l\in\mathcal{P}. \tag{27}\] The choice of the subset \(\mathcal{P}\) can be decided by the proportional parameters \(p_{mean}\in[0,1]\) and \(p_{var}\in[0,1]\) in the following way: For a given \(p_{mean}\), we first select \(N_{mean}=\lceil p_{mean}\cdot L\rceil\) layers with the largest relative error in terms of \(\{e_{mean}^{(l)}\}_{l=1}^{L}\). Let the set of indices corresponding to the \(N_{mean}\) layers be denoted as \(\mathcal{P}_{mean}\), that is \[\mathcal{P}_{mean}=\{i_{1},i_{2},\ldots,i_{N_{mean}}\}, \tag{28}\] where \(i_{1},i_{2},\ldots,i_{N_{mean}}\in\{1,2,\ldots,L\}\) are the indices selected such that \(e_{mean}^{(i_{1})}\geq e_{mean}^{(i_{2})}\geq\ldots\geq e_{mean}^{(i_{N_{mean}})}\geq e_{mean}^{(l)},(l\in\mathcal{L}\setminus\mathcal{P}_{mean})\).
Similarly, we can get a set \(\mathcal{P}_{var}\) with \(N_{var}=\lceil p_{var}\cdot L\rceil\) elements as \[\mathcal{P}_{var}=\{j_{1},j_{2},\ldots,j_{N_{var}}\}, \tag{29}\] where \(j_{1},j_{2},\ldots,j_{N_{var}}\in\{1,2,\ldots,L\}\) are the indices that satisfy \(e_{var}^{(j_{1})}\geq e_{var}^{(j_{2})}\geq\ldots\geq e_{var}^{(j_{N_{var}})}\geq e_{var}^{(l)},(l\in\mathcal{L}\setminus\mathcal{P}_{var})\). Finally, the subset \(\mathcal{P}\) can be obtained as the intersection of \(\mathcal{P}_{mean}\) and \(\mathcal{P}_{var}\): \[\mathcal{P}=\mathcal{P}_{mean}\cap\mathcal{P}_{var}. \tag{30}\]

Hyperparameters to tune. Following (24) and (27), the layer weights \(\{q^{(l)}\}_{l=1}^{L}\) in (23) are determined by the parameter vector \(Q\in\mathbb{R}^{6}\), which is defined as \[Q=(q_{cv},q_{bn},q_{fc},q_{en},p_{mean},p_{var}). \tag{31}\] Given \(Q\), by using the weighted loss function (23), one can obtain the reconstructed data \((\hat{X}^{*},\hat{Y}^{*})\) by solving the following optimization problem: \[(\hat{X}^{*},\hat{Y}^{*})=\arg\min_{\hat{X},\hat{Y}}\ell_{Q}(\hat{X},\hat{Y}). \tag{32}\] Then, the question becomes how to choose a proper \(Q\) for a better reconstruction.

#### 4.2.2 Choice of \(Q\) by Bayesian Optimization

Objective function. As shown in (32), one can obtain the reconstructed data \((\hat{X}^{*},\hat{Y}^{*})\) with a given \(Q\). Then, the corresponding dummy model update \(\Delta\hat{\theta}^{*}\) can be calculated as \(\Delta\hat{\theta}^{*}=G_{t}(\hat{X}^{*},\hat{Y}^{*})\). Let \(f:\mathbb{R}^{6}\rightarrow\mathbb{R}\) be the objective function in the form of the \(l_{2}\) distance between the dummy model update \(\Delta\hat{\theta}^{*}\) and the ground-truth model update \(\Delta\theta\), as follows: \[f(Q)=\left\|\Delta\hat{\theta}_{t}^{*}-\Delta\theta_{t}\right\|^{2}. \tag{33}\] Finding the optimal \(Q^{*}\) is equivalent to solving the following optimization problem: \[Q^{*}=\arg\min_{Q}f(Q). \tag{34}\] For this optimization problem, \(f\) is a black-box function that does not have an analytic expression. Meanwhile, calculating \(f\) is computationally expensive since one has to complete a reconstruction attack to obtain \((\hat{X}^{*},\hat{Y}^{*})\). Hence, traditional parameter determination methods like grid search are not feasible. To overcome the above difficulties, we employ Bayesian optimization [4] to solve (34).

A Bayesian optimization algorithm for (34). Bayesian optimization is a powerful technique for optimizing black-box functions that are expensive to evaluate and may have noise or other sources of uncertainty [4]. In general, Bayesian optimization iteratively uses a surrogate function to approximate the black-box function and then employs an acquisition function to determine the next set of parameters to evaluate. The surrogate model, which is commonly chosen to be a Gaussian process (GP), approximates the black-box objective function \(f\). Formally, a GP is a collection of random variables, any finite number of which have a joint Gaussian distribution [17].
Given an initial set \(\mathcal{O}=\left\{\left(\mathbf{Q},\mathbf{f}\right)\right\}:=\left\{Q_{i},f(Q_{i})\right\}_{i=1}^{n}\) that contains \(n\) pairs of the sampling points and their function values, the resulting prior distribution over \(\mathbf{f}\) can be given as \[\mathbf{f}\sim\mathcal{N}(\mathbf{\mu},\mathbf{\Sigma}), \tag{35}\] where \(\mathbf{\mu}=\left(\mu\left(Q_{1}\right),\ldots,\mu\left(Q_{n}\right)\right)\), \(\mu\) is the mean function and commonly set to zero, \(\mathbf{\Sigma}=\kappa\left(\mathbf{Q},\mathbf{Q}\right)\), and \(\kappa\) is a positive definite kernel function usually set to be the Gaussian kernel. Then, we can infer the value of \(f(Q)\) at any point \(Q\) by computing the posterior distribution of \(f(Q)\) given prior observations using Bayes' rule [17]: \[f(Q)\mid\mathbf{f} \sim\mathcal{N}(\mu(Q),\sigma^{2}(Q)), \tag{36}\] \[\mu(Q) =\kappa(Q,\mathbf{Q})\mathbf{\Sigma}^{-1}\mathbf{f},\] \[\sigma^{2}(Q) =\kappa(Q,Q)-\kappa(Q,\mathbf{Q})\mathbf{\Sigma}^{-1}\kappa(\mathbf{Q},Q).\] The role of the acquisition function is to propose the parameters for the next trial by trading off exploitation and exploration. Exploitation means sampling where the surrogate model predicts a promising (here, small) objective value, and exploration means sampling at locations where the prediction uncertainty is high. One of the most popular acquisition functions is Expected Improvement (EI) [16]. Let \(f_{\min}\) denote the best function value obtained so far. Then, the improvement over \(f_{\min}\) at point \(Q\) can be defined as \[I(Q)=\max(f_{\min}-f(Q),0). \tag{37}\] The improvement \(I(Q)\) is a random variable since \(f(Q)\sim\mathcal{N}\left(\mu(Q),\sigma^{2}(Q)\right)\) as shown in (36). To obtain the expected improvement we can take the expected value as follows: \[\mathrm{EI}(Q)=\mathbb{E}[\max(f_{\min}-f(Q),0)]. \tag{38}\] The expected improvement can be evaluated analytically under the GP [10], given as: \[\mathrm{EI}(Q)=\left(f_{\min}-\mu(Q)\right)\Phi\left(\frac{f_{\min}-\mu(Q)}{\sigma(Q)}\right)+\sigma(Q)\phi\left(\frac{f_{\min}-\mu(Q)}{\sigma(Q)}\right), \tag{39}\] where \(\phi\) and \(\Phi\) are the probability density and cumulative distribution functions of the standard normal distribution, respectively. It can be seen that the value of the function \(\mathrm{EI}\) is high for a point \(Q\) predicted to have a small \(\mu(Q)\) and a large \(\sigma(Q)\), indicating the trade-off between exploitation and exploration. Given the \(\mathrm{EI}\), the parameter \(Q\) for the next trial is chosen to be the point with the largest expected improvement: \[Q=\arg\max_{Q}\mathrm{EI}(Q). \tag{40}\] The evaluation of \(\mathrm{EI}(Q)\) is much easier than that of the function \(f\) in (34). The optimization problem (40) can be solved by some classic optimization techniques such as Newton's method.

### Approximate and weighted data reconstruction attack method

Based on the discussions in Sections 4.1-4.2, we propose an approximate and weighted data reconstruction attack method and list it in Algorithm 2.
```
1: Intercept a client's model update \(\Delta\theta_{t}\) and local training process \(G_{t}\)
2: Calculate approximate \(\Delta\tilde{\theta}_{t,e}\) and \(G_{t,e}\quad\triangleright\) (20)
3: Initialize an empty set \(\mathcal{O}\)
4: for trial from 1 to \(n\) do
5:   Choose \(Q\) from the search space \(\mathcal{Q}\) randomly
6:   Obtain \(\hat{X},\hat{Y},f(Q)\leftarrow\mathrm{WeightedRec}(Q,\Delta\tilde{\theta}_{t,e},G_{t,e})\quad\triangleright\) Algorithm 3
7:   \(\mathcal{O}\leftarrow\mathcal{O}\cup\{(Q,f(Q))\}\)
8: end for
9: for trial from \(n+1\) to \(N_{BO}\) do
10:   Fit GP over \(f\) with data in \(\mathcal{O}\quad\triangleright\) (35)
11:   Choose \(Q\) with the largest EI \(\triangleright\) (40)
12:   Obtain \(\hat{X},\hat{Y},f(Q)\leftarrow\mathrm{WeightedRec}(Q,\Delta\tilde{\theta}_{t,e},G_{t,e})\quad\triangleright\) Algorithm 3
13:   \(\mathcal{O}\leftarrow\mathcal{O}\cup\{(Q,f(Q))\}\)
14: end for
15: \((Q^{*},f(Q^{*}))\leftarrow\arg\min_{(Q,f(Q))\in\mathcal{O}}f(Q)\)
16: \(\hat{X}^{*},\hat{Y}^{*},f(Q^{*})\leftarrow\mathrm{WeightedRec}(Q^{*},\Delta\tilde{\theta}_{t,e},G_{t,e})\quad\triangleright\) Algorithm 3
17: return \(\hat{X}^{*},\hat{Y}^{*}\)
```
**Algorithm 2** Approximate and Weighted Attack (AWA)

```
1: Initialize the dummy data \((\hat{X},\hat{Y})\)
2: for attack iteration from 1 to \(N_{AT}\) do
3:   \(\Delta\hat{\theta}_{t,e}=G_{t,e}(\hat{X},\hat{Y})\)
4:   Calculate \(\{q^{(l)}\}_{l=1}^{L}\) based on \(Q\quad\triangleright\) (24), (27)
5:   \(\ell_{Q}(\hat{X},\hat{Y})=\sum_{l=1}^{L}q^{(l)}\|\Delta\hat{\theta}_{t,e}^{(l)}-\Delta\tilde{\theta}_{t,e}^{(l)}\|^{2}\)
6:   \(\hat{X}\leftarrow\hat{X}-\hat{\eta}\nabla_{\hat{X}}\ell_{Q}(\hat{X},\hat{Y})\)
7:   \(\hat{Y}\leftarrow\hat{Y}-\hat{\eta}\nabla_{\hat{Y}}\ell_{Q}(\hat{X},\hat{Y})\)
8: end for
9: \(f(Q)=\|G_{t,e}(\hat{X},\hat{Y})-\Delta\tilde{\theta}_{t,e}\|^{2}\)
10: return \(\hat{X},\hat{Y},f(Q)\)
```
**Algorithm 3** WeightedRec\((Q,\Delta\tilde{\theta}_{t,e},G_{t,e})\)

## 5 Experimental Tests

In this section, we report some numerical results to validate the effectiveness of our proposed AWA method given in Algorithm 2. We first introduce the experimental environments and implementation details used in our experiments. We then explain the choice of hyperparameters and describe the evaluation metrics. After that, we test the AWA for image data reconstruction and compare the performance of AWA with two state-of-the-art attack algorithms (AGIC [23] and DLG [27]) in different FedAvg scenarios.

### Setups

Hardware. For all the experiments, we use a computer equipped with a Xeon E5-2680 v4 CPU, 32GB of RAM, and an NVIDIA GeForce GTX 1080 Ti GPU.

Implementation details. To implement the FedAvg, some images from the dataset CIFAR-10 (size \(32\times 32\), 10 classes) [12] are used as the clients' training data. The model used by each client is the ResNet18 [7], which contains 17 convolutional layers, 17 batch-normalization layers, and 1 fully connected layer. The client's local training uses stochastic gradient descent with a learning rate of 0.001. For the attack optimizations in (11), the Adam optimizer [11] with a learning rate of 0.1 is used and each attack runs for 1000 iterations. Following the label inference method in [6, 24], we proceed with the assumption that the label information is known. Table 1 lists the simulation scenarios of the data reconstruction attack.
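Returning briefly to Algorithm 3: to make the weighting step in its line 4 concrete, the following sketch (our own simplified illustration; it assumes the layers of each type are listed contiguously and that per-layer updates are flat NumPy arrays) implements the schedules (24) and the error-based enhancement (25)-(30):

```
# Sketch of the layer-weight computation of Algorithm 3, line 4
# (illustrative assumptions: contiguous layer types, flat per-layer arrays).
import math
import numpy as np

def linear_weights(q_max, n):
    """Linearly increasing weights of Eq. (24) for one layer type."""
    if n == 1:
        return [q_max]
    return [(q_max - 1.0) / (n - 1) * l + 1.0 for l in range(n)]

def layer_weights(Q, true_upd, dummy_upd, L_cv, L_bn, L_fc):
    q_cv, q_bn, q_fc, q_en, p_mean, p_var = Q           # the vector (31)
    q = (linear_weights(q_cv, L_cv) + linear_weights(q_bn, L_bn)
         + linear_weights(q_fc, L_fc))                  # Eq. (24)
    L = len(q)
    eps = 1e-12                                         # guard against /0
    e_mean = [abs(du.mean() - tu.mean()) / (abs(tu.mean()) + eps)
              for du, tu in zip(dummy_upd, true_upd)]   # Eq. (25)
    e_var = [abs(du.var() - tu.var()) / (tu.var() + eps)
             for du, tu in zip(dummy_upd, true_upd)]    # Eq. (26)
    top = lambda e, p: set(np.argsort(e)[::-1][:math.ceil(p * L)])
    P = top(e_mean, p_mean) & top(e_var, p_var)         # Eqs. (28)-(30)
    for l in P:
        q[l] = q_en                                     # Eq. (27)
    return q

rng = np.random.default_rng(0)
true_upd = [rng.normal(size=64) for _ in range(35)]     # 17 + 17 + 1 layers
dummy_upd = [u + rng.normal(scale=0.1, size=64) for u in true_upd]
q = layer_weights((10, 10, 100, 500, 0.3, 0.3), true_upd, dummy_upd, 17, 17, 1)
```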
Evaluation metrics. In order to evaluate the efficiency of the attack and the quality of the data reconstruction, we use three different metrics to compare the reconstructed data with the real data: pixel-wise Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR) [18], and Structural Similarity Index Measure (SSIM) [20]. The above three metrics are prevalent and appropriate indicators for evaluating the quality of image reconstruction [18], and they have been widely used in evaluating the existing data reconstruction attacks, see [5, 21, 24].

* MSE is the degree of variation between the original image \(\mathbf{D}\) and the reconstructed image \(\mathbf{D^{\prime}}\) and is defined by \[\mathrm{MSE}(\mathbf{D},\mathbf{D^{\prime}})=\frac{1}{d_{m}d_{n}}\sum_{i=0}^{d_{m}-1}\sum_{j=0}^{d_{n}-1}[\mathbf{D}(i,j)-\mathbf{D^{\prime}}(i,j)]^{2},\] where \(d_{m}\times d_{n}\) is the dimension of the images, and \(\mathbf{D}(i,j)\) and \(\mathbf{D^{\prime}}(i,j)\) are the pixel values at coordinates \((i,j)\) of \(\mathbf{D}\) and \(\mathbf{D^{\prime}}\). Clearly, a smaller value of MSE indicates a better reconstruction quality.
* PSNR represents the ratio of the maximum possible signal power to the distortion noise power. PSNR is calculated as \[\mathrm{PSNR}(\mathbf{D},\mathbf{D^{\prime}})=10\log_{10}\frac{\mathrm{Max}_{\mathbf{D}}^{2}}{\mathrm{MSE}(\mathbf{D},\mathbf{D^{\prime}})},\] where \(\mathrm{Max}_{\mathbf{D}}\) is the maximum possible value of pixels in the original image \(\mathbf{D}\). It is easy to see that the larger the PSNR value, the better the image reconstruction quality.
* SSIM measures the structural similarity between the original and reconstructed images. The value of SSIM ranges between zero and one, with a higher value indicating a better reconstruction. To be concrete, SSIM is calculated as \[\mathrm{SSIM}(\mathbf{D},\mathbf{D^{\prime}})=\frac{\left(2\mu_{\mathbf{D}}\mu_{\mathbf{D^{\prime}}}+c_{1}\right)\left(2\sigma_{\mathbf{D}\mathbf{D^{\prime}}}+c_{2}\right)}{\left(\mu_{\mathbf{D}}^{2}+\mu_{\mathbf{D^{\prime}}}^{2}+c_{1}\right)\left(\sigma_{\mathbf{D}}^{2}+\sigma_{\mathbf{D^{\prime}}}^{2}+c_{2}\right)}.\] Here, \(\mu_{\mathbf{D}}\) and \(\mu_{\mathbf{D^{\prime}}}\) are the average pixel values of \(\mathbf{D}\) and \(\mathbf{D^{\prime}}\), \(\sigma_{\mathbf{D}}\) and \(\sigma_{\mathbf{D^{\prime}}}\) are the standard deviations of pixel values of \(\mathbf{D}\) and \(\mathbf{D^{\prime}}\), \(\sigma_{\mathbf{D}\mathbf{D^{\prime}}}\) is the covariance of \(\mathbf{D}\) and \(\mathbf{D^{\prime}}\), and \(c_{1}=\left(k_{1}H\right)^{2}\) and \(c_{2}=\left(k_{2}H\right)^{2}\) are constants with \(k_{1}=0.01\), \(k_{2}=0.03\), and \(H\) being the dynamic range of the pixel values.
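For reference, these metrics can be computed in a few lines of NumPy. The sketch below is our own (note that the SSIM is evaluated here in the single-window, global form of the formula above, whereas practical SSIM implementations usually average over local sliding windows):

```
# Minimal NumPy versions of the three evaluation metrics (our sketch;
# SSIM is the global, single-window form of the formula above).
import numpy as np

def mse(D, Dp):
    return np.mean((D - Dp) ** 2)

def psnr(D, Dp, max_val=1.0):
    return 10 * np.log10(max_val ** 2 / mse(D, Dp))

def ssim_global(D, Dp, H=1.0, k1=0.01, k2=0.03):
    c1, c2 = (k1 * H) ** 2, (k2 * H) ** 2
    mu_d, mu_p = D.mean(), Dp.mean()
    cov = ((D - mu_d) * (Dp - mu_p)).mean()
    return ((2 * mu_d * mu_p + c1) * (2 * cov + c2)) / \
           ((mu_d ** 2 + mu_p ** 2 + c1) * (D.var() + Dp.var() + c2))
```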
\begin{table} \begin{tabular}{c c c c c c c} \hline **Parameters** & \(q_{cv}\) & \(q_{bn}\) & \(q_{fc}\) & \(q_{en}\) & \(p_{mean}\) & \(p_{var}\) \\ **Search ranges** & [1, 1000] & [1, 1000] & [1, 1000] & [1, 1000] & [0, 0.5] & [0, 0.5] \\ \hline \end{tabular} \end{table} Table 2: Search ranges of \(Q\) (31) in Bayesian optimization

\begin{table} \begin{tabular}{c c c c c} \hline **Dataset** & **Neural network** & **Attack optimizer** & **Attack learning rate** & **Attack iterations** \\ CIFAR-10 & ResNet18 & Adam & 0.1 & 1000 \\ \hline \end{tabular} \end{table} Table 1: Simulation scenarios of the reconstruction attack

### Results for Image Data Reconstruction Attacks

To verify the feasibility and effectiveness of the proposed AWA method, we test it in image data reconstruction attacks and compare it with two state-of-the-art attack methods (AGIC [23] and DLG [27]) under the following four different FedAvg scenarios:

* Case 1: \(N=4\), \(E=1\), and \(B=1\),
* Case 2: \(N=4\), \(E=4\), and \(B=1\),
* Case 3: \(N=4\), \(E=1\), and \(B=4\),
* Case 4: \(N=4\), \(E=2\), and \(B=2\).

Recall from the analysis in Section 3 that when \(E>1\) and \(B>1\), the attacker has to use an approximate method for the attack. Therefore, only AWA and AGIC can conduct the attack for Case 4. The approximate strategy of each method is given below. Our proposed AWA method uses (20) to get the approximate intermediate model updates \(\Delta\tilde{\theta}_{t,e}\) in each epoch \(e=1,2,\ldots,E\). Then, the attacker can reconstruct the client's dataset by using any of the \(\Delta\tilde{\theta}_{t,e}\) and the corresponding \(G_{t,e}\). In Case 4 with \(E=2\), we have two optional approximate intermediate model updates: \(\Delta\tilde{\theta}_{t,1}\) and \(\Delta\tilde{\theta}_{t,2}\). In the test, we choose \(\Delta\tilde{\theta}_{t,1}\) as the target model update for the reconstruction. On the other hand, AGIC assumes that a combined batch consisting of all the mini-batches used in the client's local training process can approximate the received model update in a single local update step. For this purpose, it initializes the dummy data with the same size as the combined mini-batches (\(N\times E\)) and calculates the dummy model update by performing one local update step using the combined dummy data. In all the tests, the DLG method utilizes the unweighted loss function (10) for the attack, while the AGIC method employs a weighted cosine similarity loss function by assigning linearly increasing weights to convolutional and fully connected layers. However, these weights are assigned empirically and not tailored case by case. In contrast, the AWA method employs the weighted and enhanced loss function (23) for the attack, with the layer weights determined by (24) and (27). The values of the parameter vector \(Q\) are selected by Bayesian optimization [4]. The cumulative minimum loss \(f(Q)\) of the Bayesian optimization in 50 trials is presented in Figure 1. It can be seen that Bayesian optimization successfully drives \(f(Q)\) toward smaller values as the trial count increases. Then, the parameter settings in our AWA method for each case are listed in Table 3. The numerical comparisons of the above three data reconstruction attack methods for Cases 1-4 are presented in Figure 2 and Table 4. We observe that in all four cases, our proposed AWA method consistently yields images with substantially enhanced resolution compared to those obtained by DLG and AGIC.
This is further validated by the consistently highest values of PSNR and SSIM achieved by AWA in each case. Another noteworthy finding is the consistency in the reconstruction qualities of images produced by our AWA method across the diverse FedAvg cases, which demonstrates the robustness of our proposed approach. In Case 4, where \(E>1\) and \(B>1\), DLG encounters difficulties in performing the attack due to the absence of an approximate strategy. On the other hand, although AGIC incorporates an approximate strategy, it fails to reconstruct images with identifiable objects effectively. In sharp contrast, the proposed AWA method demonstrates its capability to successfully reconstruct images with reasonable resolutions, overcoming the limitations faced by DLG and AGIC. Furthermore, the PSNR of images reconstructed in Case 4 is found to be comparable to that achieved in the first three cases, which strongly suggests the effectiveness of our proposed approximate strategy. Furthermore, we implement the AWA method with 3000 iterations for the four cases. All other parameters remain consistent with the original experimental setup. The resulting reconstructed images and the corresponding SSIM metrics for each case are presented in Figure 3. Evidently, the attained SSIM values surpass those presented in Table 4 for the 1000-iteration attack. The results clearly show a significant visual proximity between the reconstructed images and their original counterparts. Additionally, Figure 4 presents the evaluation metrics PSNR and SSIM, as well as the cumulative minimum value of the weighted loss during the attack across the four cases. It is evident that our method achieves a satisfactory level of reconstruction, with SSIM values exceeding 0.9 within 1500 attack iterations for all four cases. The outcomes of Case 4 exhibit a marginally inferior performance compared to Cases 1-3 due to the need for approximation. However, the reconstructed images in Case 4 remain sufficiently clear to facilitate object identification. These results further validate the efficiency and effectiveness of our proposed AWA method.

Figure 4: Evaluation metrics of the AWA method for four cases. (Attack iterations: 3000)

Figure 3: Reconstruction results of the AWA method for four cases. (Attack iterations: 3000)

Overall, the above results show the significant enhancement in reconstruction performance achieved by our proposed AWA method. The comprehensive evaluation provides strong evidence supporting the superiority of our method AWA in data reconstruction attacks for the considered FedAvg scenarios.

## 6 Conclusions

Although Federated Learning (FL) has emerged as a contemporary paradigm for safe distributed learning, its potential privacy benefits are compromised by data reconstruction attacks. In this paper, we first formulate the attack as an inverse problem (8), allowing us to reconstruct the client's training data iteratively by solving an optimization problem (11). To attack the widely used FedAvg scenario, we propose an interpolation-based approximation method (20), where the intermediate model updates corresponding to each epoch are approximated by interpolating the model parameters. Furthermore, we propose a layer-wise weighted and enhanced loss function (23) for the attack to improve the quality of reconstructed data.
By assigning appropriate weights to the model updates in different layers, obtained by solving (34) with the Bayesian optimization method, we achieve superior reconstruction results compared to existing state-of-the-art methods. Moreover, our method is compatible with various neural network architectures, such as Convolutional Neural Networks and Residual Neural Networks. Numerical results validate that our proposed approximate and weighted data reconstruction attack method is effective, allowing adversaries to exploit the vulnerabilities of FL systems that utilize Federated Averaging algorithms. The ability to reconstruct data from intermediate model updates highlights the need for robust defense mechanisms. Future research could focus on developing countermeasures and enhancing the security of FL frameworks to mitigate the risks associated with such attacks.
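To make the interpolation idea concrete, here is a hedged sketch of one plausible linear scheme; the exact form of (20) is not reproduced in this summary, so the function below should be read as illustrative only, not as the method's definition.

```python
# Hypothetical sketch of interpolation-based per-epoch update approximation,
# in the spirit of (20): assume the parameters move linearly across the round.
import numpy as np

def interpolate_updates(theta_start, theta_end, E):
    """Return E approximate per-epoch updates (dicts of arrays); epoch e is
    the slice of the round update between fractions (e-1)/E and e/E."""
    return [{k: (theta_end[k] - theta_start[k]) * (e / E - (e - 1) / E)
             for k in theta_start} for e in range(1, E + 1)]

theta0 = {"w": np.zeros(3)}
theta1 = {"w": np.ones(3)}
for e, upd in enumerate(interpolate_updates(theta0, theta1, E=2), start=1):
    print(e, upd["w"])  # under linearity, each epoch gets half the update
```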
2301.03289
Thermal hysteresis and front propagation in dense planetary rings
Saturn's rings are composed of icy grains, most in the mm to m size ranges, undergoing several collisions per orbit. Their collective behaviour generates a remarkable array of structure over many orders of magnitude, much of it not well understood. On the other hand, the collisional properties and parameters of individual ring particles are poorly constrained; usually N-body simulations and kinetic theory employ hard-sphere models with a coefficient of restitution $\epsilon$ that is constant or a decreasing function of impact speed. Due to plastic deformation of surface regolith, however, it is likely that $\epsilon$ will be more complicated, at the very least a non-monotonic function. We undertake N-body simulations with the REBOUND code with non-monotonic $\epsilon$ laws to approximate surfaces that are friable but not sticking. Our simulations reveal that such ring models can support two thermally stable steady states for the same (dynamical) optical depth: a cold and a warm state. If the ring breaks up into radial bands of one or the other state, we find that warmer states tend to migrate into the colder states via a coherent travelling front. We also find stationary `viscous' fronts, which connect states of different optical depth, but the same angular momentum flux. We discuss these preliminary results and speculate on their implications for structure formation in Saturn's B and C-rings, especially with respect to structures that appear in Cassini images but not in occultations.
Rémy Larue, Henrik Latter, Hanno Rein
2023-01-09T12:10:18Z
http://arxiv.org/abs/2301.03289v1
# Thermal hysteresis and front propagation in dense planetary rings ###### Abstract Saturn's rings are composed of icy grains, most in the mm to m size ranges, undergoing several collisions per orbit. Their collective behaviour generates a remarkable array of structure over many orders of magnitude, much of it not well understood. On the other hand, the collisional properties and parameters of individual ring particles are poorly constrained; usually \(N\)-body simulations and kinetic theory employ hard-sphere models with a coefficient of restitution \(\epsilon\) that is constant or a decreasing function of impact speed. Due to plastic deformation of surface regolith, however, it is likely that \(\epsilon\) will be more complicated, at the very least a non-monotonic function. We undertake \(N\)-body simulations with the REBOUND code with non-monotonic \(\epsilon\) laws to approximate surfaces that are friable but not sticking. Our simulations reveal that such ring models can support two thermally stable steady states for the same (dynamical) optical depth: a cold and a warm state. If the ring breaks up into radial bands of one or the other state, we find that warmer states tend to migrate into the colder states via a coherent travelling front. We also find stationary 'viscous' fronts, which connect states of different optical depth, but the same angular momentum flux. We discuss these preliminary results and speculate on their implications for structure formation in Saturn's B and C-rings, especially with respect to structures that appear in Cassini images but not in occultations. keywords: instabilities - waves - planets and satellites: rings ## 1 Introduction Saturn's rings flaunt an extraordinary array of axisymmetric structure, both quasi-regular and chaotic, ranging over some four orders of magnitude in length - from 10 m to 100 km (Colwell et al. 2009, Cuzzi et al. 2018). Yet despite several decades of theoretical effort, their origins are only partially understood (Schmidt et al. 2009, Estrada et al. 2018, Salo et al. 2018). In particular, the disjunct bands of high and low optical depth in the B-ring (Horn and Cuzzi 1996, Colwell et al. 2007), the plateaus in the C-ring (Tiscareno et al. 2019), and the irregular intermediate scale striations in the A and B-rings (Porco et al. 2005) are presently without plausible explanations. Simply put, there is too much observed structure and too few suitable instabilities (or related processes) in our theoretical models. Perhaps it is time to re-assess some of our fundamental assumptions and explore a wider range of alternative scenarios. It is probable, though not assured, that much of the ring's unexplained structure arises spontaneously due to its peculiar granular flow. Since the 1980s researchers have turned to kinetic theory or \(N\)-body simulations to model this flow, initially calculating the thermal balances underlying ring equilibria, and then the (viscous) instabilities that might generate structure (e.g., Hameen-Antila 1982, Araki & Tremaine 1986, Wisdom & Tremaine 1988, Salo 1991, Hameen-Antila & Salo 1993, Salo et al. 2001, Latter & Ogilvie 2006, 2008). These studies have made several strong assumptions, especially regarding the nature of the ring particles and their collisional behaviour, for instance rarely deviating from a hard-sphere model with either a constant coefficient of restitution \(\epsilon\) or a 'Bridges law' (Bridges et al. 1984), whereby collisions below some critical impact speed are perfectly elastic.
In reality, ring particles are likely to be irregularly shaped and coated in a regolith of small particles \(\lesssim 1\) cm (e.g. Doyle et al. 1989, Nicholson et al. 2008, Morishima et al. 2012; Deau 2015) and, being irregular and fluffy, their surfaces should produce an _enhanced inelasticity_ at low impact speeds, and indeed possible particle adhesion. In light of this, the adoption of a constant \(\epsilon\), or a Bridges law, may significantly misrepresent some of the ring's collective collisional dynamics. Our paper tests this idea by exploring other, physically motivated, prescriptions for \(\epsilon\). We find, in fact, that even very simple changes to the collision law can give remarkably different outcomes. Continuum mechanical models of viscoelastic collisions that account for fluffy and/or sticky surfaces demonstrate that \(\epsilon\) is a non-monotonic function of impact speed \(v_{\rm coll}\). Beneath some critical speed we have \(\epsilon=0\), but on increasing \(v_{\rm coll}\), \(\epsilon\) rises, plateaus, and then decreases again (Gorkavyi 1985, Hertzsch 2002, Albers & Spahn 2006, Brilliantov et al. 2007). Laboratory experiments appear to confirm this picture (Gorkavyi 1989, Hatzes et al. 1991, Bridges et al. 1996). We implement collision laws of this basic form in our paper and term them 'regolith laws'. In addition, at or below the critical speed colliding particles may stick, but we neglect this important effect in order to avoid the vexed and complicated issue of size-distribution dynamics (e.g. Brilliantov et al. 2015). Our approach is mainly numerical, via \(N\)-body simulations of monodisperse, spherical, indestructible particles with the code REBOUND; but we also employ a dense gas kinetic theory, where appropriate. Note that we do not include self-gravity and thus our simulations fail to exhibit wakes, nor do they support viscous overstability, both important phenomena we hope to test in the future. Our study is distinct but complementary to recent \(N\)-body simulations that explicitly test the role of adhesion, especially on instabilities (Ballouz et al. 2017, Lu et al. 2018; see also Section 16.7.1.7 in Salo et al. 2018). Our main focus, in contrast, will be on disk thermodynamics. Our first main result is that regolith laws permit a dense ring to fall into one of two thermally stable states at the same optical depth: (a) a very dense state with filling factors \(\sim 0.3\) and low temperatures, \(c\lesssim a\Omega\) (where \(c\) is velocity dispersion, \(a\) is particle radius, and \(\Omega\) is orbital frequency) and (b) a moderately dense state with lower filling factors (\(\lesssim 0.1\)) and a slightly warmer temperature, \(c\gtrsim 4a\Omega\). This bistability generally favours optical depths less than 1, but can be pushed up to higher values if we broaden our parameter range. We also find in certain circumstances that the cold state at low optical depth is metastable: shot noise permits the ring to spontaneously jump into the hot state. Our second set of results explores what happens when different thermal states spatially adjoin. If two states of the same optical depth but different temperature connect, a travelling 'thermal front' develops that can reach speeds of \(\lesssim a\Omega\), while maintaining a steady spatial structure.
If the front is too slow, the disparity in the angular momentum flux between the two states reorganises the front profile so that the flux is uniform but the optical depth undergoes a jump, which we term a static 'viscous front'. Some of the latter behaviour mirrors that witnessed by Salo and Schmidt (2010) in their simulations of viscous instability. The plan of the paper is as follows. The next section begins with a review of the extant literature on low-impact collisions between regolith covered and/or sticky particles, moving on to a presentation of the model collision laws we use, and then our numerical methods. Subsequently, we detail our results: the calculation of thermal equilibria and hysteresis in smallish boxes (Section 3), potential metastability (Section 4), and finally results on spatially adjoining states, i.e. thermal and viscous fronts (Section 5). We conclude in Section 6. ## 2 Background and Methods This section presents the physical set-up and numerical model by which we attack the thermal equilibria of rings composed of regolith-coated particles. We first devote some space to set the scene, by reviewing the theoretical and experimental literature and explaining the key ideas and parameters that underlie work in this area. The model collision laws we adopt are then exhibited, followed by the details of the \(N\)-body simulations with REBOUND we conduct. ### Collisional physics and the coefficient of restitution We aim to describe the collisional dynamics of many ring particles in a local patch of a planetary ring. From the outset we make several strong assumptions that we concede may distort our results: the particles are taken to be identical, spherical, and frictionless. Most of the ring mass is in metre-sized particles, and thus it is that population that we track. Only binary collisions are considered, and these are deemed inelastic, so that \({\bf g}^{\prime}\cdot{\bf k}=-\epsilon({\bf g}\cdot{\bf k})\), where \({\bf g}\) is the relative velocity of two colliding particles before the collision and \({\bf g}^{\prime}\) afterwards, \({\bf k}\) is the unit vector pointing between the two particles' centres at the moment of collision, and \(\epsilon\) is the coefficient of restitution. This coefficient lies between 0 and 1 and is usually a function of the impact speed \(v_{\rm coll}=|{\bf g}\cdot{\bf k}|\). We neglect the possibility of two particles sticking and assume that all the specifics of the particle surfaces can be encapsulated in the functional behaviour of \(\epsilon\). Because we find the ring dynamics are so sensitive to \(\epsilon\), we now spend some time discussing this important physical input. #### 2.1.1 Theoretical and experimental background Research exploring the collisional behaviour of regolith-covered particles can be separated into analytical calculations, drawing on continuum mechanics, and laboratory experiments, approximating Saturnian conditions. We attempt to review and synthesise this body of work. The seminal experiments in this area were described in Bridges et al. (1984) and collided smooth ice spheres with an ice block at temperatures \(\sim 170\)K. This work produced the collision law \(\epsilon=\min\left[1,\,(v_{\rm coll}/v_{\rm crit})^{-0.234}\right]\), for \(v_{\rm crit}=0.008\) cm s\({}^{-1}\), a defining feature of which is perfect elasticity at sufficiently low collision speeds (\(v_{\rm coll}<v_{\rm crit}\)). This collision law became the standard for subsequent \(N\)-body simulations and other theoretical work.
Subsequently, broken power laws of this type were shown to arise naturally in generalisations of the Hertz theory to viscoelastic solids (Dilley 1993, Hertzsch et al. 1995, Brilliantov et al. 1996, Thornton 1997). However, such theoretical work must posit that the surfaces of the colliding spheres are smooth and that irreversible energy losses arise solely from viscoelastic deformations inside the spheres. Shortly after the Bridges experiments, two neglected but insightful papers by Gorkavyi (1985, 1989) highlighted the importance of regolith and argued against perfectly elastic restitution at low impact speed. Gorkavyi emphasised that \(\epsilon\) can be dramatically altered at small \(v_{\rm coll}\) because (a) impact energy can be used up when reshaping a soft friable surface (leaving nothing left over for elastic rebound) and/or (b) rebounding motion can be countered by surface stickiness. Using energy arguments, the 1985 paper sketches out three regimes: (a) at sufficiently low \(v_{\rm coll}\), there is total energy loss and thus \(\epsilon=0\) (sticking/adhesion is not considered); (b) at slightly larger \(v_{\rm coll}\), \(\epsilon\) increases with \(v_{\rm coll}\); and then (c) after a turning point, \(\epsilon\) decreases with \(v_{\rm coll}\) (traditional restitution). The collision law is hence non-monotonic. Gorkavyi (1989) followed this up with simple experiments using powders, metals, and marble at room temperature and pressure, which agree with earlier lab work by Hartmann (1978, 1985), in a different context, using rocks. Subsequent papers from the Bridges research group examined how the state of the particle surface influenced collisions, with a particular focus on the adhesive effect of frost, a thin layer of microscopic structure that might behave similarly to the thicker regolith layer expected on larger ring particles. Hatzes et al. (1991) showed that frosty particles can stick at impact speeds below some critical level (a few mm s\({}^{-1}\)), but did not examine explicitly how it changed the form of \(\epsilon\). Bridges et al. (1996) conducted a large set of experiments for different kinds of ices and \(v_{\rm coll}\) at relevant temperatures, which further strengthened the case for sticking, and also showed that \(\epsilon\) exhibited the three main features predicted by Gorkavyi. On the theoretical side, the 2000s witnessed various extensions of Hertz contact mechanics, accounting for both viscoelasticity and particle adhesion via JKR theory (Albers and Spahn 2006, Brilliantov et al. 2007; see also Thornton and Ning 1998, and Chokshi et al. 1993, the latter in the context of ISM grains). Notable is the work by Hertzsch (2002) who modelled the two effects of sticking and of passive regolith deformation, as discussed by Gorkavyi, treating the passive regolith as a deformable viscous non-sticky 'soft layer'. Both physical effects appear to influence the form of \(\epsilon\) similarly. In all cases, non-monotonic \(\epsilon\) laws were mathematically derived. Brilliantov et al. (2007) provides estimates for solid water-ice particles of various sizes that, despite several strong assumptions, help with Saturnian applications. For metre-sized water-ice impactors, the theory predicts that the maximum value \(\epsilon\) takes is relatively large, potentially above 0.7. For cm-sized particles, it drops to \(\approx\) 0.3.
On the other hand, the critical \(v_{\rm coll}\) for sticking is roughly \(10^{-2}\) cm s\({}^{-1}\) for metre-sized ice impactors, and this rises to greater than 0.1 cm s\({}^{-1}\) for cm-sized particles. Because of the model assumptions care must be taken, however, when applying these estimates, and in fact the quoted critical collision speeds are probably gross lower limits. The theory omits the energy dissipation channel associated with irreversible regolith deformation (as well as internal fracture) by treating the particles as solid-ice non-spinning viscoelastic spheres. It also sets the unknown dissipative constant \(A\) by fitting a (non-sticking) viscoelastic model (Brilliantov et al. 1996) to the (non-sticking) experimental data of Bridges et al. Nonetheless, the Brilliantov results provide a useful starting point for our study. Before moving on, we flag additional physics not yet discussed. In applying the above ideas and prescriptions to an ensemble of colliding particles, one must acknowledge that, by virtue of the collisions themselves, particles' surface properties will evolve. Repeated collisions will presumably 'compactify' particle regolith and hence reduce the mean critical sticking speed. On the other hand, bombardment by micrometeoroids will disturb the surfaces and there will be accretion of very small floating particles, processes that will rejuvenate regolith. It follows that, in addition to the size distribution dynamics (e.g. Longaretti 1989, Bodrova et al. 2012, Brilliantov et al. 2015), related dynamics will take place that control the mean surface properties. We do not attempt to construct a model for this interesting process here. #### 2.1.2 Important scales This subsection briefly outlines the key velocity scales relevant for our problem. We assume that there is a single critical sticking speed \(v_{\rm stick}\) below which two impactors will adhere. We also assume a second critical impact speed \(v_{\rm crit}\) below which \(\epsilon=0\). It may be that these two speeds are the same, though in general we expect \(v_{\rm stick}<v_{\rm crit}\), i.e. it is possible for all the energy of the impact to be used up reshaping the surface and resisting the adhesive attraction of the regolith, thereby allowing the impactors to roll clear of each other. Particle spin and tidal shear may facilitate such non-sticking \(\epsilon=0\) encounters. A third key speed is the velocity dispersion \(c\), as impact speeds will be distributed around it. Thus the size of \(c\) relative to \(v_{\rm crit}\) will determine which collisional regime (sticking, non-sticking, etc.) the particles are in. Partly controlling \(c\) is the orbital shear speed across a particle, \(a\Omega\) (recall \(a\) is particle radius and \(\Omega\) the orbital frequency). The importance of this scale issues from the fact that dense cold rings adopt a velocity dispersion \(c\sim a\Omega\), in the absence of gravity wakes, and \(c\lesssim 5a\Omega\), when gravity wakes are present (e.g., Araki and Tremaine 1986, Salo et al. 2018)1. It follows that if \(c\sim a\Omega\gg v_{\rm crit}\) then the regolith is not going to feature much in the mean thermal dynamics, and hence the determination of \(c\). On the other hand, if \(c\sim a\Omega\ll v_{\rm crit}\) then the surface properties are going to be important. Complicating this picture, of course, is the size dependence of both \(a\Omega\) and \(v_{\rm crit}\).
In a polydisperse ring, however, the velocity dispersion of smaller particles will be similar to that of the metre-sized particles (Salo et al. 2018). We now obtain some bounds on the important parameter \(v_{\rm crit}/(a\Omega)\). Footnote 1: The second estimate can be obtained by assuming a gravitationally unstable ring settles into a state where the Toomre \(Q\) is \(\sim 1\), and then taking typical values for the surface density (e.g. Hedman and Nicholson 2013, 2016) First we situate ourselves at a representative location in the C-ring, in which gravity wakes are likely absent, and set \(\Omega\approx 10^{-4}\) s\({}^{-1}\). If \(a=1\) m, the most dynamically important size, \(a\Omega\) is roughly 0.01 cm/s. Next, applying the estimates from Brilliantov et al. (2007) (cf. Section 2.1.1) and setting \(v_{\rm crit}=v_{\rm stick}\), we obtain \(v_{\rm crit}/(a\Omega)\sim 1\). For cm sizes, \(v_{\rm crit}/(a\Omega)\gtrsim 10\) (noting that the velocity dispersion of this population is set by the metre sizes). As argued earlier, the Brilliantov estimates for \(v_{\rm crit}\) only provide lower bounds, and hence we conclude that it is likely that the C-ring is in a regime where surface regolith properties will matter. At a representative location in the A or B-ring, we must take into account gravity wakes. Thus we find ourselves in a more ambiguous situation: the Brilliantov estimates yield \(v_{\rm crit}/c\gtrsim 0.1\) for metre-sized particles, and \(v_{\rm crit}/c\gtrsim 1\) for cm-sized particles. Depending on how badly the Brilliantov results underestimate \(v_{\rm crit}\), we could be in a marginal regime or in a regolith-dominated regime. Certainly, further work on the collisional dynamics of ice would help decide on this point. As we do not simulate self-gravity, for now we just assume that \(a\Omega<v_{\rm crit}\), and leave open its importance to future work. #### 2.1.3 Model coefficients of restitution This section presents the two classes of non-monotonic 'regolith' \(\epsilon\)-law we use in this paper. We have attempted to parameterise these laws via two readily understandable quantities: \(v_{\rm crit}\), the impact speed below which collisions are perfectly inelastic (cf. Section 2.1.2); and \(\epsilon_{\rm max}\), the turning point value of \(\epsilon\) (i.e., its maximum). A broken power law (BPL) for \(\epsilon\), though somewhat crude, has the benefit of few input parameters, and some headway can be made with it using kinetic theory. We define the law in the following way: \[\epsilon(v_{\rm coll})=\begin{cases}\epsilon_{0},&\mbox{if $v_{\rm coll}<v_{\rm crit}$},\\ \epsilon_{\rm max}\left(v_{\rm coll}/v_{\rm crit}\right)^{-p},&\mbox{if $v_{\rm coll}\geq v_{\rm crit}$}.\end{cases} \tag{1}\] We set the exponent \(p=0.234\), following Bridges et al. (1984), though it could take other values. The quantity \(\epsilon_{0}\) we set equal to either 1, to obtain the Bridges et al. law itself, or equal to 0, to get the opposite perfectly inelastic law. The Bridges BPL is plotted in Fig. 1 in blue. A more realistic non-monotonic \(\epsilon\) law that is smoother and exhibits something of a plateau near its maximum can be defined in several ways. We choose the following: \[\epsilon(v_{\rm coll})=\begin{cases}0,&\mbox{if $v_{\rm coll}<v_{\rm crit}$},\\ 1.625\,\epsilon_{\rm max}\,\zeta/(1+\zeta^{1.234}),&\mbox{if $v_{\rm coll}\geq v_{\rm crit}$},\end{cases} \tag{2}\] where \(\zeta=(v_{\rm coll}-v_{\rm crit})/b\) and \(b\) is the plateau 'width', usually set to \(a\Omega\).
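Both laws transcribe directly into code; the following is a minimal numpy sketch (in code units with \(a=\Omega=1\); the small floor on \(v_{\rm coll}\) in the first function merely avoids a spurious numerical warning at zero impact speed).

```python
# The broken power law, Eq. (1), and the 'realistic' law, Eq. (2).
import numpy as np

def eps_bpl(v_coll, v_crit=5.0, eps0=0.0, eps_max=0.8, p=0.234):
    v = np.maximum(np.asarray(v_coll, dtype=float), 1e-12)
    return np.where(v < v_crit, eps0, eps_max * (v / v_crit) ** (-p))

def eps_realistic(v_coll, v_crit=5.0, eps_max=0.75, b=1.0):
    v = np.asarray(v_coll, dtype=float)
    zeta = np.maximum(v - v_crit, 0.0) / b
    return np.where(v < v_crit, 0.0,
                    1.625 * eps_max * zeta / (1.0 + zeta**1.234))

v = np.linspace(0.0, 30.0, 7)
print(eps_bpl(v), eps_realistic(v))  # cf. the two curves in Fig. 1
```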
Constants have been chosen so that \(\epsilon\) approaches the Bridges law for large \(v_{\rm coll}\). To facilitate the discussion later, when we compare the different models, we refer to Eq. (2) as a 'realistic' law (though it is yet to be determined how realistic it is). We plot it in Fig. 1 in red. ### The potential for bistability Before presenting our numerical methods and the results that ensue, we briefly explain why a non-monotonic collision law, such as given in Eq. (2) and displayed in Fig. 1, potentially yields two stable states for the same parameters. At lower optical depths, \(N\)-body simulations and kinetic theory show that the Bridges law yields equilibria with \(c>a\Omega\), and thus most collisions sample the power-law decreasing segment of the \(\epsilon\) curve (Salo, 1991, Latter & Ogilvie, 2008). As mentioned above, the realistic regolith law we adopt approaches the Bridges law for impact speeds larger than the turning point in \(\epsilon\), and is a reasonable approximation near the turning point. One might then expect that collisions employing the regolith law would sample similar values of \(\epsilon\) and the resulting thermal equilibria will resemble the Bridges equilibria, giving us a 'warm' ring. In Fig. 1 we superimpose a mock impact velocity distribution at larger \(v_{\rm coll}\) to indicate such a state. On the other hand, when \(\epsilon\) is a constant and taken to be equal to zero the thermal equilibria are especially cold, with \(c\sim a\Omega\) (e.g. Araki & Tremaine, 1986). It follows that our regolith law might be capable of supporting these very cold equilibria as well. This should certainly be the case if \(v_{\rm crit}\) is much larger than \(a\Omega\). In this circumstance, most impact speeds will fall below \(v_{\rm crit}\) and thus yield perfectly inelastic collisions with \(\epsilon=0\), never sampling the non-zero segment of the \(\epsilon\) curve. Fig. 1 indicates a schematic velocity distribution for this state, centred on a value less than \(v_{\rm crit}\). Both the warm state and the cold state are thermally stable, as has been shown separately in \(N\)-body simulations. _And thus a non-monotonic law may yield bistability._ The disk may fall into either the cold or the warm homogeneous state for exactly the same parameters (most notably optical depth \(\tau\))2. Which is chosen depends on the initial conditions. Moreover, it follows there must also be an intermediate thermally unstable state separating the two stable states, though this will not normally be observed. The argument for bistability is strongest in a regime where \(v_{\rm crit}\gg a\Omega\). A question then is: what is the minimum value of \(v_{\rm crit}\) that yields bistability? Our simulation results in Section 3 aim to answer this and other questions. Figure 1: Two forms of the coefficient of restitution \(\epsilon\) as a function of impact speed \(v_{\rm coll}\). The solid blue curve is the Bridges law, see Eq. (1), with \(\epsilon_{0}=1\). The red solid curve is the ‘regolith’ law, Eq. (2), with \(b=(1/4)v_{\rm crit}\) and \(\epsilon_{\rm max}=0.75\). In addition, we have sketched two velocity distribution functions with black dotted curves; see discussion in Section 2.2. ### N-body simulations In this subsection we further outline the physical model we adopt and the numerical methods used to calculate its non-trivial thermal dynamics.
We seek to determine the evolution of a large number of inelastically colliding particles, and thus our main tool will be local \(N\)-body simulations. #### 2.3.1 Equations of motion We solve the equations of motion in the Hill approximation (Hill, 1878), a local coordinate system that is co-rotating with a particle on a circular orbit. The gravity from the central object is linearized in local coordinates and the orbital frequency is a constant. This allows, though does not require, us to use shear-periodic boundary conditions. In that case, the Hill approximation is also referred to as the shearing sheet. In our notation, the \(x\), \(y\), and \(z\) coordinates point in the radial, azimuthal and vertical direction, respectively. Treating the central object, Saturn, as a point source, the equations of motion for a test particle can be written as \[\ddot{x}=2\Omega\dot{y}+3\Omega^{2}x+F_{x}^{\rm coll}, \tag{3}\] \[\ddot{y}=-2\Omega\dot{x}+F_{y}^{\rm coll}, \tag{4}\] \[\ddot{z}=-\Omega^{2}z+F_{z}^{\rm coll}, \tag{5}\] where \({\bf F}^{\rm coll}\) is the (intermittent) acceleration exerted on a particle during a collision. In the absence of collisions, the solution to these equations can be written as epicycles (e.g. Rein & Tremaine, 2011). The particles move within a finite-size numerical domain/box. We denote the radial length of the box by \(L_{x}\) and the azimuthal length by \(L_{y}\). In all our experiments, the vertical length of the box \(L_{z}\) has been chosen to be large enough so that no particle ever crosses the vertical boundaries. Otherwise, the box is periodic in \(y\) and shear-periodic in \(x\). The only further ingredients needed are the finite particle radius \(a\) and a collision model. We treat particles as hard spheres (they are not permitted to overlap) and the outcome of a collision is prescribed by a normal coefficient of restitution, as described in Section 2.1.3. The particles have no spin. #### 2.3.2 Numerical method We use the freely available \(N\)-body code REBOUND (Rein & Liu, 2012) to perform all of the simulations presented in the paper. To evolve the equations of motion forward in time, we use the Symplectic Epicycle Integrator (SEI, Rein & Tremaine, 2011) which is well suited for simulations of particle motion within the Hill approximation. Collisions are detected using a nearest neighbour tree search. We randomize the order in which collisions are resolved after each timestep. We found that this removes spurious correlations which might otherwise be introduced when choosing a specific order in which collisions are resolved (i.e. resolving them from left to right, by a numerical particle identifier, or by the position in memory). #### 2.3.3 Diagnostics In order to probe the collective behaviour of the granular flow, we require a number of averaged quantities. We define the mean normal geometrical optical depth \(\tau\) as the total projected area of the particles on the \((x,y)\) plane divided by the total area of the \((x,y)\) plane. In other words, \[\tau=N\pi a^{2}/(L_{x}L_{y}), \tag{6}\] where \(N\) is the number of particles. Thus, \(\tau\) is stipulated at the beginning of each run and does not change. We also define the radially and temporally varying optical depths, by subdividing the radial domain into thin strips of radial length \(L_{S}\): \[\tau(x_{i},t)=N_{i}(t)\pi a^{2}/(L_{S}L_{y}), \tag{7}\] where \(x_{i}\) is the radial location of, and \(N_{i}(t)\) is the number of particles in, the \(i\)'th strip at time \(t\).
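For orientation, the basic setup just described can be assembled through REBOUND's Python interface. The sketch below is patterned on REBOUND's published shearing-sheet example; the option strings and the ghost-box call follow that example and may need adjusting for the installed version, and the parameter values here are illustrative rather than those of any particular run.

```python
# Minimal shearing-sheet sketch with a non-monotonic regolith epsilon law.
import numpy as np
import rebound

OMEGA, a, tau, L = 1.0, 1.0, 0.5, 100.0   # code units: a = Omega = 1

sim = rebound.Simulation()
sim.integrator = "sei"                    # Symplectic Epicycle Integrator
sim.ri_sei.OMEGA = OMEGA
sim.dt = 1e-3 * 2.0 * np.pi / OMEGA       # dimensionless timestep ~ 1e-3
sim.boundary = "shear"                    # shear-periodic box
sim.gravity = "none"                      # self-gravity neglected here
sim.collision = "tree"                    # nearest-neighbour tree search
sim.collision_resolve = "hardsphere"
sim.configure_box(L)
sim.configure_ghostboxes(2, 2, 0)

def cor_regolith(sim_ptr, v):
    # 'realistic' law, Eq. (2), with v_crit = 5, b = 1, eps_max = 0.75
    v_crit, b, eps_max = 5.0, 1.0, 0.75
    if v < v_crit:
        return 0.0
    zeta = (v - v_crit) / b
    return 1.625 * eps_max * zeta / (1.0 + zeta**1.234)

sim.coefficient_of_restitution = cor_regolith

N = int(tau * L * L / (np.pi * a**2))     # invert Eq. (6) for N
rng = np.random.default_rng(1)
for _ in range(N):
    x, y = rng.uniform(-L / 2, L / 2, size=2)
    sim.add(x=x, y=y, z=rng.normal(0.0, a), r=a,
            vx=rng.normal(0.0, 0.5 * a * OMEGA),
            vy=-1.5 * OMEGA * x + rng.normal(0.0, 0.5 * a * OMEGA),
            vz=rng.normal(0.0, 0.5 * a * OMEGA))

sim.integrate(100 * 2.0 * np.pi / OMEGA)  # run for 100 orbits
```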
The filling factor is defined as the proportion of volume taken up by the particles. For spherical particles it can be defined as \(FF=(4\pi/3)na^{3}\), where \(n\) is volumetric number density. Particularly useful is the filling factor at the mid-plane \(FF_{0}\), which requires the calculation of the number density at \(z=0\). The mean velocity dispersion tensor is computed via \[W_{ij}=\langle\dot{x}_{i}\dot{x}_{j}\rangle \tag{8}\] where \((\dot{x}_{1},\dot{x}_{2},\dot{x}_{3})=(\dot{x},\dot{y}+\frac{3}{2}\Omega x,\dot{z})\) is the velocity relative to the shear and the angle brackets indicate a suitable average over the particles and possibly over time. The velocity dispersion is then given by \(c^{2}=W_{ii}/3\). Note that this definition is only correct if there are no mean flows additional to the Keplerian shear. If such flows are slow (as in viscous instability), the error will be small, however. The translational (local) component of the kinematic viscosity is \[\nu_{\rm trans}=(2/3)W_{xy}/\Omega. \tag{9}\] The collisional (non-local) component of the viscosity is \[\nu_{\rm coll}=\frac{2}{3\Omega N\delta t}\sum(x_{1}-x_{2})\Delta p_{y} \tag{10}\] where the sum is taken over all binary collisions that occur in a time interval \(\delta t\). Here \(\Delta p_{y}\) is the transfer of specific \(y\) momentum from the inner to the outer particle in each collision, and \(x_{1}\) and \(x_{2}\) are the radial locations of the two impacting particles (Wisdom & Tremaine, 1988; Daisaka, Tanaka & Ida, 2001). As we neglect self-gravity, there is no gravitational or wake contribution to the overall momentum transport. The total viscosity is hence \(\nu_{\rm tot}=\nu_{\rm trans}+\nu_{\rm coll}\). To determine the thermal conductivity of a given equilibrium state, we follow the method of Salo et al. (2001) and create a steady non-uniform temperature \(T\) profile in the radial (\(x\)) direction, where \(T=c^{2}\). In our cold-state simulations, we achieve this by making \(v_{\rm crit}\) radially dependent in the collision law. In our hot-state simulations, we vary \(\epsilon_{\rm max}\) by a small amount in the radial direction. In either case, we end up with a steady-state sinusoidal radial temperature profile, though some experimentation is required to find the right amplitude for the variations in \(v_{\rm crit}\) and \(\epsilon_{\rm max}\). The goal is to keep the perturbations in the temperature \(\Delta T\) small, but not so small that they are dominated by shot noise. We typically use a simulation with \(L_{x}=L_{y}=200a\) and run it for at least 1000 orbits. After setting up the nonuniform temperature profiles, we then measure specific translational (local) and collisional (non-local) heat fluxes, \[q_{i}^{\rm trans}=\frac{1}{2}\sigma\langle c^{2}c_{i}\rangle \tag{11}\] \[q_{i}^{\rm coll}=\frac{\sigma\sum\Delta x_{i}\,\delta E^{s}}{N\delta t} \tag{12}\] where \(\sigma=N/(L_{x}L_{y})\) is the number surface density, \(\Delta x_{i}\) is the absolute difference of the \(i\)-coordinates of the two particles involved in a collision, and \(\delta E^{s}\) is the change in transported energy (as opposed to dissipated energy) during the collision for the particle with the larger \(x_{i}\) coordinate.
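The translational diagnostics, Eqs. (8) and (9), are straightforward to evaluate from a particle snapshot; a short sketch follows (assuming, as in the text, no mean flow beyond the Keplerian shear; the collisional contributions, Eqs. (10) and (12), additionally require logging individual collisions).

```python
# Sketch of the translational diagnostics of Section 2.3.3 for a REBOUND
# simulation `sim` such as the one assembled above.
import numpy as np

def translational_diagnostics(sim, OMEGA=1.0):
    # peculiar velocities relative to the shear, cf. Eq. (8)
    u = np.array([[p.vx, p.vy + 1.5 * OMEGA * p.x, p.vz]
                  for p in sim.particles])
    W = (u[:, :, None] * u[:, None, :]).mean(axis=0)  # W_ij = <u_i u_j>
    c = np.sqrt(np.trace(W) / 3.0)                    # c^2 = W_ii / 3
    nu_trans = (2.0 / 3.0) * W[0, 1] / OMEGA          # Eq. (9)
    return c, nu_trans
```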
Finally, we assume the heat flux is linearly dependent on the temperature gradient, \[{\bf q}=-\kappa\nabla T. \tag{13}\] We can then correlate the measured \(q_{x}\) and \(\partial_{x}T\) and retrieve the conductivity \(\kappa\) using a least-squares fit. To verify that our setup was working properly, we successfully reproduced Fig. 8 in Salo et al. (2001), though we omit these results for the sake of space. #### 2.3.4 Parameters and initial conditions In all our \(N\)-body simulations, we adopt units so that \(a=1\) and \(\Omega=1\), though in what follows \(a\) and \(\Omega\) reappear occasionally in order to make a point. As a consequence, the main physically relevant input is the collision law. Specifically, we have some combination of \(v_{\rm crit}/(a\Omega)\), \(b\), and \(\epsilon_{\rm max}\) for non-constant collision laws. We also have the sizes of the numerical domain \(L_{x}\) and \(L_{y}\) and a constant dimensionless time-step \(\Omega dt\). We use initial conditions where particles are arranged uniformly in the plane with a uniform optical depth \(\tau\). Therefore an important initial input is the particle number \(N\), with the computational domain kept fixed. Particles are normally distributed in the \(z\)-direction. The initial velocities are also normally distributed with an initial velocity dispersion \(c_{0}\). In most cases we initialize the particles close to the thermal equilibrium we believe to be present. We present convergence tests in Appendix A. These tests show that our simulations are converged as we vary numerical parameters for both extremely high and low optical depth, as well as hot and cold equilibria. For the regimes that we are interested in, we found that a dimensionless timestep of \(10^{-3}\) and a box size of tens to hundreds of particle radii are sufficient. The large box sizes are needed only for very hot and dilute rings. ### Kinetic theory Though not the focus of this paper, it is useful to have some kinetic theoretical results, especially as they reveal the existence of the additional (thermally unstable) middle branch of equilibrium solutions. The formalism adopted is Latter and Ogilvie's (2008) reformulation of Araki and Tremaine (1986), which does not attempt to solve the Boltzmann-Enskog equation but rather solves a truncated moment hierarchy of continuum equations. In previous deployments of this approach, the dependence of \(\epsilon\) on the impact speed was only approximately incorporated via a 'pre-averaging' procedure (see Section 2.2.7 in Latter and Ogilvie 2008). Though convenient, this introduces unacceptable errors when using complicated non-monotonic laws as in Section 2.1.3. Thus the complete formalism is adopted. This does require completing three (instead of two) integrations in the collision term. The other main approximations adopted are 'vertical locality' and a triaxial Gaussian for the velocity ellipsoid (see Araki and Tremaine 1986 and Latter and Ogilvie 2008 for more details). ## 3 Homogeneous steady states In this section we simulate various thermodynamic equilibria and demonstrate that a non-monotonic epsilon law supports up to two equilibria for a given optical depth. We characterise these states not only by their velocity dispersion, but also by their packing fraction \(FF_{0}\) and their transport properties, especially of angular momentum and heat. We begin by reproducing previous results in the literature with both a constant and monotonic epsilon law so as to verify that our code is working properly.
Moreover, as argued in Section 2.2, some of the equilibria obtained are limiting cases of those appearing in the bistable circumstances explored later and are thus useful in setting the scene. Figure 2: Velocity dispersion and total angular momentum flux \(\tau\nu_{\rm tot}\) versus optical depth \(\tau\) for various hard-sphere \(\epsilon\) laws, calculated from \(N\)-body simulations. In the top panel the appended numbers ‘1-20’ describe the values of \(v_{\rm crit}/(a\Omega)\) when using the standard Bridges law, whereas ‘0’ indicates runs with a constant \(\epsilon=0\). In the bottom panel, the ordering of the curves is retained. The green symbols indicate that the viscous flux is decreasing and the disk viscously unstable. ### Comparison with previous calculations Our reference cases include the simulations of Salo (1991), who employed a Bridges law but with a variable scale velocity, i.e. Eq. (1) with \(\epsilon_{0}=1\) and \(v_{\rm crit}=1,5,10,20\) (see Section 2.1), and also simulations with a constant \(\epsilon=0\), which brings about a very cold state. The results of our calculations are plotted in Fig. 2, in which we show the velocity dispersion \(c\) and angular momentum flux \(\tau\nu_{\rm tot}\) versus optical depth \(\tau\). The simulations were run until they were collisionally relaxed, and then continued for the same length of time to obtain averaged quantities. When \(\tau\) was low, and collisions relatively infrequent, the total run time was \(>1000\)\(\Omega^{-1}\); but at higher \(\tau\) (\(\sim 2\)) runs could be as short as 50-80 \(\Omega^{-1}\). Direct comparison of Fig. 2 with the numerical results of Salo (1991; cf. his Figs 3-5) shows good agreement, and also consistency with the kinetic theory of Latter & Ogilvie (2008) (note that both these works denote \(v_{\rm crit}\) by \(v_{\rm b}\)). An interesting feature of the 'warmer' solution branches is the decreasing viscosity with \(\tau\). In fact, the hottest case, \(v_{\rm crit}=20\), is viscously unstable because the gradient of the angular momentum flux \(\tau\nu\) is negative in an interval of \(\tau\) (green markers). By inflating \(v_{\rm crit}\) in the Bridges law the velocity dispersion of the system can be controlled and, in particular, set to 'warm' values greater than \(a\Omega\) and, consequently, greater than the temperature of the very cold \(\epsilon=0\) states. These warm and cold states help illustrate the arguments presented in Section 2.2. If we take one of the two non-monotonic collision laws and set \(v_{\rm crit}\) ten or more times \(a\Omega\), then start the simulation with a hot initial condition, we might expect the subsequent spread of impact speeds to be sufficiently far from \(\epsilon\)'s turning point (cf. Fig. 1) so that the system settles into a warm 'Bridges equilibrium', similar to those plotted in Fig. 2. On the other hand, if we begin the same simulation but with very cold initial velocities (\(\ll v_{\rm crit}\)), the subsequent spread of impact speeds will remain less than \(v_{\rm crit}\) and \(\epsilon\) will almost always take the value of \(0\); the system will then converge to the appropriate constant \(\epsilon=0\) state in Fig. 2. Figure 4: The distribution of impact velocities in simulations using the ‘realistic’ law at \(\tau=1\) with parameters \(v_{\rm crit}=5\), \(b=1\), and \(\epsilon_{\rm max}=0.923\). The left panel shows the system in the cold state. The right panel shows the system in the hot state. The red line corresponds to the \(\epsilon\) law adopted.
Figure 3: Selected equilibrium properties as functions of \(\tau\) for three regolith \(\epsilon\)-laws (the three columns). The leftmost column shows equilibria computed with the broken-power law model (BPL) with \(\epsilon_{0}=0\) and \(\epsilon_{\rm max}=0.8\), whereas the other two columns show the realistic model with \(\epsilon_{\rm max}=0.75\) and \(0.923\). In all cases \(v_{\rm crit}=5\). In the top row the joined circles denote the velocity dispersion calculated by \(N\)-body simulations, with the colours indicating hot (red) or cold (blue) branches. The second and third rows show the filling factor and total angular momentum flux respectively. The dashed curve indicates equivalent solutions obtained from the kinetic theory (in the BPL case only). In the bottom row, a green symbol indicates expected viscous instability. Note that the Bridges law produces a velocity dispersion \(c\) that decreases with \(\tau\) and we may then expect that for sufficiently large \(\tau\) the upper 'hot state' will be too close to the 'cold state' and bistability may disappear. ### Non-monotonic collision laws In this section we calculate equilibria for 'regolith' epsilon laws that are non-monotonic: either the broken power law (BPL) with \(\epsilon_{0}=0\) or the realistic law (2). The parameters are \(\epsilon_{\rm max}=0.75,0.8\) or \(0.923\), \(v_{\rm crit}=5a\Omega\), and \(b=1\), though we examine a broader spread of values in Section 3.2.2. We first examine in some detail the thermal properties of the states, then their transport of angular momentum and heat. #### 3.2.1 Thermal hysteresis Figure 3 presents the first main results of the paper. Here we plot the equilibrium velocity dispersions (top row), filling factors (middle row), and total radial angular momentum fluxes (\(\tau\nu_{\rm tot}\); bottom row) obtained in a sequence of simulations at different optical depths and for different \(\epsilon\) models and parameters. Each circular marker corresponds to a different simulation. These values are obtained by time averaging a quantity once the system has become collisionally mature, as earlier. For example, \(\tau=0.1\) runs were run for \(1600\Omega^{-1}\) and averaged for the last \(800\Omega^{-1}\), while at \(\tau=2\) the total run length was \(80\Omega^{-1}\), with the averaging taking place over the last \(40\Omega^{-1}\). As is clear, in the three models presented, two steady state branches (distinguished by red and blue) are possible within a certain range of optical depth. Which of the two the system selects depends on the initial condition: a 'cold start' (low initial \(c\)) usually (but not always) takes the system to the nearby cold state, whereas a 'hot start' (initial \(c\) sufficiently high) settles on the hot state. Typically, runs starting with \(c=0.5a\Omega\) converged to the nearby cold state, while runs beginning with \(c=10a\Omega\) migrated to the hot state, if one was available, even if that state's velocity dispersion was significantly larger than the initial \(c\). The direction of migration is discussed further in Section 4. The apparent bistability extends over a range of small to intermediate optical depths. Beyond a special \(\tau\) the hot state disappears, and all hot-start simulations landed on the cold branch.
At small \(\tau\) we never found that the cold state disappeared, except in the case of the realistic model with \(\epsilon_{\rm max}=0.923\) and \(\tau=0.1\); this equilibrium was metastable (explored in more detail in Section 4). The bistable regime's width (in \(\tau\)) depends on the parameters. From Fig. 3, increasing \(\epsilon_{\rm max}\) in the realistic model from \(0.75\) to \(0.923\) moved the special \(\tau\) from roughly \(0.5\) to \(1.6\) (cf. middle and right columns). The cold equilibria take \(c\) values very much in agreement with the constant \(\epsilon=0\) states simulated in the previous subsection, while the hot state resembles a Bridges law, with \(c\) decreasing with \(\tau\). In fact, the hot simulations of the realistic model with \(\epsilon_{\rm max}=0.923\) take a similar \(c\) as the Bridges \(v_{\rm crit}=10\) runs, while those with \(\epsilon_{\rm max}=0.75\) resemble a Bridges law with (roughly) \(v_{\rm crit}=5\). These similarities bolster our interpretation of the two states as 'separated' by the turning point of the \(\epsilon\) curve: only a minority of collisions in the hot state occur with the low impact speeds that would trigger \(\epsilon=0\), while collisions in the cold state rarely occur with impact speeds sufficiently large to trigger larger \(\epsilon\). To flesh out this point further we plot in Fig. 4 the distribution function of impact speed for a hot state (right panel) and a cold state (left panel) for the same \(\tau=1\) (and other parameters). As the left panel indicates, cold state collisions are almost completely inelastic; the narrow spread in impact speeds barely overlaps the portion of the curve for which \(\epsilon\neq 0\). In contrast, the hot state (shown in the right panel) is much broader and thus samples a wide range of \(\epsilon\), but importantly peaks at speeds which yield collisions with a small dissipation of energy. The filling factors in the middle row of Fig. 3 reveal that the hot branches are far less dense than the cold branches. For example, in the realistic model with \(\epsilon_{\rm max}=0.923\), at \(\tau=1\) the hot state possesses a filling factor of \(0.08\), while the cold state has \(0.35\). The difference, of course, is not due to the surface number density (which is the same) but because the disk semi-thickness is so different between these two states: in the hot state it is \(\approx 6a\), compared to \(\sim a\) in the cold state. The ratio of the two filling factors should scale roughly with the ratio of semi-thicknesses and that is indeed what we see. The hot state branch terminates when its velocity dispersion approaches a critical value \(\sim 3a\Omega\). In reality the system here encounters a saddle-node bifurcation and the solution curve bends 'backwards' thus forming an intermediate branch of thermally unstable solutions. Because these solutions are unstable they cannot manifest in \(N\)-body simulations3, but they can be calculated by kinetic theory. Figure 5: Grids of simulations undertaken with different \(v_{\rm crit}\) and \(\epsilon_{\rm max}\) using the realistic regolith law with widths \(b=1\) (top) and \(b=2\) (bottom). Colours correspond to values of \(|c_{\rm hot}-c_{\rm cold}|\) (see text). The contour is a conservative boundary between cases that support bistability (to the right and above) and those that do not.
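The scan behind Fig. 5 reduces to a double loop over the parameter grid with paired hot/cold restarts. The following schematic shows the logic; `run_to_equilibrium` is a hypothetical helper (it would wrap an N-body run like the one sketched in Section 2.3), and its placeholder body merely mimics the qualitative outcome so the snippet executes.

```python
# Schematic of the hot/cold restart scan summarised in Fig. 5.
import numpy as np

def run_to_equilibrium(v_crit, eps_max, b, tau, c0):
    # PLACEHOLDER surrogate: cold branch near c ~ 1; a hot start lands on a
    # hot branch only for large enough v_crit and eps_max. Replace with a
    # real simulation returning the equilibrated velocity dispersion.
    if c0 > 5.0 and v_crit > 4.0 and eps_max > 0.7:
        return 1.2 * v_crit
    return 1.0

v_crits = np.arange(1.0, 11.0)
eps_maxs = np.arange(0.5, 1.01, 0.05)
gap = np.zeros((len(eps_maxs), len(v_crits)))
for i, em in enumerate(eps_maxs):
    for j, vc in enumerate(v_crits):
        c_cold = run_to_equilibrium(vc, em, b=1.0, tau=0.1, c0=0.5)
        c_hot = run_to_equilibrium(vc, em, b=1.0, tau=0.1, c0=10.0)
        gap[i, j] = abs(c_hot - c_cold)
bistable = gap > 5.0   # the conservative threshold used in the text
```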
Kinetic theoretical equilibria are plotted in the leftmost column with a dashed black curve; the top and middle panels clearly show an intermediate cool, semi-dense branch. The agreement between theory and simulations is qualitatively good, with the biggest deviation in the translational viscosity in the hot state, a discrepancy that has been noted in previous comparisons (Latter and Ogilvie 2008, Rein and Latter 2013).4 Footnote 4: Unfortunately, numerical difficulties prevented us from calculating kinetic solutions for the realistic model. #### 3.2.2 Parameter survey In the preceding subsection we examined only three parameter sets/models; in this subsection we adopt the realistic \(\epsilon\) law and scan through \(v_{\rm crit}\) and \(\epsilon_{\rm max}\) for two different widths \(b\). Our aim is to determine how representative the thermal hysteresis explored in the previous subsection really is. Of particular interest are the lowest values of \(v_{\rm crit}\) and \(\epsilon_{\rm max}\) that yield bistability. In Fig. 5 we present 'bistability plots' for \(b=1\) and \(2\). Each square in the grid corresponds to a parameter pair \((v_{\rm crit},\epsilon_{\rm max})\), and for each square we conduct two simulations with \(\tau=0.1\), one with a hot initial condition and the other with a cold initial condition. Each simulation has been run until thermal equilibrium has been obtained, and the difference in final velocity dispersion calculated, \(|c_{\rm hot}-c_{\rm cold}|\). Finally, the square is coloured accordingly (cf. the colour bar). If the difference in final \(c\) is between \(0\) and \(5\), we take the two simulations to be converging onto the same (cold) equilibrium. Values larger than \(5\) (admittedly, a rather large value, given Fig. 3) we assume correspond to a bistable situation: the two simulations are settling on different thermal states. In both panels we have superimposed the contour of \(|c_{\rm hot}-c_{\rm cold}|=5\). The reader should then assign bistability to regions of the parameter plane above and/or to the right of this curve. The plots indicate, as expected, that bistability is favoured by larger values of \(v_{\rm crit}\) and \(\epsilon_{\rm max}\). Increasing both parameters helps to separate the typical impact speeds of the hot state from those of the cold state. Interestingly, the bistable region is quite rectangular. Thus when \(b=1\), bistability is guaranteed (roughly) if both \(v_{\rm crit}>4\) and \(\epsilon_{\rm max}>0.7\). We expect that these parameter restrictions should hold roughly for other non-monotonic laws. Finally, the range of bistability is also sensitive to the width of the epsilon law, as the \(b=2\) plot demonstrates. Increasing the width also helps separate out the two states. In the \(b=2\) case bistability occurs when \(v_{\rm crit}>3\) and \(\epsilon_{\rm max}>0.65\). #### 3.2.3 Viscous properties The equilibrium states discussed in the previous subsection support a viscous stress that, by acting on the background orbital shear, transports angular momentum radially across the numerical domain. The viscous properties of the flow are important thermodynamically because the stress extracts free energy from the shear, thus providing the heating source in the thermal balances undergirding these states. But the viscous stress is also important dynamically because it can beget instabilities, such as the viscous overstability and instability (Schmidt et al. 2009).
In particular, if \(d(\tau\nu_{\rm tot})/d\tau\) is negative then viscous instability occurs (Lin and Bodenheimer 1981, Lukkari 1981, Ward 1981). The angular momentum flux is plotted in the bottom row of Fig. 3. Note that a subset of hot states possess a decreasing flux and are thus viscously unstable; these are marked in green. In the BPL model, the unstable interval encompasses \(\tau\) of \(0.4\) and \(0.5\), whereas in the realistic model only the \(\epsilon_{\rm max}=0.923\) case yields instability and then for \(\tau\) between approximately \(0.8\) and \(1.6\). Instability here is associated with a dominant _translational_ viscosity, which can decline at sufficiently large \(\tau\). Growing modes do not appear in these simulations, however, because the numerical domain size is smaller than the shortest unstable wavelength; in Section 5.2.2 we simulate larger domains and recover the instability. #### 3.2.4 Thermal conductivity Anticipating later sections which explore different thermal states that spatially adjoin, we compute the radial flux of thermal energy. In the absence of any mean spatial gradients, such as in the homogeneous equilibria calculated, the flux must be zero. But if two states connect in radius the flux must control, in part, how their interface evolves. As explained in Section 2.3.3, we adopt the approach of Salo et al. (2001) and impose a radial sinusoidal temperature structure upon the box, through the parameters \(v_{\rm crit}\) and \(\epsilon_{\rm max}\). In Fig. 6 we show calculations of the radial thermal flux \(q_{x}\) and the thermal conductivity \(\kappa\) for a fixed set of parameters (\(\epsilon_{\rm max}=0.75\), \(v_{\rm crit}=5\), \(b=1\)) and for the same optical depth \(\tau=0.2\). The left four panels correspond to the cold state (\(c\approx 1\)), and the right to the hot state (\(c\approx 6\)). The top left panel in each case describes the temperature profile across the box, while the top right panel shows the temperature gradient (solid blue), the translational (local, 'L') heat flux (dashed gold), and the collisional (nonlocal, 'NL') heat flux (dotted green). The latter two are plotted separately as functions of the temperature gradient in the bottom panels; a best-fit line extracts the conductivities. In both the hot and cold cases, the translational heat flux dominates the collisional flux. This means that the heat flux in the two states differs significantly, despite possessing the same \(\tau\). In Table 1 we list \(\kappa\) for a range of \(\tau\) and otherwise with the same parameters as in Fig. 6. \begin{table} \begin{tabular}{|c||c|c||c|c|} \hline \(\tau\) & \(\kappa_{\rm L}\) (C) & \(\kappa_{\rm NL}\) (C) & \(\kappa_{\rm L}\) (H) & \(\kappa_{\rm NL}\) (H) \\ \hline \hline 0.1 & 4.75 & 0.42 & 77.94 & 0.72 \\ 0.2 & 5.92 & 0.62 & 119.26 & 2.01 \\ 0.3 & 7.42 & 1.15 & 111.88 & 3.39 \\ 0.4 & 6.83 & 1.61 & 89.95 & 3.59 \\ \hline \end{tabular} \end{table} Table 1: Calculated translational (local) thermal conductivities \(\kappa_{\rm L}\) and collisional (non-local) thermal conductivities \(\kappa_{\rm NL}\) in cold (C) and hot (H) equilibria at various optical depths \(\tau\). A realistic collision law is adopted with \(\epsilon_{\rm max}=0.75\), \(v_{\rm crit}=5\), and \(b=1\). ## 4 Metastability In the last section we calculated steady states that appear to be thermally stable, at least linearly according to a continuum interpretation.
However, \(N\)-body systems are replete with small but finite amplitude shot noise that continually tests the _nonlinear_ stability of any steady state. If the basin of attraction of a linearly stable state is small relative to the amplitude of these fluctuations, the system can potentially jump out of the state and migrate elsewhere. Many physical and biological systems offer similar examples of noise destabilising what should be linearly stable fixed points (e.g. Mel'nikov 1991, May 1973, De Swart and Grasman 1987, Majda, Timofeyev and Vanden-Eijnden 1999, 2003). In this section we investigate this possibility. Our focus will be on cold states of low optical depth and on the hot states near the saddle node bifurcation. The reason is that these states are close to the unstable middle branch which can serve as the boundary of the basin of attraction in each case. We find that, for the parameters and models we employ, _metastability_ is relatively uncommon, only occurring in certain dilute and cold states. In particular, states near the saddle node are generally stable to shot noise perturbations. Before presenting our results we emphasise that we only explore the effect of intrinsic shot noise, but in real rings there are several other sources of finite amplitude disturbances that may work similarly, e.g. meteoroid bombardment, embedded moonlets, density waves, and gravity wakes. Figure 6: Thermal diffusivity measurements for \(\tau=0.2\) in the cold state (left panels) and hot state (right panels) for the realistic model with \(\epsilon_{\rm max}=0.75\), \(v_{\rm crit}=5\), and \(b=1\). Figure 7: Velocity dispersion as a function of time for runs with \(\tau=0.1\) (top panel) and \(\tau=0.2\) (bottom panel). The realistic model is adopted with \(\epsilon_{\rm max}=0.923\), \(v_{\rm crit}=5\), and \(b=1\). ### Cold to hot transitions We find spontaneous transitions from the cold lower branch to the hot upper branch in only a few low \(\tau\) cases when adopting a realistic collision law and \(\epsilon_{\rm max}=0.923\). Specifically, when \(\tau=0.1\) the system can hover about the cold steady state for several hundred orbits before jumping to the hot state. To probe this behaviour we ran 24 runs with slightly different initial conditions (varying both particles' locations and velocities) but all starting with the same low \(c\). To make doubly certain that the system is as close to the cold equilibrium as possible, and that any future transition is not the result of a wayward initial condition, we force \(\epsilon=0\) (a constant) for several orbits at the start. The evolution of these runs is plotted in the top panel of Fig. 7, with the shaded region indicating when \(\epsilon=0\). As is clear from the figure, all but three runs jumped to the hot state by 500 orbits (roughly \(>25\) collision times), though there was a wide spread of transition times, indicating that the process is stochastic and issues from the noise: ultimately, after some period, an overenthusiastic collision, dissipating insufficient velocity dispersion, seeds a patch of more energetic particles, which then spreads spatially and takes over the system. Of course, this is only part of the story, because energetic events must happen at slightly larger \(\tau\) but do not appear to instigate runaway heating. Indeed, we undertake a similar experiment at \(\tau=0.2\), plotted in the lower panel of Fig. 7, and witness no transitions at all.
What is key is the overall basin of attraction of the cold state; as shown by the kinetic curves in the top left panel of Fig. 3, the middle unstable branch and the cold lower branch become closest at low \(\tau\). The middle branch acts as the boundary of the lower state's basin of attraction (at least in this simple phase space projection); thus at low \(\tau\) it becomes more likely that a finite amplitude perturbation can tip the system over this boundary. That said, it is not straightforward to firmly connect microphysical fluctuations (shot noise) to such a mean finite-amplitude perturbation in this phase space. ### Hot to cold transitions We now check if it is possible to obtain spontaneous hot to cold transitions. We focus on states near the tip of the saddle node, i.e. the termination of the hot branch (see top row in Fig. 3), and examine a range of \(\tau\) between 1.61 and 1.65 in the realistic model with \(\epsilon_{\rm max}=0.923\). We simulate several runs with slightly different initial conditions, as before, and plot the results in Fig. 8, top and bottom panels. As in the previous subsection, to ensure that we start the simulations in a hot state we set \(v_{\rm crit}\) to a very small value initially. Over several orbits (indicated by the shaded area in the figures), we slowly increase \(v_{\rm crit}\) to the nominal value. Unlike cold to hot transitions, the systems either immediately drop to the cold state or relax into the hot state on a timescale of 10 orbits or so (a handful of collision times). At \(\tau=1.65\) all the simulations ended up in the cold state; at 1.64 some stayed in the hot state; and at lower \(\tau\) still (1.61) most remained in the hot state. Putting aside the precise fractions ending up in one state or the other, the system transitions promptly or not at all. We attribute this to the initial condition at the end of the start-up phase (the shaded interval), rather than to the system having to wait for a more sluggish group of collisions that leads to a 'chain reaction' and a switching of states. The difference with the low \(\tau\) runs explored earlier may partially be explained by the separation between the middle and hot branches, which is relatively large, even near the tip of the saddle node (see kinetic theory curves in top left panel of Fig. 3). Once a system settles on to the hot state, and its initial conditions are mostly forgotten, its intrinsic shot noise is insufficient to tip it out of its basin of attraction and into the cold state. Figure 8: Velocity dispersion as a function of time for runs of different initial conditions with \(\tau=1.61\) (top panel) and \(\tau=1.64\) (bottom panel). The realistic model is adopted with \(\epsilon_{\rm max}=0.923\), \(v_{\rm crit}=5\), and \(b=1\). ## 5 Thermal and viscous fronts Having computed several homogeneous states, we now explore the dynamics when different states spatially adjoin. If a ring region is bistable, then it is likely that such situations occur, given the varying dynamical histories at different radii. Our main focus is on the structure and evolution of the transition (or front) between two states. We will consider two cases: (a) thermal fronts, which join two states of the same \(\tau\) but different \(c\), and (b) viscous fronts, which connect two states of the same angular momentum flux \(\tau\nu\), but different \(\tau\) and \(c\). Thermal fronts involve a hot and a cold state, with the pair joined by a vertical line in the top panels of Fig. 3.
Though sharing the same optical depth, they possess distinct vertical thicknesses that may produce a photometric variation, and thus observable structure (e.g. Salo and Karjalainen 2003). However, the two states will support different angular momentum fluxes \(\tau\nu\), and thus mass may pile up or evacuate near the thermal front, potentially leading to nonsteadiness and a complete breakdown of the structure. We find that this is avoided if the front itself moves sufficiently fast. One might expect radial mass redistribution to be negated if two adjoining states possess the same angular momentum flux, with the pair joined by a horizontal line in the bottom panels of Fig. 3. In fact, similar structures have already been witnessed in simulations of the viscous instability with monotonic \(\epsilon\) laws (Salo and Schmidt 2010). We find, however, that the finite width of the front itself spoils the exact matching of fluxes and makes the establishment of such fronts more complicated. ### Thermal fronts In order to explore the structure and dynamics of fronts connecting equilibria of different temperatures but the same surface density, we concentrate on a single parameter set. The behaviour obtained is then interpreted using a simple continuum model, before other parameters are trialled. #### 5.1.1 Fiducial case Our fiducial run employs a realistic \(\epsilon\) law with the following parameters: \(\epsilon_{\rm max}=0.75\), \(v_{\rm crit}=5\), and \(b=1\). We examine a hot and cold state of the same \(\tau=0.2\), with the former possessing \(c=6.7\) and the latter \(c=0.87\). We adopt a wide box of radial size \(1000a\) and insert a strip of particles from the (previously computed) hot state in the centre (with radial extent \(100a\)), while distributing particles from the cold state throughout the rest of the numerical domain. Figure 9 plots this initial condition as a projection of the particle locations in the \((x,z)\) plane. Away from the borders of the hot/cold zones, the ring is in thermal equilibrium. The subsequent evolution of the ring is shown in Fig. 10, which presents four snapshots at different times on each row. The left panels describe the \((x,z)\) projections of the particles, while the right panels plot the radial variation of \(\tau\) (blue) and \(c\) (red). As is clear, the two fronts move radially into the cold state, until the hot state takes over the box entirely. Meanwhile, \(\tau\) remains roughly constant throughout, except for some minor deviations around the front itself. The front speed is constant until the moment that the cold state evaporates. This is demonstrated in Figure 11, which plots the location of the rightmost front as a function of time. A \(c\) intermediate between the \(c\) in the hot and cold states was selected (here \(c=4\)) and its \(x\) location was determined at each time-step, which provided a means to capture the movement of the front as a whole. The front speed is \(0.685a\Omega\), thus slightly less than \(c\) in the cold state. Generally, in bistable systems, the conductivity controls the structure of fronts; a small conductivity yields a narrow transition, while a large conductivity gives a more diffuse transition (e.g. Latter and Balbus 2012). In our granular gas, the thermal conductivity \(\kappa\) depends on \(c\), and thus jumps by at least an order of magnitude as we go from the cold to the hot state (see Table 1).
This explains why the front structure is sharp near the cold state (though always longer than the 'granularity scale', \(a\)), while broader and smoother near the hot state. The overall width of the front (\(\gtrsim 100a\)) is hence determined approximately by \(\kappa\) in the hot phase. #### 5.1.2 Physics of front motion; a simple continuum model The basic mechanism driving the movement of a thermal front relies on the finite-amplitude perturbations arising from the proximity of the different states. These perturbations can only be communicated via thermal diffusion. For example, near a front, the cold state will receive thermal energy (via diffusion) from the adjacent hot state. If the energy received is sufficient to push the cold ring material out of the cold state's basin of attraction, then one might expect it to heat up and settle on the hot state; as a consequence, the front advances into the cold phase. But, by the same token, on the other side of the front, material in the hot state will also be perturbed by the heat flux and will cool down. If this cooled material is pushed beyond the hot state's basin of attraction, then it will undergo a runaway cooling, and we might then expect the front to advance into the hot state. Figure 9: Initial condition for the fiducial thermal-front simulation described in Section 5.1.1 in the form of an \((x,z)\) projection of the particle positions. Figure 10: Snapshots of a thermal front at \(t=0.8,8,80\) and 191 orbits. Panels on the left describe a projection of ring particles on to the \((x,z)\) plane. Panels on the right depict the \(x\)-dependent velocity dispersion \(c\) (red) and optical depth \(\tau\) (blue). Which thermal runaway is favoured on average depends on the relative sizes of the hot and cold state's basins of attraction, which can be approximated (roughly) by how close the intermediate unstable state is to either state (see discussion in the section on metastability, and also Latter and Balbus 2012). These ideas can be illustrated by a continuum model. The energy equation of the gas may be written as \[\partial_{t}E=\Lambda(E)+\partial_{x}(k\partial_{x}E),\] where \(E=(3/2)c^{2}\), \(\Lambda\) combines viscous heating and collisional cooling, and \(k\) is the thermal diffusivity (\(=2\kappa/(3\sigma)\)). Thus \(\Lambda=0\) when \(E\) is equal to the stable hot, cold, and unstable intermediate steady states, \(E_{H}\), \(E_{C}\), and \(E_{I}\), respectively. Moreover, \(d\Lambda/dE<0\) when \(E=E_{H}\) or \(E_{C}\). We assume a steady front, moving at speed \(v_{f}\), with the hot state to the right and the cold state to the left, and thus introduce the comoving variable \(\xi=x-v_{f}t\), which transforms the energy equation into a type of Stefan problem for the front shape \(E(\xi)\) and speed \(v_{f}\), \[\partial_{\xi}(k\partial_{\xi}E)+v_{f}\partial_{\xi}E+\Lambda(E)=0. \tag{14}\] The boundary conditions are \(E\to E_{\rm H}\) as \(\xi\to\infty\) and \(E\to E_{\rm C}\) as \(\xi\to-\infty\) (hot to the right and cold to the left). This is a nonlinear eigenvalue problem that, after specifying the functional forms of \(\Lambda(E)\) and \(k(E)\), would normally require a numerical solution. In Appendix B we adopt simple prescriptions for these functions and solve the equation, thereby illustrating some of the main features discussed below and qualitatively reproducing our \(N\)-body results.
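For a single front, the eigenvalue problem (14) can be solved by shooting: launch the solution off the cold state along its unstable direction and bisect on \(v_{f}\) until the trajectory lands on \(E_{H}\). The following is a minimal sketch, assuming a constant diffusivity \(k\) and a cubic \(\Lambda(E)\) with roots at \(E_{C}\), \(E_{I}\), \(E_{H}\); these assumed forms are cruder than the prescriptions of Appendix B, and all parameter values are illustrative.

```python
import numpy as np

# Shooting solution of k E'' + v_f E' + Lambda(E) = 0 (Eq. 14), with
# E -> E_C as xi -> -infinity and E -> E_H as xi -> +infinity.
# Assumed toy inputs: constant k and a cubic Lambda whose roots are the
# cold, intermediate (unstable) and hot states.
E_C, E_I, E_H = 1.0, 2.0, 54.0      # e.g. E = (3/2)c^2 for c ~ 0.8 and 6
r, k = 2e-4, 1.0

def Lam(E):
    return -r * (E - E_C) * (E - E_I) * (E - E_H)

def shoot(v_f, xi_max=200.0, h=0.01):
    dL = -r * (E_C - E_I) * (E_C - E_H)            # Lambda'(E_C) < 0
    mu = (-v_f + np.sqrt(v_f**2 - 4 * k * dL)) / (2 * k)
    E, p = E_C + 1e-4, mu * 1e-4                   # start on unstable manifold
    for _ in range(int(xi_max / h)):
        E += h * p                                 # p = dE/dxi
        p += h * (-(v_f * p + Lam(E)) / k)
        if E > E_H:
            return +1                              # overshoot: v_f too small
        if p < 0.0:
            return -1                              # undershoot: v_f too large
    return -1

lo, hi = -2.0, 0.0                                 # bracket for the speed
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if shoot(mid) > 0 else (lo, mid)

# v_f < 0: the front advances into the cold state (E_I lies close to E_C)
print("front speed v_f =", 0.5 * (lo + hi))
```

With these toy parameters the bisection converges to \(v_{f}\approx-0.51\), which agrees with the exact speed for a cubic reaction term and gives the sign expected when the unstable state lies close to the cold one.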
An illuminating expression for the speed \(v_{f}\) can be obtained by multiplying Eq. (14) by \(dE/d\xi\) and integrating between \(-\infty\) and \(\infty\). After some manipulation, one gets \[v_{f}=-\frac{\int_{E_{C}}^{E_{H}}\,\Lambda\,dE}{\int_{-\infty}^{\infty}(dE/d\xi)^{2}d\xi}-\frac{\int_{-\infty}^{\infty}(dk/dE)(dE/d\xi)^{3}d\xi}{2\int_{-\infty}^{\infty}(dE/d\xi)^{2}d\xi}. \tag{15}\] If the thermal diffusivity is a constant, the second term is zero. In this case, the sign of \(v_{f}\) is determined solely by the integral of the heating/cooling term \(\Lambda\). Because \(\Lambda(E_{H})=\Lambda(E_{I})=\Lambda(E_{C})=0\), the integral can be subdivided into (a) a positive part (between \(E_{C}\) and \(E_{I}\)) that measures the 'size' of the cold state's basin of attraction, and (b) a negative part (between \(E_{I}\) and \(E_{H}\)) that measures the hot state's basin of attraction. The proximity of \(E_{I}\) to either \(E_{C}\) or \(E_{H}\) indicates the basins' relative sizes. If \(E_{I}\) is closer to \(E_{C}\), then the integral is dominated by the positive area, \(v_{f}<0\), and the front moves into the cold state. Physically, cold ring material near a front finds it easier to undergo a heating runaway, when perturbed by the front, than hot material finds a cooling runaway; thus, the front advances into the cold material. If \(E_{I}\) is closer to \(E_{H}\), then the converse holds and the front moves into the hot state. Turning now to the top row of Fig. 3 (first panel especially), one naively expects that at low \(\tau\) fronts initially move into the cold state, but at higher \(\tau\) fronts are slower and then at some critical \(\tau\) may reverse direction. If \(k\) depends on \(E\) then things are more complicated. The second term in Eq. (15) is a weighted average of \(dk/dE\), and shows that a non-uniformity in the transport of heat moderates the effect discussed above. If the front shape is monotonic in \(\xi\), then \(dE/d\xi>0\) throughout and the sign of the second term is determined by \(dk/dE\). As demonstrated in Section 3.2.4 and Table 1, \(dk/dE>0\), and so the second term in Eq. (15) is always negative, thus biasing the front's movement into the cold state. The underlying mechanism here rests not on the system's bistability but on exacerbating the imbalance in the heat flux throughout the front structure: at any given point more heat is arriving from the hot state than is being evacuated. The discussion above suggests that the sharp region at the foot of the front controls the front speed. Taking an order of magnitude approach and equating the three terms in Eq. (14) yields the estimate \(v_{f}\sim\sqrt{k_{C}/t_{th}}\), where the thermal timescale is defined as \(t_{th}=E/\Lambda\sim c^{2}/(\nu\Omega^{2})\), and \(k_{C}\) is the diffusivity evaluated in the cold state. Putting in values for the cold state gives us \(v_{f}\sim a\Omega\), which is consistent with the value calculated numerically. The width \(\lambda\) of the front extending through the hot phase can then be estimated by balancing the first two terms in Eq. (14); we find \(\lambda\sim k_{H}/v_{f}\gtrsim 500a\), which is also consistent with the simulation. #### 5.1.3 Front stability We conducted a short survey of fronts at different \(\tau\) and calculated their speeds. When \(\tau=0.1\) we found \(v_{f}=0.518a\Omega\), and when \(\tau=0.3\), \(v_{f}=0.591a\Omega\). While no clear trend could be observed between \(\tau=0.1\) and \(0.3\), we expected that at larger \(\tau\), as we approached the saddle node, the front speed should decrease.
In fact, what we found for \(\tau=0.4\) or larger is that the front would slow to a halt and then viscously reshape; i.e. \(\tau\) would evolve away from a uniform profile. Ultimately, the system moves to a state of constant angular momentum flux \(\tau\nu\), and the thermal front dissolves. As mentioned earlier, the issue here is that across a thermal front \(\tau\) is constant, but \(\tau\nu\) is not. As a consequence, mass can potentially build up or evacuate. If the front moves faster than \(\tau\) can be viscously redistributed, then we expect the front to remain coherent and to travel unimpeded. If the front speed is too slow, then it will be viscously reshaped and will collapse. For the model chosen, \(\tau\leq 0.3\) corresponds to the first case, and \(\tau>0.3\) to the second. Figure 11: Outer front radial location as a function of time in the simulation shown in Fig. 10. A rough criterion for the 'stability' of the front to viscous redistribution would compare the relative sizes of the front speed \(v_{f}\) and the viscous diffusion speed. To estimate the latter, we employ the lengthscale of the abrupt transition at the foot of the structure and thus estimate the diffusion speed as \(\sim(\nu_{C}/\kappa_{C})v_{f}\). A simple criterion for front dissolution requires that this speed is greater than \(v_{f}\), and hence depends solely on the size of the Prandtl number \(\mathrm{Pr}=\nu/\kappa\) in the cold state: when \(\mathrm{Pr}\) is greater than a critical value \(\mathrm{Pr}_{c}\), we expect the front to dissolve. Indeed, \(\mathrm{Pr}\) increases monotonically between \(\tau=0.1\) and \(0.4\), though takes relatively small values. At \(\tau=0.4\), we find that \(\mathrm{Pr}\sim 0.04\), which must be near \(\mathrm{Pr}_{c}\). ### Viscous fronts and viscous instability Given the issue of the unbalanced angular momentum flux in thermal fronts, it is natural to explore fronts that join states with the same viscous transport properties, specifically \(\nu\tau\). We present simulations of such joined states in this subsection, in addition to a short treatment of viscous instability. A simple continuum model can guide our expectations. In the shearing sheet, the one-dimensional diffusion equation for viscous Keplerian disks is \[\partial_{t}\tau=3\partial_{x}^{2}(\nu\tau)\] (e.g. Lynden-Bell and Pringle 1973). Suppose a viscous front moves with speed \(v_{f}\) with \(\tau\to\tau_{A}\) as \(x\to-\infty\) and \(\tau\to\tau_{B}\) as \(x\to\infty\). As earlier, we adopt a comoving variable \(\xi=x-v_{f}t\), which permits the complete integration of the problem. We find that \(v_{f}=0\) (the structure must be stationary) and that \(\nu\tau\) (\(=\nu_{A}\tau_{A}=\nu_{B}\tau_{B}\)) is constant throughout the entirety of the front. The last constraint is a potential difficulty: while it is possible to find two homogeneous steady states of the same \(\nu\tau\) (cf. panels in the bottom row of Fig. 3), a realistic front will have a finite width in which \(\tau\) will vary and thus \(\nu\tau\) will deviate from the required constant value. Our simulations show, in fact, that the system can overcome this problem by settling on a front structure in which the _average_ \(\nu\tau\) equals \(\nu_{A}\tau_{A}=\nu_{B}\tau_{B}\).
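The diffusion equation above also shows directly how an interval with \(d(\nu\tau)/d\tau<0\) behaves as an anti-diffusion and phase-separates, anticipating the instability simulations of Section 5.2.2. Below is a minimal sketch, assuming a toy flux \(F(\tau)=\nu\tau=\tau^{3}-3.6\tau^{2}+4\tau\) chosen only so that \(dF/d\tau<0\) for \(0.87\lesssim\tau\lesssim 1.53\) (not fitted to our kinetics), with an ad hoc fourth-order term standing in for whatever physics limits the instability at short wavelengths.

```python
import numpy as np

# Toy integration of d(tau)/dt = 3 d^2(nu*tau)/dx^2 with an assumed
# non-monotonic flux F(tau) = nu*tau; dF/dtau < 0 on an intermediate
# interval mimics the viscously unstable branch. The fourth-order
# regularisation is ad hoc: it merely sets a shortest unstable scale.
N, L, d4, dt = 256, 100.0, 0.5, 2e-3
dx = L / N
rng = np.random.default_rng(1)
tau = 1.2 + 1e-3 * rng.standard_normal(N)    # start on the unstable branch

def lap(f):                                   # periodic second difference
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

F = lambda t: t**3 - 3.6 * t**2 + 4.0 * t     # assumed toy nu*tau

for _ in range(30000):                        # integrate to t = 60
    tau += dt * (3.0 * lap(F(tau)) - d4 * lap(lap(tau)))

print("tau range :", tau.min(), tau.max())    # separates into low/high bands
print("flux range:", F(tau).min(), F(tau).max())  # F varies far less than tau
```

The run ends with alternating high- and low-\(\tau\) plateaus carrying nearly the same flux, the toy analogue of the phase-separated end states described below.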
#### 5.2.1 Fronts We present a fiducial simulation with the realistic law, and parameters \(b=1\), \(\epsilon_{\rm max}=0.923\), and \(v_{\rm crit}=5\). To construct a suitable initial condition that might produce a viscous front, we select two thermally and viscously stable states with the same \(\nu\tau\) from the bottom right panel of Fig. 3. Such pairs are joined by horizontal lines. We select two states of the same angular momentum flux \(\nu\tau\approx 2\), with optical depths \(\tau=1.5\) and \(\tau=0.16\). The numerical domain is chosen to be sufficiently large (\(L=800a\)) to accommodate relatively undisturbed expanses of the two states, in addition to the front itself; the low \(\tau\) state is placed between \(x=-100a\) and \(100a\), with the high \(\tau\) state taking up the remainder of the box. Figure 12 shows eight snapshots of the resulting simulation at different times. In each panel we plot \(\tau\) (red) and \(\tau\nu\) (blue). At \(t=0\), the angular momentum flux \(\tau\nu\) is a constant, but \(\tau\) undergoes two jumps (at \(x=\pm 100a\)). As the system evolves, the two jumps/fronts relax and exhibit a characteristic width, with \(\tau\) taking values between those of the two steady states. An immediate consequence is that the angular momentum flux within the fronts begins to deviate from the fixed value \(\approx 2\). In fact, the first four panels show that it takes significantly larger values than 2, in agreement with the bottom right panel of Fig. 3, which shows that states with \(\tau\) between \(0.16\) and \(1.5\) exhibit \(\nu\tau>2\). Because of the enhanced flux in the fronts, mass is transported out of the fronts, which then appear to move as the system evolves far away from the initial condition. Ultimately, we find that the system redistributes the mass throughout the numerical domain so that \(\tau\nu\) is roughly constant (\(\approx 7\)), but still allows for strong variations in \(\tau\). This outcome is not a constant \(\tau\) state, but consists of two static viscous fronts joining two homogeneous states of \(\tau\approx 0.4\) and \(2.7\), which according to Fig. 3 possess the same angular momentum flux (\(\sim 7\)). Evidently, the front that joins the two states also possesses a similar approximate flux, though this is difficult to determine from Fig. 3. A similar final state was found by Salo and Schmidt (2010) when simulating the viscous instability directly (see next subsection). This static structure is an interesting outcome for the system, but we stress that it is possible only because of the periodicity of the numerical domain. Owing to those boundary conditions, mass in the whole domain can be redistributed until the desired constant \(\nu\tau\) state can be found. In a more realistic setting, the system is unlikely to come to steady state and the front will continue to move until it encounters large-scale variations in background disk properties, etc. #### 5.2.2 Viscous instability In the previous subsection we explored two adjoined viscously stable states, but the lower right panel of Fig. 3 indicates that there is a branch of viscously unstable states of intermediate \(\tau\) between roughly \(0.8\) and \(1.6\). An obvious question is: to where does the system evolve if started from one of these states? We thus present a simulation with the same collisional parameters as earlier, but with a homogeneous \(\tau\) of \(1.4\). According to Fig. 3, this state is viscously unstable. Figure 13 shows 5 snapshots of the system's evolution. Despite possessing a constant \(\tau\nu\), the system moves slowly away from this state and begins to develop growing patches of high and low \(\tau\).
Unlike the previous subsection, where the evolution is driven by large-scale flux imbalances, here there is an instability mechanism, in which small-scale fluctuations in the flux self-reinforce (Lin and Bodenheimer 1981, Lukkari 1981, Ward 1981). Ultimately, the system settles on a sequence of distinct high-\(\tau\) islands surrounded by relatively dilute regions, but both with roughly the same flux (\(\approx 6\), in this case), as is necessary for a steady state. These results are very similar to those predicted by Hameen-Anttila (1982) and witnessed in Lukkari (1981) and Salo and Schmidt (2010), though those studies use monotonic collision laws. A key difference is that in the monotonic \(\epsilon\) simulations, the final outcome joins states from the same branch, while in our non-monotonic simulations states from different branches adjoin. An interesting consequence of this is that it is still possible for the system to separate into a sequence of high and low \(\tau\) states (of the same \(\nu\tau\)), even when there is no intermediate viscously unstable state. In particular, this appears achievable for the parameters of the middle column in Fig. 3. More generally, systems with non-monotonic collision laws have more freedom to exhibit viscous phase-separation in radius. Figure 12: Snapshots of an example viscous front, showing optical depth and angular momentum flux as a function of \(x\). The initial condition connects two states of different \(\tau\) but the same angular momentum flux \(\tau\nu\). Despite this balance, the system evolves, redistributing mass and angular momentum until a steady state characterised by a different constant \(\tau\nu\) is achieved. The collision law employs the realistic model with \(v_{\rm crit}=5\), \(\epsilon_{\rm max}=0.923\), \(b=1\). Snapshots are at \(t=5,20,30,50,100,500,1000\), and \(2000\) orbits. ## 6 Discussion and Conclusion Most previous work describing the local collisional dynamics of Saturn's rings uses relatively simple collision models. Given the poorly constrained nature of the collisions, and the numerical challenges involved, this is understandable, and indeed some success has been achieved in certain applications (e.g. self-gravity wakes, viscous overstability). However, current models still fail to describe much (if not most) of the irregular axisymmetric structure exhibited in Saturn's B and C rings. This invites us to experiment with other more complicated collision laws, in particular those that account (in a basic way) for surface regolith on ring particles, which is deemed to be present and important (e.g. Nicholson et al. 2008, Morishima et al. 2012, Deau 2015). We conduct \(N\)-body simulations with the REBOUND code of a local patch of Saturn's rings in which particles undergo collisions with a prescribed coefficient of restitution \(\epsilon\) depending on impact speed. The main novelty of our approach is to employ an \(\epsilon\) that is a non-monotonic function of impact speed, as is suggested by theoretical and experimental studies of regolith-coated particles (cf. Section 2.1). Below a critical impact speed we set \(\epsilon=0\), though we neglect particle sticking. This relatively minor change in the physical set-up immediately introduces major thermodynamical changes. For the same optical depth, the rings yield two thermally stable steady states, a hot \(c\gtrsim 4a\Omega\) state and a cold \(c<a\Omega\) state.
Which is selected depends on the local thermal and/or dynamical history, and thus different ring radii might fall into one or the other. An obvious follow-up question is to ask what happens at the boundaries where two different states adjoin. We run additional simulations in larger domains and find that in general the hot state will engulf the cold state, with the transition front moving at a speed \(\approx 0.5a\Omega\). Slower moving fronts break down because of the imbalance in angular momentum flux across the transition. Stationary 'viscous fronts' are also simulated, which join states of different optical depth and \(c\) but the same angular momentum flux. Note that it need not necessarily be the case that hot states always take over: smooth variations in the ring's background properties may change front propagation, and large amplitude perturbations (meteoroids, density waves, gravity wakes, etc.) will also complicate the picture. Our simulation results are exploratory, and should be taken as a demonstration of what happens when one relaxes the strong modelling assumptions of previous work. They are perhaps not yet ready for direct application to structure formation in Saturn's rings, not least because the parameters in our regolith laws are poorly constrained. Nonetheless, it is irresistible to speculate. We anticipate that a thermal front, connecting a warm and cold state of the same dynamical optical depth, gives rise to photometric variation (which the Cassini cameras may have picked up) but no variation detectable by occultation experiments. This is precisely the situation in the C-ring plateaus (Hedman and Nicholson, 2013), and indeed, there is evidence of size segregation across these structures which may tie in to the greater chance of sticking in the colder phase (Marouf et al., 2013; Colwell et al., 2018). It may also be relevant for the 10 km striations shown by Cassini's cameras in the A and B-rings (cf. Figs 5A and 5B in Porco et al., 2005). On the other hand, the steady viscous fronts our simulations support, which connect states of high and moderate optical depth, bear some resemblance to the disjunct bands in the middle B-ring (Colwell et al., 2009). A great deal more theoretical work and modelling is needed before these associations can be made secure. In particular, applications to ring regions exhibiting self-gravity wakes must remain tentative until we produce better constrained estimates of typical sticking speeds. Other areas of future work could explore the interplay between the hysteresis and self-gravity wakes, on one hand, and viscous overstability, on the other. For example, we might anticipate that wakes appear only in the cold state, changing its viscous properties, and providing energy to jump into the hot state. More generally, wake activity will produce enhanced heating and thus a change in the thermodynamic balances calculated in this paper. Viscous overstability generates nonlinear travelling wavetrains which may also favour the cold phase; these waves will reflect off the boundaries between states, hence complicating the nonlinear saturation of the wave turbulence. Simulations including realistic photometry of thermal fronts might help establish whether they correspond to any observable structure (Salo and Karjalainen, 2003). Finally, the robustness of bistability must be established when particle sticking is permitted, as in recent simulations by Ballouz et al. (2017) and Lu et al. (2018).
Figure 13: Snapshots showing the progress of viscous instability starting from an unstable state of \(\tau=1.4\). The collisional parameters are \(v_{\rm crit}=5,\epsilon_{\rm max}=0.923,b=1\). The panels describe the \(x\) dependent optical depth \(\tau\) (red) and the angular momentum flux \(\tau\nu\) (blue). Snapshots are at \(50,\,750,1000,1050\), and \(2000\) orbits. ## Acknowledgments The authors thank the reviewer Heikki Salo and Juergen Schmidt, who generously provided a set of helpful and thorough comments that markedly improved the paper. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
2305.16415
Performance-Robustness Tradeoffs in Adversarially Robust Control and Estimation
While $\mathcal{H}_\infty$ methods can introduce robustness against worst-case perturbations, their nominal performance under conventional stochastic disturbances is often drastically reduced. Though this fundamental tradeoff between nominal performance and robustness is known to exist, it is not well-characterized in quantitative terms. Toward addressing this issue, we borrow the increasingly ubiquitous notion of adversarial training from machine learning to construct a class of controllers which are optimized for disturbances consisting of mixed stochastic and worst-case components. We find that this problem admits a linear time invariant optimal controller that has a form closely related to suboptimal $\mathcal{H}_\infty$ solutions. We then provide a quantitative performance-robustness tradeoff analysis in two analytically tractable cases: state feedback control, and state estimation. In these special cases, we demonstrate that the severity of the tradeoff depends in an interpretable manner upon system-theoretic properties such as the spectrum of the controllability gramian, the spectrum of the observability gramian, and the stability of the system. This provides practitioners with general guidance for determining how much robustness to incorporate based on a priori system knowledge. We empirically validate our results by comparing the performance of our controller against standard baselines, and plotting tradeoff curves.
Bruce D. Lee, Thomas T. C. K. Zhang, Hamed Hassani, Nikolai Matni
2023-05-25T18:31:16Z
http://arxiv.org/abs/2305.16415v1
# Performance-Robustness Tradeoffs in Adversarially Robust Control and Estimation ###### Abstract While \(\mathcal{H}_{\infty}\) methods can introduce robustness against worst-case perturbations, their nominal performance under conventional stochastic disturbances is often drastically reduced. Though this fundamental tradeoff between nominal performance and robustness is known to exist, it is not well-characterized in quantitative terms. Toward addressing this issue, we borrow the increasingly ubiquitous notion of adversarial training from machine learning to construct a class of controllers which are optimized for disturbances consisting of mixed stochastic and worst-case components. We find that this problem admits a linear time invariant optimal controller that has a form closely related to suboptimal \(\mathcal{H}_{\infty}\) solutions. We then provide a quantitative performance-robustness tradeoff analysis in two analytically tractable cases: state feedback control, and state estimation. In these special cases, we demonstrate that the severity of the tradeoff depends in an interpretable manner upon system-theoretic properties such as the spectrum of the controllability gramian, the spectrum of the observability gramian, and the stability of the system. This provides practitioners with general guidance for determining how much robustness to incorporate based on a priori system knowledge. We empirically validate our results by comparing the performance of our controller against standard baselines, and plotting tradeoff curves. ## 1 Introduction Modern control systems, from mobile robotics to power plants, require controllers that are simultaneously fast, performant, and robust. Many control schemes attempt to achieve these desiderata by combining them into a single objective function and optimizing it, leading to a natural tradeoff. A controller optimized for speed and efficiency may perform poorly in the face of unmodeled phenomena. For instance, Linear-Quadratic Gaussian (LQG) controllers (a special case of \(\mathcal{H}_{2}\) controllers) explicitly prioritize nominal performance by penalizing the expectation of a quadratic function of the state and input. However, such controllers can be arbitrarily fragile to small perturbations of the dynamics (Doyle, 1978). Replacing the LQG objective with one that considers the response of the system to worst-case dynamic uncertainty and external disturbances results in robust control methods, such as \(\mathcal{H}_{\infty}\) and \(\mathcal{L}_{1}\) methods (Zhou et al., 1996; Dahleh and Diaz-Bobillo, 1994). Such controllers are provably robust, however they tend to be overly conservative in practice. Toward balancing the performance of nominal and robust controllers, various approaches have been introduced, most notably mixed \(\mathcal{H}_{2}/\mathcal{H}_{\infty}\) methods. However, the resulting controllers are often complicated to express and compute (Scherer, 1995), and lack a quantitative characterization of the tradeoff between performance and robustness as a function of system properties. Toward addressing these issues, we take inspiration from the notion of adversarial robustness from machine learning (Biggio et al., 2013; Goodfellow et al., 2014; Madry et al., 2017; Carlini and Wagner, 2017) and formulate a controller synthesis problem that balances performance and robustness.
The goal of adversarial robustness in machine learning is to minimize the expected error under the presence of worst-case norm-bounded perturbations to the data, where the perturbations can depend on the underlying stochasticity of the problem, such as the data distribution and additive noise. We consider an analogous adversarially robust output feedback control problem where we aim to minimize an expected quadratic cost subject to linear dynamics driven by process noise composed of two components: a zero-mean stochastic noise term and a norm-bounded adversarial term. We show that the solution to this problem may be expressed in terms of the solutions to a set of coupled Discrete Algebraic Riccati equations (DAREs). For several special cases, this allows for novel quantitative performance-robustness tradeoff bounds to be computed in which system parameters manifest in an interpretable way. ### Contributions Toward analyzing the adversarially robust control problem we propose, we first show that when viewed through the lens of dynamic games (Basar, 1991), adversarially robust LQ control relates to a control problem introduced in Doyle et al. (1989b). We show that the optimal solution to the state feedback version of this problem is given by a central static suboptimal \(\mathcal{H}_{\infty}\) controller, with suboptimality level \(\gamma\) depending on both the stochastic noise statistics and the budget given to the adversary. Furthermore, both the worst-case adversary and the corresponding optimal controller can be computed from the solution of a DARE. We then leverage the state feedback solution to provide sufficient conditions for a solution to the output feedback control problem, along with an algorithm to synthesize this controller. Unlike the state feedback setting, the output feedback adversarially robust controller is distinct from a central suboptimal \(\mathcal{H}_{\infty}\) controller, and involves finding the solution to a set of coupled Riccati equations. We leverage the solution to the adversarially robust control problem to quantify the performance-robustness tradeoffs analytically in two simplified settings: state feedback LQ control, and state estimation. In these settings, we show an interpretable dependence on underlying system-theoretic properties such as controllability, observability, and stability. For the state feedback control setting, we show that the cost gap incurred by the adversarially robust controller in the nominal setting, relative to that achieved by the nominal controller, is upper bounded by \(O(\sigma_{w}^{2}\gamma^{-4}\nu^{-1})\). In this expression, \(\sigma_{w}^{2}\) is the covariance of the additive noise distribution, \(\gamma\) is the suboptimality level of the suboptimal \(\mathcal{H}_{\infty}\) controller, and \(\nu\) is the smallest singular value of the controllability gramian. On the other hand, the cost gap is lower bounded by \(\Omega\big{(}\sigma_{w}^{2}\gamma^{-4}\eta^{2}\big{)}\), where \(\eta\) is the largest singular value of the controllability gramian for the closed-loop system under the nominal LQ controller with disturbances as the input. These results quantitatively show that systems with uniformly good controllability have small performance-robustness tradeoffs, while those that have a highly controllable mode in the nominal closed-loop system (when viewing disturbances as inputs) lead to large performance-robustness tradeoffs.
Similar bounds are shown for the state estimation setting, with controllability gramians replaced by observability gramians. We conclude with numerical experiments. These simulations indicate that the proposed algorithm for synthesizing the controller is effective, and that the controller performs well relative to standard baselines in stabilizing an inverted pendulum. We additionally illustrate that the analytical trends described above are present in numerical experiments. We demonstrate that in the setting of output feedback control, the impact of poor controllability and poor observability compound to increase the severity of the performance-robustness tradeoff. ### Related Work The mixed stochastic/worst-case problem that we consider is not the only way to strike a balance between the performance of stochastic and robust control methods. Most closely related is Doyle et al. (1989b), which considers a similar problem from a deterministic perspective, in which disturbances are composed of both a bounded power component, and a bounded power spectrum component. A set description of disturbances that also interpolates between \(\mathcal{H}_{2}\) and \(\mathcal{H}_{\infty}\) approaches is proposed in Paganini (1993). The class of all stabilizing controllers subject to an \(\mathcal{H}_{\infty}\) norm constraint is characterized in Glover and Doyle (1988), while minimizing an \(\mathcal{H}_{2}\) objective subject to an \(\mathcal{H}_{\infty}\) constraint is addressed in Bernstein and Haddad (1988); Rotea and Khargonekar (1991). While conceptually appealing, these methods often lack a simple closed-form stationary solution. A solution to risk-sensitive LQG control which interpolates between \(\mathcal{H}_{2}\) and \(\mathcal{H}_{\infty}\) solutions is offered in Whittle (1981). Other recent work also attempts to reduce the conservatism of robust control through risk-aware approaches (Tsiamis et al., 2021; Chapman and Lessard, 2022) or regret minimization (Goel and Hassibi, 2020, 2021; Hazan et al., 2020). None of the aforementioned methods characterize the performance-robustness tradeoffs of the resulting controllers. Analogous recent work in the machine learning community has analyzed performance-robustness tradeoffs in adversarially robust learning, including precise characterizations of the generalization errors of standard versus adversarially trained models under various theoretical assumptions (Zhang et al., 2019; Javanmard et al., 2020; Hassani and Javanmard, 2022), and "no free lunch" theorems for obtaining adversarially robust models (Tsipras et al., 2018; Dohmatob, 2019; Yin et al., 2019). The successful characterization of such performance-robustness tradeoffs in machine learning motivates the control objective we consider. However, the theoretical results from this area are largely intended for the supervised learning setting and do not immediately apply to our setting. The existence of performance-robustness tradeoffs in control is shown in Al Makdah et al. (2020), but these tradeoffs are not characterized quantitatively. Our prior work studies performance-robustness tradeoffs in the setting of state feedback control (Lee et al., 2022) and finite horizon state estimation (Zhang et al., 2021). This paper expands upon these works to consider performance-robustness tradeoffs of both control and state estimation in a unified framework by considering them both as special cases of adversarially robust output feedback control.
We end by noting that the extension of adversarial robustness results in machine learning to various control problems has recently received attention (Zhang et al., 2022; Havens et al., 2022; Pattanaik et al., 2017; Tan et al., 2020; Mandlekar et al., 2017; Kuutti et al., 2021). **Notation:** The Euclidean norm of a vector \(x\) is denoted by \(\|x\|\). For a matrix \(A\), the spectral norm is denoted \(\|A\|\) and the Frobenius norm is denoted \(\|A\|_{F}\). The spectral radius of a square matrix \(A\) is denoted \(\rho(A)\). A symmetric, positive semidefinite (psd) matrix \(A=A^{\top}\) is denoted \(A\succeq 0\), and a symmetric positive definite (pd) matrix is denoted \(A\succ 0\). Similarly, \(A\succeq B\) denotes that \(A-B\) is positive semidefinite. A sequence of vectors \(x_{t}\) defined for \(t\geq 0\) will be denoted by \(\mathbf{x}=\{x_{t}\}_{t\geq 0}\). The \(\ell^{2}\) signal-norm of a sequence is denoted by \(\|\mathbf{x}\|_{\ell^{2}}:=(\sum_{t\geq 0}\|x_{t}\|^{2})^{1/2}\). For an autonomous system \(x_{t+1}=Ax_{t}\) and symmetric matrix \(Q\), we denote the solution \(P\) to the discrete Lyapunov equation \[A^{\top}PA-P+Q=0\] by \(\mathtt{dlyap}(A,Q)\). Similarly, for a controlled system \(x_{t+1}=Ax_{t}+Bu_{t}\) and symmetric matrices \(Q,R\) of compatible size, we denote the solution \(P\) to the discrete algebraic Riccati equation \[Q+A^{\top}PA-A^{\top}PB(B^{\top}PB+R)^{-1}B^{\top}PA=P\] by \(\mathtt{DARE}(A,B,Q,R)\). ## 2 Adversarially Robust Linear-Quadratic Control Consider a partially observed discrete-time linear-time-invariant (LTI) system with state and measurement disturbances composed of both stochastic and adversarial components: let \(x_{t}\in\mathbb{R}^{n_{x}}\) be the system state, \(u_{t}\in\mathbb{R}^{n_{u}}\) the input, \(y_{t}\in\mathbb{R}^{n_{y}}\) the measurement, and \(w_{t}\in\mathbb{R}^{n_{w}}\) and \(\delta_{t}\in\mathbb{R}^{n_{\delta}}\) the stochastic and adversarial components of the process disturbance, respectively. The initial condition \(x_{0}\) and the stochastic component of the process noise \(w_{t}\) are assumed to be i.i.d. zero-mean with covariance matrices \(\Sigma_{0}\) and \(I\), respectively, and \(\mathbb{E}\big{[}x_{0}w_{t}^{\top}\big{]}=0\) for all \(t\geq 0\). The performance will be measured by an output signal \(z_{t}\in\mathbb{R}^{n_{z}}\) which depends on the current state and input. The LTI system is then defined by the following equations: \[\begin{split} x_{t+1}&=Ax_{t}+B_{0}w_{t}+B_{1}\delta_{t}+B_{2}u_{t}\\ z_{t}&=C_{1}x_{t}+D_{12}u_{t}\\ y_{t}&=C_{2}x_{t}+D_{20}w_{t}+D_{21}\delta_{t}.\end{split} \tag{1}\] We denote this system compactly by \[G=\left[\begin{array}{c|ccc}A&B_{0}&B_{1}&B_{2}\\ \hline C_{1}&D_{10}&0&D_{12}\\ C_{2}&D_{20}&D_{21}&0\end{array}\right],\] and adopt the notation \(\mathcal{F}_{l}(G,K)\) to denote its feedback interconnection with a controller \(K\) that takes the signal \(y_{t}\) as input, and outputs control action \(u_{t}\). We assume that the adversarial perturbation sequence \(\boldsymbol{\delta}\) is causal, i.e., it can depend only on the states, inputs, and stochastic noise up to the current timestep; in particular, \(\delta_{t}\) must be a measurable function of the randomness \(x_{0},w_{0:t}\). We consider the infinite horizon objective defined in terms of the performance signal \[\limsup_{T\to\infty}\frac{1}{T}\mathbb{E}_{\boldsymbol{w},x_{0}}\Bigg{[}\sum_{t=0}^{T-1}\|z_{t}\|^{2}\Bigg{]}, \tag{2}\] subject to the dynamics (1).
We assume that \(C_{1}=\begin{bmatrix}Q^{1/2}\\ 0\end{bmatrix}\) and \(D_{12}=\begin{bmatrix}0\\ R^{1/2}\end{bmatrix}\) for \(Q\succeq 0\) and \(R\succ 0\), so that (2) reduces to the linear quadratic regulation cost. Therefore, if the adversarial perturbations \(\boldsymbol{\delta}\) are set to zero, then the system (1) with objective (2) forms the nominal Linear Quadratic Gaussian (LQG) problem. If the stochasticity is set to zero (\(\boldsymbol{w},x_{0}=0\)) and \(\boldsymbol{\delta}\) are worst-case perturbations with average power bounded by \(\varepsilon\), then we instead recover the \(\mathcal{H}_{\infty}\) problem. When both stochastic noise and worst-case perturbations are present, we define the resulting control task as the _adversarially robust LQG problem_. We denote the three corresponding objectives by nominal cost (NC), robust cost (RC), and adversarial cost (AC) respectively: \[\text{NC}(K)=\limsup_{T\to\infty}\frac{1}{T}\mathbb{E}_{\boldsymbol{w},x_{0}}\Bigg{[}\sum_{t=0}^{T-1}\|z_{t}\|^{2}\Bigg{]},\ \ \boldsymbol{\delta}=0, \tag{3}\] \[\text{RC}(K)=\limsup_{T\to\infty}\frac{1}{T}\max_{\begin{subarray}{c}\boldsymbol{\delta}\text{ causal}\\ \|\boldsymbol{\delta}\|_{\ell^{2}}^{2}\leq T\varepsilon\end{subarray}}\sum_{t=0}^{T-1}\|z_{t}\|^{2},\ \ \begin{bmatrix}x_{0}\\ \boldsymbol{w}\end{bmatrix}=0, \tag{4}\] \[\text{AC}(K)=\limsup_{T\to\infty}\frac{1}{T}\mathbb{E}_{\boldsymbol{w},x_{0}}\Bigg{[}\max_{\begin{subarray}{c}\boldsymbol{\delta}\text{ causal}\\ \|\boldsymbol{\delta}\|_{\ell^{2}}^{2}\leq T\varepsilon\end{subarray}}\sum_{t=0}^{T-1}\|z_{t}\|^{2}\Bigg{]}. \tag{5}\] The constraint \(\|\boldsymbol{\delta}\|_{\ell^{2}}^{2}\leq T\varepsilon\) is chosen such that the instance-wise adversarial budget satisfies \(\|\delta_{t}\|^{2}\leq\varepsilon\) on average. In order to ensure that there exists a stabilizing controller, and that minimizing either NC or RC provides a stabilizing controller, we make the following standard assumption (Zhou et al., 1996). **Assumption 2.1**: * \((A,B_{2})\) _is stabilizable and_ \((A,C_{2})\) _is detectable._ * \((A,Q^{1/2})\) _is detectable._ * \(R\succ 0\)_,_ \(B_{0}B_{0}^{\top}\succ 0\)_,_ \(D_{20}D_{20}^{\top}\succ 0\)_,_ \(B_{0}D_{20}^{\top}=0\)_._ Under this assumption, there exists a stabilizing controller minimizing NC, of the form \[\hat{x}_{t+1}=(A+B_{2}F_{\star})(I-L_{\star}C_{2})\hat{x}_{t}+(A+B_{2}F_{\star})L_{\star}y_{t}\] \[u_{t}=F_{\star}(I-L_{\star}C_{2})\hat{x}_{t}+F_{\star}L_{\star}y_{t},\] where \[F_{\star}=-(R+B_{2}^{\top}P_{\star}B_{2})^{-1}B_{2}^{\top}P_{\star}A \tag{6}\] \[L_{\star}=\Sigma_{\star}C_{2}^{\top}(C_{2}\Sigma_{\star}C_{2}^{\top}+D_{20}D_{20}^{\top})^{-1}, \tag{7}\] and \(P_{\star}\) and \(\Sigma_{\star}\) solve the following two Discrete Algebraic Riccati Equations (DAREs): \[P_{\star}=\texttt{DARE}(A,B_{2},Q,R) \tag{8}\] \[\Sigma_{\star}=\texttt{DARE}(A^{\top},C_{2}^{\top},B_{0}B_{0}^{\top},D_{20}D_{20}^{\top}). \tag{9}\] The above solution is called the LQG controller. A solution minimizing RC is known as an \(\mathcal{H}_{\infty}\) controller, and may be expressed in terms of the solution to two modified DAREs (Basar, 1991; Hassibi et al., 1999). The remainder of this section is devoted to finding a controller minimizing AC.
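Before proceeding, note that the nominal gains (6)–(9) map directly onto standard Riccati solvers. Below is a minimal sketch for a hypothetical random system satisfying Assumption 2.1; `scipy.linalg.solve_discrete_are` implements exactly the \(\mathtt{DARE}(A,B,Q,R)\) convention defined in the notation section.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Nominal LQG gains F_* and L_* from the two DAREs (8)-(9).
# The system below is a hypothetical random instance, not from the paper.
rng = np.random.default_rng(0)
nx, nu, ny = 4, 2, 2
A = 0.4 * rng.standard_normal((nx, nx))
B2 = rng.standard_normal((nx, nu))
C2 = rng.standard_normal((ny, nx))
Q, R = np.eye(nx), np.eye(nu)
W, V = np.eye(nx), np.eye(ny)        # stand in for B0 B0^T and D20 D20^T

P = solve_discrete_are(A, B2, Q, R)          # Eq. (8)
Sig = solve_discrete_are(A.T, C2.T, W, V)    # Eq. (9)

F = -np.linalg.solve(R + B2.T @ P @ B2, B2.T @ P @ A)    # Eq. (6)
L = Sig @ C2.T @ np.linalg.inv(C2 @ Sig @ C2.T + V)      # Eq. (7)

# both closed-loop spectral radii should be < 1
print("rho(A + B2 F)     =", max(abs(np.linalg.eigvals(A + B2 @ F))))
print("rho(A (I - L C2)) =", max(abs(np.linalg.eigvals(A @ (np.eye(nx) - L @ C2)))))
```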
Inspired by minimax dynamic games (Basar, 1991), we first find a controller which minimizes a soft-constrained version of the adversarial cost (5): \[\limsup_{T\rightarrow\infty}\frac{1}{T}\mathbb{E}\Bigg{[}\underset{\boldsymbol{\delta}\text{ causal}}{\max}\sum_{t=0}^{T-1}\left\|z_{t}\right\|^{2}-\gamma^{2}\left\|\delta_{t}\right\|^{2}\Bigg{]}. \tag{10}\] The following lemma provides necessary and sufficient conditions for the existence of a stabilizing controller which minimizes the above objective in the state feedback setting (\(C_{2}=I\), \(D_{20}=0\), and \(D_{21}=0\)). Similar statements in continuous time may be extracted from a result found in Doyle et al. (1989b). **Lemma 2.1**: _Suppose there exists a solution to the following DARE_ \[P_{\gamma}=\texttt{DARE}\bigg{(}A,\begin{bmatrix}B_{1}&B_{2}\end{bmatrix},Q,\begin{bmatrix}-\gamma^{2}I&\\ &R\end{bmatrix}\bigg{)} \tag{11}\] _satisfying \(P_{\gamma}\succeq 0\) and \(B_{1}^{\top}P_{\gamma}B_{1}\prec\gamma^{2}I\). When the above condition holds,_ 1. _Let_ \[\Psi=\begin{bmatrix}B_{1}&B_{2}\end{bmatrix}^{\top}P_{\gamma}\begin{bmatrix}B_{1}&B_{2}\end{bmatrix}+\begin{bmatrix}-\gamma^{2}I&0\\ 0&R\end{bmatrix}.\] (12) _Then the controller_ \(u_{t}=F_{\gamma}x_{t}\) _with_ \(F_{\gamma}\) _given by_ \[\begin{bmatrix}E_{\gamma}\\ F_{\gamma}\end{bmatrix}:=-\Psi^{-1}\begin{bmatrix}B_{1}&B_{2}\end{bmatrix}^{\top}P_{\gamma}A\] (13) _satisfies_ \(\rho(A+B_{2}F_{\gamma})<1\)_, and minimizes objective (_10_)._ 2. _The optimal adversarial perturbation under the controller_ \(u_{t}=F_{\gamma}x_{t}\) _is given by_ \[\Delta_{\gamma}=(\gamma^{2}I-B_{1}^{\top}P_{\gamma}B_{1})^{-1}B_{1}^{\top}P_{\gamma}B_{0},\] (14) \[\delta_{t}=E_{\gamma}x_{t}+\Delta_{\gamma}w_{t}.\] 3. _The objective value (_10_) achieved under controller (_13_) and adversary (_14_) is_ \(\operatorname{Tr}(M_{\gamma}B_{0}B_{0}^{\top})\)_, where_ \(M_{\gamma}=P_{\gamma}+P_{\gamma}B_{1}(\gamma^{2}I-B_{1}^{\top}P_{\gamma}B_{1})^{-1}B_{1}^{\top}P_{\gamma}\)_._ The solution approach for the above problem follows that in Basar (1991) for minimax games. The finite horizon version of the problem is solved by defining a saddle point cost-to-go, then recursing backwards in time, solving both an unconstrained concave maximization problem and a convex minimization problem at each time step to determine the optimal adversarial perturbation and control input. The causality of \(\boldsymbol{\delta}\) is necessary in the recursion to pull \(\delta_{t}\) out of an expectation over future noise terms. Taking the limit as the horizon tends to infinity provides the steady state controller and adversary in the above lemma statement. See Appendix A.1 for the proof. It should be noted that in contrast to most adversarially robust machine learning problems, adversarially robust LQR provides a closed-form expression for the adversarial perturbation. We now leverage the state feedback solution to solve the soft-constrained output feedback problem. **Lemma 2.2**: _Suppose that_ 1. _There exists a solution to the following DARE_ \[P_{\gamma}=\mathtt{DARE}\bigg{(}A,\begin{bmatrix}B_{1}&B_{2}\end{bmatrix},Q,\begin{bmatrix}-\gamma^{2}I&\\ &R\end{bmatrix}\bigg{)}\] _satisfying_ \(P_{\gamma}\succeq 0\) _and_ \(B_{1}^{\top}P_{\gamma}B_{1}\prec\gamma^{2}I\)_. Let_ \(F_{\gamma}\)_,_ \(E_{\gamma}\) _and_ \(\Delta_{\gamma}\) _be the corresponding gains from Lemma_ 2.1_. Also let_ \(\Psi\) _be as in (_12_)._ 2.
_There exists a nonsingular matrix_ \(\begin{bmatrix}\hat{T}_{11}&0\\ \hat{T}_{21}&\hat{T}_{22}\end{bmatrix}\) _with_ \(\hat{T}_{11}\in\mathbb{R}^{n_{\delta}\times n_{\delta}}\) _and_ \(\hat{T}_{22}\in\mathbb{R}^{n_{u}\times n_{u}}\) _such that_ \[\Psi=\begin{bmatrix}\hat{T}_{11}&0\\ \hat{T}_{21}&\hat{T}_{22}\end{bmatrix}^{\top}\begin{bmatrix}-I&\\ &I\end{bmatrix}\begin{bmatrix}\hat{T}_{11}&0\\ \hat{T}_{21}&\hat{T}_{22}\end{bmatrix}.\] (15) 3. _There exist real matrices_ \(L_{\gamma},N_{\gamma},\Sigma_{\gamma},Y_{\gamma}\)_, where_ \(\Sigma_{\gamma},Y_{\gamma}\succ 0\)_, that satisfy_ \[\tilde{A}=\hat{A}+L_{\gamma}\hat{C}_{2},\qquad\tilde{B}_{0}=\hat{B}_{0}+L_{\gamma}\hat{D}_{20},\qquad\tilde{B}_{1}=\hat{B}_{1}+L_{\gamma}\hat{D}_{21},\] \[\tilde{C}_{1}=\hat{C}_{1}+N_{\gamma}\hat{C}_{2},\qquad\tilde{D}_{11}=\hat{D}_{11}+N_{\gamma}\hat{D}_{21},\qquad\tilde{D}_{10}=N_{\gamma}\hat{D}_{20},\] \[\Phi=I-\tilde{D}_{11}^{\top}\tilde{D}_{11}-\tilde{B}_{1}^{\top}Y_{\gamma}\tilde{B}_{1}\succ 0,\] \[Y_{\gamma}=\tilde{C}_{1}^{\top}\tilde{C}_{1}+\tilde{A}^{\top}Y_{\gamma}\tilde{A}+\Big{(}\tilde{D}_{11}^{\top}\tilde{C}_{1}+\tilde{B}_{1}^{\top}Y_{\gamma}\tilde{A}\Big{)}^{\top}\Phi^{-1}\Big{(}\tilde{D}_{11}^{\top}\tilde{C}_{1}+\tilde{B}_{1}^{\top}Y_{\gamma}\tilde{A}\Big{)},\] \[\Sigma_{\gamma}=\Big{[}(I+\tilde{B}_{1}\Phi^{-1}\tilde{B}_{1}^{\top}Y_{\gamma})\tilde{A}+\tilde{B}_{1}\Phi^{-1}\tilde{D}_{11}^{\top}\tilde{C}_{1}\Big{]}\Sigma_{\gamma}\Big{[}(I+\tilde{B}_{1}\Phi^{-1}\tilde{B}_{1}^{\top}Y_{\gamma})\tilde{A}+\tilde{B}_{1}\Phi^{-1}\tilde{D}_{11}^{\top}\tilde{C}_{1}\Big{]}^{\top}+\Big{[}(I+\tilde{B}_{1}\Phi^{-1}\tilde{B}_{1}^{\top}Y_{\gamma})\tilde{B}_{0}+\tilde{B}_{1}\Phi^{-1}\tilde{D}_{11}^{\top}\tilde{D}_{10}\Big{]}\Big{[}(I+\tilde{B}_{1}\Phi^{-1}\tilde{B}_{1}^{\top}Y_{\gamma})\tilde{B}_{0}+\tilde{B}_{1}\Phi^{-1}\tilde{D}_{11}^{\top}\tilde{D}_{10}\Big{]}^{\top},\] _together with a final condition that \(L_{\gamma}\) and \(N_{\gamma}\) minimize an associated function of the quantities above; we refer to this full set of conditions as (16). The hatted matrices are those of the auxiliary system \(\hat{G}\) of (17) (see Algorithm 1)._ The solution approach reduces the output feedback control problem to that of estimating the optimal state feedback control input from noisy observations of the state. This can in turn be solved via another appeal to the dynamic programming argument from Basar (1991). The main challenge in doing so is that the optimal adversary causes a coupling between the future cost-to-go, and the estimation error-to-arrive. We do not know of existing numerical schemes which provably converge to the optimal \(L_{\gamma},N_{\gamma},\Sigma_{\gamma}\) and \(Y_{\gamma}\). However, an alternating solution which works well in practice is proposed in Algorithm 1. The proof of Lemma 2.2 can be found in Appendix A.2.
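The game DARE condition of Lemma 2.1 (and condition 1 above) is also easy to check numerically: iterating the Riccati recursion underlying (11) either converges to a solution with \(B_{1}^{\top}P_{\gamma}B_{1}\prec\gamma^{2}I\) or fails. Below is a minimal sketch on a hypothetical system, using value iteration from \(P=Q\); the matrices are illustrative, and a production implementation would verify the conditions of Lemma 2.1 at every iterate.

```python
import numpy as np

# Value iteration for the soft-constrained game DARE (11):
#   P = Q + A^T P A - A^T P [B1 B2] Psi^{-1} [B1 B2]^T P A,
#   Psi = [B1 B2]^T P [B1 B2] + blkdiag(-gamma^2 I, R).
# Hypothetical system matrices; gamma chosen comfortably large.
rng = np.random.default_rng(0)
nx, nu, gamma = 3, 1, 5.0
A = 0.3 * rng.standard_normal((nx, nx))
B1, B2 = np.eye(nx), rng.standard_normal((nx, nu))
Q, R = np.eye(nx), np.eye(nu)

Bb = np.hstack([B1, B2])
Rg = np.block([[-gamma**2 * np.eye(nx), np.zeros((nx, nu))],
               [np.zeros((nu, nx)), R]])

P = Q.copy()
for _ in range(5000):                        # backward Riccati recursion
    Psi = Bb.T @ P @ Bb + Rg
    P_new = Q + A.T @ P @ A - A.T @ P @ Bb @ np.linalg.solve(Psi, Bb.T @ P @ A)
    done = np.max(np.abs(P_new - P)) < 1e-12
    P = P_new
    if done:
        break

assert np.max(np.linalg.eigvalsh(B1.T @ P @ B1)) < gamma**2   # Lemma 2.1 condition
EF = -np.linalg.solve(Bb.T @ P @ Bb + Rg, Bb.T @ P @ A)       # Eq. (13)
E_gain, F_gain = EF[:nx], EF[nx:]            # adversary gain / control gain
print("rho(A + B2 F_gamma) =", max(abs(np.linalg.eigvals(A + B2 @ F_gain))))
```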
``` 1:Input: System \(\hat{G}\) from (17), soft adversarial penalty \(\gamma\), number of iterations \(k\). 2:Initialize \(L_{\gamma}\) and \(N_{\gamma}\) as the Kalman Filter gains for \(\hat{G}\) 3:for \(k\) times do 4: Given \(L_{\gamma}\) and \(N_{\gamma}\), compute \(Y_{\gamma}\) according to (16), neglecting the final two conditions that \(L_{\gamma},N_{\gamma}\) minimize the provided function, and that \(\Sigma_{\gamma}\) solves the provided Lyapunov equation. 5: Given \(L_{\gamma}\), \(N_{\gamma}\), and \(Y_{\gamma}\), compute \(\Sigma_{\gamma}\) according to (16), neglecting the final condition that \(L_{\gamma},N_{\gamma}\) minimize the provided function. 6: Given \(Y_{\gamma}\) and \(\Sigma_{\gamma}\), compute the gains \(L_{\gamma}\) and \(N_{\gamma}\) according to the final condition in (16), neglecting the Riccati equation for \(Y_{\gamma}\) and the Lyapunov equation for \(\Sigma_{\gamma}\). 7:end for 8:Output: Gains \(L_{\gamma}\), \(N_{\gamma}\) ``` **Algorithm 1** Computing Soft-Constrained Adversarially Robust Controller: SoftAdvController\((\hat{G},\gamma,k)\) We now return our attention to objective (5). It can be shown via strong duality that the hard-constrained problem may be solved by sequentially solving the soft-constrained problem using Lemma 2.2. Note that in contrast to the solution approach to minimize RC, dualizing the constraint in AC results in an optimal dual variable \(\gamma(\varepsilon)\) that is a random variable. Therefore, it is nontrivial to exchange the order of the minimization over the dual variable with the expectation (see Appendix A.3 for details). We propose Algorithm 2 to minimize the adversarial cost (5). ``` 1:Input: System \(G\), adversary budget \(\varepsilon>0\), binary search bounds \(\gamma_{LB}<\gamma_{UB}\), tolerance tol. 2:// Do binary search on \(\gamma\in[\gamma_{LB},\gamma_{UB}]\) to find the optimal adversary with average power \(\varepsilon>0\) 3:while \(\gamma_{UB}-\gamma_{LB}\geq\texttt{tol}\) do 4:\(\gamma=(\gamma_{UB}+\gamma_{LB})/2\) 5:if Conditions \(1-3\) of Lemma 2.2 are satisfied at level \(\gamma\) then 6: Compute the optimal controller \(K\) for system \(G\) at level \(\gamma\) according to Lemma 2.2. 7:if AdvPower\((\mathcal{F}_{l}(G,K),\gamma)<\varepsilon\) (computed via Algorithm 3) then 8:\(\gamma_{UB}=\gamma\) 9:else 10:\(\gamma_{LB}=\gamma\) 11:end if 12:else 13:\(\gamma_{LB}=\gamma\) 14:end if 15:end while 16:Output: Adversarially robust controller \(K\), closed-loop system \(G_{cl}=\mathcal{F}_{l}(G,K)\) ``` **Algorithm 2** Computing Adversarially Robust Controller: AdvControl\((G,\varepsilon,\gamma_{LB},\gamma_{UB},\texttt{tol})\) **Theorem 2.1**: _Let \(\gamma_{LB}\) be a small positive number such that the conditions of Lemma 2.2 are satisfied for all \(\gamma\geq\gamma_{LB}\), and \(K_{\gamma_{LB}}\) be the corresponding optimal controller. For any \(\varepsilon>0\) such that \(\texttt{AdvPower}(\mathcal{F}_{l}(G,K_{\gamma_{LB}}),\gamma_{LB})>\varepsilon\), let \(\gamma_{UB}\) be sufficiently large such that the controller \(K_{\gamma_{UB}}\) at level \(\gamma_{UB}\) satisfies \(\texttt{AdvPower}(\mathcal{F}_{l}(G,K_{\gamma_{UB}}),\gamma_{UB})<\varepsilon\). Then the output of Algorithm 2, \([K,G_{cl}]=\texttt{AdvControl}(G,\varepsilon,\gamma_{LB},\gamma_{UB},\texttt{tol})\), satisfies (up to the numerical tolerance)_ 1. _The controller_ \(K\) _minimizes_ \(\mathrm{AC}(K)\)_._ 2.
_The minimum value for the adversarial cost (5) is given by_ \(\mathrm{Tr}(J_{\gamma})+\gamma^{2}\varepsilon\)_, where_ \[\begin{split}J_{\gamma}&=D_{0,cl}^{\top}D_{0,cl}+B_{0,cl}^{\top}P_{\gamma}B_{0,cl}+(D_{1,cl}^{\top}D_{0,cl}+B_{1,cl}^{\top}P_{\gamma}B_{0,cl})^{\top}\Phi^{-1}(D_{1,cl}^{\top}D_{0,cl}+B_{1,cl}^{\top}P_{\gamma}B_{0,cl}),\\ P_{\gamma}&=\texttt{DARE}(A_{cl},B_{1,cl},C_{cl}^{\top}C_{cl},-\gamma^{2}I,S=C_{cl}^{\top}D_{1,cl}),\\ \Phi&=\gamma^{2}I-D_{1,cl}^{\top}D_{1,cl}-B_{1,cl}^{\top}P_{\gamma}B_{1,cl},\\ G_{cl}&=\left[\begin{array}{c|cc}A_{cl}&B_{0,cl}&B_{1,cl}\\ \hline C_{cl}&D_{0,cl}&D_{1,cl}\end{array}\right].\end{split}\tag{18}\] _Here,_ \(\texttt{DARE}(A,B,Q,R,S)\) _is overloaded to denote the solution_ \(P\) _to the generalized_ DARE_,_ \[P=Q+A^{\top}PA-(A^{\top}PB+S)(B^{\top}PB+R)^{-1}(B^{\top}PA+S^{\top}).\]

We observe from Theorem 2.1 that while the LQG controller is independent of the noise statistics, the adversarially robust controller output by Algorithm 2 depends on the noise statistics through the optimal choice of \(\gamma\). In the subsequent sections, we examine the tradeoff in the objective values from minimizing either the nominal objective or the adversarially robust objective. This motivates the following two results, which allow us to compute the nominal and adversarial costs for an arbitrary LTI controller \(K\).

**Proposition 2.1**: _For an LTI controller \(K\), let \(A_{cl}\), \(B_{cl}\), \(C_{cl}\) and \(D_{cl}\) be a state space realization for \(\mathcal{F}_{l}(G,K)\). If \(\rho(A_{cl})<1\), then the nominal cost may be expressed as_ \[\mathrm{NC}(K)=\limsup_{T}\frac{1}{T}\mathbb{E}_{w}\!\left[\sum_{t=0}^{T-1}\|z_{t}\|^{2}\right]=\mathrm{Tr}\big(C_{cl}\Sigma_{cl}C_{cl}^{\top}+D_{cl}D_{cl}^{\top}\big),\] _where \(\Sigma_{cl}\) solves \(\Sigma_{cl}=A_{cl}\Sigma_{cl}A_{cl}^{\top}+B_{cl}B_{cl}^{\top}\)._

```
1: Input: Closed-loop system \(G_{cl}\), adversary penalty \(\gamma\).
2: Determine a state space realization for the closed-loop system \(\left[\begin{array}{c|cc}A_{cl}&B_{0,cl}&B_{1,cl}\\ \hline C_{cl}&D_{0,cl}&D_{1,cl}\end{array}\right]=G_{cl}\).
3: Set \(P_{\gamma}=\texttt{DARE}(A_{cl},B_{1,cl},C_{cl}^{\top}C_{cl},-\gamma^{2}I,S=C_{cl}^{\top}D_{1,cl})\).
4: Let \(\Phi=\gamma^{2}I-B_{1,cl}^{\top}P_{\gamma}B_{1,cl}\).
5: Let \(\Gamma_{x}=D_{1,cl}^{\top}C_{cl}+B_{1,cl}^{\top}P_{\gamma}A_{cl}\), \(\Gamma_{w}=D_{1,cl}^{\top}D_{0,cl}+B_{1,cl}^{\top}P_{\gamma}B_{0,cl}\).
6: Assign \(\Sigma_{\gamma}=\texttt{dlyap}(A_{cl}+B_{1,cl}\Phi^{-1}\Gamma_{x},(B_{0,cl}+B_{1,cl}\Phi^{-1}\Gamma_{w})(B_{0,cl}+B_{1,cl}\Phi^{-1}\Gamma_{w})^{\top})\).
7: Output: Adversarial power \(\mathrm{Tr}(\Gamma_{w}^{\top}\Phi^{-2}\Gamma_{w})+\mathrm{Tr}(\Gamma_{x}^{\top}\Phi^{-2}\Gamma_{x}\Sigma_{\gamma})\)
```
**Algorithm 3** Computing Adversarial Power: \(\texttt{AdvPower}(G_{cl},\gamma)\)

**Proposition 2.2**: _For an LTI controller \(K\), the same results used to solve for the optimal controller minimizing \(\mathrm{AC}(\cdot)\) may be adapted to evaluate \(\mathrm{AC}(K)\). In particular, for a closed-loop system \(G_{cl}=\mathcal{F}_{l}(G,K)\) and under any value of \(\gamma\) such that the Riccati equation for \(P_{\gamma}\) in (18) yields a positive semidefinite solution, the soft-penalized objective (10) evaluates to \(\mathrm{Tr}(J_{\gamma})\), where \(J_{\gamma}\) is given by (18).
Similarly, for any \(\varepsilon>0\), the adversarial cost satisfies \(\mathrm{AC}(K)=\mathrm{Tr}(J_{\gamma})+\gamma^{2}\varepsilon\), where \(\gamma\) is the optimal value obtained from Algorithm 2 applied to \(G_{cl}\) with appropriately chosen \(\gamma_{LB}\), \(\gamma_{UB}\) and tol, and the matrix \(J_{\gamma}\) results from (18)._

## 3 Performance-Robustness Tradeoff Bounds: State Feedback

In this section, we summarize the results of our prior work (Lee et al., 2022) to study the tradeoffs that arise in adversarial control by investigating the interplay between the two objectives (3) and (10)1 in the state feedback setting. In particular, we assume that \(C_{2}=I\), \(D_{20}=0\) and \(D_{21}=0\). For ease of exposition, we additionally assume that \(B_{0}=\Sigma_{w}^{1/2}=\sigma_{w}I\) and \(B_{1}=I\). As such, we drop the subscript on \(B_{2}\) such that \(B_{2}\equiv B\) for the remainder of the section.

Footnote 1: We note that the \(\gamma\)-adversarially robust controller is not necessarily an \(\varepsilon\)-adversarially robust controller. However, we conjecture that the dependencies of these tradeoffs on the underlying system-theoretic quantities are similar. This conjecture is supported by the numerical experiments.

We consider the gap between the nominal and \(\gamma\)-adversarially robust controllers when applied in the nominal setting, i.e., we seek to bound the gap \(\operatorname{NC}(F_{\gamma})-\operatorname{NC}(F_{\star})\), where \(F_{\gamma}\) is the \(\gamma\)-adversarially robust controller given by Lemma 2.1, and \(F_{\star}\) is the LQR controller given by equation (6). Let \(P_{\star}\) be the nominal DARE solution given by the solution to equation (8), let \(P_{\gamma}\) be the solution to equation (11) and let \[M_{\gamma}=P_{\gamma}+P_{\gamma}(\gamma^{2}I-P_{\gamma})^{-1}P_{\gamma}. \tag{19}\] Given an arbitrary stabilizing linear state feedback controller \(F\), Lemma 12 of Fazel et al. (2018) allows us to characterize the gap in the cost between \(F\) and the optimal LQR controller \(F_{\star}\) as \[\operatorname{NC}(F)-\operatorname{NC}(F_{\star})=\operatorname{Tr}(\Sigma(F)(F-F_{\star})^{\top}(R+B^{\top}P_{\star}B)(F-F_{\star})), \tag{20}\] where \(\Sigma(F):=\texttt{dlyap}(A+BF,\Sigma_{w})\) is the steady state covariance of the closed-loop system under controller \(F\). The following bounds on the gap \(\operatorname{NC}(F_{\gamma})-\operatorname{NC}(F_{\star})\) then follow immediately: \[\operatorname{NC}(F_{\gamma})-\operatorname{NC}(F_{\star})\geq\sigma_{w}^{2}\sigma_{\min}(R+B^{\top}P_{\star}B)\|F_{\gamma}-F_{\star}\|_{F}^{2} \tag{21}\] \[\operatorname{NC}(F_{\gamma})-\operatorname{NC}(F_{\star})\leq\|\Sigma(F_{\gamma})\|\|R+B^{\top}P_{\star}B\|\|F_{\gamma}-F_{\star}\|_{F}^{2}. \tag{22}\] We have therefore reduced the task of upper and lower bounding the cost gap between the \(\gamma\)-adversarially robust controller and the nominal LQR controller in the nominal setting to directly bounding the gap between the two controller gains. Recalling that \(\|F_{\gamma}-F_{\star}\|_{F}^{2}\leq\min\{n_{u},n_{x}\}\,\|F_{\gamma}-F_{\star}\|^{2}\), we use the following lemma to bound the difference \(\|F_{\gamma}-F_{\star}\|\) in terms of the difference between the solutions to the corresponding adversarial and nominal DAREs.

**Lemma 3.1**: _(Adapted from Lemma 2 of Mania et al.
(2019)) Suppose that \(f_{1}(u;x)=\frac{1}{2}u^{\top}Ru+\frac{1}{2}(Ax+Bu)^{\top}M(Ax+Bu)\) and \(f_{2}(u;x)=\frac{1}{2}u^{\top}Ru+\frac{1}{2}(Ax+Bu)^{\top}P(Ax+Bu)\) with \(M\succeq P\). Furthermore, for \(i\in[2]\) and any \(x\), let \(u_{i}=F_{i}x=\operatorname{argmin}_{u}f_{i}(u;x)\). Then_ \[\frac{\left\|B^{\top}(M-P)(A+BF_{2})\right\|}{\left\|R+B^{\top}MB\right\|}\leq\|F_{1}-F_{2}\|\leq\frac{\left\|B^{\top}(M-P)(A+BF_{2})\right\|}{\sigma_{\min}(R+B^{\top}PB)}.\]

We also define \(\gamma_{\infty}\) as the minimum \(\mathcal{H}_{\infty}\) norm for the closed-loop system, i.e., the smallest value of \(\gamma\) for which the conditions of Lemma 2.1 hold. Similarly, we define \(\tilde{\gamma}_{\infty}\) as the \(\mathcal{H}_{\infty}\) norm of the closed-loop system under the nominal LQR controller. Additionally, we define the \(\ell\)-step controllability gramian as \(W_{\ell}(A,B):=\sum_{t=0}^{\ell}A^{t}BB^{\top}(A^{t})^{\top}\). If \(\rho(A)<1\) we define the controllability gramian as \(W_{\infty}(A,B):=\lim_{\ell\to\infty}W_{\ell}(A,B)\).

### Upper Bound

Applying Lemma 3.1 with \(P=P_{\star}\), the nominal DARE solution (8), and \(M=M_{\gamma}\), the modified robust DARE solution in Lemma 2.1, reduces our goal to bounding the spectrum of \(M_{\gamma}-P_{\star}\). From the definition of \(M_{\gamma}\) (19), we can write \(\|P_{\star}-M_{\gamma}\|\leq\|P_{\star}-P_{\gamma}\|+\frac{\|P_{\star}\|^{2}}{\gamma^{2}-\|P_{\gamma}\|}.\) For \(\gamma>\gamma_{\infty}\) we have \(P_{\gamma}\prec P_{\gamma_{\infty}}\prec\gamma_{\infty}^{2}I\) (Lemma B.3), and thus \(\|P_{\star}-M_{\gamma}\|\leq\|P_{\star}-P_{\gamma}\|+\frac{\gamma_{\infty}^{4}}{\gamma^{2}-\gamma_{\infty}^{2}}\), reducing our task to bounding \(\|P_{\star}-P_{\gamma}\|\), the gap between solutions to the \(\gamma\)-adversarial and nominal DAREs. Toward bounding the norm difference of solutions to DAREs, we show that the closed-loop dynamics under the adversary \(\delta_{t}\) can be expressed as perturbations of the nominal system matrices. In particular, recall that for a noiseless adversarial LQR instance at level \(\gamma>0\), the adversary can be represented as \(\delta_{t}=\big(\gamma^{2}I-P_{\gamma}\big)^{-1}P_{\gamma}(Ax_{t}+Bu_{t})\), such that we may write \(x_{t+1}=Ax_{t}+Bu_{t}+\delta_{t}=\tilde{A}x_{t}+\tilde{B}u_{t}\), for \(\tilde{A}:=\big(I+(\gamma^{2}I-P_{\gamma})^{-1}P_{\gamma}\big)A\) and \(\tilde{B}:=\big(I+(\gamma^{2}I-P_{\gamma})^{-1}P_{\gamma}\big)B\). This allows us to bound the gaps \(\|\tilde{A}-A\|,\ \|\tilde{B}-B\|\) in terms of \(\gamma\). This is formalized in Lemma B.4 in Appendix B.2. By bounding the gap between system matrices in the adversarial setting and in the nominal setting, we derive bounds on the gap \(\left\|P_{\gamma}-P_{\star}\right\|\) between the adversarial and nominal DARE solutions, which ultimately leads to the following upper bound on \(\text{NC}(F_{\gamma})-\text{NC}(F_{\star})\).

**Theorem 3.1**: _Suppose \((A,B)\) is controllable and \((A,Q^{1/2})\) is detectable. Define the condition number \(\kappa(Q,R):=\frac{\max\{\sigma_{\max}(Q),\sigma_{\max}(R)\}}{\min\{\sigma_{\min}(Q),\sigma_{\min}(R)\}}\),_ \[\tau(A,\rho):=\sup\{\left\|A^{k}\right\|\rho^{-k}:k\geq 0\}\text{, and }\beta:=\max\Big\{1,\frac{\gamma_{\infty}^{2}\max\{\left\|A\right\|,\left\|B\right\|\}}{\gamma^{2}-\gamma_{\infty}^{2}}\tau(A,\rho)+\rho\Big\}\text{, where }\rho>\rho(A)\text{.}\] Furthermore, let \(\ell\) be any natural number \(1\leq\ell\leq n_{x}\).
For \(\gamma>0\) satisfying \[\gamma^{2}\geq\gamma_{\infty}^{2}+\frac{3}{2}\ell^{3/2}\beta^{\ell-1}\sigma_{\min}(W_{\ell}(A,B))^{-1/2}\tau(A,\rho)^{2}(\left\|B\right\|+1)\max\{\left\|A\right\|,\left\|B\right\|\}\gamma_{\infty}^{2},\] the following upper bound holds: \[\begin{split}\text{NC}(F_{\gamma})-\text{NC}(F_{\star})&\leq O(1)\sigma_{w}^{2}\Big(\frac{\gamma_{\infty}^{2}}{\gamma^{2}-\gamma_{\infty}^{2}}\Big)^{2}n_{u}\ell^{5}\beta^{4(\ell-1)}\Big(1+\sigma_{\min}(W_{\ell}(A,B))^{-1/2}\Big)^{2}\\ &\quad\times\left\|A+BF_{\star}\right\|^{2}\left\|W_{\infty}(A+BF_{\gamma},I)\right\|\tau(A,\rho)^{6}\\ &\quad\times\frac{\left\|R+B^{\top}P_{\star}B\right\|}{\sigma_{\min}(R+B^{\top}P_{\star}B)^{2}}\kappa(Q,R)^{2}\left\|B\right\|^{2}(\left\|B\right\|+1)^{4}\\ &\quad\times\Big(\max\{\left\|A\right\|,\left\|B\right\|\}^{2}\left\|P_{\star}\right\|^{2}+\gamma_{\infty}^{2}\Big).\end{split}\]

As \(\gamma\rightarrow\infty\), our upper bound decays to \(0\) as expected, since the adversarial controller converges to the nominal controller in the limit. However, the steepness of this cost gap is affected by system properties such as the minimum singular value of the \(\ell\)-step controllability gramian. Specifically, poor controllability causes the upper bound to increase, as captured by the minimum singular value of the controllability gramian \(\sigma_{\min}(W_{\ell}(A,B))\) in the bound above. We note that in contrast to the perturbation gap requirements on \(\left\|\tilde{A}-A\right\|\), \(\left\|\tilde{B}-B\right\|\) in Mania et al. (2019), our condition on the perturbation gap via lower bounds on \(\gamma\) is much less conservative. We only require a lower bound on \(\gamma\) to guarantee that the controllability of the adversarially perturbed system \((\tilde{A},\tilde{B})\) is on the same order as that of the nominal system \((A,B)\).

### Lower Bound

Applying Lemma 3.1 with \(M=M_{\gamma}\) and \(P=P_{\star}\), we conclude that \[\left\|F_{\gamma}-F_{\star}\right\|\geq\frac{\left\|B^{\top}(M_{\gamma}-P_{\star})(A+BF_{\star})\right\|}{\left\|R+B^{\top}M_{\gamma}B\right\|}\geq\frac{\left\|B^{\top}(M_{\gamma}-P_{\star})\right\|\sigma_{\min}(A+BF_{\star})}{\left\|R+B^{\top}M_{\gamma}B\right\|}. \tag{23}\] Next, we add and subtract a particular DARE solution to the \(M_{\gamma}-P_{\star}\) term in the above bound. Specifically, for \(\gamma\geq\tilde{\gamma}_{\infty}\), we let \(\tilde{P}_{\gamma}=\texttt{DARE}(A+BF_{\star},I,Q,-\gamma^{2}I)\), and note that \(x^{\top}\tilde{P}_{\gamma}x\) represents the cost of applying controller \(F_{\star}\) in the adversarial setting at level \(\gamma\) starting from state \(x\) with \(w_{t}=0\) for all \(t\geq 0\). Adding and subtracting \(\tilde{P}_{\gamma}\) in the lower bound in (23) yields \[\begin{split}\|F_{\gamma}-F_{\star}\|&\geq\frac{\left\|B^{\top}(M_{\gamma}-\tilde{P}_{\gamma}+\tilde{P}_{\gamma}-P_{\star})\right\|\sigma_{\min}(A+BF_{\star})}{\left\|R+B^{\top}M_{\gamma}B\right\|}\\ &\geq\frac{\sigma_{\min}(A+BF_{\star})}{\left\|R+B^{\top}M_{\gamma}B\right\|}\bigg(\left\|B^{\top}(P_{\gamma}(\gamma^{2}I-P_{\gamma})^{-1}P_{\gamma}+\tilde{P}_{\gamma}-P_{\star})\right\|-\left\|B^{\top}(\tilde{P}_{\gamma}-P_{\gamma})\right\|\bigg).\end{split} \tag{24}\] To obtain a lower bound on the above expression, we can upper and lower bound the spectra of \(\tilde{P}_{\gamma}-P_{\gamma}\) and \(\tilde{P}_{\gamma}-P_{\star}\), respectively.
To upper bound the spectrum of \(\tilde{P}_{\gamma}-P_{\gamma}\), we apply a result similar to equation (20) for the noiseless adversarial setting. We may lower bound the spectrum of \(\tilde{P}_{\gamma}-P_{\star}\) by writing each as the solution to a Lyapunov equation and observing that their difference may also be written as the solution to a Lyapunov equation. This is stated formally and proven in Lemma B.5. Combining the above leads to the following theorem.

**Theorem 3.2**: _Suppose_ \[\begin{split}\gamma^{2}&\geq\sigma_{\min}(P_{\star})\\ &\quad+\frac{1}{2}\sigma_{\min}(P_{\star})^{2}\frac{\left\|B^{\top}W_{\infty}(A+BF_{\star},I)\right\|}{\left\|R+B^{\top}M_{\gamma}B\right\|}\left\|B^{\top}W_{\infty}\Big(\Big(I+(\gamma^{2}I-\tilde{P}_{\gamma})^{-1}\tilde{P}_{\gamma}\Big)(A+BF_{\star}),I\Big)\right\|\sigma_{\min}(A+BF_{\star})^{2}.\end{split}\] _Then the following lower bound holds:_ \[\operatorname{NC}(F_{\gamma})-\operatorname{NC}(F_{\star})\geq\frac{\sigma_{w}^{2}}{2}\bigg(\frac{\sigma_{\min}(P_{\star})^{2}}{\gamma^{2}-\sigma_{\min}(P_{\star})}\bigg)^{2}\frac{\sigma_{\min}(R+B^{\top}P_{\star}B)}{\left\|R+B^{\top}M_{\gamma}B\right\|^{2}}\left\|B^{\top}W_{\infty}(A+BF_{\star},I)\right\|^{2}\sigma_{\min}(A+BF_{\star})^{2}.\]

Keeping the nominal system fixed, both the upper and lower bounds decay at a rate \(\gamma^{-4}\). We note that instead of the \(\ell\)-step controllability gramian that manifests in the upper bound, we have instead the system parameter \(W_{\infty}(A+BF_{\star},I)\), which is the controllability gramian of the closed-loop system under controller \(F_{\star}\) with disturbances as inputs. That is, a large \(\left\|B^{\top}W_{\infty}(A+BF_{\star},I)\right\|\) implies that the nominal closed-loop system is quantifiably more controllable by the disturbance input, hence more susceptible to adversarial disturbances of fixed energy.

## 4 Performance-Robustness Tradeoff Bounds: State Prediction

We now turn to the problem of state prediction, where the goal is to use the history of observations \(y_{-\infty:t}\) to propose a state estimate \(\hat{x}_{t+1}\) such that \(\left\|\hat{x}_{t+1}-x_{t+1}\right\|^{2}\) is small. By denoting \(e_{t}=x_{t}-\hat{x}_{t}\), we represent the state prediction error dynamics as \[e_{t+1}=x_{t+1}-\hat{x}_{t+1}=Ax_{t}+B_{0}w_{t}+B_{1}\delta_{t}-\hat{x}_{t+1}.\] We may perform a change of variables \(\hat{x}_{t+1}=A\hat{x}_{t}-u_{t}\), where \(u_{t}\) is an estimate of \(A\hat{x}_{t}-x_{t+1}\) given the history of measurements. Doing so allows us to express \[\begin{split}e_{t+1}&=Ae_{t}+B_{0}w_{t}+B_{1}\delta_{t}+u_{t}\\ e_{y_{t}}&:=y_{t}-C_{2}\hat{x}_{t}=C_{2}e_{t}+D_{20}w_{t}+D_{21}\delta_{t}.\end{split}\] With this representation, the state prediction problem becomes a control problem in the language of (1) and (2) by letting \(B_{2}=I\), \(C_{1}=I\), and \(D_{12}=0\). In particular, the state \(x_{t}\) in (1) is replaced by the state prediction error, \(e_{t}\). The performance signal \(z_{t}\) also becomes \(e_{t}\), as the goal is to make the state estimation error small. The measurement \(y_{t}\) becomes the innovation \(e_{y_{t}}\). The optimal solution to the nominal state prediction problem (no adversarial perturbations) is given by the Kalman Filter, which takes the form \(u_{t}=-A\Sigma_{\star}C_{2}^{\top}(C_{2}\Sigma_{\star}C_{2}^{\top}+D_{20}D_{20}^{\top})^{-1}e_{y_{t}}=:L_{\star}e_{y_{t}}\)2, where \(\Sigma_{\star}\) is given by (9).
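The nominal predictor gain is a standard computation; the following minimal Python sketch (the helper name is ours) uses scipy's DARE solver, which matches the overloaded form \(\Sigma_{\star}=\texttt{DARE}(A^{\top},C_{2}^{\top},B_{0}B_{0}^{\top},D_{20}D_{20}^{\top})\) appearing in the proof of Lemma 4.1 below:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def kalman_predictor_gain(A, C2, B0, D20):
    """Nominal one-step-ahead predictor: Sigma_star is the steady-state
    prediction error covariance, L_star the corresponding gain."""
    Sigma_star = solve_discrete_are(A.T, C2.T, B0 @ B0.T, D20 @ D20.T)
    S = C2 @ Sigma_star @ C2.T + D20 @ D20.T  # innovation covariance
    L_star = -A @ Sigma_star @ C2.T @ np.linalg.inv(S)
    return L_star, Sigma_star
```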
Meanwhile, the solution to the soft-constrained adversarially robust state prediction problem at level \(\gamma\) is given by \(u_{t}=L_{\gamma}e_{y_{t}}\), as long as there exist real matrices \(L_{\gamma},\Sigma_{\gamma},Y_{\gamma}\) with \(\Sigma_{\gamma}\) and \(Y_{\gamma}\) symmetric positive definite that satisfy \[\begin{split}\tilde{A}&=A+L_{\gamma}C_{2}\\ \tilde{B}_{0}&=B_{0}+L_{\gamma}D_{20}\\ \tilde{B}_{1}&=B_{1}+L_{\gamma}D_{21}\\ \Phi&=\gamma^{2}I-\tilde{B}_{1}^{\top}Y_{\gamma}\tilde{B}_{1}\succ 0\\ Y_{\gamma}&=I+\tilde{A}^{\top}\big(Y_{\gamma}+Y_{\gamma}\tilde{B}_{1}\Phi^{-1}\tilde{B}_{1}^{\top}Y_{\gamma}\big)\tilde{A}\\ \Sigma_{\gamma}&=(I+\tilde{B}_{1}\Phi^{-1}\tilde{B}_{1}^{\top}Y_{\gamma})\tilde{A}\Sigma_{\gamma}\tilde{A}^{\top}(I+\tilde{B}_{1}\Phi^{-1}\tilde{B}_{1}^{\top}Y_{\gamma})^{\top}+(I+\tilde{B}_{1}\Phi^{-1}\tilde{B}_{1}^{\top}Y_{\gamma})\tilde{B}_{0}\tilde{B}_{0}^{\top}(I+\tilde{B}_{1}\Phi^{-1}\tilde{B}_{1}^{\top}Y_{\gamma})^{\top}\\ L_{\gamma}&\in\operatorname*{argmin}_{L}\operatorname{Tr}\Big(\tilde{B}_{0}^{\top}Y_{\gamma}\tilde{B}_{0}+\tilde{B}_{0}^{\top}Y_{\gamma}\tilde{B}_{1}\Phi^{-1}\tilde{B}_{1}^{\top}Y_{\gamma}\tilde{B}_{0}\Big)+\operatorname{Tr}\Big(\Sigma_{\gamma}\Big[\tilde{A}^{\top}Y_{\gamma}\tilde{A}+\tilde{A}^{\top}Y_{\gamma}\tilde{B}_{1}\Phi^{-1}\tilde{B}_{1}^{\top}Y_{\gamma}\tilde{A}\Big]\Big),\end{split} \tag{25}\] where \(\Sigma_{\gamma}\) and \(Y_{\gamma}\) are treated as fixed parameters in the minimization on the final line above.

Footnote 2: Note that \(L_{\star}\) is slightly different from the Kalman Filter defined in (6). This is to account for the fact that the problem of interest in this section is state prediction rather than current state estimation.

As in the previous section, we bound the nominal performance gap between the nominal state estimator \(L_{\star}\) and the adversarially robust state estimator \(L_{\gamma}\). In this setting, the nominal performance metric reduces to the trace of the covariance of the state estimation error. Under an arbitrary linear stabilizing state estimator \(L\), the state estimation error covariance becomes \(\Sigma_{L}=\texttt{dlyap}((A+LC_{2})^{\top},(B_{0}+LD_{20})(B_{0}+LD_{20})^{\top})\). In the nominal setting, the state estimation error covariance induced by the Kalman Filter is \(\Sigma_{L_{\star}}=\Sigma_{\star}\), while the covariance induced by the adversarially robust state estimator \(L_{\gamma}\) is given by \(\Sigma_{L_{\gamma}}\). Then our objective is to upper and lower bound the state estimation error cost gap defined by \[\operatorname{Tr}(\Sigma_{L_{\gamma}})-\operatorname{Tr}(\Sigma_{\star}).\] We may leverage Lemma 12 in Fazel et al. (2018) to show the following reduction.
**Lemma 4.1**: _For any observer gain \(L\) such that \(\rho(A+LC_{2})<1\),_ \[\operatorname{Tr}(\Sigma_{L})-\operatorname{Tr}(\Sigma_{\star})=\operatorname{Tr}(\texttt{dlyap}(A+LC_{2},I)(L-L_{\star})(D_{20}D_{20}^{\top}+C_{2}\Sigma_{\star}C_{2}^{\top})(L-L_{\star})^{\top}).\]

_Proof:_ We have that \[\begin{split}\Sigma_{L}&=\texttt{dlyap}((A+LC_{2})^{\top},B_{0}B_{0}^{\top}+LD_{20}D_{20}^{\top}L^{\top})\\ \Sigma_{\star}&=\texttt{DARE}(A^{\top},C_{2}^{\top},B_{0}B_{0}^{\top},D_{20}D_{20}^{\top}).\end{split}\] By interpreting this as a control problem for the system \(x_{t+1}=A^{\top}x_{t}+C_{2}^{\top}u_{t}\) with policies \(u_{t}=L^{\top}x_{t}\), and \(u_{t}=L_{\star}^{\top}x_{t}\), \(\operatorname{Tr}(\Sigma_{L})-\operatorname{Tr}(\Sigma_{L_{\star}})\) may be interpreted as the expected gap in control costs under initial state distribution \(x_{0}\sim\mathcal{N}(0,I)\). Therefore Lemma 12 in Fazel et al. (2018) applies immediately, yielding the result. \(\blacksquare\)

Using the above result and pulling largest or smallest singular values from the trace, we have the following lower and upper bounds on the quantity of interest: \[\operatorname{Tr}(\Sigma_{L_{\gamma}})-\operatorname{Tr}(\Sigma_{\star})\geq\sigma_{\min}\big(\texttt{dlyap}((A+L_{\gamma}C_{2})^{\top},I)\big)\sigma_{\min}\big(D_{20}D_{20}^{\top}+C_{2}\Sigma_{\star}C_{2}^{\top}\big)\left\|L_{\gamma}-L_{\star}\right\|_{F}^{2},\quad\text{and}\] \[\operatorname{Tr}(\Sigma_{L_{\gamma}})-\operatorname{Tr}(\Sigma_{\star})\leq\left\|\texttt{dlyap}((A+L_{\gamma}C_{2})^{\top},I)\right\|\left\|D_{20}D_{20}^{\top}+C_{2}\Sigma_{\star}C_{2}^{\top}\right\|\left\|L_{\gamma}-L_{\star}\right\|_{F}^{2}.\] We have therefore reduced the problem to upper and lower bounding the Frobenius norm of the gap between the adversarially robust observer gain \(L_{\gamma}\), and the nominal observer gain \(L_{\star}\). The Frobenius norm can in turn be upper and lower bounded as \(\left\|L_{\gamma}-L_{\star}\right\|^{2}\leq\left\|L_{\gamma}-L_{\star}\right\|_{F}^{2}\leq\min\{n_{y},n_{u}\}\left\|L_{\gamma}-L_{\star}\right\|^{2}\). Unlike the adversarially robust state feedback controller gain, the adversarially robust filter gain does not have a closed form expression, and instead requires solving the collection of equations in (25). Therefore, rather than presenting a bound on the gap in general, we focus on two special cases where the equations simplify substantially. The first is when the adversary perturbs only the state, and not the measurement (i.e., \(D_{21}=0\)). The second is when both the state and the measurement are scalar. These two simplifications and the resulting bounds on the nominal performance gap are presented in the subsequent sections.

### Adversarial Perturbations Entering The State

In this setting, we assume that \(D_{21}=0\), i.e., the adversary impacts only the state of the system. This assumption is reasonable in settings where the sensor noise is stochastic, but the process noise consists of a mix of both stochastic and adversarial components to model environment and model uncertainty.
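In either special case, evaluating the nominal cost \(\operatorname{Tr}(\Sigma_{L})\) of a candidate gain — the quantity whose gap the bounds above control — takes a single discrete Lyapunov solve. A minimal sketch (the helper name is ours; scipy's solver convention; the assertion mirrors the hypothesis \(\rho(A+LC_{2})<1\) of Lemma 4.1):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def prediction_error_cost(A, C2, B0, D20, L):
    """Tr(Sigma_L), where Sigma_L solves the error-covariance equation
    Sigma = (A + L C2) Sigma (A + L C2)^T + (B0 + L D20)(B0 + L D20)^T."""
    Acl = A + L @ C2
    Bcl = B0 + L @ D20
    assert np.max(np.abs(np.linalg.eigvals(Acl))) < 1, "need rho(A + L C2) < 1"
    Sigma_L = solve_discrete_lyapunov(Acl, Bcl @ Bcl.T)
    return np.trace(Sigma_L), Sigma_L
```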
With this simplification, the optimization problem that must be solved for \(L_{\gamma}\) reduces to \[L_{\gamma}\in\operatorname{argmin}f(L)\] where \[f(L)=\operatorname{Tr}((B_{0}+LD_{20})^{\top}M_{\gamma}(B_{0}+LD_{20}))+\operatorname{Tr}(\Sigma_{\gamma}(A+LC_{2})^{\top}M_{\gamma}(A+LC_{2})),\] and \[M_{\gamma}=Y_{\gamma}+Y_{\gamma}B_{1}(\gamma^{2}I-B_{1}^{\top}Y_{\gamma}B_{1})^{-1}B_{1}^{\top}Y_{\gamma}.\] As \(M_{\gamma}\) is full rank, the solution reduces to \(L_{\gamma}=-A\Sigma_{\gamma}C_{2}^{\top}(D_{20}D_{20}^{\top}+C_{2}\Sigma_{\gamma}C_{2}^{\top})^{-1}\). In particular, it has a form identical to the Kalman Filter but using the solution \(\Sigma_{\gamma}\) to a different Riccati equation. Applying Lemma 3.1 to the transposes of \(L_{\star}\) and \(L_{\gamma}\), we find that \[\frac{\left\|(A+L_{\star}C_{2})^{\top}(\Sigma_{\gamma}-\Sigma_{\star})C_{2}\right\|}{\left\|D_{20}D_{20}^{\top}+C_{2}\Sigma_{\gamma}C_{2}^{\top}\right\|}\leq\left\|L_{\gamma}-L_{\star}\right\|\leq\frac{\left\|(A+L_{\star}C_{2})^{\top}(\Sigma_{\gamma}-\Sigma_{\star})C_{2}\right\|}{\sigma_{\min}(D_{20}D_{20}^{\top}+C_{2}\Sigma_{\star}C_{2}^{\top})}.\] It therefore remains to upper and lower bound the spectrum of \(\Sigma_{\gamma}-\Sigma_{\star}\).

#### 4.1.1 Upper Bound

To obtain an upper bound on the cost gap, we must simply upper bound \(\left\|\Sigma_{\gamma}-\Sigma_{\star}\right\|\). To do so, observe that if we denote \(\Lambda_{\gamma}=B_{1}(\gamma^{2}I-B_{1}^{\top}Y_{\gamma}B_{1})^{-1}B_{1}^{\top}Y_{\gamma}\), then we may write \[\Sigma_{\gamma}=(I+\Lambda_{\gamma})(A+L_{\gamma}C_{2})\Sigma_{\gamma}(A+L_{\gamma}C_{2})^{\top}(I+\Lambda_{\gamma})^{\top}+(I+\Lambda_{\gamma})(B_{0}+L_{\gamma}D_{20})(B_{0}+L_{\gamma}D_{20})^{\top}(I+\Lambda_{\gamma})^{\top}.\] Defining \(\tilde{\Sigma}_{\gamma}=(I+\Lambda_{\gamma})^{-1}\Sigma_{\gamma}\big((I+\Lambda_{\gamma})^{-1}\big)^{\top}\), we have \[\begin{split}\tilde{\Sigma}_{\gamma}&=(A+L_{\gamma}C_{2})(I+\Lambda_{\gamma})\tilde{\Sigma}_{\gamma}(I+\Lambda_{\gamma})^{\top}(A+L_{\gamma}C_{2})^{\top}+(B_{0}+L_{\gamma}D_{20})(B_{0}+L_{\gamma}D_{20})^{\top}\\ &=\texttt{DARE}((A(I+\Lambda_{\gamma}))^{\top},(C_{2}(I+\Lambda_{\gamma}))^{\top},B_{0}B_{0}^{\top},D_{20}D_{20}^{\top}).\end{split}\] By the triangle inequality, \[\left\|\Sigma_{\gamma}-\Sigma_{\star}\right\|\leq\left\|\Sigma_{\gamma}-\tilde{\Sigma}_{\gamma}\right\|+\left\|\tilde{\Sigma}_{\gamma}-\Sigma_{\star}\right\|.\] As in Section 3.1, we may bound \(\left\|\tilde{\Sigma}_{\gamma}-\Sigma_{\star}\right\|\) when both \(A\Lambda_{\gamma}^{\top}\) and \(C_{2}\Lambda_{\gamma}^{\top}\) are small. To bound \(\left\|\Sigma_{\gamma}-\tilde{\Sigma}_{\gamma}\right\|\), note that \[\Sigma_{\gamma}-\tilde{\Sigma}_{\gamma}=\Lambda_{\gamma}\tilde{\Sigma}_{\gamma}+\tilde{\Sigma}_{\gamma}\Lambda_{\gamma}^{\top}+\Lambda_{\gamma}\tilde{\Sigma}_{\gamma}\Lambda_{\gamma}^{\top}.\] Therefore, \[\left\|\Sigma_{\gamma}-\tilde{\Sigma}_{\gamma}\right\|\leq\left(2\frac{\left\|Y_{\gamma}\right\|\left\|B_{1}\right\|^{2}}{\gamma^{2}-\left\|Y_{\gamma}\right\|\left\|B_{1}\right\|^{2}}+\left(\frac{\left\|Y_{\gamma}\right\|\left\|B_{1}\right\|^{2}}{\gamma^{2}-\left\|Y_{\gamma}\right\|\left\|B_{1}\right\|^{2}}\right)^{2}\right)\left\|\tilde{\Sigma}_{\gamma}\right\|.\] Combining these results leads to the following upper bound on the cost gap.

**Theorem 4.1**: _Define \(\bar{Y}\) such that \(\bar{Y}\geq\left\|Y_{\gamma}\right\|\) for all \(\gamma\) such that the conditions of (25) are satisfied. Suppose that \((A,C_{2})\) is observable.
Define \(\kappa(B_{0}B_{0}^{\top},D_{20}D_{20}^{\top})=\frac{\max\left\{\sigma_{\max}(B_{0}B_{0}^{\top}),\sigma_{\max}(D_{20}D_{20}^{\top})\right\}}{\min\left\{\sigma_{\min}(B_{0}B_{0}^{\top}),\sigma_{\min}(D_{20}D_{20}^{\top})\right\}}\), \(\tau(A,\rho):=\sup\{\left\|A^{k}\right\|\rho^{-k}:k\geq 0\}\), and \(\beta=\max\Big\{1,\frac{\left\|B_{1}\right\|^{2}\bar{Y}}{\gamma^{2}-\left\|B_{1}\right\|^{2}\bar{Y}}\max\{\left\|A\right\|,\left\|C_{2}\right\|\}\tau(A,\rho)+\rho\Big\}\), where \(\rho>\rho(A)\). Then for \(1\leq\ell\leq n_{x}\) and_ \[\gamma^{2}\geq\left\|B_{1}\right\|^{2}\bar{Y}+\frac{3}{2}\ell^{3/2}\beta^{\ell-1}\sigma_{\min}(W_{\ell}(A^{\top},C_{2}^{\top}))^{-1/2}\tau(A,\rho)^{2}(\left\|C_{2}\right\|+1)\max\{\left\|A\right\|,\left\|C_{2}\right\|\}\left\|B_{1}\right\|^{2}\bar{Y},\] _the following bound holds:_ \[\begin{split}\operatorname{Tr}(\Sigma_{L_{\gamma}})-\operatorname{Tr}(\Sigma_{\star})&\leq\left\|\texttt{dlyap}(A+L_{\gamma}C_{2},I)\right\|\frac{\left\|D_{20}D_{20}^{\top}+C_{2}\Sigma_{\star}C_{2}^{\top}\right\|}{\sigma_{\min}(D_{20}D_{20}^{\top}+C_{2}\Sigma_{\star}C_{2}^{\top})^{2}}n_{y}\left\|A+L_{\star}C_{2}\right\|^{2}\left\|C_{2}\right\|^{2}\\ &\quad\cdot O(1)\Big(\left\|\tilde{\Sigma}_{\gamma}\right\|+\max\{\left\|A\right\|,\left\|C_{2}\right\|\}\left\|\Sigma_{\star}\right\|\Big)^{2}\Bigg(\frac{\left\|B_{1}\right\|^{2}\bar{Y}}{\gamma^{2}-\left\|B_{1}\right\|^{2}\bar{Y}}\Bigg)^{2}\ell^{5}\beta^{4(\ell-1)}\Big(1+\sigma_{\min}\big(W_{\ell}(A^{\top},C_{2}^{\top})\big)^{-1/2}\Big)^{2}\tau(A,\rho)^{6}\\ &\quad\cdot(\left\|C_{2}\right\|+1)^{2}\kappa(B_{0}B_{0}^{\top},D_{20}D_{20}^{\top}).\end{split}\]

The proof combines the results of this section with Proposition C.1 in Appendix C.1. The result is roughly a dual result to Theorem 3.1. Instead of growing with the inverse of the minimum singular value of the \(\ell\)-step controllability gramian, the bound grows with the inverse of the minimum singular value of the \(\ell\)-step observability gramian, \(W_{\ell}(A^{\top},C_{2}^{\top})\). Therefore, uniformly high observability causes the upper bound to decrease.

#### 4.1.2 Lower Bound

To obtain our lower bound, we lower bound \(\sigma_{\min}(\Sigma_{\gamma}-\Sigma_{\star})\). To do so, note that \[\Sigma_{\gamma}-\Sigma_{\star}=\Sigma_{\gamma}-\tilde{\Sigma}_{\gamma}+\tilde{\Sigma}_{\gamma}-\Sigma_{L_{\gamma}}+\Sigma_{L_{\gamma}}-\Sigma_{\star},\] where \(\tilde{\Sigma}_{\gamma}\) is as in Section 4.1.1. By the fact that \(\Sigma_{L_{\gamma}}\) is the estimation error covariance under the controller \(L_{\gamma}\) for the nominal setting, we have that \(\Sigma_{L_{\gamma}}-\Sigma_{\star}\succeq 0\), and \(\Sigma_{\gamma}-\Sigma_{\star}\succeq\Sigma_{\gamma}-\tilde{\Sigma}_{\gamma}+\tilde{\Sigma}_{\gamma}-\Sigma_{L_{\gamma}}\). Writing \[\tilde{\Sigma}_{\gamma}=\texttt{dlyap}((I+\Lambda_{\gamma})^{\top}(A+L_{\gamma}C_{2})^{\top},(B_{0}+L_{\gamma}D_{20})(B_{0}+L_{\gamma}D_{20})^{\top}),\] we may take the difference of the Lyapunov equations to find that \[\tilde{\Sigma}_{\gamma}-\Sigma_{L_{\gamma}}=\texttt{dlyap}((A+L_{\gamma}C_{2})^{\top},(A+L_{\gamma}C_{2})(\Sigma_{\gamma}-\tilde{\Sigma}_{\gamma})(A+L_{\gamma}C_{2})^{\top}),\] and therefore \[\Sigma_{\gamma}-\Sigma_{L_{\gamma}}=\texttt{dlyap}((A+L_{\gamma}C_{2})^{\top},\Sigma_{\gamma}-\tilde{\Sigma}_{\gamma}).\] To conclude, we may lower bound \(\sigma_{\min}(\Sigma_{\gamma}-\tilde{\Sigma}_{\gamma})\).
**Lemma 4.2**: _We have_ \[\sigma_{\min}(\Sigma_{\gamma}-\tilde{\Sigma}_{\gamma})\geq\sigma_{\min}(Y_{\gamma})\Bigg(\Bigg(\frac{\gamma^{2}}{\gamma^{2}-\left\|B_{1}^{\top}Y_{\gamma}^{1/2}\right\|}\Bigg)^{2}\frac{1}{\left\|Y_{\gamma}\right\|}\sigma_{\min}(\tilde{\Sigma}_{\gamma})-\left\|\tilde{\Sigma}_{\gamma}\right\|/\sigma_{\min}(Y_{\gamma})\Bigg).\] Combining results, we have the following theorem.

**Theorem 4.2**: _Define \(\bar{\kappa}\) such that \(\bar{\kappa}\geq\left\|Y_{\gamma}\right\|/\sigma_{\min}(Y_{\gamma})\) for all \(\gamma\) such that the conditions of (25) are satisfied. Then_ \[\begin{split}\operatorname{Tr}(\Sigma_{L_{\gamma}})-\operatorname{Tr}(\Sigma_{\star})&\geq\sigma_{\min}(\texttt{dlyap}((A+L_{\gamma}C_{2})^{\top},I))^{3}\frac{\sigma_{\min}(D_{20}D_{20}^{\top}+C_{2}\Sigma_{\star}C_{2}^{\top})}{\left\|D_{20}D_{20}^{\top}+C_{2}\Sigma_{L_{\gamma}}C_{2}^{\top}\right\|^{2}}\sigma_{\min}(A+L_{\star}C_{2})^{2}\left\|C_{2}\right\|^{2}\\ &\quad\times\left(\left(\frac{\gamma^{2}}{\gamma^{2}-\left\|B_{1}Y_{\gamma}^{1/2}\right\|}\right)^{2}\frac{\sigma_{\min}(\tilde{\Sigma}_{\gamma})}{\bar{\kappa}}-\left\|\tilde{\Sigma}_{\gamma}\right\|\right).\end{split}\]

The lower bound above is loose when \(\gamma\) is large. In particular, it becomes negative as \(\gamma\to\infty\). However, for small \(\gamma\), corresponding to a large adversarial power, the bound becomes positive. It scales with the minimum singular value of \(\texttt{dlyap}((A+L_{\gamma}C_{2})^{\top},I)\), which measures the controllability of the closed-loop state prediction error system under the adversarially robust observer by the external disturbances.

### Scalar State and Measurement Case

Here, we let \(A=a\), \(B_{0}=B_{1}=\begin{bmatrix}1&0\end{bmatrix}\), \(B_{2}=1\), \(C_{2}=c\), and \(D_{20}=D_{21}=\begin{bmatrix}0&1\end{bmatrix}\). The values of \(1\) in \(B_{0}\), \(B_{1}\), \(D_{20}\) and \(D_{21}\) are for ease of exposition only; the derivations go through for arbitrary values. In this setting, we may solve for \(L_{\gamma}\) explicitly in terms of \(a\), \(c\), \(Y_{\gamma}\), \(\Sigma_{\gamma}\), and \(\gamma\). In particular we find that \(L_{\gamma}\) is given by one of the solutions to \[0=\gamma^{2}\big(L(1+c^{2}\Sigma_{\gamma})+ac\Sigma_{\gamma}\big)+((a^{2}-c^{2})L+ac(L^{2}-1))\Sigma_{\gamma}Y_{\gamma}. \tag{26}\] Solving for the relevant root of this equation yields \[L_{\gamma}=\frac{-\gamma^{2}-c^{2}\gamma^{2}\Sigma_{\gamma}+(c^{2}-a^{2})\Sigma_{\gamma}Y_{\gamma}+\sqrt{-4ac\Sigma_{\gamma}Y_{\gamma}(ac\gamma^{2}\Sigma_{\gamma}-ac\Sigma_{\gamma}Y_{\gamma})+(\gamma^{2}+c^{2}\gamma^{2}\Sigma_{\gamma}+a^{2}\Sigma_{\gamma}Y_{\gamma}-c^{2}\Sigma_{\gamma}Y_{\gamma})^{2}}}{2ac\Sigma_{\gamma}Y_{\gamma}}.\] If the second term in (26) were zero, then the solution would be \(L_{\gamma}=-\frac{a\Sigma_{\gamma}c}{c^{2}\Sigma_{\gamma}+1}\), which has a form identical to the Kalman Filter, \(L_{\star}=-\frac{a\Sigma_{\star}c}{c^{2}\Sigma_{\star}+1}\). This will approximately be true when \(\gamma\) is large. This is made concrete in the following lemma.

**Lemma 4.3**: _Suppose_ \[\gamma^{2}\geq\max\biggl\{2\Sigma_{\gamma}Y_{\gamma}(a^{2}-c^{2}),32\Sigma_{\gamma}^{2}Y_{\gamma}a^{2}c^{2},\frac{Y_{\gamma}}{2}\biggr\}.
\tag{27}\] _Then_ \[L_{\gamma}=-\frac{a\Sigma_{\gamma}c}{c^{2}\Sigma_{\gamma}+1}+\xi,\] _where_ \[|\xi|\leq 64|a||c|\frac{\Sigma_{\gamma}Y_{\gamma}}{\gamma^{2}}\big(\Sigma_{\gamma}\big|a^{2}-c^{2}\big|+1+(ac\Sigma_{\gamma})^{2}\big).\]

If we now consider the gap \(L_{\gamma}-L_{\star}\), we find that if the condition on \(\gamma\) from Lemma 4.3 is satisfied, then by using Lemma 3.1, \[\frac{|c||a+L_{\star}c||\Sigma_{\gamma}-\Sigma_{\star}|}{1+c^{2}\Sigma_{\gamma}}-|\xi|\leq|L_{\gamma}-L_{\star}|\leq\frac{|c||a+L_{\star}c||\Sigma_{\gamma}-\Sigma_{\star}|}{1+c^{2}\Sigma_{\gamma}}+|\xi|.\] To complete our bounds, we must bound \(|\Sigma_{\gamma}-\Sigma_{\star}|\) above and below.

#### 4.2.1 Upper Bound

To obtain an upper bound, we must upper bound \(|\Sigma_{\gamma}-\Sigma_{\star}|\). The approach to do so is similar to that in Section 4.1.1. In particular, by again appealing to the Riccati perturbation bounds of Mania et al. (2019) (specialized to this setting in Proposition C.2) and combining with previous results, we obtain the following theorem.

**Theorem 4.3**: _Define \(\bar{Y}\) such that \(\bar{Y}\geq Y_{\gamma}\) for all \(\gamma\) such that the conditions of (25) are satisfied and \(\underline{\gamma}\) as the smallest \(\gamma\) such that these conditions are satisfied. Suppose that \(c\neq 0\). Suppose \(\gamma\) is sufficiently large that the condition (27) holds. Then if_ \[\gamma^{2}\geq\bar{Y}\big(1+((|a|+1)/|c|)^{2}\big)+\frac{3}{2}|c|^{-1}(|c|+1)\max\{|a|,|c|\}\bar{Y}\big(1+((|a|+1)/|c|)^{2}\big),\] _the following bound holds:_ \[\Sigma_{L_{\gamma}}-\Sigma_{\star}\leq\frac{1}{1-(a+L_{\gamma}c)^{2}}\bigg(16\frac{|a+L_{\star}c|}{1+c^{2}\Sigma_{\star}}\frac{\bar{Y}\big(1+((|a|+1)/|c|)^{2}\big)}{\gamma^{2}-\bar{Y}\big(1+((|a|+1)/|c|)^{2}\big)}\Big(\tilde{\Sigma}_{\gamma}+\max\{|a|,|c|\}(|c|+1)^{3}\Sigma_{\star}\Big)+|\xi|\bigg)^{2}(1+c^{2}\Sigma_{\star}),\] _where_ \[|\xi|\leq 64|a||c|\frac{\Sigma_{\underline{\gamma}}\bar{Y}}{\gamma^{2}}\Big(\Sigma_{\underline{\gamma}}\big|a^{2}-c^{2}\big|+1+(ac\Sigma_{\underline{\gamma}})^{2}\Big).\]

In the scalar setting, \(|c|\) is a measure of observability. Therefore, the bound is presented directly in terms of \(|a|\) and \(|c|\) rather than the observability gramian of the system. As with Theorem 4.1, the bound decays to zero as \(\gamma\to\infty\). Additionally, when observability becomes poor, as measured by \(|c|\) becoming small, the bound becomes large due to the appearance of \(|c|\) in the denominator of \(\bar{Y}\big(1+((|a|+1)/|c|)^{2}\big)\). In particular, \(\gamma^{2}\) is constrained to be larger than this quantity for the bound to be valid. Thus as \(|c|\) becomes small, \(\bar{Y}\big(1+((|a|+1)/|c|)^{2}\big)\) becomes large, making the term \(\frac{\bar{Y}(1+((|a|+1)/|c|)^{2})}{\gamma^{2}-\bar{Y}(1+((|a|+1)/|c|)^{2})}\) large. The stability of the closed-loop system under the adversarially robust observer also makes an appearance in the bound. When this closed-loop system is near marginally stable, the bound becomes large due to the term \(\frac{1}{1-(a+L_{\gamma}c)^{2}}\).

#### 4.2.2 Lower Bound

We have that \[\begin{split}\Sigma_{\gamma}-\Sigma_{\star}&=\Sigma_{\gamma}-\tilde{\Sigma}_{\gamma}+\tilde{\Sigma}_{\gamma}-\Sigma_{L_{\gamma}}+\Sigma_{L_{\gamma}}-\Sigma_{\star}\\ &\geq\Sigma_{\gamma}-\tilde{\Sigma}_{\gamma}+\tilde{\Sigma}_{\gamma}-\Sigma_{L_{\gamma}},\end{split}\] where \(\tilde{\Sigma}_{\gamma}=\texttt{DARE}(a(1+\Lambda_{\gamma}),c(1+\Lambda_{\gamma}),1,1)\) and \(\Lambda_{\gamma}=\frac{Y_{\gamma}(1+L_{\gamma}^{2})}{\gamma^{2}-Y_{\gamma}(1+L_{\gamma}^{2})}\).
By expressing both \(\tilde{\Sigma}_{\gamma}\) and \(\Sigma_{L_{\gamma}}\) as the solution to a Lyapunov equation, we have that \[\tilde{\Sigma}_{\gamma}-\Sigma_{L_{\gamma}}=\texttt{dlyap}(a+L_{\gamma}c,(a+L_{\gamma}c)^{2}\tilde{\Sigma}_{\gamma}(2\Lambda_{\gamma}+\Lambda_{\gamma}^{2})).\] Similarly, \(\Sigma_{\gamma}-\tilde{\Sigma}_{\gamma}=\tilde{\Sigma}_{\gamma}(2\Lambda_{\gamma}+\Lambda_{\gamma}^{2})\). Therefore, \[\Sigma_{\gamma}-\Sigma_{\star}\geq(2\Lambda_{\gamma}+\Lambda_{\gamma}^{2})\tilde{\Sigma}_{\gamma}\sum_{t=0}^{\infty}(a+L_{\gamma}c)^{2t}\geq 2\frac{Y_{\gamma}(1+L_{\gamma}^{2})}{\gamma^{2}-Y_{\gamma}(1+L_{\gamma}^{2})}\tilde{\Sigma}_{\gamma}\sum_{t=0}^{\infty}(a+L_{\gamma}c)^{2t}.\] We have \(Y_{\gamma}\geq 1\) for all \(\gamma\) and \(\tilde{\Sigma}_{\gamma}\geq\Sigma_{\star}\). Therefore, \[\Sigma_{\gamma}-\Sigma_{\star}\geq 2\frac{(1+L_{\gamma}^{2})}{\gamma^{2}-(1+L_{\gamma}^{2})}\Sigma_{\star}\sum_{t=0}^{\infty}(a+L_{\gamma}c)^{2t}.\] Combining these results, we have the following theorem.

**Theorem 4.4**: _Define \(\bar{Y}\) such that \(\bar{Y}\geq Y_{\gamma}\) for all \(\gamma\) such that the conditions of (25) are satisfied and \(\underline{\gamma}\) as the smallest \(\gamma\) such that these conditions are satisfied. Suppose \(\gamma\) is sufficiently large that the condition (27) holds. Then_ \[\Sigma_{L_{\gamma}}-\Sigma_{\star}\geq\frac{1}{1-(a+L_{\gamma}c)^{2}}(1+c^{2}\Sigma_{\star})\Bigg(\frac{|c||a+L_{\star}c|}{1+c^{2}\Sigma_{\gamma}}2\frac{(1+L_{\gamma}^{2})}{\gamma^{2}-(1+L_{\gamma}^{2})}\Sigma_{\star}\texttt{dlyap}(a+L_{\gamma}c,1)-|\xi|\Bigg)^{2},\] _where_ \[|\xi|\leq 64|a||c|\frac{\Sigma_{\underline{\gamma}}\bar{Y}}{\gamma^{2}}\Big(\Sigma_{\underline{\gamma}}\big|a^{2}-c^{2}\big|+1+(ac\Sigma_{\underline{\gamma}})^{2}\Big).\]

We can again observe the dependence of the above bound on \(|c|\) as a proxy for observability and \(|a|\) as the measure of stability. We see that when \(|c|\) becomes small and \(|a|\) approaches 1, both \(L_{\gamma}\) and \(\Sigma_{\star}\) become large. Indeed, if \(|a|=1-\lambda\) and \(|c|=\lambda\), then \(\Sigma_{\star}\) grows with \(1/\lambda^{2}\) as \(\lambda\) becomes small, and the overall bound becomes large. The bound then grows as observability becomes poor, and the system approaches marginal stability. As with Theorem 4.3, the appearance of \(\frac{1}{1-(a+L_{\gamma}c)^{2}}\) indicates that the bound grows when the closed-loop system under the adversarially robust observer approaches marginal stability.

## 5 Numerical Experiments

We now empirically study the trends suggested by our tradeoff bounds in the previous two sections.

**Plotting the Bounds for Scalar State Estimation.** In Figure 1, we demonstrate the upper bound in Theorem 4.3 and the lower bound in Theorem 4.4 for the scalar system with \(a=0.9\) and varying values of \(c\). We fix the robustness level using the soft penalty \(\gamma=4\). In both bounds, we substitute the true value of \(\xi\), rather than the upper bound. In the upper bound, we also use the value of \(Y_{\gamma}\) that depends on \(\gamma\) rather than the uniform upper bound \(\bar{Y}\). We see that the lower bound closely tracks the true cost error. The upper bound follows the same trends as \(c\) becomes small; however, it is inflated by several orders of magnitude due to the conservative Riccati perturbation bounds applied.
For the upper bound, the lower bound, and the true state prediction error gap, we see that when the observability decreases, as measured by \(|c|\) becoming smaller, the nominal cost gap between the robust filter and the nominal filter grows.

Figure 1: Upper bound, lower bound, and true excess filtering cost between the robust filter designed with \(\gamma=4\) and the nominal filter for a scalar system with \(a=0.9\) and varying values of \(c\). As \(|c|\) decreases, observability becomes poor. This fact is captured in our bounds, and reflects what occurs with the true excess filtering cost.

**Compounding Impact of Poor Controllability and Observability.** We study the dependence of the tradeoff severity on system controllability and observability in Figure 2. In this example, we consider the two-dimensional integrator system defined by \[x_{t+1}=\begin{bmatrix}1&\rho\\ 0&1\end{bmatrix}x_{t}+\delta_{t}^{x}+w_{t}^{x}+\begin{bmatrix}0\\ 1\end{bmatrix}u_{t}, \tag{28}\] \[y_{t}=Cx_{t}+\delta_{t}^{y}+w_{t}^{y}. \tag{29}\] We consider three settings with this system: state feedback control, state prediction, and output feedback control. In all three settings, \(\delta_{t}^{x}\) is an energy budget constrained adversarial input to the state, and \(w_{t}^{x}\) is independent zero mean Gaussian noise with identity covariance. For state feedback control, we have \(C=I\), \(\delta_{t}^{y}=0\) and \(w_{t}^{y}=0\). For both state prediction and output feedback control, we take \(C=\begin{bmatrix}1&0\end{bmatrix}\), with \(\delta_{t}^{y}\) as an adversarial input to the measurement, and \(w_{t}^{y}\) as independent zero mean Gaussian noise with covariance 1.

Figure 2: We illustrate the impact of controllability and observability on the severity of the performance-robustness tradeoff for state feedback control, state prediction, and output feedback control of the simple integrator system in (28). In Figure 2a, we plot the tradeoff curves for fixed values of the parameter \(\rho\), which determines the controllability and/or observability of the system. The adversarial costs in this setting use \(\varepsilon=0.1\), and the tradeoff curves evaluate adversarially robust controllers/observers designed with \(\varepsilon\in[0,0.1]\). In Figure 2b, the envelopes of the tradeoff curves are plotted by evaluating the nominal and \(\varepsilon=0.1\)-adversarially robust controllers/observers for \(\rho\in[0.15,1.0]\). The arrows are in the direction of increasing \(\rho\).

For both the control settings (state feedback and output feedback), we set \(Q=I\) and \(R=1\), and the objective is to minimize the LQR cost. In state prediction, the objective is to minimize the state estimation error with \(u_{t}=0\), as described in Section 4. For the state feedback control problem, we study whether poor controllability increases the severity of the performance-robustness tradeoffs, as predicted in Section 3. The controllability of the provided system may be varied through the parameter \(\rho\): when \(\rho\) is small, the system has poor controllability, and as \(\rho\) increases, controllability increases. For the state prediction problem, we determine whether poor observability increases the severity of the tradeoff, as predicted in Section 4. The observability of the system may also be varied through the parameter \(\rho\): when \(\rho\) is small, the observability is poor, and when \(\rho\) increases, observability improves.
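The claimed role of \(\rho\) is easy to check numerically; the short Python sketch below (the sample values of \(\rho\) are ours, chosen to match the plotted range) computes the minimum singular values of the 1-step controllability and observability gramians of (28)-(29) with \(C=\begin{bmatrix}1&0\end{bmatrix}\):

```python
import numpy as np

def integrator(rho):
    """Two-dimensional integrator from (28)-(29) with C = [1 0]."""
    A = np.array([[1.0, rho], [0.0, 1.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    return A, B, C

def gramian(A, B, ell):
    """ell-step controllability gramian W_ell(A, B) = sum_t A^t B B^T (A^t)^T."""
    W, Ak = np.zeros((A.shape[0], A.shape[0])), np.eye(A.shape[0])
    for _ in range(ell + 1):
        W += Ak @ B @ B.T @ Ak.T
        Ak = Ak @ A
    return W

for rho in (0.15, 0.5, 1.0):
    A, B, C = integrator(rho)
    ctrl = np.linalg.svd(gramian(A, B, 1), compute_uv=False).min()
    obsv = np.linalg.svd(gramian(A.T, C.T, 1), compute_uv=False).min()
    print(f"rho={rho:.2f}: sigma_min(W_1(A,B))={ctrl:.4f}, "
          f"sigma_min(W_1(A^T,C^T))={obsv:.4f}")
```

For small \(\rho\), both minimum singular values are on the order of \(\rho^{2}\), consistent with small \(\rho\) corresponding to poor controllability and observability.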
For the output feedback control setting, we study how the impacts of poor observability and controllability compound to result in a severe performance-robustness tradeoff. In Figure 2a, we consider the tradeoff curves traced out by fixing \(\rho\) and then evaluating the nominal and adversarial (\(\varepsilon=0.1\)) costs of adversarially robust controllers/observers designed with budget \(\varepsilon\) varying between \([0,0.1]\). We observe that as controllability and observability decrease, the tradeoff curves shift upward and also widen. This corroborates the trends described in Theorem 3.1 and Theorem 4.1, where we show the bound on the nominal cost gap between adversarially robust and nominal controllers/observers (i.e., the width of the tradeoff curve) grows larger as controllability and observability decrease, respectively. These trends are further illustrated in Figure 2b, where we plot the nominal and adversarial (\(\varepsilon=0.1\)) costs attained by the nominal and adversarially robust (\(\varepsilon=0.1\)) controllers/observers as a function of \(\rho\in[0.15,1]\). We observe that for small \(\rho\), the system has poor controllability and observability, hence the distance between the nominal and adversarial costs is large. As controllability and observability increase, this gap decreases monotonically. Note that for the control problems the costs do not improve monotonically as \(\rho\) increases: beyond some point, the amplification of disturbances by the integrator outstrips the benefits of better controllability or observability.

**Tradeoffs of Linearized Pole Balancing.** To further illustrate the impact of observability upon the severity of the performance-robustness tradeoff, we consider the example of pole-balancing using visual feedback from a single fixation point on the pole, illustrated in Figure 3. The dynamics of a pole mounted on a cart may be represented as \[\begin{split}u+w+\delta&=(M+m)(\ddot{h}+d_{h}\dot{h})+m\ell((\ddot{\theta}+d_{\theta}\dot{\theta})\cos\theta-\dot{\theta}^{2}\sin\theta),\\ 0&=m((\ddot{h}+d_{h}\dot{h})\cos\theta+\ell(\ddot{\theta}+d_{\theta}\dot{\theta})-g\sin\theta),\\ y&=h+\ell_{0}\sin\theta.\end{split} \tag{30}\]

Figure 3: We study the performance-robustness tradeoffs in visual feedback pole-balancing for various fixation points. As the fixation point decreases (\(\ell_{0}\) becomes smaller), the observability of the system decreases. The adversarial cost is evaluated with \(\varepsilon=0.125\), and the tradeoffs are generated by synthesizing adversarially robust controllers for \(\varepsilon\in[0,0.125]\). We see as the fixation point decreases, the tradeoffs for the linearized system become more severe.

This model was proposed in Leong and Doyle (2016) to illustrate the fundamental limitations of control. It was also studied in Xu et al. (2021) to determine how these limitations interact with the sample complexity of learning controllers from data. As depicted in Figure 3a, \(M\) is the mass of the cart, \(h\) is the position of the cart, \(\theta\) is the angle of the pole, \(\ell\) is the length of the pole, and \(\ell_{0}\) is the fixation point which the camera observes. The mass of the pole is denoted by \(m\) and the acceleration due to gravity by \(g\). The damping coefficients are \(d_{h}\) and \(d_{\theta}\). Our measurement \(y\) is the distance from the camera to the fixation point. The actuation noise is denoted \(w\) and the adversarial input is denoted \(\delta\).
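One way to carry out the discretization and linearization described next is sketched below. This is our own illustrative implementation, not necessarily the authors': it hard-codes the parameter values and Euler stepsize quoted in the next paragraph (including the stated sign convention \(g=-10\)), sets \(w=\delta=0\) for the nominal model, and uses finite-difference Jacobians.

```python
import numpy as np

# Parameter values and Euler stepsize quoted in the experiments.
M, m, l, g, dh, dth = 1.0, 0.1, 1.0, -10.0, 0.2, 0.2
dt = 0.04

def f(x, u):
    """Continuous-time dynamics implied by (30) with w = delta = 0;
    x = (h, hdot, theta, thetadot). The two equations in (30) are linear
    in the unknowns a1 = hddot + dh*hdot and a2 = thetaddot + dth*thetadot."""
    h, hd, th, thd = x
    lhs = np.array([[M + m, m * l * np.cos(th)],
                    [np.cos(th), l]])
    rhs = np.array([u + m * l * thd ** 2 * np.sin(th), g * np.sin(th)])
    a1, a2 = np.linalg.solve(lhs, rhs)
    return np.array([hd, a1 - dh * hd, thd, a2 - dth * thd])

def euler_linearize(x0, u0, eps=1e-6):
    """Finite-difference Jacobians about (x0, u0), then A_d = I + dt*A, B_d = dt*B."""
    n = x0.size
    A = np.column_stack([(f(x0 + eps * e, u0) - f(x0 - eps * e, u0)) / (2 * eps)
                         for e in np.eye(n)])
    B = ((f(x0, u0 + eps) - f(x0, u0 - eps)) / (2 * eps)).reshape(n, 1)
    return np.eye(n) + dt * A, dt * B

# Linearize about the upright equilibrium (x = 0, u = 0); the measurement
# row for a fixation point l0 follows from y = h + l0*sin(theta) ~ h + l0*theta.
Ad, Bd = euler_linearize(np.zeros(4), 0.0)
l0 = 0.9
C = np.array([[1.0, 0.0, l0, 0.0]])
```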
We consider an instance of this system with \((M,m,\ell,g,d_{h},d_{\theta})=(1,0.1,1,-10,0.2,0.2)\), and various fixation points \(\ell_{0}<\ell\). We discretize the dynamics in (30) using Euler discretization with stepsize \(0.04\), and linearize the system about the upright equilibrium point. The stochastic noise \(w\) is i.i.d. \(\mathcal{N}(0,1)\), and the adversarial input \(\delta\) is chosen to maximize the cost while falling within the budget \(\left\|\delta\right\|^{2}\leq\varepsilon\) on average, where \(\varepsilon\) varies as discussed in Figure 3. The cost is specified by \(Q=I\), \(R=1\). Varying the fixation point varies the observability of the system. In particular, as \(\ell_{0}\) decreases, the observability becomes poor. We plot the tradeoff curves for this system as \(\ell_{0}\) varies in Figure 3b. As in the integrator experiments, the tradeoff becomes more severe as observability decreases.

**Performance of Adversarially Robust Control for Pole Balancing.** We now use the cartpole system described in the previous example to test the performance of the adversarially robust controller; results are shown in Figure 4. In particular, we simulate the nonlinear continuous time system in (30) under discrete time linear controllers synthesized using the discretized and linearized version of the dynamics. The system parameters and cost matrices are the same as in the previous example; however, \(\delta\) is modified to enter all states and measurements of the system through an identity mapping to enhance robustness to the linearization error. We plot the running average cost on the \(y\)-axis, and the time in seconds on the \(x\)-axis. In the simulations, the only perturbations impacting the system are zero mean stochastic noise with standard deviation \(0.002\) entering the input. The simulations are run for three different fixation points, \(\ell_{0}=0.85,0.9\) and \(0.95\). For all fixation points we see that the adversarially robust controller performs significantly better than the nominal LQG controller. It is also worth noting that as the fixation point decreases, the total cost increases, and the gap between the two controllers also increases. In particular, as observability becomes poor, choosing the appropriate level of robustness becomes more essential. A mixed \(\mathcal{H}_{2}/\mathcal{H}_{\infty}\) controller is included as an additional baseline. The mixed controller is synthesized in continuous time using the MATLAB function h2hinfsyn to minimize the \(\mathcal{H}_{2}\) norm of the closed-loop map subject to a constraint that the \(\mathcal{H}_{\infty}\) norm is at most \(20\). We see that the performance of the adversarially robust controller is similar to that of the mixed \(\mathcal{H}_{2}/\mathcal{H}_{\infty}\) controller in all settings. We emphasize that our intent is not to propose a controller which uniformly outperforms mixed \(\mathcal{H}_{2}/\mathcal{H}_{\infty}\) controllers, but rather to propose a controller for which we can effectively certify robustness while providing quantitative performance-robustness tradeoffs. The purpose of this experiment is to highlight that the synthesis procedure is effective at producing a controller that enhances general robustness and achieves performance comparable to existing robust synthesis approaches.

Figure 4: Simulations of pole-balancing on the nonlinear continuous time system using linear discrete time controllers.
The “nom” controller is the LQG, while the “adv” controller is the adversarially robust controller synthesized with \(\varepsilon=0.015\). A mixed \(\mathcal{H}_{2}/\mathcal{H}_{\infty}\) baseline is also included. For three different fixation points \(\ell_{0}\), the adversarially robust controller outperforms the nominal controller and performs similarly to the mixed \(\mathcal{H}_{2}/\mathcal{H}_{\infty}\) controller.

## 6 Conclusion

We proposed an adversarially robust LQ control problem, and provided sufficient conditions for the optimal solution to this problem, along with an algorithm that converges to the optimal solution when these conditions are satisfied. The solution is closely related to a central suboptimal \(\mathcal{H}_{\infty}\) controller. An interesting aspect of this solution is that unlike pure \(\mathcal{H}_{2}\) controllers, the adversarially robust controller depends upon the noise statistics. Experiments show that the adversarially robust controller performs similarly to mixed \(\mathcal{H}_{2}/\mathcal{H}_{\infty}\) controllers on a simple linear system, and can outperform the LQG in the face of model error arising from linearization. We used the adversarially robust control problem as a means to study performance-robustness tradeoffs in control. In particular, we derived quantitative upper and lower bounds on the performance gap between the nominal state feedback controller and the adversarially robust state feedback controller. The bounds show that systems with uniformly good controllability have small performance-robustness tradeoffs, while closed-loop nominal systems with a highly controllable mode in the disturbance channel will have a large performance-robustness tradeoff. We also derived bounds on the nominal state estimation error gap of the adversarially robust state estimator compared to the Kalman Filter. In this case, it was shown that systems with uniformly good observability have small tradeoffs. These trends are corroborated by experiments on a simple linear system by tracing out tradeoff curves. One direction for future work is to consider how other adversarial training techniques can be translated to robust controller synthesis and analysis.

## Acknowledgements

Bruce D. Lee is supported by the DoD through the National Defense Science & Engineering Graduate Fellowship Program. The research of Hamed Hassani is supported by NSF Grants 1837253, 1943064, 1934876, AFOSR Grant FA9550-20-1-0111, and DCIST-CRA. Nikolai Matni is funded by NSF awards CPS-2038873, CAREER award ECCS-2045834, and ECCS-2231349.
2305.01169
Fast quantum gate design with deep reinforcement learning using real-time feedback on readout signals
The design of high-fidelity quantum gates is difficult because it requires the optimization of two competing effects, namely maximizing gate speed and minimizing leakage out of the qubit subspace. We propose a deep reinforcement learning algorithm that uses two agents to address the speed and leakage challenges simultaneously. The first agent constructs the qubit in-phase control pulse using a policy learned from rewards that compensate short gate times. The rewards are obtained at intermediate time steps throughout the construction of a full-length pulse, allowing the agent to explore the landscape of shorter pulses. The second agent determines an out-of-phase pulse to target leakage. Both agents are trained on real-time data from noisy hardware, thus providing model-free gate design that adapts to unpredictable hardware noise. To reduce the effect of measurement classification errors, the agents are trained directly on the readout signal from probing the qubit. We present proof-of-concept experiments by designing X and square root of X gates of various durations on IBM hardware. After just 200 training iterations, our algorithm is able to construct novel control pulses up to two times faster than the default IBM gates, while matching their performance in terms of state fidelity and leakage rate. As the length of our custom control pulses increases, they begin to outperform the default gates. Improvements to the speed and fidelity of gate operations open the way for higher circuit depth in quantum simulation, quantum chemistry and other algorithms on near-term and future quantum devices.
Emily Wright, Rogério de Sousa
2023-05-02T03:07:11Z
http://arxiv.org/abs/2305.01169v1
Fast quantum gate design with deep reinforcement learning using real-time feedback on readout signals

###### Abstract

The design of high-fidelity quantum gates is difficult because it requires the optimization of two competing effects, namely maximizing gate speed and minimizing leakage out of the qubit subspace. We propose a deep reinforcement learning algorithm that uses two agents to address the speed and leakage challenges simultaneously. The first agent constructs the qubit in-phase control pulse using a policy learned from rewards that compensate short gate times. The rewards are obtained at intermediate time steps throughout the construction of a full-length pulse, allowing the agent to explore the landscape of shorter pulses. The second agent determines an out-of-phase pulse to target leakage. Both agents are trained on real-time data from noisy hardware, thus providing model-free gate design that adapts to unpredictable hardware noise. To reduce the effect of measurement classification errors, the agents are trained directly on the readout signal from probing the qubit. We present proof-of-concept experiments by designing X and square root of X gates of various durations on IBM hardware. After just 200 training iterations, our algorithm is able to construct novel control pulses up to two times faster than the default IBM gates, while matching their performance in terms of state fidelity and leakage rate. As the length of our custom control pulses increases, they begin to outperform the default gates. Improvements to the speed and fidelity of gate operations open the way for higher circuit depth in quantum simulation, quantum chemistry and other algorithms on near-term and future quantum devices.

Superconducting qubits, Optimal control, Reinforcement learning

Footnote †: This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) through its Discovery (RGPIN-2020-04328), CREATE (543245-2020), and CGS M programs.

## I Introduction

Hardware noise, fabrication variability, and imperfect logic gates are the greatest barriers to performing reliable quantum computations at a large scale [1]. Presently, gate operations on quantum computers such as those produced by IBM are realized with derivative removal by adiabatic gate (DRAG) pulses calculated analytically from a simple three-level model [2]. The fidelity of these gate operations suffers due to long gate times, imperfect models, and time-dependent changes in the processor parameters such as qubit frequencies. Frequent calibration to combat these fluctuations is costly, and even when properly calibrated, the control pulse shapes are sub-optimal and allow for errors. Decoherence and leakage out of the computational sub-space are of particular concern in the context of fault-tolerant quantum computing as they require substantial additional resources to correct and can significantly impact the threshold of certain error correction codes [3, 4, 5, 6]. Thus, engineering faster and higher-fidelity gates is of timely importance. Existing strategies for gateset design are analytic [7, 8, 9, 2, 10] or based on numerical simulations that require precise physical and noise models of the hardware [11, 12, 13, 14, 15, 16, 17].
In large-scale quantum processors, the difficulty of completely characterizing the system prohibits model-based control techniques. Reinforcement learning (RL) [18, 19, 20] is an alternative approach for gate design which operates without prior knowledge of the hardware model. RL and its variants have been applied to myriad quantum control problems using numerically simulated environments [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37]. Such set-ups demonstrate the potential of RL, but suffer from the same modelling constraints as other optimization methods. Recently, a few experiments have been carried out using RL directly on noisy hardware [38, 39, 40, 41]. In this paper, we propose a new RL algorithm to design fast quantum gates. Our algorithm has several advantages over existing proposals including: 1. enabling design of faster gates by rewarding intermediate steps in the control pulse, 2. reducing the impact of measurement errors by training directly on the readout signal rather than classifying the state, 3. mitigating leakage with a dual agent architecture, 4. speeding up training using low measurement overhead and real-time feedback, and 5. accounting for realistic noise in the quantum processor by training directly on hardware. As well, we initialize the agent by pre-training on a calibrated DRAG pulse to capture information about the system dynamics. We optimize \(X\) and \(\sqrt{X}\) gates of different durations as a proof-of-concept; however, our algorithm can easily be extended to two-qubit gates by modifying the reward function and state space to complete a universal gateset. The paper is structured as follows: in Section II we introduce the general RL algorithm, in Section III we review the related literature, in Section IV we describe our deep RL algorithm for fast quantum gate design, in Section V we present experimental results, and in Section VI we describe the extension of our algorithm to two-qubit gates. ## II The RL algorithm In this section, we describe the RL algorithm in detail. RL is a machine learning algorithm wherein an agent interacts with a system and iteratively updates a control policy based on its observations. The entire process is modelled as a controlled Markov Decision Process (MDP). Let \(\mathbb{S}\subset\mathbb{R}^{n}\) be the space of states for the system and \(\mathbb{U}\) the set of possible actions. At each step, the system is in some state \(s_{j}\) and the agent decides on an action \(u_{j}\) according to a policy. The policy \(\pi(\cdot|s_{j})\) is a conditional probability distribution over the possible actions in \(\mathbb{U}\) given the current state. The system moves into the next state \(s_{j+1}\) via a stochastic transition kernel \(\mathcal{T}(\cdot|s_{j},u_{j})\). After observing the next state, the agent receives a corresponding reward \(r_{j}(s_{j},u_{j},s_{j+1})\). The objective of the controller is to maximize the infinite-horizon discounted expected reward \[J_{\beta}(s_{0},\pi)=E_{s_{0}}^{\mathcal{T},\pi}\left[\sum_{j=0}^{\infty}\beta ^{j}r_{j}(s_{j},u_{j},s_{j+1})\right] \tag{1}\] over the set of admissible policies \(\pi\), where \(0<\beta<1\) is a discount factor and \(E_{s_{0}}^{\mathcal{T},\pi}\) denotes the expectation for initial state \(s_{0}\) and transition kernel \(\mathcal{T}\) under policy \(\pi\). The standard RL algorithm is Q-learning [19]. Q-learning involves tracking the "value" of taking an action \(u_{j}\) given the current state \(s_{j}\). 
The Q-value is stored in a table indexed by states and actions. Given an initial table \(Q_{0}\), the value of each state-action pair is updated according to the Bellman equation \[Q_{j+1}(s_{j},u_{j})=Q_{j}(s_{j},u_{j})+\alpha_{j}(s_{j},u_{j})\Big{[}r(s_{j},u_{j},s_{j+1}) \tag{2}\] \[+\beta\max_{v\in\mathbb{U}}Q_{j}(s_{j+1},v)-Q_{j}(s_{j},u_{j})\Big{]}\] as the agent explores the environment [19]. The coefficient \(\alpha_{j}\) is a hyperparameter called the learning rate and determines how quickly the agent adapts to changes in the environment. Under mild conditions on the learning rate, the algorithm converges to a fixed point denoted \(Q_{*}\), which satisfies \[Q_{*}(s,u)=E\left[r(s,u,s^{\prime})+\beta\max_{v\in\mathbb{U}}Q_{*}(s^{\prime},v)\Big{|}s,u\right]. \tag{3}\] A policy \(\pi\) which satisfies \[\max_{u\in\mathbb{U}}Q_{*}(s,u)=Q_{*}(s,\pi(s)) \tag{4}\] is an optimal policy (see Theorem 4 in [18] and the main Theorem in [19]). The Q-learning algorithm was conceived for finite action and state spaces. For large and/or continuous state spaces, storing the Q-value in a table is not an option. To overcome this challenge, one might quantize the state space [42] or use function approximation [43]. While Q-learning with quantization or function approximation is not guaranteed to converge, there is ample empirical evidence that it can be used to solve quantum control problems [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41]. In this work, we approximate the Q-table using a neural network, in a strategy that has been termed "deep reinforcement learning" (DRL) [44, 45]. The state \(s_{j}\) is the input to the neural network and the output is a probability distribution \(\pi(\cdot|s_{j})\) over the action space \(\mathbb{U}\). The neural network is represented by a set of parameters \(\theta\) which are updated in a manner that approximates the Bellman equation [46]. ## III RL for quantum gate design Having introduced RL, we now review its uses for quantum gate design to date. Many RL algorithms have been proposed to solve quantum control problems in areas ranging from Hamiltonian engineering [23] to quantum metrology [22]. Theoretical algorithms make use of simulated environments to provide full access to the state of the quantum system [21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. Of these proposals, several specifically target unitary gate design [25, 47, 48]. In all cases, the agent has access to the exact unitary operator specifying the Schrödinger evolution of the system. In [25, 47] a simplified Hamiltonian model is used while [48] simulates a gmon environment which mimics noisy control actuation and incorporates leakage errors. The agents receive rewards based on gate infidelity, which is inaccessible in experiments. In simulation, these algorithms are able to achieve improvements in gate fidelity [25, 47, 48] and gate time [25, 48] over other gate synthesis strategies. However, these proposals are not compatible with training on real hardware and thus necessarily suffer from model bias. More realistic RL set-ups for quantum control only provide access to fidelities and/or expectation values for some observables [31, 32, 33, 34, 35, 36, 37]. Specifically for gate design, Shindi et al. proposed in [31] to probe a gate with a series of input states. The reward incorporates the average fidelity between the output states and the target states.
Such algorithms require prohibitive amounts of averaging in experiments so they are also confined to numerical simulations. The next step is to perform RL using stochastic measurement outcomes or low-sample estimators of physical observables [38, 39, 40, 41]. Most recently, some experiments have been carried out using RL directly on noisy quantum hardware [38, 39]. Baum et al. trained a DRL agent on a superconducting computer for error-robust gateset design [39]. The agent was able to learn novel pulse shapes up to three times faster than industry standard DRAG gates with slightly lower error per gate and improvements maintained without re-calibration for up to 25 days. The search space was restricted to 8- and 10-segment piece-wise constant operations for one- and two-qubit gates respectively. Despite the small search space, the optimization algorithm was inefficient. The training was completed in batches and the reward was a weighted mean over fidelities estimated using full state tomography. Subsequently, Reuer et al. developed a low-latency DRL agent, implemented via an FPGA, capable of using real-time feedback at the microsecond time scale [38]. They demonstrated its effectiveness with a state preparation experiment on a superconducting qubit. In this paper, we look ahead to a future where real-time feedback for quantum control is common. We improve upon the work in [39] by allowing greater flexibility in pulse shapes, training directly on the readout signal to reduce the impact of measurement errors, specifically targeting leakage errors, and relying on real-time feedback to speed up the learning process. ## IV Our DRL algorithm for fast quantum gate design We now describe our DRL algorithm for the design of fast quantum gates in detail. Here, we outline our algorithm specifically for superconducting transmon qubits, but it can easily be adapted to other hardware platforms such as trapped ions [49], quantum dots [50], or neutral atoms [51]. In this case, the quantum system consists of a transmon qubit dispersively coupled to a superconducting resonator and controlled via a capacitively coupled voltage drive line (see Appendix A for details). The voltage drive envelope \[c(t)=\begin{cases}c^{x}(t)\cos(\omega_{d}t)+c^{y}(t)\sin(\omega_{d}t)&0<t<t_{g}\\ 0&\text{otherwise}\end{cases} \tag{5}\] is composed of two independent quadrature controls \(c^{x}(t)\) and \(c^{y}(t)\) on a single drive frequency \(\omega_{d}\) for the duration of the gate operation \(t_{g}\). We seek to design a piece-wise constant (PWC) control pulse with up to \(N_{\text{seg}}\) segments of equal length \(\tau=t_{g}/N_{\text{seg}}\), where \(t_{g}\) is the maximum duration of the gate. Our DRL algorithm consists of two neural networks that decide sequentially on the \(x\)- and \(y\)-quadrature of the control pulse. The dual-agent structure enables us to mitigate leakage errors while simultaneously creating faster gates. At each time step, our agents select the amplitudes of the next segment based on real-time feedback about the state of the system. The agents receive rewards throughout the construction of the pulse (rather than just at the end), which opens the possibility to design faster gates. We eliminate model bias by training on quantum hardware so our DRL algorithm can account for realistic noise in the qubits and for other errors such as over-rotation introduced by the classical drive lines. We train directly on the observation signal resulting from probing the qubit.
The signal has two components which we denote \(I\) for "in-phase" and \(Q\) for "quadrature". Previous algorithms for gate design have used the \(|0\rangle\) and \(|1\rangle\) populations imperfectly estimated from the location of the signal in the \((I,Q)\)-plane. Figure 1 shows readout data taken on the IBM Lima quantum computer, where there is overlap between the locations of the \(|0\rangle\), \(|1\rangle\), and \(|2\rangle\) measurements.

Figure 1: A plot of the readout signal for different qubit states (10000 shots each) on IBM Lima. Black circles indicate the mean and standard deviation of each cluster. The overlap shows the difficulty of distinguishing between states.

Our algorithm reduces the impact of measurement errors because we avoid classifying the signal into \(\ket{0}\) or \(\ket{1}\). It also has a low measurement overhead since we do not perform full state tomography. The network parameters \(\theta^{x},\theta^{y}\) are updated periodically during the training. Each training iteration for \(i=0,\ldots,N_{\text{iter}}-1\) consists of \(N_{\text{ep}}\leq N_{\text{seg}}\) episodes counted using the index \(j\). The ratio \(N_{\text{seg}}/N_{\text{ep}}\) is an integer that sets how many times the neural network parameters are updated before a full waveform is constructed. We use a second index \(k=k(i,j)\) to track the waveform segment. The \(j\)-th input to the first neural network is a state \(s_{j}^{x}=(\langle I_{j}\rangle,\langle Q_{j}\rangle,k)\) composed of the average \((I,Q)\) signal over \(N_{\text{shot}}\) measurements and the current segment index \(k\). The output of the agent is a probability distribution over the action space \(\mathbb{U}^{x}\) which we denote \(\pi_{\theta^{x}_{i}}(\cdot|s_{j}^{x})\) to indicate the dependence on the neural network parameters \(\theta^{x}_{i}\). The agent samples an action \(u_{j}^{x}\in\mathbb{U}^{x}\) according to \(\pi_{\theta^{x}_{i}}(\cdot|s_{j}^{x})\). The state for the second agent \(s_{j}^{y}=(u_{j}^{x},\mathcal{L}_{j})\) is formed of the amplitude \(u_{j}^{x}\) on the first quadrature and the leakage population \[\mathcal{L}_{j}=\abs{\bra{2}U_{k+1}\ket{0}}^{2} \tag{6}\] estimated from the \((I,Q)\)-plane, where \(U_{k+1}\) represents the operation of evolving the qubit by the first \(k+1\) segments of the waveform. The agent now samples an action \(u_{j}^{y}\in\mathbb{U}^{y}\) according to the output \(\pi_{\theta^{y}_{i}}(\cdot|s_{j}^{y})\) of the second network. The \(k\)-th segment of the control pulse thus takes the form \[c(t)=u_{j}^{x}\cos(\omega_{d}t)+u_{j}^{y}\sin(\omega_{d}t) \tag{7}\] for \(k\tau\leq t<(k+1)\tau\). Based on hardware parameters, we restrict the \(x\)-quadrature amplitudes to the set \(\mathbb{U}^{x}=\{0.00,0.01,\ldots,0.19,0.20\}\) and the \(y\)-quadrature amplitudes to \(\mathbb{U}^{y}=\{-0.10,-0.09,\ldots,0.09,0.10\}\). To train the agent, we require a reward function for each network. Let \(c_{T}=(\langle I_{T}\rangle,\langle Q_{T}\rangle)\pm\sigma_{T}\) be the expected average measurement result after applying the target gate to the ground state (calibrated experimentally). During training, we penalize the distance of the observed measurement signal from \((\langle I_{T}\rangle,\langle Q_{T}\rangle)\), which encapsulates both a failure to steer the qubit into the desired state and leakage into higher energy levels.
We use the reward function \[r_{j}^{x}\left(s_{j+1}^{x},c_{T}\right)=\min\left\{1-\lambda k,\frac{\sigma_{T}}{\|(\langle I_{j+1}\rangle,\langle Q_{j+1}\rangle)-(\langle I_{T}\rangle,\langle Q_{T}\rangle)\|}-\lambda k\right\} \tag{8}\] to train the first network, where the term \(\lambda k\) penalizes the length of the control pulse for some coefficient \(\lambda\in\mathbb{R}\). For the second network, we reward low leakage populations via \[r_{j}^{y}\left(s_{j+1}^{y}\right)=\max\left\{0,\mathcal{L}_{\text{max}}-\mathcal{L}_{j+1}\right\} \tag{9}\] where \(\mathcal{L}_{\text{max}}\) sets a limit on the allowable leakage. A summary of our DRL algorithm is shown in Algorithm 1. ``` 0: initial state \(s_{0}^{x}\), leakage \(\mathcal{L}_{0}\), and parameters \(\theta_{0}^{x},\theta_{0}^{y}\) 1: Set \(k=0\) 2: for each iteration \(i=0,...,N_{\text{iter}}-1\) do 3: for each episode \(j=0,...,N_{\text{ep}}-1\) do 4: if \(k=N_{\text{seg}}\) then 5: Re-initialize waveform \(k=0\) 6: Re-initialize state \(s_{j}^{x}=s_{0}^{x}\) 7: Re-initialize leakage \(\mathcal{L}_{j}=\mathcal{L}_{0}\) 8: end if 9: Select next action \(u_{j}^{x}\) according to policy \(\pi_{\theta^{x}_{i}}(\cdot|s_{j}^{x})\) 10: Set \(s_{j}^{y}=(u_{j}^{x},\mathcal{L}_{j})\) 11: Select next action \(u_{j}^{y}\) according to policy \(\pi_{\theta^{y}_{i}}(\cdot|s_{j}^{y})\) 12: Evolve qubit by first \(k+1\) segments of waveform 13: Measure qubit \(N_{\text{shot}}\) times to get \((\langle I_{j+1}\rangle,\langle Q_{j+1}\rangle)\) 14: Update \(k\to k+1\) 15: Set \(s_{j+1}^{x}=(\langle I_{j+1}\rangle,\langle Q_{j+1}\rangle,k)\) 16: Calculate reward \(r_{j}^{x}\) 17: Estimate leakage population \(\mathcal{L}_{j+1}\) 18: Calculate reward \(r_{j}^{y}\) 19: Reset qubit to \(\ket{0}\) 20: end for 21: Send trajectory \(\left(s_{0}^{x},u_{0}^{x},r_{0}^{x},s_{1}^{x},u_{1}^{x},r_{1}^{x},\ldots,r_{N_{\text{ep}}-1}^{x}\right)\) to first network 22: Update neural network parameters \(\theta_{i}^{x}\rightarrow\theta_{i+1}^{x}\) 23: Send trajectory \(\left(s_{0}^{y},u_{0}^{y},r_{0}^{y},s_{1}^{y},u_{1}^{y},r_{1}^{y},\ldots,r_{N_{\text{ep}}-1}^{y}\right)\) to second network 24: Update neural network parameters \(\theta_{i}^{y}\rightarrow\theta_{i+1}^{y}\) 25: end for ``` **Algorithm 1** DRL for fast quantum gate design The initial state \(s_{0}^{x}\) and leakage population \(\mathcal{L}_{0}\) are estimated by measuring the qubit before performing any gate operations. The initial policies \(\pi_{\theta_{0}^{x}}^{x},\pi_{\theta_{0}^{y}}^{y}\) are generated based on the industry standard DRAG gate by pre-training the agent. The DRAG pulse is Gaussian with a derivative component on the second quadrature. That is, \(c^{x}(t)=\Omega_{G}(t)\) and \(c^{y}(t)=\gamma\Omega_{G}^{\prime}(t)\) for a Gaussian envelope \(\Omega_{G}\) and a coefficient \(\gamma\in\mathbb{R}\) [2]. The DRAG pulse is designed to reduce leakage based on a simple three-level model. In experiments, the amplitude of \(\Omega_{G}\) and the \(\gamma\) factor are calibrated to combat time-dependent changes in the noise. Our initial policy captures information about the system dynamics using the calibrated DRAG pulse; however, our algorithm should not be considered an optimization of the DRAG pulse, an approach that has been proposed in other works [2, 52]. In the next section, we describe experimental results showing that our agent learns novel pulse shapes able to outperform the DRAG gate in terms of fidelity and leakage rate. For more details on the pre-training, see Appendix B.
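To make Eqs. (8) and (9) concrete, the following minimal Python sketch implements the two reward functions; the calibrated target signal \((\langle I_{T}\rangle,\langle Q_{T}\rangle)\), its spread \(\sigma_{T}\), and the coefficients \(\lambda\) and \(\mathcal{L}_{\text{max}}\) are illustrative assumptions, not values from the experiment.

```python
# Minimal sketch of the reward functions in Eqs. (8) and (9).
# All numerical values below are illustrative assumptions.
import numpy as np

I_T, Q_T, sigma_T = 0.8, -0.3, 0.05   # assumed calibrated target signal c_T
lam = 0.01                            # assumed length-penalty coefficient lambda
L_max = 0.05                          # assumed leakage cap L_max

def reward_x(I, Q, k):
    """Eq. (8): closeness of the averaged (I, Q) signal to the target,
    with a penalty lam*k on the current pulse length k."""
    dist = max(np.hypot(I - I_T, Q - Q_T), 1e-12)  # guard against zero distance
    return min(1.0 - lam * k, sigma_T / dist - lam * k)

def reward_y(leakage):
    """Eq. (9): reward low leakage populations, clipped at zero."""
    return max(0.0, L_max - leakage)

print(reward_x(0.79, -0.31, 5), reward_y(0.01))
```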
## V Experimental Results We conduct a proof-of-concept by designing the \(X\) and \(\sqrt{X}\) gates on the IBM Lima quantum computer using the Qiskit Pulse library [53, 54]. To assess the success of a gate \(U\), we measure the leakage rate \(\mathcal{L}\) defined in (6) and the state fidelity \[\mathcal{F}=\abs{\bra{\psi_{T}}U\ket{0}}^{2} \tag{10}\] where \(\ket{\psi_{T}}\) is the target state. We benchmark our fast quantum gates against DRAG gates, which are the industry standard used on many quantum computers including those produced by IBM. The trained agent can create gates of any number of segments from 1 to \(N_{\text{seg}}\). We test gates of length 20 segments (\(t_{g}\approx 35.6\) ns, the same length as the DRAG gate), 15 segments (\(t_{g}\approx 26.7\) ns), and 10 segments (\(t_{g}\approx 17.8\) ns). After just 200 training iterations, the faster gates begin to match the performance of the default gate, with fidelities and leakage rates the same or only slightly worse. Our full-length optimized gates begin to achieve higher state fidelities and lower leakage rates than the calibrated DRAG pulses. The optimized \(X\) and \(\sqrt{X}\) gates are shown in Figures 2 and 3 respectively alongside the corresponding DRAG pulses. The fidelities and leakage rates for each gate are summarized in Figures 2(g) and 3(e). We also test the robustness of our agent over time. Figure 2(e) shows a new optimized \(X\) gate created 30 days after the agent was trained. Again, it has slightly higher fidelity and lower leakage than the calibrated DRAG pulse. This shows that our agent does not require any additional training after 30 days. After an initial training period, our algorithm can be used to efficiently calibrate gates on superconducting qubits.

Figure 2: Plots showing default and optimized control pulses for single qubit gates on the IBM Lima quantum computer and their respective state fidelities and leakage rates. The time unit on the \(x\)-axis is in units of \(dt=0.222222\) ns, a device-dependent parameter specifying the maximum sampling rate of the waveform generator. Fig. (a) is a DRAG pulse for an \(X\) gate. Fig. (b) is a pulse for an \(X\) gate with duration 35.6 ns learned by our deep RL algorithm. Fig. (c) is a pulse for an \(X\) gate with duration 26.7 ns learned by our deep RL algorithm. Fig. (d) is a pulse for an \(X\) gate with duration 17.8 ns learned by our deep RL algorithm. Fig. (e) is a pulse for an \(X\) gate calibrated by our deep RL algorithm 30 days after training. Fig. (f) is a DRAG pulse for an \(X\) gate calibrated 30 days after training. Fig. (g) is a table summarizing the state fidelities (10) and leakage rates (6) achieved by these gates.

Figure 3: Plots showing default and optimized control pulses for single qubit gates on the IBM Lima quantum computer and their respective state fidelities and leakage rates. The time unit on the \(x\)-axis is in units of \(dt=0.222222\) ns, a device-dependent parameter specifying the maximum sampling rate of the waveform generator. Fig. (a) is a DRAG pulse for a \(\sqrt{X}\) gate. Fig. (b) is a pulse for a \(\sqrt{X}\) gate with duration 35.6 ns learned by our deep RL algorithm. Fig. (c) is a pulse for a \(\sqrt{X}\) gate with duration 26.7 ns learned by our deep RL algorithm. Fig. (d) is a pulse for a \(\sqrt{X}\) gate with duration 17.8 ns learned by our deep RL algorithm. Fig. (e) is a table summarizing the state fidelities (10) and leakage rates (6) achieved by these gates.
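As a concrete illustration of the two figures of merit reported in these tables, the sketch below evaluates Eqs. (6) and (10) in a three-level model; the unitary here is an idealized stand-in for the gate under test, not the experimentally learned pulse.

```python
# Minimal three-level sketch of the figures of merit, Eqs. (6) and (10).
import numpy as np

U = np.eye(3, dtype=complex)
U[:2, :2] = [[0, 1], [1, 0]]                 # ideal X gate on the qubit subspace

ket0 = np.array([1, 0, 0], dtype=complex)    # |0>
psi_T = np.array([0, 1, 0], dtype=complex)   # target state X|0> = |1>

out = U @ ket0
fidelity = abs(np.vdot(psi_T, out)) ** 2     # F = |<psi_T| U |0>|^2, Eq. (10)
leakage = abs(out[2]) ** 2                   # L = |<2| U |0>|^2,     Eq. (6)
print(fidelity, leakage)                     # 1.0 and 0.0 for the ideal gate
```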
Due to limited access to hardware, we do not train our agent beyond 200 iterations nor do we optimize the learning rate, neural network structure, number of segments, or other hyperparameters. Our algorithm has not fully converged after the first 200 training iterations. We expect to achieve a more significant advantage after optimizing the algorithm hyperparameters, including training for more iterations. ## VI Generalization to two-qubit gates A small modification to the state space and reward function generalizes our algorithm to the design of two-qubit gates. In superconducting transmon qubits, two-qubit interactions are generated via a cross-resonance (\(CR\)) pulse (i.e., driving the control qubit at the resonant frequency of the target qubit) [55]. The \(CR\) pulse corresponds to the gate \(Z\otimes X\), an \(X\) rotation on the target qubit with the direction dependent on the state of the control qubit. The industry standard \(CR\) gate is a rounded square pulse with Gaussian rise and a derivative-pulse correction on the second quadrature [56]. To avoid spurious cross-talk, a simultaneous cancellation tone is applied to the target qubit. Baum et al. used RL to design an improved \(CR\) gate which did not require this extra pulse [39]. We adopt the same strategy, so our agent can learn an improved \(CR\) pulse shape using the action spaces \(\mathbb{U}^{x}\) and \(\mathbb{U}^{y}\) from the single qubit case. The state \(\bar{s}_{j}^{x}=(\langle I_{j}^{1}\rangle,\langle Q_{j}^{1}\rangle,\langle I_{j}^{2}\rangle,\langle Q_{j}^{2}\rangle,k)\) contains the average readout signal from each qubit. We use the reward \[\bar{r}_{j}^{x}(\bar{s}_{j+1}^{x},c_{T}^{1},c_{T}^{2})=\frac{1}{2}\left(r_{j}^{x}(\langle I_{j+1}^{1}\rangle,\langle Q_{j+1}^{1}\rangle,k,c_{T}^{1})+r_{j}^{x}(\langle I_{j+1}^{2}\rangle,\langle Q_{j+1}^{2}\rangle,k,c_{T}^{2})\right) \tag{11}\] where \(r_{j}^{x}\) is defined in (8) and \(c_{T}^{1}\), \(c_{T}^{2}\) are the expected measurement locations for the first and second qubit respectively. For the second control quadrature, the leakage population is summed over both qubits to give the same state and reward as in the single qubit case. The \(CR\) gate completes a universal gateset \(\left\{CR,R_{Z}(\theta),X,\sqrt{X}\right\}\) when combined with arbitrary virtual \(Z\) rotations [57] and the single qubit gates shown in our proof-of-concept. ## VII Conclusion We have created a DRL algorithm for fast quantum gate design using real-time feedback based on the readout signal from noisy hardware. RL is model free, as opposed to other gate synthesis strategies which require precise models that cannot capture the stochastic dynamics of the qubit. Our dual-agent architecture allows us to target the competing goals of decreasing leakage and creating faster gates to reduce decoherence. Our proposed algorithm reduces the impact of measurement classification errors by training directly on the readout signal from probing the qubit. The low measurement overhead means that our algorithm can be trained on hardware, rather than a numerical simulation. We carried out a proof-of-concept with \(X\) and \(\sqrt{X}\) gates of different durations on IBM's hardware. Our agent proposed novel control pulses which are two times faster than industry standard DRAG gates and match their performance in terms of fidelity and leakage after just 200 training iterations. Our agent also created gates of the same duration as DRAG which offer slight improvements in fidelity and leakage.
We also showed that our trained agent is robust over time. So far, we had limited access to hardware and only ran the algorithm with one specific set of hyperparameters. As is well known, the performance of RL is greatly improved with fine-tuning of hyperparameters [58]. We expect optimization of the algorithm hyperparameters to lead to a much more significant advantage. Our proof-of-concept was carried out on transmon qubits; however, our proposed DRL algorithm is general and can be used on other quantum hardware with adjustments to the state and action spaces (e.g. photon counts and laser pulses for trapped ion quantum computing [49]). The improved gate operations created by our DRL algorithm for fast quantum gate design open the way for more extensive applications on near-term and future quantum devices, with reduced decoherence and leakage errors that most error correction algorithms cannot fix. ## Appendix A Transmon hardware We apply our deep RL algorithm for fast quantum gate design to a transmon qubit, formed of the lowest two energy levels in the quantized spectrum of a superconducting circuit [59]. The transmon qubit is controlled via a capacitively coupled drive line. Quantum circuit analysis leads to the transmon Hamiltonian [60] \[\mathcal{H}=\omega_{q}a^{\dagger}a+\frac{\alpha}{2}a^{\dagger}a^{\dagger}aa-ic(t)(a^{\dagger}-a) \tag{12}\] where \(a\) and \(a^{\dagger}\) are the annihilation and creation operators, \(\omega_{q}\) is the qubit frequency, \(\alpha\) is the anharmonicity and \(c(t)\) is the voltage envelope introduced in (5). The transmon qubit is dispersively coupled to a superconducting resonator for measurement. The qubit is probed with a microwave signal, which scatters off the resonator and gets amplified. The resultant observation consists of time traces of the in-phase \(I\) and quadrature \(Q\) components of the digitized signal. The state of the qubit can be inferred from the location of the signal in the \((I,Q)\)-plane. On noisy hardware, the locations often overlap in the \((I,Q)\)-plane and a decision maker such as a linear discriminant analyzer [61] is necessary to predict the state (see Figure 1). ## Appendix B Pre-training In this section, we describe pre-training our agent. First, we discretize a calibrated DRAG pulse into an \(N_{\text{seg}}\)-segment PWC pulse where the amplitude of each segment is the nearest action from \(\mathbb{U}^{x}\) and \(\mathbb{U}^{y}\) for the first and second quadratures respectively. For each \(k\), we measure the average \((\langle I\rangle,\langle Q\rangle)\) signal and leakage \(\mathcal{L}\) after the first \(k+1\) segments of the waveform. Each agent receives a reward based on the mean squared error (MSE) between the policy output by the neural network given the input state and a Gaussian distribution over the respective action space centered at the desired amplitude. The pre-training is performed for several iterations over the full pulse. The pre-training can be executed efficiently because the rewards and actions are not based on the state of the system so the circuits can all be run before updating the network parameters. At the end of the pre-training, the agents favour a DRAG pulse. While the agents subsequently learn novel pulse shapes, the exploration begins in a near-optimal region of the solution space.
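A minimal sketch of the discretization step just described, snapping a DRAG envelope to the nearest PWC actions in \(\mathbb{U}^{x}\) and \(\mathbb{U}^{y}\); the pulse parameters (amplitude, width, DRAG coefficient \(\gamma\)) are illustrative assumptions.

```python
# Minimal sketch of discretizing a DRAG pulse into the nearest PWC actions
# from U^x and U^y (Appendix B). Pulse parameters are illustrative assumptions.
import numpy as np

N_seg = 20
Ux = np.round(np.arange(0.00, 0.201, 0.01), 2)    # x-quadrature action space
Uy = np.round(np.arange(-0.10, 0.101, 0.01), 2)   # y-quadrature action space

t = (np.arange(N_seg) + 0.5) / N_seg              # segment midpoints on [0, 1]
amp, sigma, gamma = 0.17, 0.15, 0.05              # assumed DRAG parameters
gauss = amp * np.exp(-((t - 0.5) ** 2) / (2 * sigma ** 2))  # c^x(t) = Omega_G(t)
dgauss = gamma * gauss * (-(t - 0.5) / sigma ** 2)          # c^y(t) = gamma * Omega_G'(t)

def snap(values, actions):
    """Replace each amplitude by the nearest available action."""
    return actions[np.abs(values[:, None] - actions[None, :]).argmin(axis=1)]

target_x, target_y = snap(gauss, Ux), snap(dgauss, Uy)  # per-segment pre-training targets
```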
2303.06588
MobileRec: A Large-Scale Dataset for Mobile Apps Recommendation
Recommender systems have become ubiquitous in our digital lives, from recommending products on e-commerce websites to suggesting movies and music on streaming platforms. Existing recommendation datasets, such as Amazon Product Reviews and MovieLens, greatly facilitated the research and development of recommender systems in their respective domains. While the number of mobile users and applications (aka apps) has increased exponentially over the past decade, research in mobile app recommender systems has been significantly constrained, primarily due to the lack of high-quality benchmark datasets, as opposed to recommendations for products, movies, and news. To facilitate research for app recommendation systems, we introduce a large-scale dataset, called MobileRec. We constructed MobileRec from users' activity on the Google play store. MobileRec contains 19.3 million user interactions (i.e., user reviews on apps) with over 10K unique apps across 48 categories. MobileRec records the sequential activity of a total of 0.7 million distinct users. Each of these users has interacted with no fewer than five distinct apps, which stands in contrast to previous datasets on mobile apps that recorded only a single interaction per user. Furthermore, MobileRec presents users' ratings as well as sentiments on installed apps, and each app contains rich metadata such as app name, category, description, and overall rating, among others. We demonstrate that MobileRec can serve as an excellent testbed for app recommendation through a comparative study of several state-of-the-art recommendation approaches. The quantitative results can act as a baseline for other researchers to compare their results against. The MobileRec dataset is available at https://huggingface.co/datasets/recmeapp/mobilerec.
M. H. Maqbool, Umar Farooq, Adib Mosharrof, A. B. Siddique, Hassan Foroosh
2023-03-12T06:39:40Z
http://arxiv.org/abs/2303.06588v1
# MobileRec: A Large-Scale Dataset for Mobile Apps Recommendation ###### Abstract. Recommender systems have become ubiquitous in our digital lives, from recommending products on e-commerce websites to suggesting movies and music on streaming platforms. Existing recommendation datasets, such as Amazon Product Reviews and MovieLens, greatly facilitated the research and development of recommender systems in their respective domains. While the number of mobile users and applications (aka apps) has increased exponentially over the past decade, research in mobile app recommender systems has been significantly constrained, primarily due to the lack of high-quality benchmark datasets, as opposed to recommendations for products, movies, and news. To facilitate research for app recommendation systems, we introduce a large-scale dataset, called MobileRec. We constructed MobileRec from users' activity on the Google play store. MobileRec contains 19.3 million user interactions (i.e., user reviews on apps) with over 10K unique apps across 48 categories. MobileRec records the sequential activity of a total of 0.7 million distinct users. Each of these users has interacted with no fewer than five distinct apps, which stands in contrast to previous datasets on mobile apps that recorded only a single interaction per user. Furthermore, MobileRec presents users' ratings as well as sentiments on installed apps, and each app contains rich metadata such as app name, category, description, and overall rating, among others. We demonstrate that MobileRec can serve as an excellent testbed for app recommendation through a comparative study of several state-of-the-art recommendation approaches. The quantitative results can act as a baseline for other researchers to compare their results against. The MobileRec dataset is available at [https://huggingface.co/datasets/recmeapp/mobilerec](https://huggingface.co/datasets/recmeapp/mobilerec). Sequential Recommendation, GooglePlay Dataset, App Recommendation Dataset. + Footnote †: ACM Reference Format: M.H. Maqbool, Umar Farooq, Adib Mosharrof, A.B. Siddique, and Hassan Foroosh. 2023. MobileRec: A Large-Scale Dataset for Mobile Apps Recommendation. In _Proceedings of the 46th ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'23)_. ACM, New York, NY, USA, 10 pages. [https://doi.org/10.1145/nmmnnn.nmmn](https://doi.org/10.1145/nmmnnn.nmmn) ## 1. Introduction Mobile apps have seen exponential growth in the last decade and over 5 billion users (Bradbury et al., 2013) utilize them for a variety of reasons, including social media, entertainment, news, productivity, and ride-sharing, among others. As a result of this boom, Google Play (Groos et al., 2016) and the Apple App Store (Bradbury et al., 2013) host more than 3.5 and 2.2 million apps, respectively (Bradbury et al., 2013). The increasingly crowded app marketplaces make it significantly more challenging for users to discover apps that align with their preferences. Personalized app recommendations can relieve users' cognitive overload and improve the app installation experience. As illustrated in Figure 1, an app recommendation system has the capability to suggest new applications to users based on their previous app installations and interactions. Although Google Play and the App Store employ app recommendation techniques for suggesting apps to their users, potentially leveraging user data collected internally, the research in app recommendation is almost nonexistent.
Figure 1. An example of a sequence of user activity. Based on past user interactions (e.g., app installations), the app recommendation system recommends new apps to install.
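Since the dataset is hosted on the HuggingFace Hub (see the link in the abstract), it can presumably be loaded with the standard `datasets` API; this is a minimal sketch in which the split names and record fields are inspected at run time rather than assumed.

```python
# Minimal sketch of loading MobileRec from the HuggingFace Hub, assuming the
# standard `datasets` API; split and column names are inspected, not assumed.
from datasets import load_dataset

mobilerec = load_dataset("recmeapp/mobilerec")
print(mobilerec)                        # shows the available splits and columns

split = list(mobilerec.keys())[0]
for record in mobilerec[split].select(range(3)):
    print(record)                       # each record is one user-app interaction
```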
2310.18856
Stochastic modeling of superconducting qudits in the dispersive regime
The field of superconducting quantum computing, based on Josephson junctions, has recently seen remarkable strides in scaling the number of logical qubits. In particular, the fidelities of one- and two-qubit gates have reached the breakeven point with novel error mitigation and correction methods. Parallel to these advances is the effort to expand the Hilbert space within a single junction or device by employing high-dimensional qubits, otherwise known as qudits. Research has demonstrated the possibility of driving higher-order transitions in a transmon or designing innovative multimode superconducting circuits, termed multimons. These advances can significantly expand the computational basis while simplifying the interconnects in a large-scale quantum processor. In this work we extend the measurement theory of a conventional superconducting qubit to that of a qudit, focusing on modeling the dispersive quadrature measurement in an open quantum system. Under the Markov assumption, the qudit Lindblad and stochastic master equations are formulated and analyzed; in addition, both the ensemble-averaged and the quantum-jump approaches to decoherence analysis are detailed with analytical and numerical comparisons. We verify our stochastic model with a series of experimental results on a transmon-type qutrit, confirming the validity of our high-dimensional formalism.
Kangdi Yu, Murat C. Sarihan, Jin Ho Kang, Madeline Taylor, Cody S. Fan, Ananyo Banerjee, Jonathan L. DuBois, Yaniv J. Rosen, Chee Wei Wong
2023-10-29T00:39:47Z
http://arxiv.org/abs/2310.18856v2
# Stochastic modeling of superconducting qudits in the dispersive regime ###### Abstract The field of superconducting quantum computing, based on Josephson junctions, has recently seen remarkable strides in scaling the number of logical qubits. In particular, the fidelities of one- and two-qubit gates have reached the breakeven point with novel error mitigation and correction methods. Parallel to these advances is the effort to expand the Hilbert space within a single junction or device by employing high-dimensional qubits, otherwise known as qudits. Research has demonstrated the possibility of driving higher-order transitions in a transmon or designing innovative multimode superconducting circuits, termed multimons. These advances can significantly expand the computational basis while simplifying the interconnects in a large-scale quantum processor. In this work we extend the measurement theory of a conventional superconducting qubit to that of a qudit, focusing on modeling the dispersive quadrature measurement in an open quantum system. Under the Markov assumption, the qudit Lindblad and stochastic master equations are formulated and analyzed; in addition, both the ensemble-averaged and the quantum-jump approaches to decoherence analysis are detailed with analytical and numerical comparisons. We verify our stochastic model with a series of experimental results on a transmon-type qutrit, confirming the validity of our high-dimensional formalism. ## I Introduction Superconducting quantum computation based on single- and two-qubit gates [1; 2; 3] has recently demonstrated milestone successes in comparison to classical computation [4; 5]. While the error correction scheme can overcome the noise in the qubit-qubit interaction and correct the undesired decoherence due to coupling to the environment, the required hardware resources, chip footprint, and peripheral connections will soon face scalability issues. To create a more hardware-efficient quantum processor with fewer quantum gates and interconnects, an emerging effort has been directed toward developing high-dimensional superconducting quantum computing. In particular, the hope is that both the size of the quantum algorithm and the error rate induced in a long gate sequence can be reduced by using the naturally available higher energy levels of the localized artificial atoms, also termed qudits. Experiments making use of the higher energy levels already available in transmon qubits have been reported [6; 7]. Furthermore, a more complex Josephson-junction-based network with multiple nonlinear modes coupled longitudinally [8; 9] can also be developed. Despite the recent experimental efforts on higher-dimensional quantum computation, the theory of dispersive measurement used in inferring a qudit state under sources of decoherence still mostly relies on a heuristic extension from the qubit formalism. In this work, we adopt both Lindblad master equations and the method of quantum trajectories to analyze the state evolution of a qudit when measured dispersively. In circuit or cavity quantum electrodynamics (QED), continuous measurement and other unwanted coupling to the environment can be modeled by master equations of the Lindblad form under the Born-Markov assumption. Analysis of the qubit-resonator coupling in the dispersive regime has relied on this formalism, aided by the good agreement between measurement and the predicted measurement-induced dephasing and number splitting [10; 11].
Our work first generalizes the measurement of the observable \(\hat{\sigma}_{z}\) in the qubit case to a generic measurement in the longitudinal direction of the qudit, thus extending the notion of measurement-induced dephasing. Due to the longitudinal coupling appearing in the dispersive regime, we show analytically that each energy eigenstate of the qudit is entangled with a coherent state of the resonator; consequently, measurement of the quadrature fields of the resonator can be used to infer the qudit state. However, since the qudit Lindblad master equation only describes the ensemble-averaged time evolution of the state, it does not make use of the information leaking out of the system, part of which is, of course, what one measures in real experiments. To capture the update of an observer's knowledge during a continuous measurement, in this work we subsequently apply the quantum trajectory theory [12; 13; 14; 15] in which our knowledge of the quantum state can be modified based on the measurement record. Given that the measurement outcomes are stochastic as required by quantum mechanics, the entire measurement record forms a random process that can be modeled as a diffusive process when the measurement is continuous and weak. In particular, under the diffusive limit, we derive an effective heterodyne stochastic master equation (SME) of the qudit subspace (in the Ito sense) which is then compared with the unconditioned master equation after taking the ensemble average. Recently, the stochastic nature of qudit dispersive measurement has been theoretically studied in the absence of other decoherence processes such as qudit \(T_{1}\)-decay and dephasing, and specific examples such as the \(N\)-level "clock" system were examined [16]. Our approach relaxes the assumptions to include other experimentally observed decoherence processes in a transmon qudit and makes an intuitive link between the stochastic trajectories solved from the SME and the averaged dynamics given by the Lindblad master equation. In this work, we first review the dispersive coupling of a high-dimensional system to a linear resonator in Section II. In Section III, the (unconditioned) Lindblad master equation of the combined system (i.e., a qudit plus a resonator) is solved analytically using two methods: the positive \(P\)-representation and the qudit-state-dependent displacement operator. Consequently, an effective master equation for the qudit with the resonator degrees of freedom traced out can fortunately be formulated as a Markovian system when the measurement-induced frequency shifts are negligible. Then, in Section IV, we turn to the conditioned qudit state and derive an effective qudit SME in the diffusive limit, with the measurement outcomes described by two stochastic differential equations. Finally, in Section V, the simulated quantum trajectories are compared with our experiments on a transmon qutrit (i.e., a three-level system) coupled to a 3D cavity, demonstrating a remarkable match between our formalism and experiments. ## II cQED for dispersive measurement of a qudit For a general analysis, we consider a weakly anharmonic oscillator coupled to a linear resonator for readout and control. For future reference, we denote the anharmonic oscillator, also referred to as the system or qudit, by \(\mathcal{S}\) and the resonator by \(\mathcal{R}\). Hence, the combined system lives in the Hilbert space \(\mathscr{H}_{\mathcal{SR}}=\mathscr{H}_{\mathcal{S}}\otimes\mathscr{H}_{\mathcal{R}}\).
The anharmonic oscillator model can be used to describe a transmon-type superconducting qudit while the resonator, modeled by a harmonic oscillator, can be realized as a 3D cavity or a planar transmission-line resonator [17; 18; 19]. On the one hand, the size of a typical 3D cavity, and thus the associated mode volume, is much larger than that of a superconducting qudit, allowing one to apply the dipole approximation to describe the qudit-resonator interaction. On the other hand, even though a planar resonator has a more confined mode profile and is coupled to the qubit locally (e.g., capacitive coupling), one can still derive a similar dipole interaction at the circuit level. Consequently, for both the 3D and 2D resonators, the Hamiltonian of the combined system under the rotating-wave approximation (RWA) is given by [20] \[\hat{H}/\hbar=\omega_{\mathrm{q}}\hat{a}_{\mathrm{q}}^{\dagger}\hat{a}_{\mathrm{q}}+\frac{\alpha_{\mathrm{q}}}{2}\hat{a}_{\mathrm{q}}^{\dagger}\hat{a}_{\mathrm{q}}^{\dagger}\hat{a}_{\mathrm{q}}\hat{a}_{\mathrm{q}}\\ +\omega_{\mathrm{r}}\bigg{(}\hat{a}_{\mathrm{r}}^{\dagger}\hat{a}_{\mathrm{r}}+\frac{1}{2}\bigg{)}-\Big{(}g\hat{a}_{\mathrm{r}}\hat{a}_{\mathrm{q}}^{\dagger}+g^{*}\hat{a}_{\mathrm{r}}^{\dagger}\hat{a}_{\mathrm{q}}\Big{)}, \tag{1}\] where \(\omega_{\mathrm{q}}\) is the qubit frequency (i.e., the transition frequency between the ground and first excited states), \(\alpha_{\mathrm{q}}\) the anharmonicity of the qudit, \(\omega_{\mathrm{r}}\) the resonator frequency, and \(g\) the qudit-resonator coupling coefficient. We also define \(\Delta_{\mathrm{qr}}=\omega_{\mathrm{q}}-\omega_{\mathrm{r}}\) as the detuning between the qudit and the resonator. In Eq.(1), we have assumed the fourth-order expansion of the transmon Hamiltonian, which ignores the fact that a transmon can only support a finite number of bound energy eigenstates. More generally, we can replace the fourth-order expansion with the Hamiltonian of a general qudit such that \[\hat{H}/\hbar=\sum_{j=0}^{D-1}\omega_{j}\ket{j}\bra{j}+\omega_{\mathrm{r}}\bigg{(}\hat{a}_{\mathrm{r}}^{\dagger}\hat{a}_{\mathrm{r}}+\frac{1}{2}\bigg{)}\\ -\sum_{j,k=0}^{D-1}\Big{(}g_{jk}\ket{j}\bra{k}\hat{a}_{\mathrm{r}}+g_{jk}^{*}\ket{k}\bra{j}\hat{a}_{\mathrm{r}}^{\dagger}\Big{)} \tag{2}\] with \(\omega_{j}\) and \(\ket{j}\) representing the energy and state vector of the \(j\)th energy level of the qudit. The weakly anharmonic model corresponds to the case where \[g_{jk}\approx\begin{cases}\sqrt{j+1}\,g&\text{if}\quad j-k=1,\\ 0&\text{otherwise},\end{cases} \tag{3}\] for all \(j,k=0,...,D-1\) (\(g\) is the same coupling coefficient defined in Eq.(1)). To perform a quantum non-demolition (QND) measurement (justified in the following text), we set the resonator frequency to be detuned from (and usually higher than) the qubit frequency.
In the dispersive regime where \(\abs{g}\ll\abs{\Delta_{\mathrm{qr}}}\), Eq.(1) can be approximated as \[\hat{H}^{\mathrm{disp}}/\hbar=\tilde{\omega}_{\mathrm{q}}\hat{a}_{\mathrm{q}}^{\dagger}\hat{a}_{\mathrm{q}}+\frac{\alpha_{\mathrm{q}}}{2}\hat{a}_{\mathrm{q}}^{\dagger}\hat{a}_{\mathrm{q}}^{\dagger}\hat{a}_{\mathrm{q}}\hat{a}_{\mathrm{q}}\\ +\omega_{\mathrm{r}}\bigg{(}\hat{a}_{\mathrm{r}}^{\dagger}\hat{a}_{\mathrm{r}}+\frac{1}{2}\bigg{)}+\chi_{\mathrm{qr}}\hat{a}_{\mathrm{q}}^{\dagger}\hat{a}_{\mathrm{q}}\hat{a}_{\mathrm{r}}^{\dagger}\hat{a}_{\mathrm{r}}, \tag{4}\] where \(\tilde{\omega}_{\mathrm{q}}=\omega_{\mathrm{q}}+\abs{g}^{2}/\Delta_{\mathrm{qr}}\) is the Lamb-shifted qubit frequency and \[\chi_{\mathrm{qr}}=\frac{2\alpha_{\mathrm{q}}\abs{g}^{2}}{\Delta_{\mathrm{qr}}(\Delta_{\mathrm{qr}}+\alpha_{\mathrm{q}})} \tag{5}\] is the dispersive shift, also known as the (fourth-order) cross-Kerr coefficient [20; 21]. By lumping the last term in Eq.(4) into the resonator Hamiltonian, one observes a qudit-state-dependent shift in the resonator frequency. In particular, if the qudit is in the energy eigenstate \(\ket{j}\), the resonator will experience a dispersive shift \(j\chi_{\rm qr}\). By determining this frequency shift via a resonator transmission or reflection measurement, one should be able to infer the qudit state. Moreover, if Eq.(2) is adopted instead, one finds \[\hat{H}^{\rm disp}/\hbar=\sum_{j=0}^{D-1}(\omega_{j}+\Lambda_{j})\,|j\rangle\langle j|\\ +\omega_{\rm r}\bigg{(}\hat{a}_{\rm r}^{\dagger}\hat{a}_{\rm r}+\frac{1}{2}\bigg{)}+\sum_{j=0}^{D-1}\chi_{j}\hat{a}_{\rm r}^{\dagger}\hat{a}_{\rm r}\,|j\rangle\langle j| \tag{6}\] to the second order in \(|g_{jk}/(\omega_{j}-\omega_{k}-\omega_{\rm r})|\) in the dispersive regime. The Lamb shift \(\Lambda_{j}\) and the dispersive shift \(\chi_{j}\) of the \(j\)th energy level of the qudit are given, respectively, by [22] \[\Lambda_{j}=\sum_{k=0}^{D-1}\chi_{jk}=\sum_{k=0}^{D-1}\frac{|g_{jk}|^{2}}{\omega_{j}-\omega_{k}-\omega_{\rm r}}, \tag{7}\] \[\chi_{j}=\sum_{k=0}^{D-1}(\chi_{jk}-\chi_{kj}), \tag{8}\] where \[\chi_{jk}=\frac{|g_{jk}|^{2}}{\omega_{j}-\omega_{k}-\omega_{\rm r}}. \tag{9}\] The main goal of this work is to quantify the qudit dispersive measurement in terms of the rate at which the information leaks out from the resonator. In addition, we need to answer to what extent a weak and continuous dispersive measurement is QND. The following analysis relies neither on the particular form of the dispersive shift nor on the relationships among the dispersive shifts of different energy levels; for simplicity, we will mainly use Eq.(4). Nevertheless, it should be mentioned that Eqs.(4) and (6) are valid only when the resonator photon number is low [22]. In particular, the Schrieffer-Wolff transformation used in deriving Eqs.(4) and (6) assumes that the strength of the qudit-resonator interaction is much smaller than \(\Delta_{\rm qr}\). Due to the creation and annihilation operators that appear in the qudit-resonator coupling term, there is a \(\sqrt{n_{\rm r}}\)-scaling of the interaction strength, where \(n_{\rm r}\) is the resonator photon number. Hence, we restrict ourselves to low readout power in both the derivation and the experiment.
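As a quick numerical illustration of Eqs. (7)-(9), the following minimal sketch evaluates the Lamb and dispersive shifts for a transmon-like qudit. All parameter values are illustrative assumptions, and the ladder matrix elements are taken in the standard transmon convention \(g_{k+1,k}=\sqrt{k+1}\,g\) (cf. Eq. (3)).

```python
# Minimal sketch of the Lamb and dispersive shifts, Eqs. (7)-(9), for a qudit
# with transmon-like ladder couplings. Parameter values are illustrative
# assumptions (angular frequencies, e.g. in units of 1/ns).
import numpy as np

D = 4                                    # qudit levels kept
wq, alpha = 2*np.pi*5.0, -2*np.pi*0.3    # qubit frequency and anharmonicity
wr, g = 2*np.pi*7.0, 2*np.pi*0.1         # resonator frequency and coupling

# transmon-like spectrum omega_j = j*omega_q + (alpha/2) j (j - 1)
w = np.array([j*wq + 0.5*alpha*j*(j - 1) for j in range(D)])

gmat = np.zeros((D, D))
for k in range(D - 1):
    gmat[k + 1, k] = np.sqrt(k + 1) * g  # assumed ladder matrix elements
gmat += gmat.T                           # only |g_jk| enters Eq. (9)

chi_jk = np.zeros((D, D))                # Eq. (9)
for j in range(D):
    for k in range(D):
        if gmat[j, k] != 0.0:
            chi_jk[j, k] = abs(gmat[j, k])**2 / (w[j] - w[k] - wr)

Lam = chi_jk.sum(axis=1)                       # Lamb shifts, Eq. (7)
chi = chi_jk.sum(axis=1) - chi_jk.sum(axis=0)  # dispersive shifts, Eq. (8)
print(chi[1] - chi[0])   # close to the two-level chi_qr of Eq. (5)
```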
## III Unconditioned Master Equation We first consider the unconditioned master equation where information leaking out from the combined system is averaged unconditionally. Later, we derive a stochastic differential equation in which the density operator at time \(t\) is conditioned on the heterodyne measurement at time \(s<t\). Since only the ensemble-averaged state is examined in the unconditioned master equation, the solution to the differential equation is deterministic given an initial condition. The master equation of a qubit coupled to a resonator dispersively has been studied extensively [10; 23]. It has been shown that each energy eigenstate (i.e., a \(z\)-basis vector) of the qubit is entangled with a coherent state of the resonator; in addition, a qubit in a superposition state subject to a continuous readout pulse will dephase without changing the population in the \(z\)-basis. Here we generalize the solution to an arbitrary qudit in the dispersive-coupling regime. To avoid writing down too many equations when \(D\) is large, we will explicitly show the derivation for a qutrit measured dispersively while experiencing the \(T_{1}\)-decay and pure dephasing; nevertheless, the approaches used for solving the master equation can be easily extended to higher dimensional systems by adding terms of similar forms. To set up the problem, we let the combined system be a qutrit (labeled as \(\mathcal{S}\)) coupled to a resonator (\(\mathcal{R}\)) dispersively. Note that the environment is not a part of the composite system, i.e., we have already traced out the environment to write down a master equation. The (reduced) state of the combined system, denoted by \(\hat{\rho}_{\mathcal{SR}}\), lives in the Hilbert space \(\mathscr{H}_{\mathcal{SR}}\). For clarity, we denote the energy levels of the qutrit by \(|g\rangle\), \(|e\rangle\), and \(|f\rangle\) (ordered with increasing energy) to differentiate from the Fock states of the resonator. We study the time evolution of the composite state under the usual Born and Markov approximations [24; 25] in which the state transition maps form a quantum dynamical semigroup and are described by a Lindblad master equation. ### Master Equation for the Composite System in the Laboratory Frame Suppose the resonator is coupled to the environment with a total decay rate of \(\kappa\). For a 3D microwave cavity with two ports, \(\kappa\) is the sum of the input decay rate \(\kappa_{\rm in}\), output decay rate \(\kappa_{\rm out}\), and the internal decay rate \(\kappa_{\rm int}\) due to material losses. If the resonator is configured in the reflection mode (i.e., one port), then \(\kappa_{\rm in}=\kappa_{\rm out}\doteq\kappa_{\rm ext}\) and the total decay rate is \(\kappa=\kappa_{\rm int}+\kappa_{\rm ext}\). In reality, the resonator supports many modes; here we focus on only one mode (usually the fundamental mode) of the resonator with frequency \(\omega_{\rm r}\) which captures the dispersive measurement succinctly. The resonator-environment interaction is modeled as a harmonic oscillator coupled to a continuum of bath oscillators.
At superconducting temperature, we assume that the bath is in the vacuum state (i.e., the mean photon number at the resonator frequency is \(\bar{N}(\omega_{\rm r})=0\)) so that the usual terms in the quantum optical master equation [26], i.e., \[\kappa\big{[}\bar{N}(\omega_{\rm r})+1\big{]}\mathscr{D}\big{[}\hat{a}\big{]}\hat{\rho}_{\mathcal{R}}(t)+\kappa\bar{N}(\omega_{\rm r})\mathscr{D}\big{[}\hat{a}^{\dagger}\big{]}\hat{\rho}_{\mathcal{R}}(t), \tag{10}\] reduce to \(\kappa\mathscr{D}\big{[}\hat{a}\big{]}\hat{\rho}_{\mathcal{R}}\), where \(\mathscr{D}\big{[}\hat{L}\big{]}\) is the dissipation superoperator associated with the Lindblad operator \(\hat{L}\), defined via the action \[\mathscr{D}\big{[}\hat{L}\big{]}\hat{\rho}=\hat{L}\hat{\rho}\hat{L}^{\dagger}-\frac{1}{2}\hat{L}^{\dagger}\hat{L}\hat{\rho}-\frac{1}{2}\hat{\rho}\hat{L}^{\dagger}\hat{L} \tag{11}\] on any density operator \(\hat{\rho}\) [13; 26; 27; 28]. For the qutrit, we study both spontaneous decay and pure dephasing. Without imposing any selection rule, we assume the qutrit can decay from \(|f\rangle\) to \(|e\rangle\), from \(|f\rangle\) to \(|g\rangle\), and from \(|e\rangle\) to \(|g\rangle\) with decay rates \(\gamma_{1,ef}\), \(\gamma_{1,gf}\), and \(\gamma_{1,ge}\), respectively. We also include the pairwise pure dephasing with rates \(\gamma_{\phi,ge},\gamma_{\phi,gf}\), and \(\gamma_{\phi,ef}\) to study the coherence time of superposition states. One can show that the three pairwise dephasing terms are equivalent to a single dephasing term with three energy levels included; that is, given any set \(\{\gamma_{\phi,g},\gamma_{\phi,e},\gamma_{\phi,f}\}\), there exists \(\{\gamma_{\phi,ge},\gamma_{\phi,gf},\gamma_{\phi,ef}\}\) such that \[\mathscr{D}\left[\sum_{a\in\{g,e,f\}}\sqrt{\gamma_{\phi,a}}\,|a\rangle\langle a|\right]\\ =\sum_{a\in\{g,e,f\}}\sum_{b>a}\mathscr{D}\left[\sqrt{\frac{\gamma_{\phi,ab}}{2}}\left(|a\rangle\langle a|-|b\rangle\langle b|\right)\right]. \tag{12}\] Therefore, we stick with the three pairwise dephasing rates without loss of generality. For a general qudit with \(D\) levels, we will need \(D(D-1)/2\) pairwise dephasing terms; Eq.(12) can also be extended to a \(D\)-level system easily. By including the decoherence channels mentioned above, we can write down the Lindblad master equation of the composite system \[\dot{\hat{\rho}}_{\mathcal{SR}}(t)=-\frac{\mathrm{i}}{\hbar}\big{[}\hat{H}_{\mathrm{eff}}(t),\hat{\rho}_{\mathcal{SR}}(t)\big{]}+\kappa\mathscr{D}\big{[}\hat{a}\big{]}\hat{\rho}_{\mathcal{SR}}(t)+\gamma_{1,ge}\mathscr{D}\big{[}\hat{\sigma}_{ge}\big{]}\hat{\rho}_{\mathcal{SR}}(t)+\gamma_{1,gf}\mathscr{D}\big{[}\hat{\sigma}_{gf}\big{]}\hat{\rho}_{\mathcal{SR}}(t)+\gamma_{1,ef}\mathscr{D}\big{[}\hat{\sigma}_{ef}\big{]}\hat{\rho}_{\mathcal{SR}}(t)+\sum_{(a,b)}\frac{\gamma_{\phi,ab}}{2}\mathscr{D}\big{[}\hat{\sigma}_{z,ab}\big{]}\hat{\rho}_{\mathcal{SR}}(t)\] \[\approx-\frac{\mathrm{i}}{\hbar}\big{[}\hat{H}_{\mathrm{eff}}(t),\hat{\rho}_{\mathcal{SR}}(t)\big{]}+\left(\kappa\mathscr{D}\big{[}\hat{a}\big{]}+\frac{\gamma_{2,ge}}{2}\mathscr{D}\big{[}\hat{\sigma}_{z,ge}\big{]}+\frac{\gamma_{2,gf}}{2}\mathscr{D}\big{[}\hat{\sigma}_{z,gf}\big{]}+\frac{\gamma_{2,ef}}{2}\mathscr{D}\big{[}\hat{\sigma}_{z,ef}\big{]}\right)\hat{\rho}_{\mathcal{SR}}(t), \tag{13}\] where the sum in the first line runs over \((a,b)\in\{(g,e),(g,f),(e,f)\}\) and we adopt the notations \[\hat{\sigma}_{z,ab}=|a\rangle\langle a|-|b\rangle\langle b|\quad\text{and}\quad\hat{\sigma}_{ab}=|a\rangle\langle b| \tag{14}\] for \(a\neq b\).
In addition, by assuming that \(T_{1,ab}=1/\gamma_{1,ab}\) is much longer than the other decoherence timescales, we have removed the qutrit decay terms and have lumped the extra dephasing rates \(\gamma_{1,ab}/2\) with the pure dephasing rates \(\gamma_{\phi,ab}\) to define \(\gamma_{2,ab}=\gamma_{\phi,ab}+\gamma_{1,ab}/2\) for \((a,b)\in\{(g,e),(g,f),(e,f)\}\). In the Appendices, two approaches to solving the master equation are discussed: the first method, which requires less algebra, is the method of the positive \(P\)-representation, typically used when a harmonic oscillator is driven into a coherent state [10]. The long-\(T_{1}\) assumption is used for the derivation to have a closed-form solution. In contrast, the second method, introduced in [12], relies on the resonator displacement operator and allows one to solve Eq.(13) with \(\gamma_{1,ab}\) included. The effective Hamiltonian of a qutrit coupled with a resonator in the dispersive regime subject to a readout probe (under the RWA) is given by \[\hat{H}^{\mathrm{disp}}_{\mathcal{SR}}(t)/\hbar=\tilde{\omega}_{\mathrm{q}}\,|e\rangle\langle e|+\left(2\tilde{\omega}_{\mathrm{q}}+\alpha_{\mathrm{q}}\right)|f\rangle\langle f|+\omega_{\mathrm{r}}\hat{a}^{\dagger}\hat{a}\\ +\chi_{\mathrm{qr}}(|e\rangle\langle e|+2\,|f\rangle\langle f|)\hat{a}^{\dagger}\hat{a}\\ -\left(\sqrt{\kappa_{\mathrm{in}}}\bar{a}_{\mathrm{in}}e^{-\mathrm{i}\omega_{\mathrm{d}}t}\,\hat{a}^{\dagger}+\sqrt{\kappa_{\mathrm{in}}}\bar{a}_{\mathrm{in}}^{*}e^{\mathrm{i}\omega_{\mathrm{d}}t}\,\hat{a}\right), \tag{15}\] where we have set the zero-energy reference to be the ground-state energy of the dressed system and used \(\tilde{\omega}_{\mathrm{q}}=\tilde{\omega}_{\mathrm{e}}-\tilde{\omega}_{g}\) to denote the qubit frequency with the Lamb shift included. To address the second-excited state \(|f\rangle\), we also introduce the anharmonicity \(\alpha_{\mathrm{q}}=(\tilde{\omega}_{f}-\tilde{\omega}_{g})-2\tilde{\omega}_{\mathrm{q}}\), which is negative for a transmon. The last term in Eq.(15) represents the readout signal sent to the resonator with frequency \(\omega_{\mathrm{d}}\) and amplitude (in terms of the square root of the average photon flux) \(\bar{a}_{\mathrm{in}}\). It is assumed that the readout signal can be modeled by a classical signal [13; 17] (i.e., the stiff-pump limit) and that the imaginary frequency added to the resonator due to the decay is negligible since \(\kappa\ll\omega_{\mathrm{r}}\). Moreover, for a transmon-type qudit, we use the fact that the dispersive shift is a linear function of the number of excitations in the qudit, i.e., the cavity frequency shifts by \(\chi_{\mathrm{qr}}\) when the qudit is excited from \(|g\rangle\) to \(|e\rangle\) and by \(2\chi_{\mathrm{qr}}\) when it is excited from \(|g\rangle\) to \(|f\rangle\). We emphasize that the two methods used to solve the master equation are still valid even if the dispersive shift scales nonlinearly. The subsequent calculation can be simplified if we move the cavity part of the Hamiltonian to a frame that rotates at the drive frequency \(\omega_{\mathrm{d}}\). (Note that the qutrit Hamiltonian stays the same.)
Then, the time-varying drive \(\varepsilon_{\mathrm{d}}(t)=\sqrt{\kappa_{\mathrm{in}}}\bar{a}_{\mathrm{in}}e^{-\mathrm{i}\omega_{\mathrm{d}}t}\) reduces to a complex scalar \(\epsilon=\sqrt{\kappa_{\mathrm{in}}}\bar{a}_{\mathrm{in}}\), and the Hamiltonian in this rotating frame, \[\hat{H}^{\rm disp}_{\mathcal{SR},\rm rot}/\hbar=\tilde{\omega}_{\rm q}\ket{e}\bra{e}+(2\tilde{\omega}_{\rm q}+\alpha_{\rm q})\ket{f}\bra{f}+\Delta_{\rm rd}\hat{a}^{\dagger}\hat{a}+\chi_{\rm qr}(\ket{e}\bra{e}+2\ket{f}\bra{f})\hat{a}^{\dagger}\hat{a}-\left(\epsilon\hat{a}^{\dagger}+\epsilon^{*}\hat{a}\right), \tag{16}\] with \(\Delta_{\rm rd}=\omega_{\rm r}-\omega_{\rm d}\), is now time-independent. Then, the master equation of the composite system in the rotating frame is obtained by making the substitution \(\hat{H}_{\rm eff}=\hat{H}^{\rm disp}_{\mathcal{SR},\rm rot}\) in Eq.(13).

### Analytical Solution to the Master Equation of the Combined System

To solve Eq.(13), we first express \(\hat{\rho}_{\mathcal{SR}}\) as \[\hat{\rho}_{\mathcal{SR}}(t)=\sum_{a,b\in\{g,e,f\}}\hat{\rho}_{ab}(t)\ket{a}\bra{b}, \tag{17}\] where \[\hat{\rho}_{ab}(t)=\bra{a}\hat{\rho}_{\mathcal{SR}}(t)\ket{b} \tag{18}\] for \(a,b\in\{g,e,f\}\) are operators acting on the Fock space of the resonator. Now, Eq.(13) can be rewritten as a set of nine coupled operator equations in terms of \(\hat{\rho}_{ab}\). As shown in Appendix A.1, by expressing \(\hat{\rho}_{ab}\) in terms of the positive \(P\)-representation [29; 13] \[\hat{\rho}_{ab}(t)=\int\mathrm{d}^{2}\alpha\int\mathrm{d}^{2}\beta\frac{\ket{\alpha}\bra{\beta^{*}}}{\langle\beta^{*}|\alpha\rangle}P_{ab}(\alpha,\beta,t), \tag{19}\] one can reduce the nine operator differential equations to nine scalar equations for \(P_{ab}\). The technique of the positive \(P\)-representation has already been applied to the qubit case to reveal the measurement-induced dephasing on the qubit; Appendix A demonstrates the effectiveness of this method in solving the qutrit-resonator master equation. In fact, one can apply it to a general qudit system (in the long-\(T_{1}\) limit). For the qutrit case, the time evolution of the combined state is shown to be \[\hat{\rho}_{\mathcal{SR}}(t)=\sum_{a\in\{g,e,f\}}p_{a}(0)\ket{a}\bra{a}\otimes\ket{\alpha_{a}(t)}\bra{\alpha_{a}(t)}+\sum_{\substack{a,b\in\{g,e,f\}\\ a\neq b}}\frac{c_{ab}(t)}{\langle\alpha_{b}(t)|\alpha_{a}(t)\rangle}\ket{a}\bra{b}\otimes\ket{\alpha_{a}(t)}\bra{\alpha_{b}(t)}, \tag{20}\] where \(p_{g,e,f}(0)\) are the initial populations in \(\ket{g}\), \(\ket{e}\), and \(\ket{f}\), respectively, and \(\ket{\alpha_{a}(t)}\) represent the coherent states of the resonator. In addition, \[\dot{\alpha}_{g}=-\mathrm{i}(\Delta_{\rm rd}-\mathrm{i}\kappa/2)\alpha_{g}+\mathrm{i}\epsilon, \tag{21}\] \[\dot{\alpha}_{e}=-\mathrm{i}(\Delta_{\rm rd}+\chi_{\rm qr}-\mathrm{i}\kappa/2)\alpha_{e}+\mathrm{i}\epsilon, \tag{22}\] \[\dot{\alpha}_{f}=-\mathrm{i}(\Delta_{\rm rd}+2\chi_{\rm qr}-\mathrm{i}\kappa/2)\alpha_{f}+\mathrm{i}\epsilon \tag{23}\]
determine the time evolution of the coherent states entangled with the qutrit states, whereas \[\dot{c}_{ge}=\mathrm{i}(\tilde{\omega}_{\rm q}+\mathrm{i}\gamma_{2,ge})c_{ge}+\mathrm{i}\chi_{\rm qr}\alpha_{g}\alpha_{e}^{*}c_{ge}, \tag{24}\] \[\dot{c}_{gf}=\mathrm{i}(2\tilde{\omega}_{\rm q}+\alpha_{\rm q}+\mathrm{i}\gamma_{2,gf})c_{gf}+\mathrm{i}2\chi_{\rm qr}\alpha_{g}\alpha_{f}^{*}c_{gf}, \tag{25}\] \[\dot{c}_{ef}=\mathrm{i}(\tilde{\omega}_{\rm q}+\alpha_{\rm q}+\mathrm{i}\gamma_{2,ef})c_{ef}+\mathrm{i}\chi_{\rm qr}\alpha_{e}\alpha_{f}^{*}c_{ef}, \tag{26}\] with \(c_{ab}=c_{ba}^{*}\) for \(a\neq b\), govern the oscillation and decay of the off-diagonal terms of the density operator. Since we have ignored \(\gamma_{1,ab}\), we observe that the populations of the qutrit do not change over time, a critical feature of the QND measurement. However, the coherence terms will decay to zero with additional rates proportional to \(\chi_{\rm qr}\). It should be noted that \(\alpha_{a}\alpha_{b}^{*}\) is complex in general; hence, the last term in Eq.(24), (25), or (26) contains both a decay and a frequency shift. An example of the time evolution of \(\hat{\rho}_{\mathcal{SR}}\) is plotted in Figure 1(e) and (f). Given the general solution, of particular interest are the steady-state amplitudes of the cavity coherent states \[\alpha_{g}(+\infty)=\frac{\sqrt{\kappa_{\rm in}}\bar{a}_{\rm in}}{\Delta_{\rm rd}-\mathrm{i}\kappa/2}, \tag{27}\] \[\alpha_{e}(+\infty)=\frac{\sqrt{\kappa_{\rm in}}\bar{a}_{\rm in}}{\Delta_{\rm rd}+\chi_{\rm qr}-\mathrm{i}\kappa/2}, \tag{28}\] \[\alpha_{f}(+\infty)=\frac{\sqrt{\kappa_{\rm in}}\bar{a}_{\rm in}}{\Delta_{\rm rd}+2\chi_{\rm qr}-\mathrm{i}\kappa/2}, \tag{29}\] which can be anticipated from the solutions of the quantum Langevin equation (QLE) when the resonator is driven by a classical source [30], i.e., \[\dot{\hat{a}}(t)=-\mathrm{i}\Big[\Delta_{\rm rd}+\big(\chi_{\rm qr}\ket{e}\bra{e}+2\chi_{\rm qr}\ket{f}\bra{f}\big)\Big]\hat{a}(t)-\frac{\kappa}{2}\hat{a}(t)+\mathrm{i}\sqrt{\kappa_{\rm in}}\bar{a}_{\rm in}. \tag{30}\] What is not obvious by looking at the QLE is the dephasing rates captured in Eq.(24)-(26). Note that the solution is obtained in the rotating frame; to go back to the rest frame, we just need to restore the phase \(e^{-\mathrm{i}\omega_{\rm d}t}\) in each coherent state. We can go one step further by tracing out the resonator part; in other words, the reduced density operator for the qutrit is given by \[\hat{\rho}_{\mathcal{S}}(t)=\mathrm{Tr}_{\mathcal{R}}\big[\hat{\rho}_{\mathcal{SR}}(t)\big]=\sum_{a}p_{a}(0)\ket{a}\bra{a}+\sum_{a\neq b}c_{ab}(t)\ket{a}\bra{b}; \tag{31}\] hence, \(c_{ab}\) are simply the coherences of the qutrit (i.e., the off-diagonal terms of the reduced density operator of the qutrit) and can be solved from Eq.(21)-(26). Before discussing the effective qutrit master equation, we briefly mention the case when the thermal bath is equilibrated at a nonzero temperature. The operator differential equations of \(\hat{\rho}_{ab}\) are almost the same as before except that we replace \(\kappa\mathscr{D}\big[\hat{a}\big]\hat{\rho}_{ab}\) with \(\kappa\big(\bar{N}+1\big)\mathscr{D}\big[\hat{a}\big]\hat{\rho}_{ab}+\kappa\bar{N}\mathscr{D}\big[\hat{a}^{\dagger}\big]\hat{\rho}_{ab}\) for \(\bar{N}>0\).
Consequently, each scalar differential equation of \(P_{ab}\) acquires a second partial derivative, i.e., \[\dot{P}_{ab}=\big(\text{terms from the case }\bar{N}=0\text{, see Appendix A.1}\big)+\kappa\bar{N}\frac{\partial^{2}}{\partial\alpha\partial\beta}P_{ab}. \tag{32}\] Since Eq.(32) with \(a=b\) has the same form as the classical Fokker-Planck equation [26; 31], its solution is described by a 2D Gaussian distribution of finite width. In other words, instead of building up a coherent state (which is a delta function in the positive \(P\)-representation) in the resonator, the external drive will excite a Gaussian state with a quadrature uncertainty broadened by the thermal bath. This also means that the resonator state is now a linear combination of a continuum of coherent states with amplitudes near \(\alpha_{g}\), \(\alpha_{e}\), or \(\alpha_{f}\). In contrast, if the bath is in the vacuum state, a coherent state excited in the resonator will remain coherent indefinitely. See Appendix A.2 for more details.

### Effective Qutrit/Qudit Master Equation

Going back to the zero-temperature assumption, the fact that Eq.(31) describes the ensemble-averaged time evolution of a qutrit suggests that we can construct an effective qutrit master equation for dispersive measurement. To relax the long-\(T_{1}\) assumption, we adopt the method of the displacement operator used in [12] to solve the qutrit case. We move to the displaced frame by introducing a qutrit-state-dependent displacement operator \[\hat{\mathsf{P}}(t)=\hat{\Pi}_{g}\hat{D}(\alpha_{g}(t))+\hat{\Pi}_{e}\hat{D}(\alpha_{e}(t))+\hat{\Pi}_{f}\hat{D}(\alpha_{f}(t)), \tag{33}\] where \(\hat{\Pi}_{a}=|a\rangle\langle a|\) is the projection operator onto the qutrit state \(|a\rangle\) for \(a\in\{g,e,f\}\) and \[\hat{D}(\alpha_{a}(t))=\exp\!\left[\alpha_{a}(t)\hat{a}^{\dagger}-\alpha_{a}^{*}(t)\hat{a}\right] \tag{34}\] is the (time-dependent) resonator displacement operator, which displaces the coherent state \(|\alpha_{a}(t)\rangle\) to the vacuum state [27].

Figure 1: **Schematic of qutrit readout and solution of the composite-system master equation.** **a**. The input-output perspective of the transmission-mode dispersive measurement. The readout signal \(\hat{a}_{\mathrm{in}}\) entering the cavity from the left port (i.e., port 1) is approximated by a classical drive with complex amplitude \(\bar{a}_{\mathrm{in}}\), while the transmitted signal at the right port (i.e., port 2) is described by the traveling-wave annihilation operator \(\hat{a}_{\mathrm{out},2}\) in order to capture the quadrature uncertainty. **b**. The transient complex amplitude of the three coherent states \(|\alpha_{a}\rangle\) of the resonator associated with the qutrit state \(|a\rangle\) for \(a=g,e,f\). The steady state of each coherent state amplitude lies on a circle going through the origin of the phase plane. Inset: The build-up of the mean photon number of \(|\alpha_{a}\rangle\) as a function of time. **c**. Distance between two coherent state amplitudes as a function of the readout frequency. To illustrate a more general trend, we also include the fourth energy level \(|h\rangle\) of the transmon. **d**. Same as **c**, but plotted with \(\chi_{\mathrm{qr}}\) smaller than, equal to, or larger than a fixed \(\kappa\). **e/f**. Time evolution of the composite state as solved from the composite-system master equation in the long-\(T_{1}\) limit.
In other words, by performing the transformation \(\hat{\rho}_{\mathcal{SR}}^{\mathsf{P}}=\hat{\mathsf{P}}^{\dagger}\hat{\rho}_{\mathcal{SR}}\hat{\mathsf{P}}\) on the combined state and \(\hat{O}^{\mathsf{P}}=\hat{\mathsf{P}}^{\dagger}\hat{O}\hat{\mathsf{P}}\) for any operator \(\hat{O}\) in the laboratory frame, we have effectively removed all the resonator photons entangled with the qutrit. Let \(\rho_{\mathcal{S},ab}=\bra{a}\hat{\rho}_{\mathcal{S}}\ket{b}\) with \(a,b\in\{g,e,f\}\) be the matrix elements of the qutrit density operator in the laboratory frame. As shown in Appendices B and C, by first solving the matrix elements of the density operator in the displaced frame and then moving back to the laboratory frame with the resonator state traced out, one arrives at \[\dot{\rho}_{\mathcal{S},gg}=\gamma_{1,ge}\rho_{\mathcal{S},ee}+\gamma_{1,gf}\rho_{\mathcal{S},ff}, \tag{35}\] \[\dot{\rho}_{\mathcal{S},ee}=-\gamma_{1,ge}\rho_{\mathcal{S},ee}+\gamma_{1,ef}\rho_{\mathcal{S},ff}, \tag{36}\] \[\dot{\rho}_{\mathcal{S},ff}=-(\gamma_{1,gf}+\gamma_{1,ef})\rho_{\mathcal{S},ff}, \tag{37}\] for the populations, while the coherences evolve according to \[\dot{\rho}_{\mathcal{S},ge}=\Big[\mathrm{i}\bar{\omega}_{eg}-\gamma_{1,ge}/2-\gamma_{\phi,ge}-\Gamma_{\mathrm{d},ge}\Big]\rho_{\mathcal{S},ge}, \tag{38}\] \[\dot{\rho}_{\mathcal{S},gf}=\Big[\mathrm{i}\bar{\omega}_{fg}-\gamma_{1,gf}/2-\gamma_{\phi,gf}-\Gamma_{\mathrm{d},gf}\Big]\rho_{\mathcal{S},gf}, \tag{39}\] \[\dot{\rho}_{\mathcal{S},ef}=\Big[\mathrm{i}\bar{\omega}_{fe}-\gamma_{1,ef}/2-\gamma_{\phi,ef}-\Gamma_{\mathrm{d},ef}\Big]\rho_{\mathcal{S},ef}, \tag{40}\] where the extra dephasing rates appearing in Eq.(38)-(40) are given by \[\Gamma_{\mathrm{d},ge}(t)=\Gamma_{\mathrm{d},eg}(t)=\chi_{\mathrm{qr}}\operatorname{Im}(\alpha_{g}\alpha_{e}^{*}), \tag{41}\] \[\Gamma_{\mathrm{d},gf}(t)=\Gamma_{\mathrm{d},fg}(t)=2\chi_{\mathrm{qr}}\operatorname{Im}(\alpha_{g}\alpha_{f}^{*}), \tag{42}\] \[\Gamma_{\mathrm{d},ef}(t)=\Gamma_{\mathrm{d},fe}(t)=\chi_{\mathrm{qr}}\operatorname{Im}(\alpha_{e}\alpha_{f}^{*}), \tag{43}\] and the time evolution of \(\alpha_{a}(t)\) is still governed by Eq.(21)-(23). In addition, \(\bar{\omega}_{ba}=\tilde{\omega}_{b}-\tilde{\omega}_{a}-\Delta_{\mathrm{d},ba}\) are the new transition frequencies with the extra shifts \[\Delta_{\mathrm{d},eg}(t)=-\Delta_{\mathrm{d},ge}(t)=\chi_{\mathrm{qr}}\operatorname{Re}(\alpha_{e}\alpha_{g}^{*}), \tag{44}\] \[\Delta_{\mathrm{d},fg}(t)=-\Delta_{\mathrm{d},gf}(t)=2\chi_{\mathrm{qr}}\operatorname{Re}(\alpha_{f}\alpha_{g}^{*}), \tag{45}\] \[\Delta_{\mathrm{d},fe}(t)=-\Delta_{\mathrm{d},ef}(t)=\chi_{\mathrm{qr}}\operatorname{Re}(\alpha_{f}\alpha_{e}^{*}). \tag{46}\] The dephasing rate \(\Gamma_{\mathrm{d},ab}\) and the frequency shift \(\Delta_{\mathrm{d},ba}\) are nothing more than the real and imaginary parts of the last terms in Eq.(24)-(26). On the one hand, since we have included the effect of \(T_{1}\), Eq.(35)-(37) simply restate the semiclassical rate equations. On the other hand, the time evolution of the coherences shown in Eq.(38)-(40) is exactly the same as that in Eq.(24)-(26). \(\Gamma_{\mathrm{d},ab}\) is known as the measurement-induced dephasing [10] and is a function of the dispersive shift \(\chi_{\mathrm{qr}}\), resonator decay rate \(\kappa\), readout detuning \(\Delta_{\mathrm{rd}}\), and readout amplitude \(\epsilon\). Furthermore, the same conclusion holds for a general qudit measured dispersively: When subject to a coherent readout probe, the resonator is excited to a superposition of \(D\) coherent states, each entangled with a qudit eigenstate.
The complex amplitude of the \(j\)th (with \(j=0,\ldots,D-1\)) coherent state evolves according to \[\dot{\alpha}_{j}=-\mathrm{i}(\Delta_{\mathrm{rd}}+\chi_{j}-\mathrm{i}\kappa/2)\alpha_{j}+\mathrm{i}\epsilon; \tag{47}\] that is, the readout probe sees a resonator with a frequency shift \(\chi_{j}\) relative to the bare frequency \(\omega_{\mathrm{r}}\). In addition, the time evolution of the qudit populations, again, follows the rate equation, i.e., \[\dot{\rho}_{\mathcal{S},jj}=\sum_{k>j}\gamma_{1,jk}\rho_{\mathcal{S},kk}-\sum_{k<j}\gamma_{1,kj}\rho_{\mathcal{S},jj}, \tag{48}\] where \(\rho_{\mathcal{S},jj}\) for \(j=0,\ldots,D-1\) is the population of the \(j\)th qudit eigenstate and \(\gamma_{1,jk}\) with \(j<k\) is the decay rate from state \(\ket{k}\) to \(\ket{j}\). The time evolution of the coherence term \(\rho_{\mathcal{S},jk}\) (\(j<k\)) is given by \[\dot{\rho}_{\mathcal{S},jk}=\Big[\mathrm{i}\bar{\omega}_{kj}-\gamma_{1,jk}/2-\gamma_{\phi,jk}-\Gamma_{\mathrm{d},jk}\Big]\rho_{\mathcal{S},jk}, \tag{49}\] where \(\gamma_{\phi,jk}\) is the pure dephasing rate, \[\Gamma_{\mathrm{d},jk}(t)=\Gamma_{\mathrm{d},kj}(t)=(\chi_{k}-\chi_{j})\operatorname{Im}(\alpha_{j}\alpha_{k}^{*}) \tag{50}\] the measurement-induced dephasing, and \[\bar{\omega}_{kj}(t)=-\bar{\omega}_{jk}(t)=(\tilde{\omega}_{k}-\tilde{\omega}_{j})-(\chi_{k}-\chi_{j})\operatorname{Re}(\alpha_{k}\alpha_{j}^{*}) \tag{51}\] the shifted transition frequencies. At this point, one might attempt to write down an effective master equation for the qutrit based on Eq.(35)-(37) and (38)-(40); however, the qutrit frequencies that appear in Eq.(38)-(40), in general, do not satisfy the relation \[\bar{\omega}_{fg}-\bar{\omega}_{fe}=\bar{\omega}_{eg} \tag{52}\] for a three-level system, so we cannot write down an exact master equation of the Lindblad form. Such a problem does not appear in the qubit case [12] since the single transition frequency \(\bar{\omega}_{eg}\) is not subject to any constraint. Nevertheless, if the frequency shifts are much smaller than the other rate parameters, we can still _approximate_ the qutrit alone as a simple Markovian system, thus writing down an effective master equation \[\dot{\hat{\rho}}_{\mathcal{S}}=-\frac{\mathrm{i}}{\hbar}\Big[\hat{H}_{\mathrm{q,eff}},\hat{\rho}_{\mathcal{S}}\Big]+\gamma_{1,ge}\mathscr{D}\Big[\hat{\sigma}_{ge}\Big]\hat{\rho}_{\mathcal{S}}+\gamma_{1,gf}\mathscr{D}\Big[\hat{\sigma}_{gf}\Big]\hat{\rho}_{\mathcal{S}}+\gamma_{1,ef}\mathscr{D}\Big[\hat{\sigma}_{ef}\Big]\hat{\rho}_{\mathcal{S}}+\frac{\gamma_{\phi,ge}+\Gamma_{\mathrm{d},ge}}{2}\mathscr{D}\Big[\hat{\sigma}_{z,ge}\Big]\hat{\rho}_{\mathcal{S}}+\frac{\gamma_{\phi,gf}+\Gamma_{\mathrm{d},gf}}{2}\mathscr{D}\Big[\hat{\sigma}_{z,gf}\Big]\hat{\rho}_{\mathcal{S}}+\frac{\gamma_{\phi,ef}+\Gamma_{\mathrm{d},ef}}{2}\mathscr{D}\Big[\hat{\sigma}_{z,ef}\Big]\hat{\rho}_{\mathcal{S}}, \tag{53}\] where the effective qutrit Hamiltonian is assumed to describe a self-consistent set of energy levels (e.g., by ignoring the measurement-induced frequency shifts). Moreover, for the same reason, it should be clear that we cannot write down an exact effective master equation for any qudit with \(D\geq 3\).
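The failure of Eq.(52) is easy to check numerically. The sketch below, with made-up parameters, evaluates the steady-state frequency shifts of Eq.(44)-(46) from the amplitudes of Eq.(27)-(29) and shows that the three shifted transition frequencies are mutually inconsistent for a generic readout detuning:

```python
# Sketch: Eq. (52) requires D_fg - D_fe == D_eg for the shifts of
# Eq. (44)-(46); with generic parameters (all made up) it fails.
import numpy as np

chi, kappa, Delta_rd, eps = 0.6, 2.7, 0.3, 1.0    # illustrative values

def alpha_ss(shift):
    # Steady-state amplitudes, Eq. (27)-(29), with eps = sqrt(k_in)*a_in.
    return eps / (Delta_rd + shift - 1j * kappa / 2)

a_g, a_e, a_f = alpha_ss(0.0), alpha_ss(chi), alpha_ss(2 * chi)

D_eg = chi * np.real(a_e * np.conj(a_g))          # Eq. (44)
D_fg = 2 * chi * np.real(a_f * np.conj(a_g))      # Eq. (45)
D_fe = chi * np.real(a_f * np.conj(a_e))          # Eq. (46)

print(D_fg - D_fe, D_eg)   # generically two different numbers
```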
However, by ignoring the frequency shifts, the effective master equation for a qudit will take the form \[\dot{\hat{\rho}}_{\mathcal{S}}=-\frac{\mathrm{i}}{\hbar}\Big[\hat{H}_{\mathrm{q,eff}},\hat{\rho}_{\mathcal{S}}\Big]+\sum_{j=0}^{D-1}\sum_{k>j}\gamma_{1,jk}\mathscr{D}\big[\hat{\sigma}_{jk}\big]\hat{\rho}_{\mathcal{S}}+\sum_{j=0}^{D-1}\sum_{k>j}\frac{\gamma_{\phi,jk}+\Gamma_{\mathrm{d},jk}}{2}\mathscr{D}\big[\hat{\sigma}_{z,jk}\big]\hat{\rho}_{\mathcal{S}}. \tag{54}\]

### Measurement-Induced Dephasing

We have already pointed out the connection between the diagonal terms of the effective master equation and the semiclassical rate equation. The more interesting phenomenon lies in the time evolution of the coherence terms. In particular, we see that the product of the dispersive shift and the imaginary part of \(\alpha_{a}\alpha_{b}^{*}\) induces a dephasing between each pair of energy levels of the qutrit. There are three factors that affect the measurement-induced dephasing rates:

1. A large readout drive leads to large coherent state amplitudes and thus a stronger dephasing during the measurement. From the point of view of information theory, since the field leaking out of the resonator will have a larger amplitude as well, we are more likely to gain useful information from the measurement, since the signal-to-noise ratio increases as the power of the readout signal goes up [32]. Nevertheless, our measurement of the coherent states leaked out of the resonator is subject to the quadrature uncertainty; thus, the measurement results will be distributed as Gaussians centered at \(\alpha_{g}\), \(\alpha_{e}\), or \(\alpha_{f}\). The uncertainty in the measurement will then lead to a random backaction on the qutrit conditioned on the measurement results. It is this random backaction that leads to the dephasing of the qutrit.

2. Related to the coherent state amplitudes is the readout frequency. As shown in Eq.(27)-(29), the field amplitudes are Lorentzian functions of the detuning. Hence, for the same drive strength \(\bar{a}_{\mathrm{in}}\), the amplitude built up inside the resonator will be the highest when the detuning is zero. However, we cannot drive all three dressed frequencies with zero detuning simultaneously, which means that we have to play with the readout frequency so that the separations among the three coherent states are maximized for state classification. Figure 1(b) shows the steady-state amplitudes of the resonator for some arbitrary readout frequency near the bare resonator frequency; in general, the complex steady-state amplitudes lie on a circle that goes through the origin of the phase plane. In addition, Figure 1(c) plots the distance between two coherent state amplitudes as a function of the readout frequency. As shown in Section IV, the distance between two coherent states determines the measurement rate.

3. A larger dispersive shift \(\chi_{\mathrm{qr}}\) will also lead to faster decoherence. This, again, can be argued from an information-theory point of view. The dispersive shift determines how well we can separate the three qutrit-dependent resonator frequencies; hence, the larger the dispersive shifts, the easier the state classification. However, as we have seen in the analysis of the qudit-resonator coupling, \(\chi_{\mathrm{qr}}\) is proportional to the square of the coupling coefficient and is approximately inversely proportional to the square of the detuning.
For the dispersive coupling to be valid, one cannot make the qutrit-resonator coupling arbitrarily large, thus limiting the amount of \(\chi_{\mathrm{qr}}\) realizable in practice. Furthermore, there is another ratio we can engineer to improve the state classification: the ratio between the dispersive shift \(\chi_{\mathrm{qr}}\) and the cavity decay rate \(\kappa\). The effect of \(\kappa\) is hidden in the expression of the steady-state amplitudes. As shown in Figure 1(d), the distance between the coherent states can be improved by making \(\chi_{\mathrm{qr}}>\kappa\).

## IV Effective Qutrit Stochastic Master Equation

An unconditioned master equation can be interpreted as the ensemble average of stochastic trajectories over all the possible measurement outcomes [33]. The combined system of the qudit and the resonator can be measured either actively by us or implicitly by the environment. In Section III, we ignored the information coming out of the resonator, which is equivalent to assuming that all measurements are implicitly made by the environment. To describe an active dispersive measurement by us on a qudit, we thus need to retrieve the information that has so far been averaged out. Since measurements are probabilistic in quantum mechanics, we introduce an effective SME to model the random measurement outcomes.

### Heterodyne Detection in the Transmission Mode

Now, we consider a qutrit coupled to the cavity dispersively. To read out the resonator state, we perform a heterodyne detection where the readout signal coming out of the resonator travels on the transmission line (i.e., the coaxial cables connecting the inside of the dilution refrigerator to the room-temperature electronics) and is mixed at room temperature with a strong local oscillator (LO) signal \[\hat{V}_{\mathrm{LO}}(t)=\frac{\hat{a}_{\mathrm{LO}}(t)e^{-\mathrm{i}\phi_{\mathrm{LO}}}+\hat{a}_{\mathrm{LO}}^{\dagger}(t)e^{\mathrm{i}\phi_{\mathrm{LO}}}}{2}\approx V_{\mathrm{LO}}\cos(\omega_{\mathrm{LO}}t-\phi_{\mathrm{LO}}), \tag{55}\] whose frequency \(\omega_{\mathrm{LO}}\) differs from the readout frequency \(\omega_{\mathrm{d}}\) by the intermediate frequency (IF) \(\omega_{\mathrm{IF}}\), e.g., \(\omega_{\mathrm{IF}}=\omega_{\mathrm{LO}}-\omega_{\mathrm{d}}\). In fact, in a typical IQ demodulation stage, the amplified readout signal is first divided equally in power and then mixed separately with two LO signals whose phases are \(90^{\circ}\) out of phase. Subsequently, the analog IF signals, passing through an analog-to-digital converter (ADC), are processed digitally and finally demodulated to DC (zero frequency) as a complex number (so that one can plot the measurement as a point in the phase plane). Unlike a homodyne detection where \(\omega_{\mathrm{LO}}=\omega_{\mathrm{d}}\), the heterodyne scheme allows us to measure both quadratures of the field at the same time (but still constrained by the uncertainty principle). In addition, by first moving to an IF frequency (\(\omega_{\mathrm{IF}}\sim 100\) MHz in our experiment), the signal experiences less \(1/f\) noise. We will, however, not attempt to model the analog or digital demodulation part of the heterodyne detection using quantum mechanics. Instead, we will work directly with the coherent state coming out of the resonator and assume that we can process the signal in the way described above and retrieve information about the coherent states subject to quantum-mechanical noise and imperfect measurement efficiency.
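As a toy illustration of the demodulation chain just described, the following sketch digitally demodulates a simulated IF record to DC; the sampling rate, IF frequency, and noise level are assumptions for illustration and not a model of the actual hardware.

```python
# Sketch of digital demodulation to DC (all parameters assumed).
import numpy as np

fs, f_if, T = 1.0e9, 100.0e6, 2.0e-6   # sampling rate, IF, record length
t = np.arange(0.0, T, 1.0 / fs)

# Toy IF record: amplitude A and phase phi carry the state information.
A, phi = 0.8, 0.4
v_if = A * np.cos(2 * np.pi * f_if * t - phi) + 0.1 * np.random.randn(t.size)

# Mix down to DC and time-average; the factor 2 restores the amplitude.
z = 2 * np.mean(v_if * np.exp(1j * 2 * np.pi * f_if * t))
I, Q = z.real, z.imag                  # approximately (A cos phi, A sin phi)
```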
For a fully quantum-mechanical description of the output chain, including filtering and amplification, see [22]. Unlike the qubit measurement, where the readout frequency is usually set to be \(\omega_{\mathrm{r}}+\chi_{\mathrm{qr}}/2\) so that the detunings seen by the readout probe are exactly \(\pm\chi_{\mathrm{qr}}/2\) and the information can be encoded in only one quadrature, there is no symmetry we can utilize to describe a general qudit measured using the heterodyne scheme. To analyze the information encoded in the complex amplitude of a coherent state, we first define (in the rotating frame of \(\omega_{\mathrm{d}}\)) \[\hat{I}_{\phi}=\frac{\hat{a}e^{-\mathrm{i}\phi}+\hat{a}^{\dagger}e^{\mathrm{i}\phi}}{2}\quad\text{and}\quad\hat{Q}_{\phi}=\frac{\hat{a}e^{-\mathrm{i}\phi}-\hat{a}^{\dagger}e^{\mathrm{i}\phi}}{2\mathrm{i}} \tag{56}\] to be the two quadrature operators of the resonator field, where \(\phi\) models the net phase coming from the cable delay and any rotation applied during data processing. Similarly, for the coherent states introduced in Section III.2, we define \[\bar{I}_{g}(t)=\mathrm{Re}(\alpha_{g}e^{-\mathrm{i}\phi}),\qquad\bar{Q}_{g}(t)=\mathrm{Im}(\alpha_{g}e^{-\mathrm{i}\phi}), \tag{57}\] \[\bar{I}_{e}(t)=\mathrm{Re}(\alpha_{e}e^{-\mathrm{i}\phi}),\qquad\bar{Q}_{e}(t)=\mathrm{Im}(\alpha_{e}e^{-\mathrm{i}\phi}), \tag{58}\] \[\bar{I}_{f}(t)=\mathrm{Re}(\alpha_{f}e^{-\mathrm{i}\phi}),\qquad\bar{Q}_{f}(t)=\mathrm{Im}(\alpha_{f}e^{-\mathrm{i}\phi}). \tag{59}\] We again first restrict to the qutrit case, since the generalization to a qudit is straightforward. Note, however, that the quadrature fields defined above live in the resonator, and what we observe is only the leakage of the resonator field into the transmission line. We denote the input and output traveling-wave annihilation operators at port \(i=1,2\) as \(\hat{a}_{\mathrm{in,out},i}\). Recall that associated with the QLE of the resonator at port 2 is the boundary condition [34; 35; 13] \[\hat{a}_{\mathrm{out}}(t)\doteq\hat{a}_{\mathrm{out},2}(t)=-\hat{a}_{\mathrm{in},2}(t)+\sqrt{\kappa_{\mathrm{out}}}\hat{a}(t)\approx\sqrt{\kappa_{\mathrm{out}}}\hat{a}(t), \tag{60}\] where we have dropped \(\hat{a}_{\mathrm{in},2}\) by assuming that the incoming signal at port 2 (i.e., the output port) is isolated by a well-designed circulator/isolator stage and is not amplified at the output stage (i.e., the HEMT is approximately unilateral). Since, in the transmission mode, the resonator is a two-port device, there should be another boundary condition for port 1 (i.e., the input port, see Figure 1(a)); in fact, we have been implicitly using it to define the drive \(\epsilon=\sqrt{\kappa_{\mathrm{in}}}\bar{a}_{\mathrm{in}}\) entering at port 1 of the resonator. Information leaking out from port 1 will be factored into the quantum efficiency \(\eta\) to be discussed. In the Schrodinger picture, the boundary condition at port 2 implies that the transmitted signal is in a coherent state with the complex amplitude \[\alpha_{\mathrm{out}}(t)=\sqrt{\kappa_{\mathrm{out}}}\alpha_{a}(t) \tag{61}\] if the resonator is in the coherent state \(|\alpha_{a}(t)\rangle\) for \(a\in\{g,e,f\}\). Moreover, since \(\hat{a}_{\mathrm{out}}^{\dagger}\hat{a}_{\mathrm{out}}\) is the outgoing photon flux, the average number of photons leaving the resonator from port 2 within an infinitesimally short time \(\Delta t\) is given by \[\bar{n}(t)=\kappa_{\mathrm{out}}\Delta t|\alpha_{a}(t)|^{2}. \tag{62}\]
In terms of the outgoing quadrature fields on the transmission line, we have \[I_{\mathrm{out}}(t)=\sqrt{\kappa_{\mathrm{out}}\Delta t}\,\bar{I}_{a}(t), \tag{63}\] \[Q_{\mathrm{out}}(t)=\sqrt{\kappa_{\mathrm{out}}\Delta t}\,\bar{Q}_{a}(t), \tag{64}\] where \(\bar{I}_{a}\) and \(\bar{Q}_{a}\) are approximately constant over a short \(\Delta t\). More precisely, it is the signal \[\hat{V}_{\mathrm{out}}(t)=\frac{\hat{a}_{\mathrm{out}}(t)e^{-\mathrm{i}(\omega_{\mathrm{d}}t+\phi)}+\hat{a}_{\mathrm{out}}^{\dagger}(t)e^{\mathrm{i}(\omega_{\mathrm{d}}t+\phi)}}{2} \tag{65}\] that is mixed with \(\hat{V}_{\mathrm{LO}}(t)\) and its out-of-phase copy \(\hat{V}_{\mathrm{LO}}(t-\pi/2\omega_{\mathrm{LO}})\). Nevertheless, it is not hard to see that Eq.(63) and (64), up to a constant scaling due to cable loss and amplification, are effectively what one would get after filtering the higher sideband of the mixer output at \(\omega_{\mathrm{d}}+\omega_{\mathrm{LO}}\) and demodulating the filtered signal digitally to DC. For this reason, we will treat \(I_{\mathrm{out}}\) and \(Q_{\mathrm{out}}\) as the measurement outcomes directly and omit calculations involving \(\hat{V}_{\mathrm{LO}}\). Any loss due to the nonideality of the mixer will be included in the quantum efficiency. In reality, however, measurements are not perfect and not all the information encoded in the photon flux can be captured; thus, the effective photon number we can measure is only \[\bar{n}_{\text{eff}}(t)=\eta\kappa\Delta t|\alpha_{a}(t)|^{2}, \tag{66}\] where \(\eta\in[0,1]\) is the measurement efficiency. Since \(\eta_{\text{r}}=\kappa_{\text{out}}/\kappa<1\), the efficiency \(\eta\) is naturally lowered by \(\eta_{\text{r}}\). Note that \(\eta_{\text{r}}\) also contains the effect of \(\kappa_{\text{int}}\) since photons lost internally are inaccessible to the detector. In addition, by performing a heterodyne detection, we automatically halve the efficiency due to the power division in an IQ mixer. Moreover, note that when \(\eta=0\), we would gain zero information about the system and could only talk about the behavior of the qutrit in an averaged sense; thus, the SME to be constructed should reduce to the unconditioned master equation in this limit, and we will verify this point later. Unlike the mean amplitude, the uncertainty/variance associated with a coherent state traveling on the transmission line is fixed (each quadrature has a variance of \(1/4\)), so the signal-to-noise ratio is proportional to \(\Delta t\); in other words, the photon shot noise can be effectively reduced by increasing the measurement time. Concretely, suppose the resonator is in one of the coherent states \(|\alpha_{a}\rangle\) associated with an energy eigenstate \(|a\rangle\) of the qutrit. Then, the _conditional_ probability density of measuring a particular point \((I,Q)\) in the phase plane with an integration time of \(\Delta t\), _given the qutrit state_ \(|a\rangle\), is the two-dimensional Gaussian \[f(I,Q|\hat{\rho}_{\mathcal{S}}(t)=|a\rangle\langle a|)\propto\exp\biggl[-\frac{1}{2}\frac{(I-\sqrt{\eta\kappa\Delta t}\,\bar{I}_{a})^{2}+(Q-\sqrt{\eta\kappa\Delta t}\,\bar{Q}_{a})^{2}}{1/4}\biggr] \tag{67}\] with a variance of \(1/4\) in each quadrature. We have also assumed that the two arms of the mixer output have the same conversion loss so that a single \(\eta\) suffices; in other words, the measurement is balanced.
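Eq.(67) is straightforward to sample. A minimal sketch, with placeholder coherent-state amplitudes and the processing phase \(\phi\) set to zero, generates the Gaussian cluster associated with each qutrit state:

```python
# Sampling (I, Q) outcomes from the conditional Gaussian of Eq. (67).
# The coherent-state amplitudes and rates are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
eta, kappa, dt = 0.04, 2.7, 3.0                 # efficiency, decay, int. time
alpha = {"g": 1.2 + 0.3j, "e": 0.8 - 0.5j, "f": 0.4 - 0.9j}
scale = np.sqrt(eta * kappa * dt)
sigma = 0.5                                      # variance 1/4 per quadrature

def sample_IQ(state, n):
    mean_I = scale * alpha[state].real           # sqrt(eta*kappa*dt) * I_bar
    mean_Q = scale * alpha[state].imag           # sqrt(eta*kappa*dt) * Q_bar
    return rng.normal(mean_I, sigma, n), rng.normal(mean_Q, sigma, n)

I_g, Q_g = sample_IQ("g", 5000)   # one Gaussian cluster per qutrit state
```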
More generally, if the qutrit is in a superposition state, then the cavity state, after tracing out the qutrit state, is given by \[\hat{\rho}_{\mathcal{R}}(t)=\text{Tr}_{\mathcal{S}}[\hat{\rho}_{\mathcal{SR}}(t)]=\sum_{a\in\{g,e,f\}}\rho_{\mathcal{S},aa}(t)\,|\alpha_{a}(t)\rangle\langle\alpha_{a}(t)|\,, \tag{68}\] as suggested by Eq.(20). Hence, the total conditional probability density of measuring \((I,Q)\) in the phase plane is now given by \[f(I,Q|\hat{\rho}_{\mathcal{SR}}(t))\propto\sum_{a\in\{g,e,f\}}\rho_{\mathcal{S},aa}(t)\exp\biggl[-\frac{1}{2}\frac{(I-\sqrt{\eta\kappa\Delta t}\,\bar{I}_{a})^{2}+(Q-\sqrt{\eta\kappa\Delta t}\,\bar{Q}_{a})^{2}}{1/4}\biggr]. \tag{69}\] Consequently, the entanglement generated by the dispersive coupling will project the qutrit state to a new state (possibly mixed) based on the measurement outcome \((I,Q)\) after \(\Delta t\). By introducing the integration time \(\Delta t\), we have discretized the continuous measurement into time steps \(t_{0},t_{1},t_{2},\ldots\) with a step size of \(\Delta t\). We formalize the measurement and the backaction induced by the measurement outcome by introducing, at each time step \(t_{k}\), a continuum of POVM elements [36; 26] \[\Bigl\{\hat{E}_{IQ}(t_{k})=\hat{K}_{IQ}^{\dagger}(t_{k})\hat{K}_{IQ}(t_{k})\ |\ (I,Q)\in\mathbf{R}^{2}\Bigr\} \tag{70}\] for the qutrit (i.e., the resonator is traced out in this description) with the Kraus operators \[\hat{K}_{IQ}(t_{k})=\mathscr{N}_{k}\!\!\sum_{a\in\{g,e,f\}}\!\!\exp\Bigg\{-\Bigl[I-\sqrt{\eta\kappa\Delta t}\,\bar{I}_{a}(t_{k})\Bigr]^{2}-\Bigl[Q-\sqrt{\eta\kappa\Delta t}\,\bar{Q}_{a}(t_{k})\Bigr]^{2}\Bigg\}\hat{\Pi}_{a} \tag{71}\] for any point \((I,Q)\) in the phase plane. The normalization constant \(\mathscr{N}_{k}\) can be found by imposing \[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\mathrm{d}I\mathrm{d}Q\,\hat{E}_{IQ}(t_{k})=1, \tag{72}\] as required by the completeness of the POVM. Using the Kraus operators, the probability density of measuring \((I,Q)\) is \[\text{Tr}\Bigl[\hat{\rho}_{\mathcal{S}}(t_{k})\hat{E}_{IQ}(t_{k})\Bigr]=\text{Tr}\Bigl[\hat{K}_{IQ}(t_{k})\hat{\rho}_{\mathcal{S}}(t_{k})\hat{K}_{IQ}^{\dagger}(t_{k})\Bigr], \tag{73}\] which, of course, must agree with Eq.(69). Furthermore, the post-measurement state _conditioned_ on the outcome \((I,Q)\) is \[\hat{\rho}_{\mathcal{S}}(t_{k+1})=\hat{\rho}_{\mathcal{S}}(t_{k}+\Delta t)=\frac{\hat{K}_{IQ}(t_{k})\hat{\rho}_{\mathcal{S}}(t_{k})\hat{K}_{IQ}^{\dagger}(t_{k})}{\text{Tr}\Bigl[\hat{K}_{IQ}(t_{k})\hat{\rho}_{\mathcal{S}}(t_{k})\hat{K}_{IQ}^{\dagger}(t_{k})\Bigr]}. \tag{74}\] We emphasize that \(\hat{\rho}_{\mathcal{S}}(t_{k+1})\) is the conditional reduced density operator and thus is _not_ the same as the reduced density operator used in the effective qutrit master equation before. Nevertheless, once averaged over all the possible measurement histories, we should reproduce the unconditional density operator. Furthermore, it should be clear from Eq.(74) that the series of quantum channels forms a Markov chain, making the entire mathematical formalism easier to deal with.

### Heuristic Derivation of the Qutrit Stochastic Master Equation

Based on Eq.(74), we look for a stochastic differential equation in the diffusive limit [37], i.e., \(\Delta t\to 0\). As shown in Figure 2, the three Gaussian clusters in Eq.(69) merge together to form approximately a single new Gaussian distribution as \(\Delta t\) becomes sufficiently small [38].
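Before taking the diffusive limit, the discrete-time update of Eq.(71) and (74) can be applied directly. A minimal sketch, with illustrative quadrature means, performs one weak-measurement step on a qutrit density matrix (the normalization constant \(\mathscr{N}_{k}\) cancels in Eq.(74)):

```python
# One discrete weak-measurement step: the Kraus operator of Eq. (71)
# followed by the state update of Eq. (74). Parameters are assumptions.
import numpy as np

def kraus_update(rho, I, Q, Ibar, Qbar, eta_kappa_dt):
    """rho: 3x3 density matrix; Ibar, Qbar: quadrature means per state."""
    s = np.sqrt(eta_kappa_dt)
    # K is diagonal in the qutrit basis: one Gaussian weight per level.
    w = np.exp(-(I - s * Ibar) ** 2 - (Q - s * Qbar) ** 2)
    K = np.diag(w)
    rho_new = K @ rho @ K.conj().T
    return rho_new / np.trace(rho_new).real      # Eq. (74)

rho = np.full((3, 3), 1.0 / 3.0)                 # an equal superposition state
Ibar = np.array([1.2, 0.8, 0.4])                 # illustrative means for g, e, f
Qbar = np.array([0.3, -0.5, -0.9])
rho = kraus_update(rho, 0.9, 0.1, Ibar, Qbar, eta_kappa_dt=0.05)
```

Because the Kraus operator is diagonal in the qutrit basis, repeated updates reweight the populations according to how close each outcome lands to the corresponding cluster while suppressing the coherences.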
Appendix D uses this approximation (which holds to first order in \(\Delta t\)) to reduce the Kraus operator to \[\hat{K}_{IQ}(t_{k})\approx\tilde{\mathscr{N}}_{k}\exp\biggl\{-\Bigl[I-\sqrt{\eta\kappa\Delta t}\,\hat{L}_{I}(t_{k})\Bigr]^{2}\biggr\}\times\exp\biggl\{-\Bigl[Q-\sqrt{\eta\kappa\Delta t}\,\hat{L}_{Q}(t_{k})\Bigr]^{2}\biggr\} \tag{75}\] in the diffusive limit, where the new operators \[\hat{L}_{I}(t)=\bar{I}_{g}(t)\hat{\Pi}_{g}+\bar{I}_{e}(t)\hat{\Pi}_{e}+\bar{I}_{f}(t)\hat{\Pi}_{f}, \tag{76}\] \[\hat{L}_{Q}(t)=\bar{Q}_{g}(t)\hat{\Pi}_{g}+\bar{Q}_{e}(t)\hat{\Pi}_{e}+\bar{Q}_{f}(t)\hat{\Pi}_{f} \tag{77}\] act as the Lindblad operators for measuring the \(I\) and \(Q\) quadratures, respectively. By expanding Eq.(74) using Eq.(75), one can derive (see Appendix D) the effective SME \[\mathrm{d}\hat{\rho}=\bigg(-\frac{\mathrm{i}}{\hbar}\Big[\hat{H}_{\mathrm{q,eff}},\hat{\rho}\Big]+\gamma_{1,ge}\mathscr{D}\Big[\hat{\sigma}_{ge}\Big]\hat{\rho}+\gamma_{1,gf}\mathscr{D}\Big[\hat{\sigma}_{gf}\Big]\hat{\rho}+\gamma_{1,ef}\mathscr{D}\Big[\hat{\sigma}_{ef}\Big]\hat{\rho}+\frac{\gamma_{\phi,ge}}{2}\mathscr{D}\Big[\hat{\sigma}_{z,ge}\Big]\hat{\rho}+\frac{\gamma_{\phi,gf}}{2}\mathscr{D}\Big[\hat{\sigma}_{z,gf}\Big]\hat{\rho}+\frac{\gamma_{\phi,ef}}{2}\mathscr{D}\Big[\hat{\sigma}_{z,ef}\Big]\hat{\rho}+\kappa\mathscr{D}\Big[\hat{L}_{I}\Big]\hat{\rho}+\kappa\mathscr{D}\Big[\hat{L}_{Q}\Big]\hat{\rho}\bigg)\,\mathrm{d}t+\sqrt{\eta\kappa}\mathscr{M}\big[\hat{L}_{I}\big]\hat{\rho}\,\mathrm{d}W_{I}+\sqrt{\eta\kappa}\mathscr{M}\big[\hat{L}_{Q}\big]\hat{\rho}\,\mathrm{d}W_{Q} \tag{78}\] for the conditional reduced density operator of the qutrit, with the heterodyne measurement outcomes (i.e., the complex signal demodulated to DC) encoded in two classical stochastic processes \[V_{I}(t)=\sqrt{\eta\kappa}\,\big(2\big\langle\hat{L}_{I}(t)\big\rangle\big)+\xi_{I}(t), \tag{79}\] \[V_{Q}(t)=\sqrt{\eta\kappa}\,\big(2\big\langle\hat{L}_{Q}(t)\big\rangle\big)+\xi_{Q}(t), \tag{80}\] where \(\langle\hat{c}\rangle=\mathrm{Tr}(\hat{\rho}\hat{c})\). The outcomes \(V_{I,Q}(t)\) are proportional to the \(I\) and \(Q\) signals measured by the ADC in the real experiment, but are rescaled to remove the \(\sqrt{\Delta t}\) in Eq.(63) and (64). Note that the subscript \(\mathcal{S}\) is dropped with the understanding that \(\hat{\rho}\) represents the conditional density operator of the qutrit only. In the SME, the measurement superoperator \(\mathscr{M}\big[\hat{L}\big]\) associated with an operator \(\hat{L}\) is defined via [37; 39] \[\mathscr{M}\big[\hat{L}\big]\hat{\rho}=\hat{L}\hat{\rho}+\hat{\rho}\hat{L}^{\dagger}-\bigl\langle\hat{L}+\hat{L}^{\dagger}\bigr\rangle\hat{\rho}. \tag{81}\] In addition, \(W_{I,Q}\) are two independent classical Wiener processes and \(\xi_{I,Q}(t)=\dot{W}_{I,Q}(t)\) are classical white-noise signals satisfying \[\mathbb{E}[\xi_{I}(t)]=\mathbb{E}[\xi_{Q}(t)]=\mathbb{E}[\xi_{I}(t)\xi_{Q}(t^{\prime})]=0, \tag{82}\] \[\mathbb{E}[\xi_{I}(t)\xi_{I}(t^{\prime})]=\mathbb{E}[\xi_{Q}(t)\xi_{Q}(t^{\prime})]=\delta(t-t^{\prime}). \tag{83}\] Formally, we should use Ito's rule, i.e., \(\mathrm{d}W_{I}^{2}=\mathrm{d}W_{Q}^{2}=\mathrm{d}t\) and \(\mathrm{d}W_{I}\mathrm{d}W_{Q}=0\) almost surely [40].

### Measurement Rates vs. Measurement-Induced Dephasing Rates

At first glance, the measurement-induced dephasing rates defined in Eq.(41)-(43) seem to be missed by the SME shown in Eq.(78).
In fact, \(\kappa\mathscr{D}\big[\hat{L}_{I}\big]\hat{\rho}+\kappa\mathscr{D}\big[\hat{L}_{Q}\big]\hat{\rho}\) does not show up in any of the unconditioned master equations introduced before, at least not obviously in its current form. Nevertheless, one can show that \[\kappa\mathscr{D}\big[\hat{L}_{I}\big]\hat{\rho}+\kappa\mathscr{D}\big[\hat{L}_{Q}\big]\hat{\rho}=\frac{\Gamma_{\mathrm{m},ge}}{4}\mathscr{D}\big[\hat{\sigma}_{z,ge}\big]\hat{\rho}+\frac{\Gamma_{\mathrm{m},gf}}{4}\mathscr{D}\big[\hat{\sigma}_{z,gf}\big]\hat{\rho}+\frac{\Gamma_{\mathrm{m},ef}}{4}\mathscr{D}\big[\hat{\sigma}_{z,ef}\big]\hat{\rho} \tag{84}\] by a simple rearrangement of the dissipation superoperators. In Eq.(84), \[\Gamma_{\mathrm{m},ab}(t)=\kappa|\beta_{ab}(t)|^{2} \tag{85}\] is the measurement rate associated with the operator \(\hat{\sigma}_{z,ab}\), and \(\beta_{ab}(t)=\alpha_{a}(t)-\alpha_{b}(t)\) represents the vector connecting the two resonator coherent states in the phase plane. Intuitively, the greater the separation between any two coherent states or the larger the resonator decay rate, the easier the state classification becomes and, thus, the higher the measurement rate.

Figure 2: **Illustration of a weak measurement in the diffusive limit.** As the integration time of each sample decreases, the three Gaussian distributions merge together and can be approximated by a single Gaussian. The mean vector of the approximated distribution is given by the centroid of the three original mean vectors (i.e., \((\bar{I}_{a},\bar{Q}_{a})\)) weighted by the probability of the corresponding qutrit state \(|a\rangle\).

Moreover, it turns out that the same rates \(\Gamma_{\mathrm{m},ab}\) also appear as the dephasing rates in the unconditioned master equation of the combined system in the _displaced_ frame (see Appendix B); thus, Eq.(78) can be related to the unconditioned master equation in a heuristic sense. Even though we have reproduced something that appeared in the displaced frame, we have still not derived the exact measurement-induced dephasing rates \(\Gamma_{\mathrm{d},ab}\) that appear in the unconditioned master equation of the qutrit. However, if we choose to use the steady-state values of \(\alpha_{g,e,f}\) listed in Eq.(27)-(29), we can show analytically that \[\Gamma_{\mathrm{m},ab}(+\infty)=2\Gamma_{\mathrm{d},ab}(+\infty) \tag{86}\] for \(a\neq b\); thus, the heterodyne measurement indeed induces a dephasing at rate \(\Gamma_{\mathrm{d},ab}\) between two energy levels of the qutrit at steady state. The steady-state behavior was observed in the qubit case [12], but our results imply the generality of such a relationship for a qutrit (and a qudit). Figure 3 provides a detailed comparison between \(\Gamma_{\mathrm{m},ge}\) and \(2\Gamma_{\mathrm{d},ge}\) before reaching the steady state. Such a discrepancy can exist since we have been assuming a Markovian system; from the derivation of the effective qutrit master equation, however, we know that the qutrit alone is not really Markovian, and the information lost in the transient of the resonator evolution is not fully recovered by the heterodyne detection. To capture the full measurement-induced dephasing using the quantum trajectory approach, one must include the resonator as part of the SME with the measurement operators acting on the resonator directly.
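The steady-state relation of Eq.(86) can also be checked numerically by integrating Eq.(21)-(23) and comparing Eq.(85) with twice Eq.(41); the parameters below are illustrative placeholders:

```python
# Numerical check of Eq. (86): Gamma_m,ge approaches 2*Gamma_d,ge at
# steady state. All parameters are illustrative.
import numpy as np

chi, kappa, Delta_rd, eps = 0.6, 2.7, 0.3, 1.0
dt, n_steps = 1e-3, 40000
alphas = np.zeros(3, dtype=complex)               # alpha_g, alpha_e, alpha_f
shifts = np.array([0.0, chi, 2 * chi])

for _ in range(n_steps):                          # crude Euler steps of Eq. (21)-(23)
    alphas += dt * (-1j * (Delta_rd + shifts - 1j * kappa / 2) * alphas
                    + 1j * eps)

a_g, a_e = alphas[0], alphas[1]
Gamma_m_ge = kappa * abs(a_g - a_e) ** 2          # Eq. (85)
Gamma_d_ge = chi * np.imag(a_g * np.conj(a_e))    # Eq. (41)
print(Gamma_m_ge, 2 * Gamma_d_ge)                 # agree once transients die out
```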
Appendix D.2 provides a derivation of the effective qutrit SME from the SME of the combined system, showing that we can simply replace \(\Gamma_{\mathrm{m},ab}(t)\) in the decoherence terms of Eq.(78) with \(2\Gamma_{\mathrm{d},ab}(t)\) as expected. Consequently, we arrive at the corrected effective qutrit stochastic master equation \[\mathrm{d}\hat{\rho}=\bigg(-\frac{\mathrm{i}}{\hbar}\Big[\hat{H}_{\mathrm{q,eff}},\hat{\rho}\Big]+\gamma_{1,ge}\mathscr{D}\big[\hat{\sigma}_{ge}\big]\hat{\rho}+\gamma_{1,gf}\mathscr{D}\big[\hat{\sigma}_{gf}\big]\hat{\rho}+\gamma_{1,ef}\mathscr{D}\big[\hat{\sigma}_{ef}\big]\hat{\rho}+\frac{\gamma_{\phi,ge}+\Gamma_{\mathrm{d},ge}}{2}\mathscr{D}\big[\hat{\sigma}_{z,ge}\big]\hat{\rho}+\frac{\gamma_{\phi,gf}+\Gamma_{\mathrm{d},gf}}{2}\mathscr{D}\big[\hat{\sigma}_{z,gf}\big]\hat{\rho}+\frac{\gamma_{\phi,ef}+\Gamma_{\mathrm{d},ef}}{2}\mathscr{D}\big[\hat{\sigma}_{z,ef}\big]\hat{\rho}\bigg)\,\mathrm{d}t+\sqrt{\eta\kappa}\mathscr{M}\big[\hat{L}_{I}\big]\hat{\rho}\,\mathrm{d}W_{I}+\sqrt{\eta\kappa}\mathscr{M}\big[\hat{L}_{Q}\big]\hat{\rho}\,\mathrm{d}W_{Q}, \tag{87}\] where \(\eta<1/2\) as a consequence of the heterodyne detection. Moreover, since \(\mathbb{E}(\mathrm{d}W_{I,Q})=0\), we indeed reproduce the effective qutrit master equation described in Section III.3 (at steady state) by taking the ensemble average of the SME.

### Generalization to the Measurement of a Qudit

The diffusive SME for the qutrit can be generalized to the qudit case. Besides adding more decoherence channels, we need to redefine the Lindblad operators for measurement: \[\hat{L}_{I}=\sum_{j=0}^{D-1}\bar{I}_{j}\hat{\Pi}_{j}\quad\text{and}\quad\hat{L}_{Q}=\sum_{j=0}^{D-1}\bar{Q}_{j}\hat{\Pi}_{j}, \tag{88}\] where the quadrature amplitudes of the \(j\)th coherent state are given by \[\bar{I}_{j}=\mathrm{Re}(\alpha_{j}e^{-\mathrm{i}\phi})\quad\text{and}\quad\bar{Q}_{j}=\mathrm{Im}(\alpha_{j}e^{-\mathrm{i}\phi}) \tag{89}\] for \(j=0,\ldots,D-1\). Then, the effective qudit SME is given by \[\mathrm{d}\hat{\rho}=-\frac{\mathrm{i}}{\hbar}\Big[\hat{H}_{\mathrm{q,eff}},\hat{\rho}\Big]\mathrm{d}t+\sum_{j=0}^{D-1}\sum_{k>j}\gamma_{1,jk}\mathscr{D}\Big[\hat{\sigma}_{jk}\Big]\hat{\rho}\,\mathrm{d}t+\sum_{j=0}^{D-1}\sum_{k>j}\frac{\gamma_{\phi,jk}+\Gamma_{\mathrm{d},jk}}{2}\mathscr{D}\Big[\hat{\sigma}_{z,jk}\Big]\hat{\rho}\,\mathrm{d}t+\sqrt{\eta\kappa}\mathscr{M}\big[\hat{L}_{I}\big]\hat{\rho}\,\mathrm{d}W_{I}+\sqrt{\eta\kappa}\mathscr{M}\big[\hat{L}_{Q}\big]\hat{\rho}\,\mathrm{d}W_{Q}, \tag{90}\] where \(\hat{\rho}\) represents the density operator of the qudit _conditioned_ on the measurement history. The measurement outcomes are still governed by Eq.(79) and (80), but \(\hat{L}_{I}\) and \(\hat{L}_{Q}\) are now defined by Eq.(88). Finally, taking the ensemble average of Eq.(90) gives the unconditioned master equation stated in Eq.(54).

### Unraveling of the SME

In Eq.(87) and (90), the heterodyne measurement is modeled using stochastic processes while the other decoherence terms are analyzed by taking the ensemble average.

Figure 3: **A comparison of the measurement-induced dephasing rates derived in the laboratory frame and the measurement rates found in the displaced frame.** Since we have assumed a Markovian system in the derivation of the SME, the measurement rates and measurement-induced dephasing rates are not exactly the same in the transient parts. However, when the system reaches its steady state, the two rates are equivalent.
Instead of having a mixture of quantum trajectories and averaged time evolution in the same equation, one can unravel the SME to write down a stochastic Schrodinger equation for the wave function. Since a mixed state is not allowed in a Schrodinger equation, all the decoherence channels must be described stochastically, i.e., by some random processes, resulting in infinitely many realizations of the wave function. The unravelings of an SME are not unique, but they must reproduce the decoherence terms in the SME after averaging over all the realizations of the random processes associated with the decoherence channels. One possible unraveling uses the quantum state diffusion mode, which works naturally with the diffusive limit used in describing the continuous measurement. Given the set of pairs \(\left\{\left(\hat{L}_{i},\gamma_{i}\right)\right\}_{i}\) with Lindblad operators \(\hat{L}_{i}\) and the associated rates \(\gamma_{i}\), the quantum state diffusion equation is given by \[\mathrm{d}\left|\Psi\right\rangle =-\frac{\mathrm{i}}{\hbar}\hat{H}_{\mathrm{eff}}\left|\Psi \right\rangle\mathrm{d}t+\sum_{i}\sqrt{\gamma_{i}}\Big{(}\hat{L}_{i}-\left< \hat{L}_{i}\right>\Big{)}\mathrm{d}W_{i}\] \[-\frac{1}{2}\sum_{i}\gamma_{i}\left(\hat{L}_{i}^{\dagger}\hat{L} _{i}+\left<\hat{L}_{i}\right>\!\left<\hat{L}_{i}^{\dagger}\right>-2\!\left< \hat{L}_{i}^{\dagger}\right>\!\hat{L}_{i}\right)\left|\Psi\right\rangle \mathrm{d}t, \tag{91}\] where \(\left<\hat{L}_{i}^{\dagger}\right>=\left<\Psi\right|\hat{L}_{i}^{\dagger} \left|\Psi\right>\) is the expectation value computed for a single realization of \(\left|\Psi\right>\) and \(\left\{W_{i}\right\}_{i}\) are independent Wiener processes satisfying \(\mathbb{E}[\mathrm{d}W_{i}(t)]=\mathbb{E}[\mathrm{d}W_{i}(t)\mathrm{d}W_{j}(t) ]=0\) for \(i\neq j\) and \(\mathbb{E}[\mathrm{d}W_{i}^{2}(t)]=\mathrm{d}t\). By substituting \[\left\{\left(\hat{L}_{i},\gamma_{i}\right)\right\}_{i}=\left\{\left(\hat{ \sigma}_{jk},\gamma_{1,jk}\right)\!,\left(\hat{\sigma}_{z,jk},\gamma_{\phi,jk }/2\right)\right\}_{k>j} \tag{92}\] into Eq.(91) and using the properties of the Wiener processes, one can easily compute \(\mathbb{E}[\mathrm{d}\hat{\rho}]\) and obtain the decoherence terms in Eq.(90). In other words, the conditioning of the wave function on the realizations of the Wiener processes is marginalized to create a mixed state of the qudit. Moreover, the random processes \(W_{I}\) and \(W_{Q}\) can be treated as a part of Eq.(91) by adding \(\left(\hat{L}_{I},\sqrt{\eta\kappa}\right)\), \(\left(\hat{L}_{I},\sqrt{(1-\eta)\kappa}\right)\), \(\left(\hat{L}_{Q},\sqrt{\eta\kappa}\right)\) and \(\left(\hat{L}_{Q},\sqrt{(1-\eta)\kappa}\right)\) into Eq.(92) with independent Wiener processes \(W_{I}\), \(W_{I}^{\prime}\), \(W_{Q}\), and \(W_{Q}^{\prime}\). Then, the full SME (in the steady state) can be found by only taking the expectation with respect to \(W_{I}^{\prime}\) and \(W_{Q}^{\prime}\), thus producing the correct measurement efficiency (see Appendix D for modeling an imperfect measurement). Figure 4: **Monte Carlo simulation of the qutrit stochastic master equation with an initial state given by Eq.(93).** Three sample trajectories are shown in the first three columns. Then, a thousand trajectories are averaged to produce the last column. The first two rows plot the matrix elements of the qutrit density operator. The last row shows the von Neumann entropy \(S(\hat{\rho})=-\operatorname{Tr}(\hat{\rho}\ln\hat{\rho})\), which peaks when the coherence drops to zero. 
## V Simulations and Experiments

### Quantum Trajectories and Ensemble Averages

Eq.(87) can be simulated using finite differences just like an ordinary differential equation. However, since the measurement is random, each finite-difference simulation of Eq.(87) must be accompanied by a specific realization of \(W_{I}\) and \(W_{Q}\). In other words, Eq.(87) can only be simulated in the Monte-Carlo sense. In addition, since a finite-difference simulation discretizes the time into steps of size \(\Delta t\), one needs to draw \(\Delta W_{I}\) and \(\Delta W_{Q}\) from two independent Gaussian distributions, both with mean zero and variance \(\Delta t\) (a minimal integration step is sketched below). For example, consider an arbitrary qutrit state \[\hat{\rho}_{\mathcal{S}}(0)=\begin{pmatrix}0.5&0.3&0.36\\ 0.3&0.2&0.24\\ 0.36&0.24&0.3\end{pmatrix}. \tag{93}\] If we repeat the simulation with this initial state a thousand times, we will obtain a thousand distinct quantum trajectories of the qutrit. Three sample trajectories are shown in Figure 4, along with the corresponding von Neumann entropy. The last column of the figure gives the sample-averaged state and entropy. Indeed, the sample-averaged population and coherence agree with the ensemble-averaged ones shown in Figure 1(e)-(f), verifying that the unconditioned master equation is the expectation of the SME. What is not obvious from the unconditioned master equation is the convergence of the qutrit state to one of the energy eigenstates, as shown in the first row of Figure 4. Although the qutrit is measured continuously and weakly, each infinitesimal measurement can push the qutrit to a new state; consequently, due to the nature of the dispersive coupling, the qutrit will slowly converge to one of the pointer states \(|g\rangle\), \(|e\rangle\), and \(|f\rangle\). For the qubit SME in the long-\(T_{1}\) limit, one can construct a Lyapunov function to show the convergence of the qubit state to either \(|g\rangle\) or \(|e\rangle\) when the readout probe is sent at \(\omega_{\mathrm{r}}+\chi_{\mathrm{qr}}/2\) [37]. However, for a qutrit or a qudit, no such Lyapunov function is known to the best of the authors' knowledge. Despite the fact that each trajectory converges to an energy eigenstate, the sample-averaged populations are still flat, as shown in the last column of Figure 4, which is a manifestation of the QND nature of dispersive measurement. Since the dispersive measurements on average do not modify the populations of the qutrit levels, we can conclude that the measurement outcomes on average are a faithful reproduction of the underlying qutrit state. However, when \(\gamma_{1,ab}\) are finite, the convergence of the qutrit is only temporary, even if the quantum trajectories appear to have converged to an eigenstate in a short amount of time. We can gain more insight by making the simulation longer, which effectively increases the measurement time. As shown in Figure 5, if we set the measurement time to be longer than \(1/\gamma_{1,ab}\), we observe a shift of the populations in the phase plane. In other words, the sample-averaged populations, i.e., \(\rho_{gg}\), \(\rho_{ee}\), and \(\rho_{ff}\), are no longer constant, as they appeared to be in Figure 4, if we extend the simulation time.
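A minimal Euler-Maruyama update for Eq.(87) can be sketched as follows; it works in the long-\(T_{1}\) limit (the decay dissipators are dropped), omits the Hamiltonian commutator, and freezes the measurement-induced dephasing rates at assumed constants instead of driving them with Eq.(21)-(23). All numbers are illustrative placeholders.

```python
# A minimal Euler-Maruyama step for Eq. (87) in the long-T1 limit:
# decay dissipators and the Hamiltonian commutator are omitted, and the
# dephasing rates are frozen at assumed constants for brevity.
import numpy as np

rng = np.random.default_rng(1)

def D(L, rho):   # dissipation superoperator, Eq. (11)
    return L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho
                                         + rho @ L.conj().T @ L)

def M(L, rho):   # measurement superoperator, Eq. (81)
    return L @ rho + rho @ L.conj().T - np.trace((L + L.conj().T) @ rho) * rho

eta, kappa, dt = 0.04, 2.7, 1e-3
L_I = np.diag([1.2, 0.8, 0.4])          # Eq. (76) with assumed I-quadrature means
L_Q = np.diag([0.3, -0.5, -0.9])        # Eq. (77) with assumed Q-quadrature means
Gd = {"ge": 0.5, "gf": 1.6, "ef": 0.5}  # frozen gamma_phi + Gamma_d (assumed)
sz = {"ge": np.diag([1.0, -1.0, 0.0]),
      "gf": np.diag([1.0, 0.0, -1.0]),
      "ef": np.diag([0.0, 1.0, -1.0])}

rho = np.full((3, 3), 1.0 / 3.0, dtype=complex)  # equal superposition state
for _ in range(5000):
    dW_I, dW_Q = rng.normal(0.0, np.sqrt(dt), 2)
    drho = sum(0.5 * Gd[k] * D(sz[k], rho) for k in Gd) * dt
    drho += np.sqrt(eta * kappa) * (M(L_I, rho) * dW_I + M(L_Q, rho) * dW_Q)
    rho += drho
    rho = 0.5 * (rho + rho.conj().T)             # enforce Hermiticity
    rho /= np.trace(rho).real                    # enforce unit trace
# rho now follows one conditional trajectory; adding the gamma_1 dissipators
# to drho would reproduce the slow population shifts discussed above.
```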
Hence, in practice, the QND nature of the measurement is limited by the decay rates, and we cannot simply improve the signal-to-noise ratio by increasing the measurement time arbitrarily. Furthermore, by looking at the time evolution of the qutrit populations, we observe jumps when the measurement time is sufficiently long. Similar behavior of the quantum trajectories was discussed for the qubit case in [12]; for a general qudit, as long as \(\gamma_{1,jk}\) is nonzero, similar jumps will happen so that the qudit state can eventually reach the ground state, as required by the unconditioned master equation.

### Measurement Shot Noise

To validate the qutrit SME, a real dispersive measurement is performed on a transmon qutrit with \(\tilde{\omega}_{\mathrm{q}}/2\pi=4.48\) GHz and \(|\alpha_{\mathrm{q}}|/2\pi\approx 280\) MHz coupled to a 3D aluminum cavity with \(\omega_{\mathrm{r}}/2\pi=6.7835\) GHz when the qutrit is in the ground state. The qutrit decay times have been characterized to be longer than 30 µs for \(T_{1,ge}\), \(T_{1,gf}\), and \(T_{1,ef}\), and the maximum readout time used in the experiment is much shorter than the decay time to ensure the validity of the QND measurement. In addition, the shortest dephasing time is measured to be \(T_{2,gf}=1/\gamma_{2,gf}=3\) µs, which does not play a significant role since the measurement-induced dephasing happens at a much higher rate, as can be seen in the second row of Figure 4.

Figure 5: **Effect of qutrit decay in a long heterodyne measurement.** A thousand random quantum trajectories have been simulated with a measurement time \(T=40\) µs and with the qutrit parameters \(1/\gamma_{1,ge}=1/\gamma_{1,ef}=35\) µs and \(1/\gamma_{1,gf}=1\) ms. **a**. For each quantum trajectory, the heterodyne measurement outcomes \(V_{I}(t)\) and \(V_{Q}(t)\) are averaged over the measurement time \(T\) and are plotted as a point in the \(I\)-\(Q\) plane. Besides the three Gaussian clusters expected from a QND measurement, we also observe streams of points connecting the three clusters, which represent the leakage of population from the excited states to the ground state. **b**. When the measurement time is on the order of the decay time, the random process exhibits a jumping characteristic on the large time scale.

In all the subsequent experiments, the qutrit is excited to an equal superposition state by two consecutive \(\pi\)-pulses to create three equally weighted clusters in the phase plane. We include two types of comparison between the experiment and theory. The first one is on the measurement time, showing the reduction of the measurement uncertainty as more information leaks out of the readout cavity. The second comparison tests how accurately the theory can predict the steady-state amplitudes of the coherent states as a function of the readout detuning. In a real dispersive measurement, what we have access to are the transmitted signals \(V_{I}(t)\) and \(V_{Q}(t)\). By calculating the time averages of \(V_{I}(t)\) and \(V_{Q}(t)\) (possibly with weighting functions to account for the transient behavior), denoted by \(\tilde{V}_{I}\) and \(\tilde{V}_{Q}\), we obtain a complex number \(\tilde{V}_{I}+\mathrm{i}\tilde{V}_{Q}\), which can be plotted on a phase plane. Repeating the experiment a large number of times generates a scatter plot whose distribution reflects the qutrit state. One can then compare the simulated \(\tilde{V}_{I}+\mathrm{i}\tilde{V}_{Q}\) with the one measured in experiments to verify the correctness of the SME.
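The time-averaging procedure itself is simple to sketch. Assuming, for illustration, a trajectory pinned near \(|g\rangle\) so that \(\langle\hat{L}_{I,Q}\rangle\) are constant, the discretized white noise in Eq.(79) and (80) has variance \(1/\Delta t\) per sample, so the variance of the time average shrinks as \(1/T\):

```python
# Sketch: one time-averaged point of the scatter plot from simulated
# measurement records, Eq. (79)-(80). All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(2)
eta_kappa, dt, T = 0.04 * 2.7, 1e-3, 3.0
n = int(T / dt)

LI_mean, LQ_mean = 1.2, 0.3            # <L_I>, <L_Q> for a trajectory near |g>
V_I = 2 * np.sqrt(eta_kappa) * LI_mean + rng.normal(0, 1 / np.sqrt(dt), n)
V_Q = 2 * np.sqrt(eta_kappa) * LQ_mean + rng.normal(0, 1 / np.sqrt(dt), n)

V_tilde = np.mean(V_I) + 1j * np.mean(V_Q)   # one point in the phase plane
# Repeating this for many trajectories builds up the Gaussian clusters.
```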
In Figure 6(a), we vary the measurement time up to 3 µs to verify the effect of the shot noise due to the quadrature uncertainty of the coherent states and the broadening due to the measurement inefficiency. As suggested by Eq.(79) and (80), by increasing the measurement time and computing a more accurate time average of \(V_{I,Q}\), one will have access to the value of \(\left\langle\hat{L}_{I,Q}(t)\right\rangle\) up to small variations in the transient part of \(\hat{L}_{I,Q}(t)\). Since \(\hat{L}_{I,Q}(t)\) is a superposition of the projection operators weighted by the quadrature amplitudes of the resonator coherent states, the qutrit state (which determines the probability amplitude of each energy level) directly affects the chance that one of the projection operators \(\hat{\Pi}_{a}\) is picked, in the frequentist's point of view. Hence, \(\left\langle\hat{L}_{I,Q}(t)\right\rangle\) encodes the information about the qutrit state, and a longer measurement time effectively increases the signal-to-noise ratio for measuring \(\left\langle\hat{L}_{I,Q}(t)\right\rangle\), as illustrated in Figure 6(a).

Figure 6: **Comparison between the simulation and experiment.** The total cavity decay rate is measured to be \(\kappa/2\pi=2.7\) MHz and the dispersive shift is \(\chi_{\mathrm{qr}}/2\pi=0.6\) MHz. The same parameters are used in the simulation with the measurement efficiency set to \(\eta=0.04\) (the experiment is performed without a quantum-limited amplifier). The quadrature signals are collected by an AlazarTech digitizer with a sampling frequency of 1 GHz, which can be treated as \(1/\Delta t\) in the derivation of the diffusive SME. **a.** Changing the measurement time \(T\). **b.** Sweeping the readout frequency \(f_{\mathrm{d}}\) (i.e., the cavity drive frequency).

### Readout Frequency

With the readout time fixed at 3 µs, we sweep the readout frequency and observe the change in the amplitude and phase of the (time-averaged) transmitted signal. A comparison between the simulated and measured phase planes is shown in Figure 6(b). As predicted by Figure 1(b), the center of each Gaussian cluster lies on a circle that goes through the origin of the phase plane. When the readout frequency is far away from any of the state-dependent resonator frequencies (i.e., \(\omega_{\mathrm{r}}+j\chi_{\mathrm{qr}}\) for \(j=0,1,2\)), the three Gaussian clusters overlap one another, making the state classification impossible. However, this also means that the qutrit states are well protected (both in terms of the population and coherence) since the rate at which information can leak out of the readout cavity at the steady state is proportional to the separation between the clusters (see Eq.(85) and (86)). For a transmon-type qutrit where the dispersive shifts are 0, \(\chi_{\mathrm{qr}}\), and \(2\chi_{\mathrm{qr}}\), we can set the readout frequency to be \(\omega_{\mathrm{r}}+\chi_{\mathrm{qr}}\) so that the three Gaussian clusters will be symmetrically placed in the phase plane, as shown in the last column of Figure 6(b). However, for a general qudit with unequally-spaced dispersive shifts, there is no symmetry one can utilize, which makes the SME simulation especially useful. Given that the simulation and experiment can be well matched, one can now find an optimal set of design and experimental parameters that corresponds to a large separation between the clusters and thus gives the best state classification.
In addition, other parameters, such as the measurement efficiency, can be determined for each experimental setup; subsequently, the simulation used to design the qudits can adopt the same measurement efficiency to predict the outcomes of the heterodyne detection.

## VI Conclusion

In this work, we have introduced and solved the unconditioned master equation describing the dispersive coupling between a qudit and a resonator. Consequently, the concepts of measurement-induced dephasing and frequency shifts are extended to a qutrit and a general qudit. Two approaches employed in the qubit analysis, the positive \(P\)-representation and the displaced frame, have been adopted to solve the unconditioned master equation for the qudit for the first time. Unlike the qubit case, where the shift of the transition frequency is unconstrained, we observe a non-Markovian nature of the dispersive measurement for qudits with \(D\geq 3\). In the presence of negligible frequency shifts due to the readout probe, we arrive at an effective (unconditioned) master equation for a qudit.

Given the analysis of the unconditioned qudit state, we then study the conditioned qudit state, where the information leaving the combined system (i.e., the transmitted readout signal) is retrieved and processed in both the analog and digital domains. In particular, we focus on heterodyne detection, where both quadratures of the coherent signals traveling along the transmission line can be measured with the efficiency halved. In the diffusive limit, where each measurement has an infinitesimal integration time, we can model the continuous monitoring of the qudit by a stochastic master equation (SME). In addition, the measurement outcomes of the two quadratures are described by two classical stochastic differential equations driven by the same Wiener processes that appear in the SME. In this way, the quantum dynamics of the qudit are related to the classical information about the coherent signal, allowing us to depict the stochastic nature of quantum measurement.

Finally, we compare the simulation of the SME with real experiments on a transmon qutrit coupled to a readout cavity. The accuracy of the model is tested with a sweep of the measurement time and of the readout frequency. Both comparisons demonstrate the precision of the SME in predicting the formation of clusters in the phase plane. Since our model does not rely on the type of qudit, it can be applied to the measurement of other new superconducting qudit topologies, such as a multimon. In practice, the analytical solution to the unconditioned master equation and the simulation of the SME can provide critical guidance for qudit design and readout, enabling rapid prediction of the location of the clusters in the phase plane and providing a potential theoretical foundation for understanding the role of weak measurement in quantum feedback control.

###### Acknowledgements.

All simulations of the density operators were performed using QuTiP and NumPy. The work at the University of California, Los Angeles (UCLA) is supported by the National Science Foundation (NSF) Graduate Research Fellowship Program under grant number DGE-2034835, the NSF Graduate Research Traineeship on Quantum Science and Engineering at UCLA under grant number DGE-2125924, the Office of Naval Research, and the Army Research Office. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF, the Office of Naval Research, and the Army Research Office.
The work at Lawrence Livermore National Laboratory is performed under Contract No. DE-AC52-07NA27344 by the U.S. Department of Energy. ## Appendix A Solving the Master Equation of the Combined System (Qutrit + Resonator) ### Zero Temperature To solve the master equation, we project the density operator of the composite system onto the energy eigenbasis of the qutrit and thus introduce the operators \[\hat{\rho}_{ab}(t)=\bra{a}\hat{\rho}_{\mathcal{SR}}(t)\ket{b} \tag{16}\] for \(a,b\in\{g,e,f\}\); in other words, the reduced density operator can be decomposed into \[\hat{\rho}_{\mathcal{SR}}(t) =\hat{\rho}_{gg}\ket{g}\bra{g}+\hat{\rho}_{ge}\ket{g}\bra{e}+ \hat{\rho}_{gf}\ket{g}\bra{f}\] \[\quad+\hat{\rho}_{eg}\ket{e}\bra{g}+\hat{\rho}_{ee}\ket{e}\bra{e }+\hat{\rho}_{ef}\ket{e}\bra{f}\] \[\quad+\hat{\rho}_{fg}\ket{f}\bra{g}+\hat{\rho}_{fe}\ket{f}\bra{e }+\hat{\rho}_{ff}\ket{f}\bra{f}, \tag{17}\] and, after expanding the master equation using the nine operators, we obtain nine coupled _operator_ differential equations \[\dot{\hat{\rho}}_{gg} =-\mathrm{i}\Delta_{\mathrm{rd}}\left[\hat{a}^{\dagger}\hat{a}, \hat{\rho}_{gg}\right]\] \[\quad+\mathrm{i}\left[\epsilon\hat{a}^{\dagger}+\epsilon^{*}\hat {a},\hat{\rho}_{gg}\right]+\kappa\mathcal{D}[\hat{a}]\hat{\rho}_{gg}, \tag{18}\] \[\dot{\hat{\rho}}_{ee} =-\mathrm{i}(\chi_{\mathrm{qr}}+\Delta_{\mathrm{rd}})\left[\hat {a}^{\dagger}\hat{a},\hat{\rho}_{ee}\right]\] \[\quad+\mathrm{i}\left[\epsilon\hat{a}^{\dagger}+\epsilon^{*}\hat {a},\hat{\rho}_{ee}\right]+\kappa\mathcal{D}[\hat{a}]\hat{\rho}_{ee}, \tag{19}\] \[\dot{\hat{\rho}}_{ff} =-\mathrm{i}(2\chi_{\mathrm{qr}}+\Delta_{\mathrm{rd}})\left[ \hat{a}^{\dagger}\hat{a},\hat{\rho}_{ff}\right]\] \[\quad+\mathrm{i}\left[\epsilon\hat{a}^{\dagger}+\epsilon^{*}\hat {a},\hat{\rho}_{ff}\right]+\kappa\mathcal{D}[\hat{a}]\hat{\rho}_{ff}, \tag{20}\] \[\dot{\hat{\rho}}_{ge} =\mathrm{i}\tilde{\omega}_{\mathrm{q}}\hat{\rho}_{ge}-\mathrm{i} \Delta_{\mathrm{rd}}\left[\hat{a}^{\dagger}\hat{a},\hat{\rho}_{ge}\right]+ \mathrm{i}\chi_{\mathrm{qr}}\hat{\rho}_{ge}\hat{a}^{\dagger}\hat{a}\] \[\quad+\mathrm{i}\left[\epsilon\hat{a}^{\dagger}+\epsilon^{*}\hat {a},\hat{\rho}_{ge}\right]+\kappa\mathcal{D}[\hat{a}]\hat{\rho}_{ge}-\gamma_{2,ge}\hat{\rho}_{ge}, \tag{21}\] \[\dot{\hat{\rho}}_{eg} =-\mathrm{i}\tilde{\omega}_{\mathrm{q}}\hat{\rho}_{eg}+\mathrm{i} \Delta_{\mathrm{rd}}\left[\hat{a}^{\dagger}\hat{a},\hat{\rho}_{eg}\right]- \mathrm{i}\chi_{\mathrm{qr}}\hat{\rho}_{eg}\hat{a}^{\dagger}\hat{a}\] \[\quad-\mathrm{i}\left[\epsilon\hat{a}^{\dagger}+\epsilon^{*}\hat{ a},\hat{\rho}_{eg}\right]+\kappa\mathfrak{D}[\hat{a}]\hat{\rho}_{eg}-\gamma_{2,eg} \hat{\rho}_{eg}, \tag{100}\] \[\dot{\hat{\rho}}_{eg} =-\mathrm{i}(2\tilde{\omega}_{\mathrm{q}}+\alpha_{\mathrm{q}}) \hat{\rho}_{fg}+\mathrm{i}\Delta_{\mathrm{rd}}\left[\hat{a}^{\dagger}\hat{a}, \hat{\rho}_{gf}\right]\] \[\quad+\mathrm{i}\mathcal{I}\chi_{\mathrm{qr}}\hat{\rho}_{fg}\hat {a}^{\dagger}\hat{a}+\mathrm{i}\left[\epsilon\hat{a}^{\dagger}+\epsilon^{*} \hat{a},\hat{\rho}_{gf}\right]\] \[\quad+\kappa\mathfrak{D}[\hat{a}]\hat{\rho}_{gf}-\gamma_{2,gf} \hat{\rho}_{gf}, \tag{101}\] \[\dot{\hat{\rho}}_{eg} =-\mathrm{i}(2\tilde{\omega}_{\mathrm{q}}+\alpha_{\mathrm{q}}) \hat{\rho}_{fg}+\mathrm{i}\Delta_{\mathrm{rd}}\left[\hat{a}^{\dagger}\hat{a}, \hat{\rho}_{gf}\right]\] \[\quad-\mathrm{i}\mathcal{I}\chi_{\mathrm{qr}}\hat{\rho}_{fg}\hat {a}^{\dagger}\hat{a}-\mathrm{i}\left[\epsilon\hat{a}^{\dagger}+\epsilon^{*} \hat{a},\hat{\rho}_{fg}\right]\] 
\[\quad+\kappa\mathfrak{D}[\hat{a}]\hat{\rho}_{fg}-\gamma_{2,fg} \hat{\rho}_{fg}, \tag{102}\] \[\dot{\hat{\rho}}_{ef} =\mathrm{i}(\tilde{\omega}_{\mathrm{q}}+\alpha_{\mathrm{q}}) \hat{\rho}_{ef}-\mathrm{i}\Delta_{\mathrm{rd}}\left[\hat{a}^{\dagger}\hat{a}, \hat{\rho}_{ef}\right]\] \[\quad+\mathrm{i}\chi_{\mathrm{qr}}(2\hat{\rho}_{ef}\hat{a}^{ \dagger}\hat{a}-\hat{a}^{\dagger}\hat{a}\hat{\rho}_{ef})\] \[\quad+\mathrm{i}\left[\epsilon\hat{a}^{\dagger}+\epsilon^{*}\hat {a},\hat{\rho}_{ef}\right]+\kappa\mathfrak{D}[\hat{a}]\hat{\rho}_{ef}-\gamma_ {2,ef}\hat{\rho}_{ef}, \tag{103}\] Note that each operator \(\hat{\rho}_{ab}\) lives in an infinite dimensional space since \(\mathscr{H}_{\mathbb{R}}\) is a Fock space. Nevertheless, it's possible to find a closed-form solution by invoking the positive \(P\)-representation \[\hat{\rho}_{ab}(t)=\int\mathrm{d}^{2}\alpha\int\mathrm{d}^{2}\beta\frac{| \alpha\rangle\langle\beta^{*}|}{\langle\beta^{*}|\alpha\rangle}P_{ab}(\alpha, \beta,t). \tag{104}\] The reader can verify that the action of the creation and annihilation operators in the operator space can be translated to some simple operations in the positive \(P\)-representation: \[\hat{a}\hat{\rho}(t) \longrightarrow \alpha P(\alpha,\beta,t), \tag{105}\] \[\hat{a}^{\dagger}\hat{\rho}(t) \longrightarrow \left(\beta-\frac{\partial}{\partial\alpha}\right)P(\alpha,\beta,t),\] (106) \[\hat{\rho}(t)\hat{a}^{\dagger} \longrightarrow \beta P(\alpha,\beta,t),\] (107) \[\hat{\rho}(t)\hat{a} \longrightarrow \left(\alpha-\frac{\partial}{\partial\beta}\right)P(\alpha, \beta,t). \tag{108}\] As an example, let us use Eq.(105)-(108) to transform Eq.(105) into a scalar equation: \[\dot{P}_{gg} =-\mathrm{i}\Delta_{\mathrm{rd}}\left[\left(\beta-\frac{\partial }{\partial\alpha}\right)\left(\alpha P_{gg}\right)-\left(\alpha-\frac{\partial }{\partial\beta}\right)\left(\beta P_{gg}\right)\right]\] \[\quad+\mathrm{i}\bigg{[}\epsilon\left(\beta-\frac{\partial}{ \partial\alpha}\right)P_{gg}+\epsilon^{*}\alpha P_{gg}\] \[\quad\quad\quad\quad-\epsilon\beta P_{gg}-\epsilon^{*}\left( \alpha-\frac{\partial}{\partial\beta}\right)P_{gg}\bigg{]}\] \[\quad\quad+\kappa\bigg{[}\alpha\beta P_{gg}-\frac{1}{2}\left(\beta -\frac{\partial}{\partial\alpha}\right)\left(\alpha P_{gg}\right)\] \[\quad\quad\quad\quad-\frac{1}{2}\left(\alpha-\frac{\partial}{ \partial\beta}\right)\left(\beta P_{gg}\right)\bigg{]}\] \[=\frac{\partial}{\partial\alpha}\left[\left(-\mathrm{i}\epsilon+ \mathrm{i}\Delta_{\mathrm{rd}}\alpha+\kappa\alpha/2\right)P_{gg}\right]\] \[\quad\quad+\frac{\partial}{\partial\beta}\left[\left(\mathrm{i} \epsilon^{*}-\mathrm{i}\Delta_{\mathrm{rd}}\beta+\frac{\kappa\beta}{2}\right)P_ {gg}\right], \tag{109}\] \[\dot{P}_{ee} =\frac{\partial}{\partial\alpha}\left[(-\mathrm{i}\epsilon+ \mathrm{i}\chi_{\mathrm{qr}}\alpha+\mathrm{i}\Delta_{\mathrm{rd}}\alpha+\kappa \alpha/2)P_{ee}\right]\] \[\quad+\frac{\partial}{\partial\beta}\left[(\mathrm{i}\epsilon^{*} -\mathrm{i}\chi_{\mathrm{qr}}\beta-\mathrm{i}\Delta_{\mathrm{rd}}\beta+\kappa \beta/2)P_{ee}\right], \tag{110}\] \[\dot{P}_{ff} =\frac{\partial}{\partial\alpha}\left[(-\mathrm{i}\epsilon+ \mathrm{i}2\chi_{\mathrm{qr}}\alpha+\mathrm{i}\Delta_{\mathrm{rd}}\alpha+ \kappa\alpha/2)P_{ff}\right]\] \[\quad+\frac{\partial}{\partial\beta}\left[(\mathrm{i}\epsilon^{*} -\mathrm{i}2\chi_{\mathrm{qr}}\beta-\mathrm{i}\Delta_{\mathrm{rd}}\beta+\kappa \beta/2)P_{ff}\right], \tag{111}\] \[\dot{P}_{ge} =\frac{\partial}{\partial\alpha}\left[(-\mathrm{i}\epsilon+ 
\mathrm{i}\Delta_{\mathrm{rd}}\alpha+\kappa\alpha/2)P_{ge}\right]\] \[\quad+\frac{\partial}{\partial\beta}\left[(\mathrm{i}\epsilon^{*} -\mathrm{i}\chi_{\mathrm{qr}}\beta-\mathrm{i}\Delta_{\mathrm{rd}}\beta+\kappa \beta/2)P_{ge}\right]\] \[\quad+\mathrm{i}\chi_{\mathrm{qr}}\alpha\beta P_{ge}+\mathrm{i} \tilde{\omega}_{\mathrm{q}}P_{ge}-\gamma_{2,ge}P_{ge}, \tag{112}\] \[\dot{P}_{eg} =\frac{\partial}{\partial\alpha}\left[(-\mathrm{i}\epsilon+\mathrm{i} \chi_{\mathrm{qr}}\alpha+\mathrm{i}\Delta_{\mathrm{rd}}\alpha+\kappa\alpha/2)P_{eg}\right]\] \[\quad+\frac{\partial}{\partial\beta}\left[(\mathrm{i}\epsilon^{*} -\mathrm{i}\Delta_{\mathrm{rd}}\beta+\kappa\beta/2)P_{eg}\right]\] \[\quad-\mathrm{i}\chi_{\mathrm{qr}}\alpha\beta P_{eg}-\mathrm{i} \tilde{\omega}_{\mathrm{q}}P_{eg}-\gamma_{2,ge}P_{eg}, \tag{113}\] \[\dot{P}_{gf} =\frac{\partial}{\partial\alpha}\left[(-\mathrm{i}\epsilon+\mathrm{i} \Delta_{\mathrm{rd}}\alpha+\kappa\alpha/2)P_{gf}\right]\] \[\quad+\frac{\partial}{\partial\beta}\left[(\mathrm{i}\epsilon^{*} -\mathrm{i}2\chi_{\mathrm{qr}}\beta-\mathrm{i}\Delta_{\mathrm{rd}}\beta+ \kappa\beta/2)P_{gf}\right]\] \[\quad+\mathrm{i}2\chi_{\mathrm{qr}}\alpha\beta P_{gf}+\mathrm{i} \tilde{\omega}_{\mathrm{q}}P_{eg}-\gamma_{2,ge}P_{eg}, \tag{114}\] \[\dot{P}_{eg} =\frac{\partial}{\partial\alpha}\left[(-\mathrm{i}\epsilon+\mathrm{i} \chi_{\mathrm{qr}}\alpha+\mathrm{i}\Delta_{\mathrm{rd}}\alpha+\kappa\alpha/2)P_{eg}\right]\] \[\quad+\frac{\partial}{\partial\beta}\left[(\mathrm{i}\epsilon^{*} -\mathrm{i}\Delta_{\mathrm{rd}}\beta+\kappa\beta/2)P_{eg}\right]\] \[\quad-\mathrm{i}\chi_{\mathrm{qr}}\alpha\beta P_{eg}-\mathrm{i} \tilde{\omega}_{\mathrm{q}}P_{eg}-\gamma_{2,ge}P_{eg}, \tag{115}\] \[\dot{P}_{gf} =\frac{\partial}{\partial\alpha}\left[(-\mathrm{i}\epsilon+\mathrm{i} \Delta_{\mathrm{rd}}\alpha+\kappa\alpha/2)P_{gf}\right]\] \[\quad+\frac{\partial}{\partial\beta}\left[(\mathrm{i}\epsilon^{*}- \mathrm{i}2\chi_{\mathrm{qr}}\ \[\dot{P}_{fg} =\frac{\partial}{\partial\alpha}\left[(-\mathrm{i}\epsilon+\mathrm{i} 2\chi_{\mathrm{qr}}\alpha+\mathrm{i}\Delta_{\mathrm{rd}}\alpha+\kappa\alpha/2)P_{ fg}\right]\] \[\quad+\frac{\partial}{\partial\beta}\left[(\mathrm{i}\epsilon^{*} -\mathrm{i}\Delta_{\mathrm{rd}}\beta-\kappa\beta/2)P_{fg}\right]\] \[\quad-\mathrm{i}2\chi_{\mathrm{qr}}\alpha\beta P_{fg}-\mathrm{i }(2\tilde{\omega}_{\mathrm{q}}+\alpha_{\mathrm{q}})P_{fg}-\gamma_{2,gf}P_{fg}, \tag{24}\] \[\dot{P}_{ef} =\frac{\partial}{\partial\alpha}\left[(-\mathrm{i}\epsilon+ \mathrm{i}\chi_{\mathrm{qr}}\alpha+\mathrm{i}\Delta_{\mathrm{rd}}\alpha+\kappa \alpha/2)P_{ef}\right]\] \[\quad+\frac{\partial}{\partial\beta}\left[(\mathrm{i}\epsilon^{*} -\mathrm{i}2\chi_{\mathrm{qr}}\beta-\mathrm{i}\Delta_{\mathrm{rd}}\beta+ \kappa\beta/2)P_{ef}\right]\] \[\quad+\mathrm{i}\chi_{\mathrm{qr}}\alpha\beta P_{ef}+\mathrm{i} (\tilde{\omega}_{\mathrm{q}}+\alpha_{\mathrm{q}})P_{ef}-\gamma_{2,ef}P_{ef}, \tag{25}\] \[\dot{P}_{fe} =\frac{\partial}{\partial\alpha}\left[(-\mathrm{i}\epsilon+ \mathrm{i}2\chi_{\mathrm{qr}}\alpha+\mathrm{i}\Delta_{\mathrm{rd}}\alpha+ \kappa\alpha/2)P_{fe}\right]\] \[\quad+\frac{\partial}{\partial\beta}\left[(\mathrm{i}\epsilon^{*} -\mathrm{i}\chi_{\mathrm{qr}}\beta-\mathrm{i}\Delta_{\mathrm{rd}}\beta+\kappa \beta/2)P_{fe}\right]\] \[\quad-\mathrm{i}\chi_{\mathrm{qr}}\alpha\beta P_{fe}-\mathrm{i} (\tilde{\omega}_{\mathrm{q}}+\alpha_{\mathrm{q}})P_{fe}-\gamma_{2,ef}P_{fe}. 
\tag{26}\] It should be noted that the differential equations of \(P_{ab}\) are usually of the Fokker-Planck type, which also includes the diffusive terms (i.e., the second partial derivatives with respect to \(\alpha\) and \(\beta\)). However, since we have assumed that \(\bar{N}(\omega_{\mathrm{r}})=0\), there are no terms of the form

\[\hat{a}^{\dagger}\hat{\rho}_{ab}\hat{a}\ \longrightarrow\ \left(\beta-\frac{\partial}{\partial\alpha}\right)\left(\alpha-\frac{\partial}{\partial\beta}\right)P(\alpha,\beta,t). \tag{27}\]

Even in the case where \(\bar{N}>0\), the method of the positive \(P\)-representation will still work, but a sharp coherent state (see below) inside the cavity will broaden itself diffusively in the phase plane.

Although they look complicated, the nine coupled equations admit simple trajectories in the complex planes of \(\alpha\) and \(\beta\). We use the ansätze

\[P_{gg}(\alpha,\beta,t)=\delta^{(2)}(\alpha-\alpha_{g}(t))\,\delta^{(2)}(\beta-\alpha_{g}^{*}(t)), \tag{28}\]
\[P_{ee}(\alpha,\beta,t)=\delta^{(2)}(\alpha-\alpha_{e}(t))\,\delta^{(2)}(\beta-\alpha_{e}^{*}(t)), \tag{29}\]
\[P_{ff}(\alpha,\beta,t)=\delta^{(2)}(\alpha-\alpha_{f}(t))\,\delta^{(2)}(\beta-\alpha_{f}^{*}(t)) \tag{30}\]

for the diagonal terms and

\[P_{ge}(\alpha,\beta,t)=c_{ge}(t)\,\delta^{(2)}(\alpha-\alpha_{g}(t))\,\delta^{(2)}(\beta-\alpha_{e}^{*}(t)), \tag{31}\]
\[P_{eg}(\alpha,\beta,t)=c_{eg}(t)\,\delta^{(2)}(\alpha-\alpha_{e}(t))\,\delta^{(2)}(\beta-\alpha_{g}^{*}(t)), \tag{32}\]
\[P_{gf}(\alpha,\beta,t)=c_{gf}(t)\,\delta^{(2)}(\alpha-\alpha_{g}(t))\,\delta^{(2)}(\beta-\alpha_{f}^{*}(t)), \tag{33}\]
\[P_{fg}(\alpha,\beta,t)=c_{fg}(t)\,\delta^{(2)}(\alpha-\alpha_{f}(t))\,\delta^{(2)}(\beta-\alpha_{g}^{*}(t)), \tag{34}\]
\[P_{ef}(\alpha,\beta,t)=c_{ef}(t)\,\delta^{(2)}(\alpha-\alpha_{e}(t))\,\delta^{(2)}(\beta-\alpha_{f}^{*}(t)), \tag{35}\]
\[P_{fe}(\alpha,\beta,t)=c_{fe}(t)\,\delta^{(2)}(\alpha-\alpha_{f}(t))\,\delta^{(2)}(\beta-\alpha_{e}^{*}(t)) \tag{36}\]

for the off-diagonal terms. Each diagonal term \(P_{aa}\) represents a single coherent state whose amplitudes are specified by the two delta functions. Plugging the ansätze into Eq.(18)-(20), we obtain the time evolution of the coherent states

\[\dot{\alpha}_{g}=-\mathrm{i}(\Delta_{\mathrm{rd}}-\mathrm{i}\kappa/2)\alpha_{g}+\mathrm{i}\epsilon, \tag{37}\]
\[\dot{\alpha}_{e}=-\mathrm{i}(\Delta_{\mathrm{rd}}+\chi_{\mathrm{qr}}-\mathrm{i}\kappa/2)\alpha_{e}+\mathrm{i}\epsilon, \tag{38}\]
\[\dot{\alpha}_{f}=-\mathrm{i}(\Delta_{\mathrm{rd}}+2\chi_{\mathrm{qr}}-\mathrm{i}\kappa/2)\alpha_{f}+\mathrm{i}\epsilon. \tag{39}\]

Besides the time evolution brought by the coherent states, each off-diagonal term \(P_{ab}\) (\(a\neq b\)) is modulated by an envelope function \(c_{ab}\). By substituting the ansätze together with Eq.(37)-(39) into Eq.(21)-(26), we deduce that

\[\dot{c}_{ge}=\mathrm{i}(\tilde{\omega}_{\mathrm{q}}+\mathrm{i}\gamma_{2,ge})c_{ge}+\mathrm{i}\chi_{\mathrm{qr}}\alpha_{g}\alpha_{e}^{*}c_{ge}, \tag{40}\]
\[c_{eg}=c_{ge}^{*}, \tag{41}\]
\[\dot{c}_{gf}=\mathrm{i}(2\tilde{\omega}_{\mathrm{q}}+\alpha_{\mathrm{q}}+\mathrm{i}\gamma_{2,gf})c_{gf}+\mathrm{i}2\chi_{\mathrm{qr}}\alpha_{g}\alpha_{f}^{*}c_{gf}, \tag{42}\]
\[c_{fg}=c_{gf}^{*}, \tag{43}\]
\[\dot{c}_{ef}=\mathrm{i}(\tilde{\omega}_{\mathrm{q}}+\alpha_{\mathrm{q}}+\mathrm{i}\gamma_{2,ef})c_{ef}+\mathrm{i}\chi_{\mathrm{qr}}\alpha_{e}\alpha_{f}^{*}c_{ef}, \tag{44}\]
\[c_{fe}=c_{ef}^{*}. \tag{45}\]

Given arbitrary initial conditions, the detailed time evolution of \(\alpha_{a}\) and \(c_{ab}\) can be solved numerically. After integrating the delta functions in the complex planes of \(\alpha\) and \(\beta\), we arrive at the general solution (in the rotating frame)

\[\hat{\rho}_{\mathcal{SR}}(t)=\sum_{a\in\{g,e,f\}}p_{a}(0)\ket{a}\bra{a}\otimes\ket{\alpha_{a}(t)}\bra{\alpha_{a}(t)}+\sum_{a\neq b}\frac{c_{ab}(t)}{\langle\alpha_{b}(t)|\alpha_{a}(t)\rangle}\ket{a}\bra{b}\otimes\ket{\alpha_{a}(t)}\bra{\alpha_{b}(t)}, \tag{46}\]

where \(p_{g,e,f}(0)\) are the initial populations in each energy eigenstate of the qutrit. The steady-state (complex) amplitudes of the cavity coherent states are given by

\[\alpha_{g}(+\infty)=\frac{\sqrt{\kappa_{\mathrm{in}}}\tilde{a}_{\mathrm{in}}}{\Delta_{\mathrm{rd}}-\mathrm{i}\kappa/2}, \tag{47}\]
\[\alpha_{e}(+\infty)=\frac{\sqrt{\kappa_{\mathrm{in}}}\tilde{a}_{\mathrm{in}}}{\Delta_{\mathrm{rd}}+\chi_{\mathrm{qr}}-\mathrm{i}\kappa/2}, \tag{48}\]
\[\alpha_{f}(+\infty)=\frac{\sqrt{\kappa_{\mathrm{in}}}\tilde{a}_{\mathrm{in}}}{\Delta_{\mathrm{rd}}+2\chi_{\mathrm{qr}}-\mathrm{i}\kappa/2}. \tag{49}\]

In Appendices B and C, we will approach the same problem with a different technique; nevertheless, the result will be identical, except that we will include \(\gamma_{1,ab}\) for completeness.

### Nonzero Temperature

When \(\bar{N}>0\), the master equation takes the form

\[\dot{\hat{\rho}}_{\mathcal{SR}}=-\frac{\mathrm{i}}{\hbar}\left[\hat{H}_{\mathrm{eff}},\hat{\rho}_{\mathcal{SR}}\right]+\kappa\big{(}\bar{N}+1\big{)}\mathscr{D}[\hat{a}]\hat{\rho}_{\mathcal{SR}}+\kappa\bar{N}\mathscr{D}\big{[}\hat{a}^{\dagger}\big{]}\hat{\rho}_{\mathcal{SR}}+\frac{\gamma_{2,ge}}{2}\mathscr{D}\big{[}\hat{\sigma}_{z,ge}\big{]}\hat{\rho}_{\mathcal{SR}}+\frac{\gamma_{2,gf}}{2}\mathscr{D}\big{[}\hat{\sigma}_{z,gf}\big{]}\hat{\rho}_{\mathcal{SR}}+\frac{\gamma_{2,ef}}{2}\mathscr{D}\big{[}\hat{\sigma}_{z,ef}\big{]}\hat{\rho}_{\mathcal{SR}} \tag{100}\]

in the long-\(T_{1}\) limit. The operator differential equations of \(\hat{\rho}_{ab}\) are almost the same as before, except that we replace \(\kappa\mathscr{D}[\hat{a}]\hat{\rho}_{ab}\) with

\[\kappa\big{(}\bar{N}+1\big{)}\mathscr{D}[\hat{a}]\hat{\rho}_{ab}+\kappa\bar{N}\mathscr{D}\big{[}\hat{a}^{\dagger}\big{]}\hat{\rho}_{ab}. \tag{101}\]

Consequently, the scalar differential equations for the positive \(P\)-representations \(P_{ab}\) acquire the second partial derivatives mentioned before, i.e.,

\[\dot{P}_{ab}=\big{(}\text{terms from the case }\bar{N}=0\big{)}+\kappa\bar{N}\frac{\partial^{2}}{\partial\alpha\partial\beta}P_{ab}. \tag{102}\]

Since Eq.(102) with \(a=b\) has the same form as the classical Fokker-Planck equation, we now use Gaussian distributions as the new ansätze

\[P_{gg}(\alpha,\beta,t)=\frac{1}{\pi N(t)}\exp\left\{-\frac{1}{N(t)}\big{[}\alpha-\alpha_{g}(t)\big{]}\big{[}\beta-\alpha_{g}^{*}(t)\big{]}\right\}, \tag{103}\]
\[P_{ee}(\alpha,\beta,t)=\frac{1}{\pi N(t)}\exp\left\{-\frac{1}{N(t)}\big{[}\alpha-\alpha_{e}(t)\big{]}\big{[}\beta-\alpha_{e}^{*}(t)\big{]}\right\}, \tag{104}\]
\[P_{ff}(\alpha,\beta,t)=\frac{1}{\pi N(t)}\exp\left\{-\frac{1}{N(t)}\big{[}\alpha-\alpha_{f}(t)\big{]}\big{[}\beta-\alpha_{f}^{*}(t)\big{]}\right\}, \tag{105}\]

where we require that \(\alpha_{g,e,f}\) still satisfy Eq.(37), (38), and (39), respectively.
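Because Eq.(37)-(39) are linear, the pointer-state trajectories are cheap to integrate numerically. A minimal sketch (NumPy, explicit Euler; the helper name and step-size choice are our own):

```python
import numpy as np

def evolve_alphas(eps, delta_rd, chi, kappa, T, dt):
    """Integrate Eq.(37)-(39) for the pointer-state amplitudes alpha_{g,e,f}.

    Explicit Euler suffices here since the equations are linear and mild for
    dt << 1/kappa. Returns an (n+1, 3) complex array, starting from vacuum.
    """
    n = int(T / dt)
    alphas = np.zeros((n + 1, 3), dtype=complex)   # cavity initially empty
    shifts = np.array([0.0, chi, 2.0 * chi])       # dispersive shifts 0, chi, 2chi
    for k in range(n):
        a = alphas[k]
        alphas[k + 1] = a + dt * (-1j * (delta_rd + shifts - 1j * kappa / 2) * a
                                  + 1j * eps)
    return alphas
```

The long-time limit of the integration reproduces the steady-state amplitudes Eq.(47)-(49) with \(\epsilon=\sqrt{\kappa_{\mathrm{in}}}\tilde{a}_{\mathrm{in}}\).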
Substituting the Gaussian distributions into the Fokker-Planck equation, we obtain the differential equation of the variance \(N\) (more precisely, the variance of the two-dimensional Gaussian is \(N/2\))

\[\dot{N}(t)=-\kappa\big{[}N(t)-\bar{N}\big{]}. \tag{106}\]

Suppose the composite system was in thermal equilibrium with the bath before receiving the drive \(\varepsilon_{\mathrm{d}}(t)\); then we simply use \(N(+\infty)=\bar{N}\) in \(P_{gg}\), \(P_{ee}\), and \(P_{ff}\). In other words, instead of building up a coherent state in the resonator, the external drive will excite a Gaussian state with a quadrature uncertainty broadened by the thermal bath. This also means that the resonator is in a continuous combination of coherent states with amplitudes near \(\alpha_{g}\), \(\alpha_{e}\), and \(\alpha_{f}\). In contrast, if the bath is in a vacuum state, \(N(+\infty)=0\) and a coherent state initially excited in the resonator will remain coherent. Unlike the case where \(\bar{N}=0\), the coherence \(c_{ab}\) now also depends on \(\alpha\) and \(\beta\). This is because \(c_{ab}\) is affected by an infinite collection of coherent states around \(\alpha_{g}\), \(\alpha_{e}\), and \(\alpha_{f}\). Nevertheless, we expect \(c_{ab}\) to vanish on similar timescales, set by Eq.(40)-(45).

## Appendix B Deriving the Master Equation of the Combined System (Qutrit + Resonator) in the Displaced Frame

### Qutrit-State-Dependent Displacement

Due to the dispersive coupling, each eigenstate of the qutrit is entangled with a coherent state of the resonator. In the last section, we have found three differential equations for the complex amplitudes \(\alpha_{g}\), \(\alpha_{e}\), and \(\alpha_{f}\) of the coherent states. To make this entanglement explicit, we define a unitary operator

\[\hat{\mathsf{P}}(t)=\hat{\Pi}_{g}\hat{D}(\alpha_{g}(t))+\hat{\Pi}_{e}\hat{D}(\alpha_{e}(t))+\hat{\Pi}_{f}\hat{D}(\alpha_{f}(t)), \tag{107}\]

where \(\hat{\Pi}_{a}=|a\rangle\langle a|\) are the projection operators onto the energy eigenstates \(|a\rangle\) of the qutrit and \(\hat{D}(\alpha_{a}(t))\) are the displacement operators (see Eq.(34) for the definition). Intuitively, \(\hat{\mathsf{P}}\) entangles each projection \(\hat{\Pi}_{a}\) of the qutrit with a displacement operator of the resonator such that, if the qutrit is in an energy eigenstate, the resonator coherent state will be displaced to the vacuum state. For the subsequent derivation, we follow the notation used in [12] and use

\[\hat{O}^{\mathsf{P}}=\hat{\mathsf{P}}^{\dagger}\hat{O}\hat{\mathsf{P}} \tag{108}\]

to denote any operator \(\hat{O}\) in the displaced frame. In the new frame, the density operator of the composite system is given by

\[\hat{\rho}^{\mathsf{P}}(t)=\hat{\mathsf{P}}^{\dagger}\hat{\rho}(t)\hat{\mathsf{P}}, \tag{109}\]

where, to simplify the notation, we write \(\hat{\rho}=\hat{\rho}_{\mathcal{SR}}\) from now on. In addition, if we define

\[\hat{\rho}^{\mathsf{P}}_{nmab}(t)=\langle n,a|\,\hat{\rho}^{\mathsf{P}}(t)\,|m,b\rangle \tag{110}\]

to be the matrix elements of \(\hat{\rho}^{\mathsf{P}}\) in the energy basis of the qutrit and the number basis of the resonator, then

\[\hat{\rho}^{\mathsf{P}}=\sum_{n,m=0}^{\infty}\sum_{a,b\in\{g,e,f\}}\hat{\rho}^{\mathsf{P}}_{nmab}\,|n,a\rangle\langle m,b|\,. \tag{111}\]
Our goal is to find the time evolution of the qutrit reduced density operator, i.e.,

\[\hat{\rho}_{\mathcal{S}}(t)=\mathrm{Tr}_{\mathcal{R}}\big{[}\hat{\rho}(t)\big{]}=\mathrm{Tr}_{\mathcal{R}}\big{[}\hat{\mathsf{P}}\hat{\rho}^{\mathsf{P}}(t)\hat{\mathsf{P}}^{\dagger}\big{]}. \tag{112}\]

By using Eq.(111), we obtain

\[\hat{\rho}_{\mathcal{S}}(t)=\sum_{n}\left(\rho_{nngg}^{\mathsf{P}}\left|g\right\rangle\!\left\langle g\right|+\rho_{nnee}^{\mathsf{P}}\left|e\right\rangle\!\left\langle e\right|+\rho_{nnff}^{\mathsf{P}}\left|f\right\rangle\!\left\langle f\right|\right)+\sum_{n,m}\left(\lambda_{nmmn}^{ge}\left|g\right\rangle\!\left\langle e\right|+\lambda_{nmmn}^{ge*}\left|e\right\rangle\!\left\langle g\right|\right)+\sum_{n,m}\left(\lambda_{nmmn}^{gf}\left|g\right\rangle\!\left\langle f\right|+\lambda_{nmmn}^{gf*}\left|f\right\rangle\!\left\langle g\right|\right)+\sum_{n,m}\left(\lambda_{nmmn}^{ef}\left|e\right\rangle\!\left\langle f\right|+\lambda_{nmmn}^{ef*}\left|f\right\rangle\!\left\langle e\right|\right), \tag{115}\]

where

\[\lambda_{nmpq}^{ge}(t)=\rho_{nmge}^{\mathsf{P}}\,e^{-\mathrm{i}\operatorname{Im}(\alpha_{e}\alpha_{g}^{*})}d_{pq}, \tag{116}\]
\[\lambda_{nmpq}^{gf}(t)=\rho_{nmgf}^{\mathsf{P}}\,e^{-\mathrm{i}\operatorname{Im}(\alpha_{f}\alpha_{g}^{*})}d_{pq}, \tag{117}\]
\[\lambda_{nmpq}^{ef}(t)=\rho_{nmef}^{\mathsf{P}}\,e^{-\mathrm{i}\operatorname{Im}(\alpha_{f}\alpha_{e}^{*})}d_{pq}, \tag{118}\]

with \(d_{pq}(t)=\left\langle p\right|\hat{D}(\beta_{ab})\left|q\right\rangle\) for the corresponding pair of levels (e.g., \(\beta_{ge}=\alpha_{g}-\alpha_{e}\) in Eq.(116)). To arrive at Eq.(115), we have used the fact that

\[\sum_{p}\left\langle m\right|\hat{D}^{\dagger}(\alpha)\left|p\right\rangle\left\langle p\right|\hat{D}(\alpha)\left|n\right\rangle=\delta_{mn} \tag{119}\]

since \(\hat{D}(\alpha)\) is unitary. Once the matrix elements of \(\hat{\rho}^{\mathsf{P}}\) (and thus \(\lambda_{nmpq}^{ab}\)) are known, the matrix elements of the qutrit reduced density operator can be computed trivially. For example,

\[\hat{\rho}_{\mathcal{S},gg}=\left\langle g\right|\hat{\rho}_{\mathcal{S}}\left|g\right\rangle=\sum_{n}\rho_{nngg}^{\mathsf{P}} \tag{120}\]

and

\[\hat{\rho}_{\mathcal{S},ge}=\left\langle g\right|\hat{\rho}_{\mathcal{S}}\left|e\right\rangle=\sum_{n,m}\lambda_{nmmn}^{ge}. \tag{121}\]

### Master Equation in the Displaced Frame

The density operator in the displaced frame satisfies the master equation

\[\dot{\hat{\rho}}^{\mathsf{P}}=-\frac{\mathrm{i}}{\hbar}\Big{[}\hat{H}_{\mathrm{eff}}^{\mathsf{P}},\hat{\rho}^{\mathsf{P}}\Big{]}-\hat{\mathsf{P}}^{\dagger}\dot{\hat{\mathsf{P}}}\hat{\rho}^{\mathsf{P}}-\hat{\rho}^{\mathsf{P}}\dot{\hat{\mathsf{P}}}^{\dagger}\hat{\mathsf{P}}+\kappa\mathcal{D}\big{[}\hat{a}^{\mathsf{P}}\big{]}\hat{\rho}^{\mathsf{P}}+\gamma_{1,ge}\mathcal{D}\big{[}\hat{\sigma}^{\mathsf{P}}_{ge}\big{]}\hat{\rho}^{\mathsf{P}}+\gamma_{1,gf}\mathcal{D}\big{[}\hat{\sigma}^{\mathsf{P}}_{gf}\big{]}\hat{\rho}^{\mathsf{P}}+\gamma_{1,ef}\mathcal{D}\big{[}\hat{\sigma}^{\mathsf{P}}_{ef}\big{]}\hat{\rho}^{\mathsf{P}}+\frac{\gamma_{\phi,ge}}{2}\mathcal{D}\big{[}\hat{\sigma}^{\mathsf{P}}_{z,ge}\big{]}\hat{\rho}^{\mathsf{P}}+\frac{\gamma_{\phi,gf}}{2}\mathcal{D}\big{[}\hat{\sigma}^{\mathsf{P}}_{z,gf}\big{]}\hat{\rho}^{\mathsf{P}}+\frac{\gamma_{\phi,ef}}{2}\mathcal{D}\big{[}\hat{\sigma}^{\mathsf{P}}_{z,ef}\big{]}\hat{\rho}^{\mathsf{P}}. \tag{122}\]

As for any time-dependent unitary transformation, the extra terms \(-\hat{\mathsf{P}}^{\dagger}\dot{\hat{\mathsf{P}}}\hat{\rho}^{\mathsf{P}}-\hat{\rho}^{\mathsf{P}}\dot{\hat{\mathsf{P}}}^{\dagger}\hat{\mathsf{P}}\) appear in the new master equation to eliminate the readout drive terms in the original master equation.
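For numerical checks of the transformed master equation, it can be convenient to build \(\hat{\mathsf{P}}(t)\) of Eq.(107) explicitly. A sketch in QuTiP (the truncation \(N\) and the helper name are our own choices; `displace` and `tensor` are standard QuTiP functions):

```python
from qutip import basis, displace, qeye, tensor

N = 30  # photon-number truncation; must comfortably exceed |alpha_a|^2

def conditional_displacement(alpha_g, alpha_e, alpha_f):
    """P(t) of Eq.(107): each qutrit projector tensored with D(alpha_a)."""
    return (tensor(basis(3, 0).proj(), displace(N, alpha_g))
            + tensor(basis(3, 1).proj(), displace(N, alpha_e))
            + tensor(basis(3, 2).proj(), displace(N, alpha_f)))

P = conditional_displacement(0.5, 0.8 + 0.2j, 1.1 - 0.3j)
# P is unitary even in the truncated space, since displace() exponentiates
# an anti-Hermitian generator; the norm below should be at machine precision.
print((P.dag() * P - tensor(qeye(3), qeye(N))).norm())
```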
Moreover, the Hamiltonian of the combined system (qutrit + cavity) still takes the form

\[\hat{H}_{\mathrm{eff}}/\hbar=\hat{H}_{\mathcal{SR},\mathrm{rot}}^{\mathrm{disp}}/\hbar=\tilde{\omega}_{\mathrm{q}}\hat{\Pi}_{e}+(2\tilde{\omega}_{\mathrm{q}}+\alpha_{\mathrm{q}})\hat{\Pi}_{f}+\Delta_{\mathrm{rd}}\hat{a}^{\dagger}\hat{a}+\chi_{\mathrm{qr}}\big{(}\hat{\Pi}_{e}+2\hat{\Pi}_{f}\big{)}\hat{a}^{\dagger}\hat{a}-\big{(}\epsilon\hat{a}^{\dagger}+\epsilon^{*}\hat{a}\big{)}. \tag{123}\]

Note, however, that we have kept the qutrit decay (i.e., \(\gamma_{1,ab}\)) in the master equation for full generality. To begin simplifying each term, we will need various operators rewritten in the displaced frame. For the cavity operators, we have

\[\hat{a}^{\mathsf{P}}=\hat{a}+\big{(}\alpha_{g}\hat{\Pi}_{g}+\alpha_{e}\hat{\Pi}_{e}+\alpha_{f}\hat{\Pi}_{f}\big{)}=\hat{a}+\hat{\Pi}_{\alpha}, \tag{124}\]
\[\big{(}\hat{a}^{\dagger}\hat{a}\big{)}^{\mathsf{P}}=\hat{a}^{\dagger}\hat{a}+\hat{a}^{\dagger}\hat{\Pi}_{\alpha}+\hat{a}\hat{\Pi}_{\alpha}^{\dagger}+\hat{\Pi}_{\alpha}^{\dagger}\hat{\Pi}_{\alpha}=\hat{a}^{\dagger}\hat{a}+\hat{a}^{\dagger}\hat{\Pi}_{\alpha}+\hat{a}\hat{\Pi}_{\alpha}^{\dagger}+|\alpha_{g}|^{2}\hat{\Pi}_{g}+|\alpha_{e}|^{2}\hat{\Pi}_{e}+|\alpha_{f}|^{2}\hat{\Pi}_{f}, \tag{125}\]

where we denote

\[\hat{\Pi}_{\alpha}(t)=\alpha_{g}(t)\hat{\Pi}_{g}+\alpha_{e}(t)\hat{\Pi}_{e}+\alpha_{f}(t)\hat{\Pi}_{f}. \tag{126}\]

Similarly, the operators associated with the qutrit subspace in the displaced frame are given by

\[\hat{\sigma}^{\mathsf{P}}_{z,ab}=\hat{\sigma}_{z,ab}\quad\text{and}\quad\hat{\sigma}^{\mathsf{P}}_{ab}=\hat{\sigma}_{ab}\hat{D}^{\dagger}(\alpha_{a})\hat{D}(\alpha_{b}). \tag{127}\]

First, we start with the transformed Hamiltonian. By using Eq.(124)-(127), we obtain

\[\frac{\hat{H}_{\mathrm{eff}}^{\mathsf{P}}}{\hbar}=\tilde{\omega}_{\mathrm{q}}\hat{\Pi}_{e}+(2\tilde{\omega}_{\mathrm{q}}+\alpha_{\mathrm{q}})\hat{\Pi}_{f}-\big{(}\epsilon\hat{a}^{\dagger}+\epsilon^{*}\hat{a}\big{)}-\big{(}\epsilon\hat{\Pi}_{\alpha}^{\dagger}+\epsilon^{*}\hat{\Pi}_{\alpha}\big{)}+\Big{[}\Delta_{\mathrm{rd}}+\chi_{\mathrm{qr}}\big{(}\hat{\Pi}_{e}+2\hat{\Pi}_{f}\big{)}\Big{]}\hat{a}^{\dagger}\hat{a}+\Big{[}\Delta_{\mathrm{rd}}+\chi_{\mathrm{qr}}\big{(}\hat{\Pi}_{e}+2\hat{\Pi}_{f}\big{)}\Big{]}\big{(}\hat{a}^{\dagger}\hat{\Pi}_{\alpha}+\hat{a}\hat{\Pi}_{\alpha}^{\dagger}\big{)}+\Big{[}\Delta_{\mathrm{rd}}+\chi_{\mathrm{qr}}\big{(}\hat{\Pi}_{e}+2\hat{\Pi}_{f}\big{)}\Big{]}\big{(}|\alpha_{g}|^{2}\hat{\Pi}_{g}+|\alpha_{e}|^{2}\hat{\Pi}_{e}+|\alpha_{f}|^{2}\hat{\Pi}_{f}\big{)}. \tag{128}\]

Next, we need the time derivative of \(\hat{\mathsf{P}}\). Care is required here: the familiar identity \(\frac{\mathrm{d}}{\mathrm{d}t}e^{\hat{A}(t)}=\dot{\hat{A}}(t)e^{\hat{A}(t)}\) holds only when \(\hat{A}(t)\) commutes with \(\hat{A}(s)\) at different times \(t\) and \(s\), which does not apply to the case of the displacement operator.
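The derivative formula used next (Eq.(B.22) below) can be verified numerically before trusting it in the algebra. A small self-contained check (QuTiP; the trajectory \(\alpha(t)\), truncation, and step size are arbitrary choices of ours):

```python
import numpy as np
from qutip import destroy, displace

N = 60                                    # large truncation to suppress edge effects
a = destroy(N)

alpha0, omega = 0.7 + 0.3j, 2 * np.pi     # an arbitrary smooth trajectory alpha(t)
alpha  = lambda t: alpha0 * np.exp(-1j * omega * t)
dalpha = lambda t: -1j * omega * alpha(t)

t, h = 0.4, 1e-5
# Central finite difference of D(alpha(t)) ...
lhs = (displace(N, alpha(t + h)) - displace(N, alpha(t - h))) / (2 * h)
# ... versus [da a^dag - da* a + (da* al - al* da)/2] D(alpha(t)); the scalar
# term is added as scalar * identity by QuTiP's Qobj arithmetic.
al, ad = alpha(t), dalpha(t)
rhs = (ad * a.dag() - np.conj(ad) * a
       + 0.5 * (np.conj(ad) * al - np.conj(al) * ad)) * displace(N, alpha(t))
print(np.abs((lhs - rhs).full()).max())   # small: finite-difference + truncation error
```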
Consequently, using Eq.(B.1) results in \[\dot{\hat{\mathsf{P}}} =\hat{\Pi}_{g}\left[\left(\dot{\alpha}_{g}\hat{a}^{\dagger}-\dot{ \alpha}_{g}^{*}\hat{a}\right)+\left(\dot{\alpha}_{g}^{*}\alpha_{g}-\alpha_{g}^ {*}\dot{\alpha}_{g}\right)/2\right]\hat{D}(\alpha_{g})\] \[\quad+\hat{\Pi}_{e}\left[\left(\dot{\alpha}_{e}\hat{a}^{\dagger}- \dot{\alpha}_{e}^{*}\hat{a}\right)+\left(\dot{\alpha}_{e}^{*}\alpha_{e}-\alpha_ {e}^{*}\dot{\alpha}_{e}\right)/2\right]\hat{D}(\alpha_{e})\] \[\quad+\hat{\Pi}_{f}\left[\left(\dot{\alpha}_{f}\hat{a}^{\dagger}- \dot{\alpha}_{f}^{*}\hat{a}\right)+\left(\dot{\alpha}_{f}^{*}\alpha_{f}-\alpha _{f}^{*}\dot{\alpha}_{f}\right)/2\right]\hat{D}(\alpha_{f})\] (B.22) and \[\hat{\mathsf{P}}^{\dagger}\dot{\hat{\mathsf{P}}}\] \[=\hat{\Pi}_{g}\hat{D}^{\dagger}(\alpha_{g})\big{[}\left(\dot{ \alpha}_{g}\hat{a}^{\dagger}-\dot{\alpha}_{g}^{*}\hat{a}\right)\] \[\quad+\left(\dot{\alpha}_{g}^{*}\alpha_{g}-\alpha_{g}^{*}\dot{ \alpha}_{g}\right)/2\big{]}\hat{D}(\alpha_{g})\] \[\quad+\hat{\Pi}_{e}\hat{D}^{\dagger}(\alpha_{e})\big{[}\left( \dot{\alpha}_{e}\hat{a}^{\dagger}-\dot{\alpha}_{e}^{*}\hat{a}\right)\] \[\quad+\left(\dot{\alpha}_{e}^{*}\alpha_{e}-\alpha_{e}^{*}\dot{ \alpha}_{e}\right)/2\big{]}\hat{D}(\alpha_{e})\] \[\quad+\hat{\Pi}_{f}\hat{D}^{\dagger}(\alpha_{f})\big{[}\left( \dot{\alpha}_{f}\hat{a}^{\dagger}-\dot{\alpha}_{f}^{*}\hat{a}\right)\] \[\quad+\left(\dot{\alpha}_{f}^{*}\alpha_{f}-\alpha_{f}^{*}\hat{a} \right)/2\big{]}\hat{D}(\alpha_{f})\] \[=\hat{\Pi}_{g}\left[\left(\dot{\alpha}_{g}\hat{a}^{\dagger}-\dot{ \alpha}_{g}^{*}\hat{a}\right)-\left(\dot{\alpha}_{g}^{*}\alpha_{g}-\alpha_{g}^ {*}\dot{\alpha}_{g}\right)/2\right]\] \[\quad+\hat{\Pi}_{e}\left[\left(\dot{\alpha}_{e}\hat{a}^{\dagger} -\dot{\alpha}_{e}^{*}\hat{a}\right)-\left(\dot{\alpha}_{e}^{*}\alpha_{e}- \alpha_{e}^{*}\dot{\alpha}_{e}\right)/2\right]\] \[=\dot{\hat{\Pi}}_{\alpha}\hat{a}^{\dagger}-\dot{\hat{\Pi}}_{ \alpha}^{\dagger}\hat{a}+\mathrm{i}\operatorname{Im}(\alpha_{g}^{*}\dot{ \alpha}_{g})\hat{\Pi}_{g}\] \[\quad+\mathrm{i}\operatorname{Im}(\alpha_{e}^{*}\dot{\alpha}_{e })\hat{\Pi}_{e}+\mathrm{i}\operatorname{Im}(\alpha_{f}^{*}\dot{\alpha}_{f}) \hat{\Pi}_{f}.\] (B.23) Since \(\dot{\hat{\mathsf{P}}}^{\dagger}\hat{\mathsf{P}}=-\hat{\mathsf{P}}^{\dagger} \hat{\mathsf{P}}\), we have \[-\hat{\mathsf{P}}^{\dagger}\hat{\mathsf{P}}\hat{\mathsf{P}}^{p}- \hat{\rho}^{p}\hat{\mathsf{P}}^{\dagger}\hat{\mathsf{P}}\] \[=-\left[\dot{\hat{\Pi}}_{\alpha}\hat{a}^{\dagger}-\dot{\hat{\Pi} }_{\alpha}^{\dagger}\hat{a},\hat{\rho}^{p}\right]\] \[\quad-\mathrm{i}\left[\operatorname{Im}(\alpha_{g}^{*}\dot{ \alpha}_{g})\hat{\Pi}_{g}+\operatorname{Im}(\alpha_{e}^{*}\dot{\alpha}_{e}) \hat{\Pi}_{e}+\operatorname{Im}(\alpha_{f}^{*}\dot{\alpha}_{f})\hat{\Pi}_{f}, \hat{\rho}^{p}\right].\] (B.24) To proceed further, we substitute Eq.(A.1)-(A.1) found for the combined system into Eq.(B.2). 
In particular, \[\dot{\hat{\Pi}}_{\alpha} =\dot{\alpha}_{g}\hat{\Pi}_{g}+\dot{\alpha}_{e}\hat{\Pi}_{e}+ \dot{\alpha}_{f}\hat{\Pi}_{f}\] \[=\Big{[}-\mathrm{i}(\Delta_{\mathrm{rd}}-\mathrm{i}\kappa/2) \alpha_{g}+\mathrm{i}\epsilon\Big{]}\hat{\Pi}_{g}\] \[\quad+\Big{[}-\mathrm{i}(\Delta_{\mathrm{rd}}+\chi_{\mathrm{qr} }-\mathrm{i}\kappa/2)\alpha_{e}+\mathrm{i}\epsilon\Big{]}\hat{\Pi}_{e}\] \[\quad+\Big{[}-\mathrm{i}(\Delta_{\mathrm{rd}}+2\chi_{\mathrm{qr} }-\mathrm{i}\kappa/2)\alpha_{f}+\mathrm{i}\epsilon\Big{]}\hat{\Pi}_{f}\] \[=\mathrm{i}\epsilon-\mathrm{i}\Big{[}\Delta_{\mathrm{rd}}+\chi_{ \mathrm{qr}}\Big{(}\hat{\Pi}_{e}+2\hat{\Pi}_{f}\Big{)}\Big{]}\hat{\Pi}_{ \alpha}-\frac{\kappa}{2}\hat{\Pi}_{\alpha}\] (B.25) and the first term in Eq.(B.2) becomes \[\dot{\hat{\Pi}}_{\alpha}\hat{a}^{\dagger}-\dot{\hat{\Pi}}_{ \alpha}^{\dagger}\hat{a}\] \[=\mathrm{i}\epsilon\hat{a}^{\dagger}-\mathrm{i}\Big{[}\Delta_{ \mathrm{rd}}+\chi_{\mathrm{qr}}\Big{(}\hat{\Pi}_{e}+2\hat{\Pi}_{f}\Big{)} \Big{]}\hat{\Pi}_{\alpha}\hat{a}^{\dagger}-\frac{\kappa}{2}\hat{\Pi}_{\alpha} \hat{a}^{\dagger}\] \[\quad+\mathrm{i}\epsilon^{*}\hat{a}-\mathrm{i}\Big{[}\Delta_{ \mathrm{rd}}+\chi_{\mathrm{qr}}\Big{(}\hat{\Pi}_{e}+2\hat{\Pi}_{f}\Big{)} \Big{]}\hat{\Pi}_{\alpha}^{\dagger}\hat{a}+\frac{\kappa}{2}\hat{\Pi}_{\alpha} ^{\dagger}\hat{a}\] \[\quad-\mathrm{i}\Big{[}\Delta_{\mathrm{rd}}+\chi_{\mathrm{qr} }\Big{(}\hat{\Pi}_{e}+2\hat{\Pi}_{f}\Big{)}\Big{]}\Big{(}\hat{\Pi}_{\alpha} \hat{a}^{\dagger}+\hat{\Pi}_{\alpha}^{\dagger}\hat{a}\Big{)}\] \[\quad-\frac{\kappa}{2}\Big{(}\hat{\Pi}_{\alpha}\hat{a}^{\dagger}- \hat{\Pi}_{\alpha}^{\dagger}\hat{a}\Big{)}.\] (B.26) With a similar manipulation, the second term in Eq.(B.2) reduces to \[-\mathrm{i}\operatorname{Im}(\alpha_{g}^{*}\dot{\alpha}_{g})\hat{ \Pi}_{g}-\mathrm{i}\operatorname{Im}(\alpha_{e}^{*}\dot{\alpha}_{e})\hat{\Pi}_{ e}-\mathrm{i}\operatorname{Im}(\alpha_{f}^{*}\dot{\alpha}_{f})\hat{\Pi}_{f}\] \[=-\mathrm{i}\Big{[}\operatorname{Im}(\alpha_{g}^{*}\epsilon)- \Delta_{\mathrm{rd}}|\alpha_{g}|^{2}\Big{]}\hat{\Pi}_{g}\] \[\quad-\mathrm{i}\Big{[}\operatorname{Im}(\alpha_{e}^{*}\epsilon)- (\Delta_{\mathrm{rd}}+\chi_{\mathrm{qr}})|\alpha_{e}|^{2}\Big{]}\hat{\Pi}_{e}\] \[\quad-\mathrm{i}\Big{[}\operatorname{Im}(\alpha_{f}^{*}\epsilon)- (\Delta_{\mathrm{rd}}+2\chi_{\mathrm{qr}})|\alpha_{e}|^{2}\Big{]}\hat{\Pi}_{f}\] \[=-\mathrm{i}\Big{[}\operatorname{Im}(\alpha_{g}^{*}\epsilon) \hat{\Pi}_{g}+\operatorname{Im}(\alpha_{e}^{*}\epsilon)\hat{\Pi}_{e}+ \operatorname{Im}(\alpha_{f}^{*}\epsilon)\hat{\Pi}_{f}\Big{]}\] \[\quad+\mathrm{i}\Big{[}\Delta_{\mathrm{rd}}+\chi_{\mathrm{qr} }(|\epsilon|)\langle e|+2\,|f\rangle\langle f|)\Big{]}\] \[\quad\times\Big{(}|\alpha_{g}|^{2}\hat{\Pi}_{g}+|\alpha_{e}|^{2}\hat{ \Pi}_{e}+|\alpha_{f}|^{2}\hat{\Pi}_{f}\Big{)}.\] (B.27) Since \(\left[\hat{1},\hat{\rho}^{p}\right]=0\), we and \[\Delta_{g,1}(t) =\frac{1}{6}\Big{[}-\left(\epsilon^{*}\beta_{ge}+\epsilon\beta_{ge}^ {*}\right)-\left(\epsilon^{*}\beta_{gf}+\epsilon\beta_{gf}^{*}\right)\Big{]}, \tag{183}\] \[\Delta_{e,1}(t) =\frac{1}{6}\Big{[}+\left(\epsilon^{*}\beta_{ge}+\epsilon\beta_{ gr}^{*}\right)-\left(\epsilon^{*}\beta_{ef}+\epsilon\beta_{ef}^{*}\right)\Big{]},\] (184) \[\Delta_{f,1}(t) =\frac{1}{6}\Big{[}+\left(\epsilon^{*}\beta_{gf}+\epsilon\beta_{ gf}^{*}\right)+\left(\epsilon^{*}\beta_{ef}+\epsilon\beta_{ef}^{*}\right)\Big{]}. 
\tag{185}\] In addition, the same argument can be applied to terms in \(\hat{H}_{\text{eff}}^{\text{P}}\), i.e., \[\epsilon\hat{\Pi}_{\alpha}^{\dagger}+\epsilon^{*}\hat{\Pi}_{\alpha}\] \[=2C_{1}\hat{1}-2\Delta_{g,1}\hat{\Pi}_{g}-2\Delta_{e,1}\hat{\Pi} _{e}-2\Delta_{f,1}\hat{\Pi}_{f}, \tag{186}\] and the net effect is that \[-\frac{\mathrm{i}}{\hbar}\big{[}\hat{H}_{\text{eff}}^{\text{P}}, \hat{\rho}^{\text{P}}\big{]}-\hat{\rho}^{\dagger}\hat{\P}\hat{\rho}^{\text{P} }-\hat{\rho}^{\text{P}}\hat{\P}^{\dagger}\hat{\P}\] \[=-\mathrm{i}\Big{[}\Delta_{g,1}\hat{\Pi}_{g}+(\tilde{\omega}_{ \text{q}}+\Delta_{e,1})\hat{\Pi}_{e}\] \[\qquad\qquad\qquad\qquad+(2\tilde{\omega}_{\text{q}}+\Delta_{f,1} )+\alpha_{\text{q}})\hat{\Pi}_{f},\hat{\rho}^{\text{P}}\Big{]}\] \[-\mathrm{i}\Big{[}\Big{[}\Delta_{\text{rd}}+\chi_{\text{qr}} \big{(}\hat{\Pi}_{e}+2\hat{\Pi}_{f}\big{)}\Big{]}\hat{a}^{\dagger}\hat{a}, \hat{\rho}^{\text{P}}\Big{]}\] \[+\frac{\kappa}{2}\Big{[}\hat{\Pi}_{\alpha}\hat{a}^{\dagger}-\hat {\Pi}_{\alpha}^{\dagger}\hat{a},\hat{\rho}^{\text{P}}\Big{]}. \tag{187}\] Now, we focus our attention on the cavity decay term \[\mathbb{O}\big{[}\hat{a}^{\text{P}}\big{]}\hat{\rho}^{\text{P}}\] \[=\Big{(}\hat{a}+\hat{\Pi}_{\alpha}\Big{)}\hat{\rho}^{\text{P}} \Big{(}\hat{a}^{\dagger}+\hat{\Pi}_{\alpha}^{\dagger}\Big{)}\] \[\quad-\frac{1}{2}\hat{\rho}^{\text{P}}\Big{(}\hat{a}^{\dagger} \hat{a}+\hat{a}^{\dagger}\hat{\Pi}_{\alpha}+\hat{a}\hat{\Pi}_{\alpha}^{ \dagger}+\hat{\Pi}_{\alpha}^{\dagger}\hat{\Pi}_{\alpha}\Big{)}\] \[\quad-\frac{1}{2}\Big{(}\hat{a}^{\dagger}\hat{a}+\hat{a}^{\dagger }\hat{\Pi}_{\alpha}+\hat{a}\hat{\Pi}_{\alpha}^{\dagger}+\hat{\Pi}_{\alpha}^{ \dagger}\hat{\Pi}_{\alpha}\Big{)}\hat{\rho}^{\text{P}}\] \[=\mathbb{O}\big{[}\hat{a}\big{]}\hat{\rho}^{\text{P}}+\mathbb{O} \big{[}\hat{\Pi}_{\alpha}\big{]}\hat{\rho}^{\text{P}}+\hat{a}\hat{\rho}^{ \text{P}}\hat{\Pi}^{\dagger}+\hat{a}^{\dagger}\hat{\rho}^{\text{P}}\hat{\Pi}\] \[\quad-\frac{1}{2}\hat{\rho}^{\text{P}}\hat{a}^{\dagger}\hat{\Pi} _{\alpha}-\frac{1}{2}\hat{\rho}^{\text{P}}\hat{a}\hat{\Pi}_{\alpha}^{\dagger}- \frac{1}{2}\hat{a}^{\dagger}\hat{\Pi}_{\alpha}\hat{\rho}^{\text{P}}-\frac{1}{2 }\hat{a}\hat{\Pi}_{\alpha}^{\dagger}\hat{\rho}^{\text{P}}. \tag{188}\] The second term \(\mathbb{O}\big{[}\hat{\Pi}_{\alpha}\big{]}\hat{\rho}^{\text{P}}\) contains both frequency shifts and dephasing. To separate the two effects, we can simply expand the expression in the energy eigenbasis of the qutrit. For example, \[\left\langle g\right|\mathbb{O}\big{[}\hat{\Pi}_{\alpha}\big{]} \hat{\rho}^{\text{P}}\left|e\right\rangle\] \[=\Big{(}-\frac{1}{2}|\beta_{ge}|^{2}-\mathrm{i}\operatorname{Im}( \alpha_{e}\alpha_{g}^{*})\Big{)}\left\langle g\right|\hat{\rho}^{\text{P}} \left|e\right\rangle. 
\tag{189}\] By applying the same calculation to the other off-diagonal terms and noting that the diagonal terms vanish in the chosen basis, we find that \[\mathbb{O}\big{[}\hat{\Pi}_{\alpha}\big{]}\hat{\rho}^{\text{P}}\] \[=\frac{\Gamma_{\text{m,{eg}}}}{4\kappa}\mathbb{O}\big{[}\hat{ \sigma}_{z,{eg}}\big{]}\hat{\rho}^{\text{P}}+\frac{\Gamma_{\text{m,{gf}}}}{4 \kappa}|\beta_{gf}|^{2}\mathbb{O}\big{[}\hat{\sigma}_{z,{gf}}\big{]}\hat{\rho}^{ \text{P}}\] \[\quad+\frac{\Gamma_{\text{m,{eg}}}}{4\kappa}|\beta_{ef}|^{2} \mathbb{O}\big{[}\hat{\sigma}_{z,{ef}}\big{]}\hat{\rho}^{\text{P}}\] \[\quad-\frac{\mathrm{i}}{2}\Big{[}\operatorname{Im}(\alpha_{e} \alpha_{g}^{*})\hat{\sigma}_{z,{ge}}+\operatorname{Im}(\alpha_{f}\alpha_{g}^{*}) \hat{\sigma}_{z,{gf}}\] \[\qquad\qquad\qquad\qquad\qquad\qquad+\operatorname{Im}(\alpha_{f} \alpha_{e}^{*})\hat{\sigma}_{z,{ef}},\hat{\rho}^{\text{P}}\Big{]}, \tag{190}\] where we have introduced three dephasing rates \[\Gamma_{\text{m,{eg}}} =\kappa|\beta_{ge}|^{2}, \tag{191}\] \[\Gamma_{\text{m,{gf}}} =\kappa|\beta_{gf}|^{2},\] (192) \[\Gamma_{\text{m,{ef}}} =\kappa|\beta_{ef}|^{2}. \tag{193}\] With the help of Eq.(190) and the observation that \[\hat{\Pi}_{\alpha} =(\alpha_{g}+\alpha_{e}+\alpha_{f})\hat{1}\] \[\quad+\frac{\beta_{ge}}{3}\hat{\sigma}_{z,{ge}}+\frac{\beta_{gf}} {3}\hat{\sigma}_{z,{gf}}+\frac{\beta_{ef}}{3}\hat{\sigma}_{z,{ef}}, \tag{194}\] we find \[\mathbb{O}\big{[}\hat{a}^{\text{P}}\big{]}\hat{\rho}^{\text{P}}\] \[=\mathbb{O}\big{[}\hat{a}\big{]}\hat{\rho}^{\text{P}}-\frac{1}{2} \Big{[}\hat{\Pi}_{\alpha}\hat{a}^{\dagger}-\hat{\Pi}_{\alpha}^{\dagger}\hat{a}, \hat{\rho}^{\text{P}}\Big{]}\] \[\quad+\frac{\beta_{ge}^{*}}{3}\hat{a}\Big{[}\hat{\rho}^{\text{P}}, \hat{\sigma}_{z,{ge}}\Big{]}+\frac{\beta_{ge}}{3}\Big{[}\hat{\sigma}_{z,{ge}}, \hat{\rho}^{\text{P}}\Big{]}\hat{a}^{\dagger}\] \[\quad+\frac{\beta_{gf}^{*}}{3}\hat{a}\Big{[}\hat{\rho}^{\text{P}}, \hat{\sigma}_{z,{gf}}\Big{]}+\frac{\beta_{gf}}{3}\Big{[}\hat{\sigma}_{z,{gf}}, \hat{\rho}^{\text{P}}\Big{]}\hat{a}^{\dagger}\] \[\quad+\frac{\beta_{ef}^{*}}{3}\hat{a}\Big{[}\hat{\rho}^{\text{P}}, \hat{\sigma}_{z,{ef}}\Big{]}+\frac{\beta_{ef}}{3}\Big{[}\hat{\sigma}_{z,{ef}}, \hat{\rho}^{\text{P}}\Big{]}\hat{a}^{\dagger}\] \[\quad+\frac{\Gamma_{\text{m,{eg}}}}{4\kappa}\mathbb{O}\big{[} \hat{\sigma}_{z,{ge}}\big{]}\hat{\rho}^{\text{P}}+\frac{\Gamma_{\text{m,{gf}}}}{4 \kappa}|\beta_{gf}|^{2}\mathbb{O}\big{[}\hat{\sigma}_{z,{gf}}\big{]}\hat{\rho}^{ \text{P}}\] \[\quad+\frac{\Gamma_{\text{m,{ef}}}}{4\kappa}|\beta_{ef}|^{2}\mathbb{O} \big{[}\hat{\sigma}_{ the displaced frame \[\dot{\rho}^{\mathsf{P}}\] \[=-\frac{\mathrm{i}}{\hbar}\Big{[}\hat{H}^{\prime}_{\mathrm{qeff}}, \hat{\rho}^{\mathsf{P}}\Big{]}-\mathrm{i}\Big{[}\Big{[}\Delta_{\mathrm{rd}}+ \chi_{\mathrm{qr}}\Big{(}\hat{\Pi}_{e}+2\hat{\Pi}_{f}\Big{)}\Big{]}\hat{a}^{ \dagger}\hat{a},\hat{\rho}^{\mathsf{P}}\Big{]}\] \[\quad+\kappa\mathcal{D}\big{[}\hat{a}\big{]}\hat{\rho}^{\mathsf{P} }+\frac{\kappa\beta^{*}_{ge}}{3}\hat{a}\Big{[}\hat{\rho}^{\mathsf{P}},\hat{ \sigma}_{z,ge}\Big{]}+\frac{\kappa\beta^{*}_{gf}}{3}\hat{a}\Big{[}\hat{\rho}^{ \mathsf{P}},\hat{\sigma}_{z,gf}\Big{]}\] \[\quad+\frac{\kappa\beta^{*}_{ef}}{3}\hat{a}\Big{[}\hat{\rho}^{ \mathsf{P}},\hat{\sigma}_{z,ef}\Big{]}+\frac{\kappa\beta_{ge}}{3}\Big{[}\hat{ \sigma}_{z,ge},\hat{\rho}^{\mathsf{P}}\Big{]}\hat{a}^{\dagger}\] \[\quad+\frac{\kappa\beta_{gf}}{3}\Big{[}\hat{\sigma}_{z,gf},\hat{ \rho}^{\mathsf{P}}\Big{]}\hat{a}^{\dagger}+\frac{\kappa\beta_{gf}}{3}\Big{[} 
\hat{\sigma}_{z,ef},\hat{\rho}^{\mathsf{P}}\Big{]}\hat{a}^{\dagger}\] \[\quad+\gamma_{1,ge}\mathcal{D}\big{[}\hat{\sigma}_{ge}\hat{E}^{ \dagger}(\alpha_{g})\hat{D}(\alpha_{e})\big{]}\hat{\rho}^{\mathsf{P}}\] \[\quad+\gamma_{1,ef}\mathcal{D}\big{[}\hat{\sigma}_{gf}\hat{D}^{ \dagger}(\alpha_{g})\hat{D}(\alpha_{f})\big{]}\hat{\rho}^{\mathsf{P}}\] \[\quad+\frac{\gamma_{1,ef}\mathcal{D}}{2}\mathcal{D}\big{[}\hat{ \sigma}_{z,ge}\big{]}\hat{\rho}^{\mathsf{P}}+\frac{\gamma_{\phi,gf}}{2} \mathcal{D}\big{[}\hat{\sigma}_{z,gf}\big{]}\hat{\rho}^{\mathsf{P}}\] \[\quad+\frac{\gamma_{\phi,ef}}{2}\mathcal{D}\big{[}\hat{\sigma}_{ z,ef}\big{]}\hat{\rho}^{\mathsf{P}}+\frac{\Gamma_{m,ge}}{4}\mathcal{D}\big{[}\hat{ \sigma}_{z,ge}\big{]}\hat{\rho}^{\mathsf{P}}\] \[\quad+\frac{\Gamma_{\mathrm{m},gf}}{4}|\beta_{gf}|^{2}\mathcal{D }\big{[}\hat{\sigma}_{z,gf}\big{]}\hat{\rho}^{\mathsf{P}}+\frac{\Gamma_{m, ef}}{4}|\beta_{ef}|^{2}\mathcal{D}\big{[}\hat{\sigma}_{z,ef}\big{]}\hat{\rho}^{ \mathsf{P}}, \tag{103}\] where we have defined \[\hat{H}^{\prime}_{\mathrm{qeff}}/\hbar\] \[=\Delta_{g,1}\hat{\Pi}_{g}+(\tilde{\omega}_{\mathrm{q}}+\Delta_{ e,1})\hat{\Pi}_{e}+(2\tilde{\omega}_{\mathrm{q}}+\Delta_{f,1}+\alpha_{\mathrm{q}}) \hat{\Pi}_{f}\] \[\quad+\frac{\kappa}{2}\Big{[}\mathrm{Im}(\alpha_{e}\alpha^{*}_{g} )\hat{\sigma}_{z,ge}\] \[\quad\quad\quad+\mathrm{Im}(\alpha_{f}\alpha^{*}_{g})\hat{\sigma} _{z,gf}+\mathrm{Im}(\alpha_{f}\alpha^{*}_{e})\hat{\sigma}_{z,ef}\Big{]}\] \[\doteq\tilde{\omega}^{\prime}_{g}\hat{\Pi}_{g}+\tilde{\omega}^{ \prime}_{e}\hat{\Pi}_{e}+\tilde{\omega}^{\prime}_{f}\hat{\Pi}_{f} \tag{104}\] to be the effective qubit Hamiltonian in the displaced frame. Note that \(\hat{H}^{\prime}_{\mathrm{qeff}}\) and \(\omega^{\prime}_{ba}=\omega^{\prime}_{b}-\omega^{\prime}_{a}\) are not the final effective Hamiltonian and transition frequencies of the qutrit since we are still in the displaced frame; transforming back to the laboratory frame will cancel some of the shifts seen in the displaced frame. ## Appendix C Derivation of the Effective Qutrit Master Equation In the last section, we have established the connection between the matrix elements of the density operator in the displaced frame to the qutrit density operator in the laboratory frame. 
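All of the measurement-induced rates introduced in Appendix B are simple functionals of the pointer-state trajectories. A hedged numerical companion (NumPy; this reuses the hypothetical `evolve_alphas` helper sketched after Appendix A and is not part of the paper's own code) evaluates the dephasing rates \(\Gamma_{\mathrm{m},ab}=\kappa|\beta_{ab}|^{2}\) directly:

```python
import numpy as np

def dephasing_rates(alphas, kappa):
    """Gamma_{m,ab}(t) = kappa * |alpha_a(t) - alpha_b(t)|^2.

    alphas: (n, 3) complex array of (alpha_g, alpha_e, alpha_f), e.g. the
    output of the evolve_alphas() sketch.
    """
    a_g, a_e, a_f = alphas[:, 0], alphas[:, 1], alphas[:, 2]
    return {
        "ge": kappa * np.abs(a_g - a_e) ** 2,
        "gf": kappa * np.abs(a_g - a_f) ** 2,
        "ef": kappa * np.abs(a_e - a_f) ** 2,
    }
```

At the steady state these rates are proportional to the squared separations between the clusters in the phase plane, consistent with the discussion of information leakage in the main text.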
To find the effective master equation of the qutrit, we first rewrite the master equation in the displaced frame in terms of the matrix elements of \(\hat{\rho}^{\mathsf{P}}\): \[\dot{\rho}^{\mathsf{P}}_{nmgg} =\big{[}-\mathrm{i}\Delta_{\mathrm{rd}}(n-m)-\kappa(n+m)/2\big{]} \rho^{\mathsf{P}}_{nmgg}\] \[\quad+\gamma_{1,ge}\sum_{p,q}d^{*}_{pn}d_{qm}\rho^{\mathsf{P}}_{ pqce}\] \[\quad+\gamma_{1,gf}\sum_{p,q}d^{*}_{pn}d_{qm}\rho^{\mathsf{P}}_{ pqff}\] \[\quad+\kappa\sqrt{(n+1)(m+1)}\rho^{\mathsf{P}}_{(n+1)(m+1)gg}, \tag{105}\] \[\dot{\rho}^{\mathsf{P}}_{nmge} =\big{[}-\mathrm{i}(\Delta_{\mathrm{rd}}+\chi_{\mathrm{qr}})(n-m )-\gamma_{1,ge}\] \[\quad\quad-\kappa(n+m)/2\big{]}\rho^{\mathsf{P}}_{mmee}\] \[\quad+\gamma_{1,ef}\sum_{p,q}d^{*}_{pn}d_{qm}\rho^{\mathsf{P}}_{ pqff}\] \[\quad+\kappa\sqrt{(n+1)(m+1)}\rho^{\mathsf{P}}_{(n+1)(m+1)ee}, \tag{106}\] \[\dot{\rho}^{\mathsf{P}}_{nmff} =\big{[}-\mathrm{i}(\Delta_{\mathrm{rd}}+2\chi_{\mathrm{qr}})(n-m )-(\gamma_{1,gf}+\gamma_{1,ef})\] \[\quad\quad-\kappa(n+m)/2\big{]}\rho^{\mathsf{P}}_{nmff}\] \[\quad+\kappa\sqrt{(n+1)(m+1)}\rho^{\mathsf{P}}_{(n+1)(m+1)ff}, \tag{107}\] \[\dot{\rho}^{\mathsf{P}}_{nmge} =\big{[}\mathrm{i}\tilde{\omega}^{\prime}_{eg}-\mathrm{i}\Delta_{ \mathrm{rd}}(n-m)+\mathrm{i}\chi_{\mathrm{qr}}m-\gamma_{1,ge}/2\] \[\quad\quad-\gamma_{\phi,ge}-\kappa(n+m)/2-\Gamma_{m,ge}\big{]} \rho^{\mathsf{P}}_{nmge}\] \[\quad+\kappa\sqrt{(n+1)(m+1)}\rho^{\mathsf{P}}_{(n+1)(m+1)ge}\] \[\quad-\frac{2\kappa\beta^{*}_{ge}}{3}\sqrt{n+1}\rho^{\mathsf{P}}_{ n(m+1)mge}\] \[\quad+\frac{2\kappa\beta_{ge}}{3}\sqrt{m+1}\rho^{\mathsf{P}}_{n(m+1)ge}, \tag{108}\] \[\dot{\rho}^{\mathsf{P}}_{nmgf} =\big{[}\mathrm{i}\tilde{\omega}^{\prime}_{fg}-\mathrm{i}\Delta_{ \mathrm{rd}}(n-m)+\mathrm{i}\chi_{\mathrm{qr}}2m-\gamma_{1,gf}/2\] \[\quad\quad-\gamma_{\phi,gf}-\kappa(n+m)/2-\Gamma_{m,gf}\big{]} \rho^{\mathsf{P}}_{nmgf}\] \[\quad-\frac{2\kappa\beta^{*}_{gf}}{3}\sqrt{n+1}\rho^{\mathsf{P}}_{ (n+1)mgf}\] \[\quad+\frac{2\kappa\beta_{gf}}{3}\sqrt{m+1}\rho^{\mathsf{P}}_{n(m+1) gf}, \tag{109}\] \[\dot{\rho}^{\mathsf{P}}_{nmef} =\big{[}\mathrm{i}\tilde{\omega}^{\prime}_{fe}-\mathrm{i}\Delta_{ \mathrm{rd}}(n-m)+\mathrm{i}\chi_{\mathrm{qr}}(2m-n)-\gamma_{1,ef}/2\] \[\quad\quad-\gamma_{\phi,ef}-\kappa(n+m)/2-\Gamma_{m,ef}\big{]}\rho^{ \mathsf{P}}_{nmef}\] \[\quad+\kappa\sqrt{(n+1)(m+1)}\rho^{\mathsf{P}}_{(n+1)(m+1)ef}\] \[\quad-\frac{2\kappa\beta^{*}_{ef}}{3}\sqrt{n+1}\rho^{\mathsf{P}}_{ (n+1)mef}\] \[\quad+\frac{2\kappa\beta_{ef}}{3}\sqrt{m+1}\rho^{\mathsf{P}}_{n(m+1) ef}. \tag{110}\] There are three other differential equations, but they are simply the complex conjugates of Eq.(C4)-(C6). Now, given Eq.(C1)-(C3), the differential equations governing the time evolution of the diagonal matrix elements of \(\hat{\rho}_{\mathcal{S}}\) (i.e., the populations of the qutrit eigenstates) are found to be \[\dot{\rho}_{\mathcal{S},gg} =\sum_{n}\dot{\rho}^{\mathsf{P}}_{nmgg}\] \[=\gamma_{1,ge}\sum_{p,q}\rho^{\mathsf{P \[\dot{\rho}_{\mathcal{S},ee} =\sum_{n}\dot{\rho}^{\text{p}}_{nnee}\] \[=-\gamma_{1,ge}\sum_{n}\rho^{\text{p}}_{nnee}\] \[\quad+\gamma_{1,ef}\sum_{p,q}\rho^{\text{p}}_{pqff}\sum_{n}d^{*}_{ pn}d_{qn}\] \[=-\gamma_{1,ge}\rho_{\mathcal{S},ee}+\gamma_{1,ef}\rho_{\mathcal{S },ff}, \tag{100}\] \[\dot{\rho}_{\mathcal{S},ff} =\sum_{n}\dot{\rho}^{\text{p}}_{nnff}\] \[=-(\gamma_{1,gf}+\gamma_{1,ef})\sum_{n}\rho^{\text{p}}_{nnff}\] \[=-(\gamma_{1,gf}+\gamma_{1,ef})\rho_{\mathcal{S},ff}. 
\tag{101}\] Next, according to Eq.(100), computing the off-diagonal terms of \(\dot{\rho}_{\mathcal{S}}\) also requires us to know all \(\lambda_{nmpq}\), whose time derivatives follow (see Eq.(100)-(101)) \[\dot{\lambda}^{ge}_{nmpq} =\dot{\rho}^{\text{p}}_{nmqe}d_{p,q}e^{-\text{i}\,\text{Im}( \alpha_{e}\alpha^{*}_{g})}-\text{i}\frac{\text{d}\,\text{Im}(\alpha_{e} \alpha^{*}_{g})}{\text{d}t}\lambda^{ge}_{nmpq}\] \[\quad+\sqrt{p}\dot{\beta}_{ge}\lambda^{ge}_{nm(p-1)q}-\sqrt{q} \dot{\beta}^{*}_{ge}\lambda^{ge}_{nmpq(q-1)}\] \[\quad-\frac{\dot{\beta}^{*}_{ge}\beta_{ge}+\dot{\beta}_{ge} \beta^{*}_{ge}}{2}\lambda^{ge}_{nmpq}, \tag{102}\] \[\dot{\lambda}^{gf}_{nmpq} =\dot{\rho}^{\text{p}}_{nmqf}d_{p,q}e^{-\text{i}\,\text{Im}( \alpha_{f}\alpha^{*}_{e})}-\text{i}\frac{\text{d}\,\text{Im}(\alpha_{f} \alpha^{*}_{g})}{\text{d}t}\lambda^{gf}_{nmpq}\] \[\quad+\sqrt{p}\dot{\beta}_{gf}\lambda^{gf}_{nm(p-1)q}-\sqrt{q} \dot{\beta}^{*}_{gf}\lambda^{gf}_{nmpq(q-1)}\] \[\quad-\frac{\dot{\beta}^{*}_{gf}\beta_{gf}+\dot{\beta}_{gf} \beta^{*}_{gf}}{2}\lambda^{gf}_{nmpq}. \tag{103}\] The three equations take the same forms; we show the simplification of Eq.(102) as an example. Since we have a differential equation for \(\rho^{\text{p}}_{nmge}\), the first term on the RHS of Eq.(102) reduces to \[\dot{\rho}^{\text{p}}_{nmqe}d_{p,q}e^{-\text{i}\,\text{Im}(\alpha _{e}\alpha^{*}_{g})}\] \[=\big{[}\dot{\omega}^{\prime}_{eg}-\text{i}\Delta_{\text{rd}}(n-m )+\text{i}\chi_{\text{qr}}m-\gamma_{1,ge}/2\] \[\qquad-\gamma_{\phi,ge}-\kappa(n+m)/2-\Gamma_{m,ge}/2\big{]} \lambda^{ge}_{nmpq}\] \[\quad+\kappa\sqrt{(n+1)(m+1)}\lambda^{ge}_{(n+1)(m+1)pq}\] \[\quad-\frac{2\kappa\beta^{*}_{ge}}{3}\sqrt{n+1}\lambda^{ge}_{(n+1 )mpq}\] \[\quad+\frac{2\kappa\beta_{ge}}{3}\sqrt{m+1}\lambda^{ge}_{n(m+1)pq}. \tag{104}\] To simplify the second term on the RHS of Eq.(102), we first compute \[\dot{\alpha}_{e}\alpha^{*}_{g}+\alpha_{e}\dot{\alpha_{g}}^{*}\] \[=\Big{[}-\text{i}(\Delta_{\text{rd}}+\chi_{\text{qr}}-\text{i} \kappa/2)\alpha_{e}+\text{i}\epsilon\Big{]}\alpha^{*}_{g}\] \[\qquad+\alpha_{e}\Big{[}-\text{i}(\Delta_{\text{rd}}-\text{i} \kappa/2)\alpha_{g}+\text{i}\epsilon\Big{]}^{*}\] \[=-\text{i}\chi_{\text{qr}}\alpha_{e}\alpha^{*}_{g}-\kappa\alpha_{ e}\alpha^{*}_{g}+\text{i}(\epsilon\alpha^{*}_{g}-\epsilon^{*}\alpha_{e}). \tag{105}\] Then, \[-\text{i}\frac{\text{d}\,\text{Im}(\alpha_{e}\alpha^{*}_{g})}{ \text{d}t}\lambda^{ge}_{nmpq}\] \[=-\text{i}\,\text{Im}\Big{[}-\text{i}\chi_{\text{qr}}\alpha_{e} \alpha^{*}_{g}-\kappa\alpha_{e}\alpha^{*}_{g}+\text{i}(\epsilon\alpha^{*}_{g}- \epsilon^{*}\alpha_{e})\Big{]}\lambda^{ge}_{nmpq}\] \[=\text{i}\Big{[}\chi_{\text{qr}}\,\text{Re}(\alpha_{e}\alpha^{*} _{g})+\kappa\,\text{Im}(\alpha_{e}\alpha^{*}_{g})-\frac{\epsilon^{*}\beta_{ge} +\epsilon\beta^{*}_{ge}}{2}\Big{]}\lambda^{ge}_{nmpq}. \tag{106}\] We keep the terms involving \(\lambda^{ge}_{nm(p-1)q}\) and \(\lambda^{ge}_{nmpq(q-1)}\) outouched and directly go to the last term on the RHS of Eq.(102). Using \[\dot{\beta}_{ge}\beta^{*}_{ge} =(\dot{\alpha}_{g}-\dot{\alpha}_{e})\beta^{*}_{ge}\] \[=-\text{i}\Delta_{\text{rd}}|\beta_{ge}|^{2}-\frac{\kappa|\beta_{ ge}|^{2}}{2}+\text{i}\chi_{\text{qr}}\alpha_{e}\beta^{*}_{ge}, \tag{107}\] we obtain \[-\frac{\dot{\beta}^{*}_{ge}\beta_{ge}+\dot{\beta}_{ge}\beta^{*}_{ ge}}{2}\lambda^{ge}_{nmpq}\] \[=\left[\frac{\kappa|\beta_{ge}|^{2}}{2}+\chi_{\text{qr}}\,\text{ Im}(\alpha_{e}\alpha^{*}_{g})\right]\lambda^{ge}_{nmpq}. 
\tag{108}\] Finally, combining all the pieces yields \[\dot{\lambda}^{ge}_{nmpq} =\Big{[}\dot{\omega}_{eg}-\gamma_{1,ge}/2-\gamma_{\phi,ge}\] \[\qquad+\chi_{\text{qr}}\,\text{Im}(\alpha_{e}\alpha^{*}_{g})- \text{i}\Delta_{\text{rd}}(n-m)\] \[\qquad+\text{i}\chi_{\text{qr}}m-\kappa(n+m)/2\Big{]}\lambda^{ge}_ {nmpq}\] \[\quad+\kappa\sqrt{(n+1)(m+1)}\lambda^{ge}_{(n+1)(m+1)pq}\] \[\quad-\frac{2\kappa\beta^{*}_{ge}}{3}\sqrt{n+1}\lambda^{ge}_{n(n+1 )mpq}\] \[\quad+\frac{2\kappa\beta_{ge}}{3}\sqrt{m+1}\lambda^{ge}_{n(m+1)pq}, \tag{109}\] where the net frequency difference between \(|e\rangle\) and \(|g\rangle\) is found to be \[\bar{\omega}_{eg} =(\bar{\omega}^{\prime}_{e}-\bar{\omega}^{\prime}_{g})+\chi_{ \text{qr}}\,\text{Re}(\alpha_{e}\alpha^{*}_{g})\] \[\quad+\kappa\,\text{Im}(\alpha_{e}\alpha^{*}_{g})-\left(\epsilon^{*} \beta_{ge}+\epsilon\beta^{*}_{ge}\right)\!\big{/}2\] \[=\Big{[}\omega_{\text{q}}+\Delta_{e,1}-\kappa\,\text{Im}(\alpha_ {e}\alpha^{*}_{g})-\Delta_{g,1}\Big{]}\] \[\quad+\chi_{\text{qr}}\,\text{Re}(\alpha_{e}\alpha^{*}_{g})+ \kappa\,\text{Im}(\alpha_{e}\alpha^{*}_{g})\] \[\quad-\Big{(}\epsilon^{*}\beta_{ge}+\epsilon\beta^{*}_{ge}\Big{)} \!\big{/}2\] \[=\tilde{\omega}_{\text{q}}+\chi_{\text{qr}}\,\text{Re}(\alpha_{e} \alpha^{*}_{g}). \tag{110}\] Applying the same procedure to the other two equations, we find \[\dot{\lambda}^{gf}_{nmpq} =\Big{[}\mathrm{i}\bar{\omega}_{fg}-\gamma_{1,gf}/2-\gamma_{\phi,gf}\] \[\qquad\quad+2\chi_{\mathrm{qr}}\operatorname{Im}(\alpha_{f} \alpha_{g}^{*})-\mathrm{i}\Delta_{\mathrm{rd}}(n-m)\] \[\qquad+\mathrm{i}2\chi_{\mathrm{qr}}m-\kappa(n+m)/2\Big{]}\lambda ^{gf}_{nmpq}\] \[\qquad+\kappa\sqrt{(n+1)(m+1)}\lambda^{gf}_{(n+1)(m+1)pq}\] \[\qquad-\frac{2\kappa\beta_{gf}^{*}}{3}\sqrt{n+1}\lambda^{gf}_{(n +1)mpq}\] \[\qquad+\frac{2\kappa\beta_{gf}}{3}\sqrt{m+1}\lambda^{gf}_{n(m+1) pq} \tag{106}\] and \[\dot{\lambda}^{ef}_{nmpq} =\Big{[}\mathrm{i}\bar{\omega}_{fe}-\gamma_{1,ef}/2-\gamma_{\phi, ef}\] \[\qquad+2\chi_{\mathrm{qr}}\operatorname{Im}(\alpha_{f}\alpha_{e }^{*})-\mathrm{i}\Delta_{\mathrm{rd}}(n-m)\] \[\qquad+\mathrm{i}\chi_{\mathrm{qr}}(2m-n)-\kappa(n+m)/2\Big{]} \lambda^{ef}_{nmpq}\] \[\qquad+\kappa\sqrt{(n+1)(m+1)}\lambda^{ef}_{(n+1)(m+1)pq}\] \[\qquad-\frac{2\kappa\beta_{ef}^{*}}{3}\sqrt{n+1}\lambda^{ef}_{(n +1)mpq}\] \[\qquad+\frac{2\kappa\beta_{ef}}{3}\sqrt{m+1}\lambda^{ef}_{n(m+1) pq} \tag{107}\] with the net frequency differences \[\bar{\omega}_{fg} =2\tilde{\omega}_{\mathrm{q}}+\alpha_{\mathrm{q}}+2\chi_{\mathrm{ qr}}\operatorname{Re}(\alpha_{f}\alpha_{g}^{*}), \tag{108}\] \[\bar{\omega}_{fe} =\tilde{\omega}_{\mathrm{q}}+\alpha_{\mathrm{q}}+\chi_{\mathrm{ qr}}\operatorname{Re}(\alpha_{f}\alpha_{e}^{*}). \tag{109}\] Since we are in the transformed frame, the photon population is initially displaced to the vacuum state, i.e., \(\lambda^{ab}_{nmpq}\propto\rho^{\mathsf{P}}_{nmq}=0\). 
In addition, there is no mechanism to excite \(\lambda^{ab}_{nmpq}\) with \(n,m,p,q>0\) because the three displacement operators are designed to keep the photon number zero in the displaced frame [33]; hence, \(\rho_{\mathcal{S},ab}=\lambda^{ab}_{0000}\) (see Eq.(108)) and \[\dot{\rho}_{\mathcal{S},ge}=\dot{\lambda}^{ge}_{0000}\] \[=\Big{[}\mathrm{i}\bar{\omega}_{eg}-\gamma_{1,ge}/2-\gamma_{\phi, ge}+\chi_{\mathrm{qr}}\operatorname{Im}(\alpha_{e}\alpha_{g}^{*})\Big{]} \lambda^{ge}_{0000}, \tag{110}\] \[\dot{\rho}_{\mathcal{S},gf}=\dot{\lambda}^{gf}_{0000}\] \[=\Big{[}\mathrm{i}\bar{\omega}_{gf}-\gamma_{1,ge}/2-\gamma_{\phi, gf}+2\chi_{\mathrm{qr}}\operatorname{Im}(\alpha_{f}\alpha_{g}^{*})\Big{]} \lambda^{gf}_{0000},\] (111) \[\dot{\rho}_{\mathcal{S},ef}=\dot{\lambda}^{ef}_{0000}\] \[=\Big{[}\mathrm{i}\bar{\omega}_{ef}-\gamma_{1,ef}/2-\gamma_{\phi, ef}+\chi_{\mathrm{qr}}\operatorname{Im}(\alpha_{f}\alpha_{e}^{*})\Big{]}\lambda^{ef}_{0000}. \tag{112}\] As explained in the main text, by assuming the frequency shifts are much smaller than the bare frequencies, we can approximate the qutrit as a Markovian system, thus writing down Eq.(53). Although a qutrit is used in the derivation, the result can be easily generalized to a general qudit. The reader who goes through the steps for the qutrit case should have no problem making the generalization. ## Appendix D Derivation of the Qutrit Effective Stochastic Master Equation in the Diffusive Limit ### Heuristic Derivation of the Qudit SME Given the quantum channel defined in Eq.(74) of the main text, we look for a stochastic differential equation by sending \(\Delta t\) to \(0\). We start with \(\eta=1\) so that we do not need to worry about averaging over the unobserved information; of course, such an assumption is unphysical so we will need to relax it later. To introduce random processes that capture the measurement noise, we first examine various moments of the measurement outcomes \(I_{k}=I(t_{k+1})\) and \(Q_{k}=Q(t_{k+1})\) within \([t_{k},t_{k}+\Delta t)\) conditioned on the qutrit state at \(t_{k}\). To begin with, the conditional expectation of \(I_{k}\) and \(Q_{k}\) are \[\mathbb{E}\big{[}I_{k}\,|\,\hat{\rho}_{\mathcal{S}}(t_{k})\big{]} =\iint\mathrm{d}I^{\prime}\mathrm{d}Q^{\prime}\,I^{\prime}f(I^{ \prime},Q^{\prime}|\hat{\rho}_{\mathcal{S}}(t_{k}))\] \[=\sqrt{\eta\kappa\Delta t}\operatorname{Tr}\!\Big{[}\hat{\rho}_{ \mathcal{S}}(t_{k})\hat{L}_{I}(t_{k})\Big{]}\] \[=\mathcal{O}(\sqrt{\Delta t}), \tag{113}\] \[\mathbb{E}\big{[}Q_{k}\,|\,\hat{\rho}_{\mathcal{S}}(t_{k})\big{]} =\iint\mathrm{d}I^{\prime}\mathrm{d}Q^{\prime}\,Q^{\prime}f(I^{ \prime},Q^{\prime}|\hat{\rho}_{\mathcal{S}}(t_{k}))\] \[=\sqrt{\eta\kappa\Delta t}\operatorname{Tr}\!\Big{[}\hat{\rho}_{ \mathcal{S}}(t_{k})\hat{L}_{Q}(t_{k})\Big{]}\] \[=\mathcal{O}(\sqrt{\Delta t}), \tag{114}\] where we have defined \[\hat{L}_{I}(t) =\bar{I}_{g}(t)\hat{\Pi}_{g}+\bar{I}_{e}(t)\hat{\Pi}_{e}+\bar{I}_{ f}(t)\hat{\Pi}_{f}, \tag{115}\] \[\hat{L}_{Q}(t) =\bar{Q}_{g}(t)\hat{\Pi}_{g}+\bar{Q}_{e}(t)\hat{\Pi}_{e}+\bar{Q}_{ f}(t)\hat{\Pi}_{f}. 
More importantly, we also have
\[\mathbb{E}\big[I_{k}^{2}\,|\,\hat{\rho}_{\mathcal{S}}(t_{k})\big]=\sum_{a}\rho_{\mathcal{S},aa}\left(\eta\kappa\Delta t\,\bar{I}_{a}^{2}+\frac{1}{4}\right)=\eta\kappa\Delta t\operatorname{Tr}\!\Big(\hat{\rho}_{\mathcal{S}}\hat{L}_{I}^{\dagger}\hat{L}_{I}\Big)+\frac{1}{4}=\mathcal{O}(1), \tag{117}\]
\[\mathbb{E}\big[I_{k}Q_{k}\,|\,\hat{\rho}_{\mathcal{S}}(t_{k})\big]=\eta\kappa\Delta t\operatorname{Tr}\!\Big(\hat{\rho}_{\mathcal{S}}\hat{L}_{I}\hat{L}_{Q}\Big)=\mathcal{O}(\Delta t). \tag{107}\]
We observe that the second moments of \(I_{k}\) and \(Q_{k}\) are nonvanishing as \(\Delta t\to 0\), but the correlation between \(I_{k}\) and \(Q_{k}\) vanishes for small \(\Delta t\), which implies that the measurements of \(I_{k}\) and \(Q_{k}\) are related to two independent random processes. We can keep computing higher moments, such as
\[\mathbb{E}\big[I_{k}^{2}Q_{k}\,|\,\hat{\rho}_{\mathcal{S}}(t_{k})\big]=\frac{1}{4}\sqrt{\eta\kappa\Delta t}\operatorname{Tr}\!\Big(\hat{\rho}_{\mathcal{S}}\hat{L}_{Q}\Big)+\eta\kappa\Delta t\operatorname{Tr}\!\Big(\hat{\rho}_{\mathcal{S}}\hat{L}_{I}^{2}\hat{L}_{Q}\Big)=\mathcal{O}(\sqrt{\Delta t}), \tag{108}\]
\[\mathbb{E}\big[I_{k}Q_{k}^{2}\,|\,\hat{\rho}_{\mathcal{S}}(t_{k})\big]=\frac{1}{4}\sqrt{\eta\kappa\Delta t}\operatorname{Tr}\!\Big(\hat{\rho}_{\mathcal{S}}\hat{L}_{I}\Big)+\eta\kappa\Delta t\operatorname{Tr}\!\Big(\hat{\rho}_{\mathcal{S}}\hat{L}_{I}\hat{L}_{Q}^{2}\Big)=\mathcal{O}(\sqrt{\Delta t}), \tag{109}\]
but it is clear that terms containing \(I_{k}^{2}\) and \(Q_{k}^{2}\) are not negligible and should be examined carefully as \(\Delta t\to 0\). In the diffusive limit, we introduce two random processes, \(W_{I}(t)\) and \(W_{Q}(t)\), related to \(I(t)\) and \(Q(t)\) by
\[I_{k}=\sqrt{\eta\kappa\Delta t}\operatorname{Tr}\!\big[\hat{\rho}_{\mathcal{S}}(t_{k})\hat{L}_{I}(t_{k})\big]+\frac{\Delta W_{I}(t_{k})}{2\sqrt{\Delta t}}, \tag{110}\]
\[Q_{k}=\sqrt{\eta\kappa\Delta t}\operatorname{Tr}\!\big[\hat{\rho}_{\mathcal{S}}(t_{k})\hat{L}_{Q}(t_{k})\big]+\frac{\Delta W_{Q}(t_{k})}{2\sqrt{\Delta t}}, \tag{111}\]
where \(\Delta W_{I}(t_{k})=W_{I}(t_{k+1})-W_{I}(t_{k})\) and \(\Delta W_{Q}(t_{k})=W_{Q}(t_{k+1})-W_{Q}(t_{k})\). From the moments of \(I_{k}\) and \(Q_{k}\), one can easily verify that, up to first order in \(\Delta t\),
\[\mathbb{E}\big[\Delta W_{I}(t_{k})\,|\,\hat{\rho}_{\mathcal{S}}(t_{k})\big]=\mathbb{E}\big[\Delta W_{Q}(t_{k})\,|\,\hat{\rho}_{\mathcal{S}}(t_{k})\big]=0, \tag{112}\]
\[\mathbb{E}\big[\Delta W_{I}(t_{k})\Delta W_{Q}(t_{k})\,|\,\hat{\rho}_{\mathcal{S}}(t_{k})\big]=0, \tag{113}\]
\[\mathbb{E}\big[(\Delta W_{I}(t_{k}))^{2}\,|\,\hat{\rho}_{\mathcal{S}}(t_{k})\big]=\mathbb{E}\big[(\Delta W_{Q}(t_{k}))^{2}\,|\,\hat{\rho}_{\mathcal{S}}(t_{k})\big]=\Delta t. \tag{114}\]
Hence, \(W_{I}\) and \(W_{Q}\) can be treated as two independent Wiener processes. Moreover, since the moments of the Wiener increments are independent of the qutrit state, we can drop the conditioning above. However, note that the qutrit state depends on the past trajectory of \(W_{I}\) and \(W_{Q}\), which is why \(\hat{\rho}_{\mathcal{S}}\) should always be interpreted as the conditional state.
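To see these statistics in action, the following is a minimal numerical sketch (ours, not part of the derivation) that samples the discretized records per Eq.(110) and (111) for a fixed diagonal qutrit state; the pointer means and all parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumed values, not from the text)
eta, kappa, dt = 0.4, 2 * np.pi * 1e6, 1e-9   # efficiency, rate (rad/s), step (s)
I_bar = np.array([0.8, -0.2, -0.6])           # assumed pointer means for L_I (g, e, f)
Q_bar = np.array([0.1, 0.7, -0.5])            # assumed pointer means for L_Q
p = np.array([0.5, 0.3, 0.2])                 # qutrit populations rho_{S,aa}

rng = np.random.default_rng(0)
n_steps = 5000

# Eq.(110)-(111): I_k = sqrt(eta*kappa*dt) Tr[rho L_I] + dW_I / (2 sqrt(dt))
tr_LI, tr_LQ = p @ I_bar, p @ Q_bar
dW_I = rng.normal(0.0, np.sqrt(dt), n_steps)  # Wiener increments, variance dt
dW_Q = rng.normal(0.0, np.sqrt(dt), n_steps)
I_k = np.sqrt(eta * kappa * dt) * tr_LI + dW_I / (2 * np.sqrt(dt))
Q_k = np.sqrt(eta * kappa * dt) * tr_LQ + dW_Q / (2 * np.sqrt(dt))

# Check the quoted moments: E[I_k] = O(sqrt(dt)), Var[I_k] ~ 1/4, Cov[I,Q] -> 0
print(I_k.mean(), I_k.var(), np.cov(I_k, Q_k)[0, 1])
```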
Putting it differently, according to Eq.(110) and (111), we notice that the histories of \(I\) and \(Q\) are determined once we have specified a realization of \(W_{I}\) and \(W_{Q}\); therefore, we can generate the quantum trajectories of the qutrit during the dispersive measurement by simulating all possible realizations of \(W_{I}\) and \(W_{Q}\). With the preparation above, we are ready to derive the stochastic master equation in the diffusive limit from Eq.(74). In the limit as \(\Delta t\to 0\), we can expand Eq.(74) to first order in \(\Delta t\) with the caution that the Wiener increments follow Itô's rule, i.e., \((\Delta W_{I})^{2}=(\Delta W_{Q})^{2}=\Delta t\) and \(\Delta W_{I}\Delta W_{Q}=0\) as \(\Delta t\to 0\). In addition, the calculation is considerably simplified with the following observation: the Kraus operator
\[\hat{K}_{IQ}(t_{k})=\mathscr{N}_{k}\sum_{a\in\{g,e,f\}}\exp\bigg\{-\Big[I-\sqrt{\eta\kappa\Delta t}\,\bar{I}_{a}(t_{k})\Big]^{2}-\Big[Q-\sqrt{\eta\kappa\Delta t}\,\bar{Q}_{a}(t_{k})\Big]^{2}\bigg\}\hat{\Pi}_{a}\approx\tilde{\mathscr{N}}_{k}\exp\bigg\{-\Big[I-\sqrt{\eta\kappa\Delta t}\sum_{a}\bar{I}_{a}(t_{k})\hat{\Pi}_{a}\Big]^{2}-\Big[Q-\sqrt{\eta\kappa\Delta t}\sum_{a}\bar{Q}_{a}(t_{k})\hat{\Pi}_{a}\Big]^{2}\bigg\}=\tilde{\mathscr{N}}_{k}\exp\bigg\{-\Big[I-\sqrt{\eta\kappa\Delta t}\,\hat{L}_{I}(t_{k})\Big]^{2}-\Big[Q-\sqrt{\eta\kappa\Delta t}\,\hat{L}_{Q}(t_{k})\Big]^{2}\bigg\} \tag{115}\]
to first order in \(\Delta t\). Since \(\big[\hat{L}_{I},\hat{L}_{Q}\big]=0\), we immediately have
\[\hat{K}_{IQ}(t_{k})\approx\tilde{\mathscr{N}}_{k}\exp\!\Big\{-\Big[I-\sqrt{\eta\kappa\Delta t}\,\hat{L}_{I}(t_{k})\Big]^{2}\Big\}\exp\!\Big\{-\Big[Q-\sqrt{\eta\kappa\Delta t}\,\hat{L}_{Q}(t_{k})\Big]^{2}\Big\}, \tag{116}\]
which is a great simplification and a verification that the values of \(I\) and \(Q\) are uncorrelated in the diffusive limit. Next, we replace \(I\) and \(Q\) using Eq.(110) and (111) so that the Kraus operator at \(t_{k}\) is implicitly fixed by a realization of \(W_{I}\) and \(W_{Q}\):
\[\hat{K}_{I_{k}Q_{k}}(t_{k})=\bar{\mathscr{N}}_{k}\exp\Big[\sqrt{\eta\kappa}\,\hat{L}_{I}(t_{k})\Delta W_{I}(t_{k})+2\eta\kappa\Delta t\big\langle\hat{L}_{I}(t_{k})\big\rangle\hat{L}_{I}(t_{k})-\eta\kappa\Delta t\,\hat{L}_{I}^{2}(t_{k})-\eta\kappa\Delta t\big\langle\hat{L}_{I}(t_{k})\big\rangle^{2}-(\Delta W_{I}(t_{k}))^{2}/4\Delta t\Big]\times\exp\Big[\sqrt{\eta\kappa}\,\hat{L}_{Q}(t_{k})\Delta W_{Q}(t_{k})+2\eta\kappa\Delta t\big\langle\hat{L}_{Q}(t_{k})\big\rangle\hat{L}_{Q}(t_{k})-\eta\kappa\Delta t\,\hat{L}_{Q}^{2}(t_{k})-\eta\kappa\Delta t\big\langle\hat{L}_{Q}(t_{k})\big\rangle^{2}-(\Delta W_{Q}(t_{k}))^{2}/4\Delta t\Big]=\bar{\mathscr{N}}_{k}^{\prime}\exp\Big[\sqrt{\eta\kappa}\,\hat{L}_{I}(t_{k})\Delta W_{I}(t_{k})+2\eta\kappa\Delta t\big\langle\hat{L}_{I}(t_{k})\big\rangle\hat{L}_{I}(t_{k})-\eta\kappa\Delta t\,\hat{L}_{I}^{2}(t_{k})\Big]\times\exp\Big[\sqrt{\eta\kappa}\,\hat{L}_{Q}(t_{k})\Delta W_{Q}(t_{k})+2\eta\kappa\Delta t\big\langle\hat{L}_{Q}(t_{k})\big\rangle\hat{L}_{Q}(t_{k})-\eta\kappa\Delta t\,\hat{L}_{Q}^{2}(t_{k})\Big], \tag{101}\]
where we have used Itô's rule and lumped all scalar terms into \(\bar{\mathscr{N}}_{k}^{\prime}\).
We have also adopted the notation \(\big\langle\hat{A}\big\rangle=\operatorname{Tr}\!\big[\hat{\rho}_{\mathcal{S}}(t)\hat{A}\big]\). In addition, note that we do not need to know the value of the normalization constant \(\bar{\mathscr{N}}_{k}^{\prime}\) because it appears in both the numerator and denominator of Eq.(74) and will thus be canceled. At this point, there is not much simplification possible, but the math is quite straightforward. After some algebra and several applications of Itô's rule, we arrive at
\[\hat{\rho}_{\mathcal{S}}(t_{k+1})=\frac{\hat{K}_{I_{k}Q_{k}}(t_{k})\hat{\rho}_{\mathcal{S}}(t_{k})\hat{K}_{I_{k}Q_{k}}^{\dagger}(t_{k})}{\operatorname{Tr}\!\big[\hat{K}_{I_{k}Q_{k}}(t_{k})\hat{\rho}_{\mathcal{S}}(t_{k})\hat{K}_{I_{k}Q_{k}}^{\dagger}(t_{k})\big]}=\hat{\rho}_{\mathcal{S}}(t_{k})+\eta\kappa\Big[\hat{L}_{I}(t_{k})\hat{\rho}_{\mathcal{S}}(t_{k})\hat{L}_{I}(t_{k})-\frac{1}{2}\hat{L}_{I}^{2}(t_{k})\hat{\rho}_{\mathcal{S}}(t_{k})-\frac{1}{2}\hat{\rho}_{\mathcal{S}}(t_{k})\hat{L}_{I}^{2}(t_{k})\Big]\Delta t+\sqrt{\eta\kappa}\Big[\hat{L}_{I}(t_{k})\hat{\rho}_{\mathcal{S}}(t_{k})+\hat{\rho}_{\mathcal{S}}(t_{k})\hat{L}_{I}(t_{k})-2\big\langle\hat{L}_{I}(t_{k})\big\rangle\hat{\rho}_{\mathcal{S}}(t_{k})\Big]\Delta W_{I}(t_{k})+\eta\kappa\Big[\hat{L}_{Q}(t_{k})\hat{\rho}_{\mathcal{S}}(t_{k})\hat{L}_{Q}(t_{k})-\frac{1}{2}\hat{L}_{Q}^{2}(t_{k})\hat{\rho}_{\mathcal{S}}(t_{k})-\frac{1}{2}\hat{\rho}_{\mathcal{S}}(t_{k})\hat{L}_{Q}^{2}(t_{k})\Big]\Delta t+\sqrt{\eta\kappa}\Big[\hat{L}_{Q}(t_{k})\hat{\rho}_{\mathcal{S}}(t_{k})+\hat{\rho}_{\mathcal{S}}(t_{k})\hat{L}_{Q}(t_{k})-2\big\langle\hat{L}_{Q}(t_{k})\big\rangle\hat{\rho}_{\mathcal{S}}(t_{k})\Big]\Delta W_{Q}(t_{k}) \tag{102}\]
to first order in \(\Delta t\). By replacing \(\Delta t\) and \(\Delta W(t)\) by the differentials \(\mathrm{d}t\) and \(\mathrm{d}W(t)\), respectively, and letting \(\mathrm{d}\hat{\rho}_{\mathcal{S}}(t)=\hat{\rho}_{\mathcal{S}}(t+\Delta t)-\hat{\rho}_{\mathcal{S}}(t)\) as \(\Delta t\to 0\), we finally obtain the stochastic master equation
\[\mathrm{d}\hat{\rho}_{\mathcal{S}}(t)=\eta\kappa\mathscr{D}\big[\hat{L}_{I}(t)\big]\hat{\rho}_{\mathcal{S}}(t)\mathrm{d}t+\eta\kappa\mathscr{D}\big[\hat{L}_{Q}(t)\big]\hat{\rho}_{\mathcal{S}}(t)\mathrm{d}t+\sqrt{\eta\kappa}\Big[\hat{L}_{I}(t)\hat{\rho}_{\mathcal{S}}(t)+\hat{\rho}_{\mathcal{S}}(t)\hat{L}_{I}(t)-2\big\langle\hat{L}_{I}(t)\big\rangle\hat{\rho}_{\mathcal{S}}(t)\Big]\mathrm{d}W_{I}(t)+\sqrt{\eta\kappa}\Big[\hat{L}_{Q}(t)\hat{\rho}_{\mathcal{S}}(t)+\hat{\rho}_{\mathcal{S}}(t)\hat{L}_{Q}(t)-2\big\langle\hat{L}_{Q}(t)\big\rangle\hat{\rho}_{\mathcal{S}}(t)\Big]\mathrm{d}W_{Q}(t). \tag{103}\]
In an experiment, we do not have control over \(W_{I}\) and \(W_{Q}\); instead, we observe current/voltage-like quantities of the form
\[V_{I,k}\doteq\frac{2I_{k}}{\sqrt{\Delta t}}=\sqrt{\eta\kappa}\big\langle 2\hat{L}_{I}(t_{k})\big\rangle+\frac{\Delta W_{I}(t_{k})}{\Delta t}, \tag{104}\]
\[V_{Q,k}\doteq\frac{2Q_{k}}{\sqrt{\Delta t}}=\sqrt{\eta\kappa}\big\langle 2\hat{L}_{Q}(t_{k})\big\rangle+\frac{\Delta W_{Q}(t_{k})}{\Delta t}, \tag{105}\]
which, in the continuous limit, become
\[V_{I}(t)=\sqrt{\eta\kappa}\big\langle 2\hat{L}_{I}(t)\big\rangle+\xi_{I}(t), \tag{117}\]
\[V_{Q}(t)=\sqrt{\eta\kappa}\big\langle 2\hat{L}_{Q}(t)\big\rangle+\xi_{Q}(t), \tag{118}\]
where \(\xi_{I}(t)=\dot{W}_{I}(t)\) and \(\xi_{Q}(t)=\dot{W}_{Q}(t)\) are classical white-noise signals defined by their expectations and autocorrelations
\[\mathbb{E}[\xi_{I}(t)]=\mathbb{E}[\xi_{Q}(t)]=\mathbb{E}[\xi_{I}(t)\xi_{Q}(t^{\prime})]=0, \tag{108}\]
\[\mathbb{E}[\xi_{I}(t)\xi_{I}(t^{\prime})]=\mathbb{E}[\xi_{Q}(t)\xi_{Q}(t^{\prime})]=\delta(t-t^{\prime}). \tag{109}\]
Since the ensemble average of the white noise is zero, Eq.(117) and (118) formally justify why we can determine the qutrit state, which is encoded in \(\big\langle\hat{L}_{I}(t)\big\rangle\) and \(\big\langle\hat{L}_{Q}(t)\big\rangle\), by measuring \(V_{I}\) and \(V_{Q}\). There is still one detail to be corrected, which is the fact that we have set \(\eta=1\) in the above derivation. Following the standard treatment of detection inefficiency (i.e., \(0\leq\eta<1\)), we introduce two new Wiener processes \(W_{I}^{\prime}(t)\) and \(W_{Q}^{\prime}(t)\) such that the efficiencies associated with them are both set to be \((1-\eta)\). In addition, the four Wiener processes should be independent (think of \((W_{I},W_{I}^{\prime})\) and \((W_{Q},W_{Q}^{\prime})\) as two Poisson branching processes but in the diffusive limit). Consequently, the stochastic master equation has four stochastic terms,
\[\mathrm{d}\hat{\rho}_{\mathcal{S}}=\kappa\mathscr{D}\big[\hat{L}_{I}\big]\hat{\rho}_{\mathcal{S}}\,\mathrm{d}t+\kappa\mathscr{D}\big[\hat{L}_{Q}\big]\hat{\rho}_{\mathcal{S}}\,\mathrm{d}t+\sqrt{\eta\kappa}\,\mathscr{M}\big[\hat{L}_{I}\big]\hat{\rho}_{\mathcal{S}}\,\mathrm{d}W_{I}+\sqrt{\eta\kappa}\,\mathscr{M}\big[\hat{L}_{Q}\big]\hat{\rho}_{\mathcal{S}}\,\mathrm{d}W_{Q}+\sqrt{(1-\eta)\kappa}\,\mathscr{M}\big[\hat{L}_{I}\big]\hat{\rho}_{\mathcal{S}}\,\mathrm{d}W_{I}^{\prime}+\sqrt{(1-\eta)\kappa}\,\mathscr{M}\big[\hat{L}_{Q}\big]\hat{\rho}_{\mathcal{S}}\,\mathrm{d}W_{Q}^{\prime},\]
where \(\mathscr{M}\big[\hat{c}\big]\hat{\rho}=\hat{c}\hat{\rho}+\hat{\rho}\hat{c}^{\dagger}-\big\langle\hat{c}+\hat{c}^{\dagger}\big\rangle\hat{\rho}\) is the measurement superoperator. Since we have no access to the primed processes, averaging over \(W_{I}^{\prime}\) and \(W_{Q}^{\prime}\) removes the corresponding stochastic terms while leaving the dissipators at the full rate \(\kappa\), so the efficiency \(\eta\) enters only through the measurement back-action. The same back-action terms can be obtained by starting from the stochastic master equation of the combined qutrit-resonator system: under heterodyne detection of the resonator output,
\[\mathrm{d}\hat{\rho}=\kappa\mathscr{D}\big[\hat{a}\big]\hat{\rho}\,\mathrm{d}t+\sqrt{\eta\kappa}\,\mathscr{M}\big[\hat{a}\big]\hat{\rho}\,\mathrm{d}W_{I}+\sqrt{\eta\kappa}\,\mathscr{M}\big[-\mathrm{i}\hat{a}\big]\hat{\rho}\,\mathrm{d}W_{Q}, \tag{106}\]
with the associated records \(V_{I}=\sqrt{\eta\kappa}\big\langle\hat{a}+\hat{a}^{\dagger}\big\rangle+\xi_{I}\) and \(V_{Q}=-\mathrm{i}\sqrt{\eta\kappa}\big\langle\hat{a}-\hat{a}^{\dagger}\big\rangle+\xi_{Q}\),
Eq.(106) is only the resonator part; the decoherence of the qutrit and the qutrit-resonator coupling can be added directly, resulting in \[\mathrm{d}\hat{\rho}=\] \[\qquad+\gamma_{1,gf}\mathscr{D}\big{[}\hat{\sigma}_{gf}\big{]}\hat {\rho}+\gamma_{1,ef}\mathscr{D}\big{[}\hat{\sigma}_{ef}\big{]}\hat{\rho}\] \[\qquad+\frac{\gamma_{\phi,ge}}{2}\mathscr{D}\big{[}\hat{\sigma}_ {z,ge}\big{]}\hat{\rho}+\frac{\gamma_{\phi,gf}}{2}\mathscr{D}\big{[}\hat{ \sigma}_{z,gf}\big{]}\hat{\rho}\] \[\qquad+\frac{\gamma_{\phi,ef}}{2}\mathscr{D}\big{[}\hat{\sigma}_ {z,ef}\big{]}\hat{\rho}\Big{)}\mathrm{d}t\] \[\qquad+\sqrt{\eta\kappa}\mathscr{M}\big{[}\hat{a}\big{]}\hat{ \rho}\,\mathrm{d}W_{I}+\sqrt{\eta\kappa}\mathscr{M}\big{[}-\mathrm{i}\hat{a} \big{]}\hat{\rho}\,\mathrm{d}W_{Q}. \tag{107}\] For simplicity, we have denoted \(\hat{\rho}=\hat{\rho}_{\mathcal{SR}}\) as the state of the combined system conditioned on the measurement record. To find an effective qutrit SME, we use the displacement operator introduced in Eq.(104). Since we already solved the time evolution of the density operator in the displaced frame without the heterodyne measurement, we only need to deal with terms that come from the two measurement superoperators: \[\mathrm{d}\hat{\rho}^{\mathsf{p}}=\] \[\qquad\qquad-\big{\langle}\hat{a}+\hat{a}^{\dagger}\big{\rangle}^ {\mathsf{p}}\hat{\rho}^{\mathsf{p}}-\big{\langle}\hat{\Pi}_{\alpha}+\hat{\Pi} _{\alpha}^{\dagger}\big{\rangle}^{\mathsf{p}}\hat{\rho}^{\mathsf{p}}\Big{]} \mathrm{d}W_{I}\] \[\qquad+\sqrt{\eta\kappa}\Big{[}-\mathrm{i}\big{(}\hat{a}\hat{ \rho}^{\mathsf{p}}-\hat{\rho}^{\mathsf{p}}\hat{a}\big{)}-\mathrm{i}\big{(}\hat{ \Pi}_{\alpha}\hat{\rho}^{\mathsf{p}}-\hat{\rho}^{\mathsf{p}}\hat{\Pi}_{\alpha}^ {\dagger}\big{)}\] \[\qquad\qquad\qquad+\mathrm{i}\big{\langle}\hat{a}-\hat{a}^{ \dagger}\big{\rangle}^{\mathsf{p}}\hat{\rho}^{\mathsf{p}}+\mathrm{i}\big{(} \hat{\Pi}_{\alpha}-\hat{\Pi}_{\alpha}^{\dagger}\big{)}^{\mathsf{p}}\hat{\rho}^ {\mathsf{p}}\Big{]}\mathrm{d}W_{Q} \tag{108}\] Note that \(\big{\langle}\hat{c}\big{\rangle}^{\mathsf{p}}=\mathrm{Tr}\left(\hat{c}\hat{ \rho}^{\mathsf{p}}\right)\) is the expectation value of \(\hat{c}\) in the displaced frame. 
Following the same procedure of tracing out the resonator subspace shown in Appendix C, we obtain, for example,
\[\mathrm{d}\rho_{\mathcal{S},gg}=\sum_{n}\mathrm{d}\rho^{\mathsf{P}}_{nngg}=\gamma_{1,ge}\rho_{\mathcal{S},ee}\mathrm{d}t+\gamma_{1,gf}\rho_{\mathcal{S},ff}\mathrm{d}t+2\sqrt{\eta\kappa}\bigg[\operatorname{Re}(\alpha_{g})-\sum_{a\in\{g,e,f\}}\operatorname{Re}(\alpha_{a})\rho_{\mathcal{S},aa}\bigg]\rho_{\mathcal{S},gg}\mathrm{d}W_{I}+2\sqrt{\eta\kappa}\bigg[\operatorname{Im}(\alpha_{g})-\sum_{a\in\{g,e,f\}}\operatorname{Im}(\alpha_{a})\rho_{\mathcal{S},aa}\bigg]\rho_{\mathcal{S},gg}\mathrm{d}W_{Q}, \tag{109}\]
where we have used the fact that \(\rho^{\mathsf{P}}_{nmab}=0\) for \(n,m>0\). Similarly, instead of Eq.(110), the off-diagonal element \(\rho_{\mathcal{S},ge}\) now evolves as
\[\mathrm{d}\rho_{\mathcal{S},ge}=\Big[\mathrm{RHS\ of\ Eq.(116)}\Big]\mathrm{d}t+\sqrt{\eta\kappa}\bigg[\operatorname{Re}(\alpha_{g}+\alpha_{e})-\sum_{a\in\{g,e,f\}}2\operatorname{Re}(\alpha_{a})\rho_{\mathcal{S},aa}\bigg]\rho_{\mathcal{S},ge}\mathrm{d}W_{I}+\sqrt{\eta\kappa}\bigg[\operatorname{Im}(\alpha_{g}+\alpha_{e})-\sum_{a\in\{g,e,f\}}2\operatorname{Im}(\alpha_{a})\rho_{\mathcal{S},aa}\bigg]\rho_{\mathcal{S},ge}\mathrm{d}W_{Q}=\Big[\mathrm{RHS\ of\ Eq.(116)}\Big]\mathrm{d}t+\langle g|\left(\sqrt{\eta\kappa}\,\mathscr{M}\big[\hat{L}_{I}\big]\hat{\rho}_{\mathcal{S}}\mathrm{d}W_{I}+\sqrt{\eta\kappa}\,\mathscr{M}\big[\hat{L}_{Q}\big]\hat{\rho}_{\mathcal{S}}\mathrm{d}W_{Q}\right)|e\rangle\,. \tag{117}\]
Combining the stochastic differential equations of the diagonal and off-diagonal terms gives the effective qutrit SME in Eq.(87) of the main text, provided that we ignore the measurement-induced frequency shifts. The effective qudit SME with \(D>3\) can also be derived following the same logic. Furthermore, the heterodyne records used in the SME of the combined system should be modified to depend only on the qutrit operators. To wit, we use the displaced frame and the fact that \(\rho^{\mathsf{P}}_{nmab}=0\) with \(n,m>0\) again:
\[V_{I}=\sqrt{\eta\kappa}\big\langle\hat{a}+\hat{a}^{\dagger}\big\rangle^{\mathsf{P}}+\sqrt{\eta\kappa}\big\langle\hat{\Pi}_{\alpha}+\hat{\Pi}_{\alpha}^{\dagger}\big\rangle^{\mathsf{P}}+\xi_{I}=\sqrt{\eta\kappa}\big\langle\hat{\Pi}_{\alpha}+\hat{\Pi}_{\alpha}^{\dagger}\big\rangle^{\mathsf{P}}+\xi_{I}=\sqrt{\eta\kappa}\sum_{n}\sum_{a\in\{g,e,f\}}(\alpha_{a}+\alpha_{a}^{*})\rho^{\mathsf{P}}_{nnaa}+\xi_{I}=\sqrt{\eta\kappa}\big\langle 2\hat{L}_{I}\big\rangle+\xi_{I}. \tag{118}\]
Note that the expectation value in the last line of Eq.(118) is computed with respect to the qutrit state only, i.e., \(\big\langle 2\hat{L}_{I}\big\rangle=\operatorname{Tr}\big(2\hat{L}_{I}\hat{\rho}_{\mathcal{S}}\big)\). The other quadrature follows from a similar calculation and, in summary, we have
\[V_{I}(t)=\sqrt{\eta\kappa}\big\langle 2\hat{L}_{I}(t)\big\rangle+\xi_{I}(t), \tag{119}\]
\[V_{Q}(t)=\sqrt{\eta\kappa}\big\langle 2\hat{L}_{Q}(t)\big\rangle+\xi_{Q}(t), \tag{120}\]
which is exactly Eq.(117) and (118). For simplicity, we have assumed that \(\phi=0\), i.e., the cable delay is ignored. One can replace \(\hat{a}\) and \(\alpha_{a}\) with \(\hat{a}e^{-\mathrm{i}\phi}\) and \(\alpha_{a}e^{-\mathrm{i}\phi}\), respectively, in all the equations above to account for a constant phase shift, which is included in the main text for generality.
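To close the appendix with something executable, here is a hedged Euler–Maruyama sketch (ours, not from the original text) of the diffusive qutrit SME in the form of Eq.(103), with the dissipator \(\mathscr{D}\) and the measurement superoperator \(\mathscr{M}\) written out explicitly; the pointer values \(\alpha_{a}\), the rates, and the time step are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumed, not from the text)
eta, kappa, dt = 0.4, 2 * np.pi * 1e6, 1e-9
alpha = np.array([0.8 + 0.1j, -0.2 + 0.7j, -0.6 - 0.5j])  # assumed pointer states

# L_I = sum_a Re(alpha_a) Pi_a and L_Q = sum_a Im(alpha_a) Pi_a (diagonal, Hermitian)
L_I = np.diag(alpha.real).astype(complex)
L_Q = np.diag(alpha.imag).astype(complex)

def D(L, rho):
    # Dissipator D[L]rho = L rho L† - {L†L, rho}/2
    return L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)

def M(L, rho):
    # Measurement superoperator for Hermitian L: L rho + rho L - 2<L> rho
    return L @ rho + rho @ L - 2 * np.trace(L @ rho).real * rho

rng = np.random.default_rng(1)
rho = np.full((3, 3), 1 / 3, dtype=complex)   # equal-superposition initial state

for _ in range(2000):
    dWI, dWQ = rng.normal(0.0, np.sqrt(dt), 2)
    rho = rho + eta * kappa * (D(L_I, rho) + D(L_Q, rho)) * dt \
              + np.sqrt(eta * kappa) * (M(L_I, rho) * dWI + M(L_Q, rho) * dWQ)
    rho = 0.5 * (rho + rho.conj().T)          # keep Hermitian
    rho /= np.trace(rho).real                 # keep trace one

print(np.diag(rho).real)
```

Along a single trajectory the populations tend to purify toward one level, reflecting the measurement back-action described above.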
2306.14752
MedLSAM: Localize and Segment Anything Model for 3D CT Images
Recent advancements in foundation models have shown significant potential in medical image analysis. However, there is still a gap in models specifically designed for medical image localization. To address this, we introduce MedLAM, a 3D medical foundation localization model that accurately identifies any anatomical part within the body using only a few template scans. MedLAM employs two self-supervision tasks: unified anatomical mapping (UAM) and multi-scale similarity (MSS) across a comprehensive dataset of 14,012 CT scans. Furthermore, we developed MedLSAM by integrating MedLAM with the Segment Anything Model (SAM). This innovative framework requires extreme point annotations across three directions on several templates to enable MedLAM to locate the target anatomical structure in the image, with SAM performing the segmentation. It significantly reduces the amount of manual annotation required by SAM in 3D medical imaging scenarios. We conducted extensive experiments on two 3D datasets covering 38 distinct organs. Our findings are twofold: 1) MedLAM can directly localize anatomical structures using just a few template scans, achieving performance comparable to fully supervised models; 2) MedLSAM closely matches the performance of SAM and its specialized medical adaptations with manual prompts, while minimizing the need for extensive point annotations across the entire dataset. Moreover, MedLAM has the potential to be seamlessly integrated with future 3D SAM models, paving the way for enhanced segmentation performance. Our code is public at \href{https://github.com/openmedlab/MedLSAM}
Wenhui Lei, Xu Wei, Xiaofan Zhang, Kang Li, Shaoting Zhang
2023-06-26T15:09:02Z
http://arxiv.org/abs/2306.14752v4
# MedLSAM: Localize and Segment Anything Model for 3D Medical Images

###### Abstract

The Segment Anything Model (SAM) has recently emerged as a groundbreaking model in the field of image segmentation. Nevertheless, both the original SAM and its medical adaptations necessitate slice-by-slice annotations, which directly increase the annotation workload with the size of the dataset. We propose MedLSAM to address this issue, ensuring a constant annotation workload irrespective of dataset size and thereby simplifying the annotation process. Our model introduces a few-shot localization framework capable of localizing any target anatomical part within the body. To achieve this, we develop a Localize Anything Model for 3D Medical Images (MedLAM), utilizing two self-supervision tasks: relative distance regression (RDR) and multi-scale similarity (MSS) across a comprehensive dataset of 14,012 CT scans. We then establish a methodology for accurate segmentation by integrating MedLAM with SAM. By annotating only six extreme points across three directions on a few templates, our model can autonomously identify the target anatomical region on all data scheduled for annotation. This allows our framework to generate 2D bounding boxes for every slice of the image, which are then leveraged by SAM to carry out the segmentation. We conducted experiments on two 3D datasets covering 38 organs and found that MedLSAM matches the performance of SAM and its medical adaptations while requiring only minimal extreme point annotations for the entire dataset. Furthermore, MedLAM has the potential to be seamlessly integrated with future 3D SAM models, paving the way for enhanced performance. Our code is public at [https://github.com/openmedlab/MedLSAM](https://github.com/openmedlab/MedLSAM).

Keywords: SAM · Medical Image Segmentation · Contrastive Learning

## 1 Introduction

The Segment Anything Model (SAM) [14] has recently demonstrated remarkable capabilities in a broad range of segmentation tasks due to its ability to manage diverse objects. Previous research has explored the application of SAM to medical image segmentation [5, 8, 9, 11, 20, 22, 25, 28, 29]. Particularly with fine-tuned medical adaptations [20, 25], SAM models have achieved impressive performance in this specialized area. However, SAM's application to medical image segmentation poses a significant challenge due to its requirement for manual annotations. These include labeled points or bounding boxes that delineate the segmentation region, which are both time-intensive and costly to produce. In this paper, we introduce MedLSAM, an automated medical image segmentation model designed to significantly reduce the annotation workload. As illustrated in Fig. 1, MedLSAM employs a two-stage methodology. The first stage involves a few-shot localization framework that automatically identifies the positions of target organs within volumetric medical images. The subsequent stage utilizes the bounding boxes generated in the first stage by applying the SAM model to execute precise image segmentation. The result is a fully autonomous pipeline that eliminates the need for manual intervention. The few-shot localization framework, MedLAM, is an extension of our previous work [17] and is premised on the observation that the spatial distribution of organs maintains strong similarities across different individuals. Our previous studies trained the model on a relatively small dataset.
In contrast, this current study significantly expands the dataset to include 14,012 CT scans from 16 different datasets.

Figure 1: The overall segmentation pipeline of MedLSAM operates as follows. Given a dataset of any size, MedLSAM first applies a localization process (MedLAM) to identify the six extreme points (in the z, x, and y directions) of any anatomical region of interest. This process results in the generation of a 3D bounding box encompassing the targeted organ or structure. Subsequently, for each slice within this 3D bounding box, a corresponding 2D bounding box is generated. These 2D bounding boxes are then utilized by the Segment Anything Model (SAM) to carry out precise segmentation of the target anatomy, thereby automating the entire segmentation process.

This allows us to train a unified, comprehensive model capable of localizing structures across the entire body. The training process involves a projection network that predicts the 3D physical offsets between any two patches within the same image, thereby mapping every part of the scans onto a shared 3D latent coordinate system. While our localization approach is robust, we acknowledge that individual variations in anatomical positioning could result in different anatomical structures sharing the same latent coordinates across various images. To mitigate this issue, we refine our localization accuracy by extracting pixel-level features from our points of interest. This method enables us to identify the most similar feature within the vicinity of the initially localized point, thus enhancing the overall localization accuracy. Our approach is inspired by the self-supervised learning tasks proposed in [26, 27], which strive to maximize the similarity between the original and augmented instances of the same image point and minimize the similarity between different points. This technique improves the accuracy and precision of our model's localization. For the segmentation stage, we employ the original SAM and the well-established MedSAM [20] as the foundation for our segmentation process. MedSAM, previously fine-tuned on a comprehensive collection of medical image datasets, has exhibited considerable performance in 2D and 3D medical image segmentation tasks. The use of such a robust model bolsters the reliability and effectiveness of our proposed pipeline. The effectiveness of MedLSAM is validated through experiments on two 3D datasets comprising 38 organs. The results demonstrate that MedLSAM parallels the performance of SAM and its medical adaptations while significantly reducing the burden of manual annotation.

## 2 Methodology

In this section, we elucidate the mechanisms underpinning MedLAM's functionality. Initially, in Section 2.1, we explore the training of MedLAM, during which global anatomical coordinates and local image features are extracted from any given point within a scan, aiding in identifying the most similar point within a query scan. Subsequently, in Section 2.2, we detail the inference process of MedLAM and its integration with the Segment Anything Model (SAM) to give rise to the comprehensive MedLSAM model.

### 2.1 Training of MedLAM

Our MedLAM model, as illustrated in Fig. 2, comprises two main components: Relative Distance Regression (RDR) and Multi Scale Similarity (MSS). We start by selecting a volumetric image \(\mathbf{v}\) from the unannotated training set. We then extract two large image patches from \(\mathbf{v}\), which serve as the source for further extractions.
These large patches undergo a variety of transformations to produce two pairs of patches, namely, the original patch pair \((\mathbf{x_{q}},\mathbf{x_{s}})\) and the transformed patch pair \((\mathbf{x^{\prime}_{q}},\mathbf{x^{\prime}_{s}})\).

#### 2.1.1 Relative Distance Regression (RDR)

In this step, we leverage the RDR methodology, an extension of our previous work [17]. This involves mapping 3D scan images from different individuals onto a unified implicit 3D anatomical coordinate system, ensuring that identical anatomical structures from different individuals share the same coordinate. As a result, it allows us to perform an initial, coarse localization of the point within a query scan that shares the same implicit coordinate as our point of interest. The RDR model aims to predict the 3D offset between the query patch \(\mathbf{x_{q}}\) and the support patch \(\mathbf{x_{s}}\). Considering \(\mathbf{e}\in R^{3}\) as the pixel spacing of \(\mathbf{v}\), and \(\mathbf{c_{q}},\mathbf{c_{s}}\in R^{3}\) as the centroid coordinates of \(\mathbf{x_{q}}\) and \(\mathbf{x_{s}}\) in \(\mathbf{v}\) respectively, the ground truth offset \(\mathbf{d^{\prime}_{qs}}\) from \(\mathbf{x_{q}}\) to \(\mathbf{x_{s}}\) in the physical space can be calculated as:
\[\mathbf{d^{\prime}_{qs}}=(\mathbf{c_{s}}-\mathbf{c_{q}})\cdot\mathbf{e} \tag{1}\]
Both \(\mathbf{x_{s}}\) and \(\mathbf{x_{q}}\) undergo processing via an encoder to distill high-level features. Subsequently, fully connected layers map these features to their corresponding 3D latent vectors, \(\mathbf{p_{s}}\) and \(\mathbf{p_{q}}\), each \(\in R^{3}\). This process leads to the predicted offset \(\mathbf{d_{qs}}\in R^{3}\) from the query patch \(\mathbf{x_{q}}\) to the support patch \(\mathbf{x_{s}}\) being computed as:
\[\mathbf{d_{qs}}=r\cdot tanh(\mathbf{p_{s}}-\mathbf{p_{q}}) \tag{2}\]

Figure 2: The learning process of MedLAM.

The utilization of the hyperbolic tangent function \(tanh\) in conjunction with the hyper-parameter \(r\) is intended to dictate the upper and lower bound of \(\mathbf{d_{qs}}\), thereby covering the largest feasible offset. Lastly, to measure the difference between \(\mathbf{d_{qs}}\) and \(\mathbf{d^{\prime}_{qs}}\), we employ the Mean Square Error (MSE) loss function:
\[L_{D}=||\mathbf{d_{qs}}-\mathbf{d^{\prime}_{qs}}||^{2} \tag{3}\]
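To ground Eqs. (1)–(3) and the MSS refinement introduced below, here is a minimal PyTorch-style sketch (our illustration, not the authors' released code); `PNet`, the encoder, the feature shapes, and the (z, y, x) conventions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PNet(nn.Module):
    """Encoder plus fully connected projection to a 3D latent vector."""
    def __init__(self, encoder, feat_dim):
        super().__init__()
        self.encoder = encoder            # any 3D CNN backbone (placeholder)
        self.fc = nn.Linear(feat_dim, 3)  # projection to the 3D latent space

    def forward(self, x):
        return self.fc(self.encoder(x).flatten(1))

def rdr_loss(pnet, x_q, x_s, c_q, c_s, spacing, r):
    d_true = (c_s - c_q) * spacing                   # Eq. (1): physical offset
    d_pred = r * torch.tanh(pnet(x_s) - pnet(x_q))   # Eq. (2): bounded prediction
    return ((d_pred - d_true) ** 2).sum(-1).mean()   # Eq. (3): MSE loss

def mss_refine(feats_s, feats_q, c1, out_size):
    """Aggregate multi-scale cosine-similarity maps and return the voxel most
    similar to the support point c1 (each feature map is (C, D, H, W); c1 is a
    (z, y, x) coordinate at the full patch resolution out_size)."""
    agg = torch.zeros(out_size)
    for f_s, f_q in zip(feats_s, feats_q):
        scale = [s / o for s, o in zip(f_s.shape[1:], out_size)]
        z, y, x = [int(c * sc) for c, sc in zip(c1, scale)]
        v = f_s[:, z, y, x]                                   # feature at c1
        sim = F.cosine_similarity(f_q, v[:, None, None, None], dim=0)
        agg += F.interpolate(sim[None, None], size=out_size,
                             mode="trilinear", align_corners=False)[0, 0]
    return torch.nonzero(agg == agg.max())[0]                 # refined (z, y, x)
```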
#### 2.1.2 Multi Scale Similarity (MSS)

Given the inherent variations in anatomical positioning across different individuals, regions sharing the same latent coordinates in various images may still correspond to different anatomical structures. Therefore, we need to further refine the precision of our localization by extracting local pixel-level features from our points of interest. This allows us to pinpoint the most similar feature within the vicinity of the initially localized point, thereby enhancing the overall localization accuracy. This is inspired by the work in [26, 27], which ensures that augmented instances of the same image yield highly similar features for the same point, while different points exhibit substantially divergent features. More specifically, as shown in Fig. 3, the inputs to our MSS process include multi-scale feature maps extracted from \(\mathbf{x_{s}}\) and \(\mathbf{x^{\prime}_{s}}\), along with a chosen point \(c_{1}\) from \(\mathbf{x_{s}}\), whose corresponding point in \(\mathbf{x^{\prime}_{s}}\) is \(c^{\prime}_{1}\).

Figure 3: Details of the Multi Scale Similarity (MSS).

We extract the feature vectors corresponding to point \(c_{1}\) from the various scale feature maps of \(\mathbf{x_{s}}\), and we compute the similarity between these feature vectors and the corresponding scale feature maps in \(\mathbf{x^{\prime}_{s}}\). After resizing the resulting similarity maps to the original image size, we aggregate them. This process allows us to pinpoint the location within \(\mathbf{x^{\prime}_{s}}\) that exhibits the highest similarity to point \(c_{1}\), thereby further refining our localization.

### 2.2 Inference of MedLSAM

The inference stage for our MedLSAM framework combines the strengths of MedLAM for landmark localization and MedSAM for medical image segmentation, as shown in Fig. 4. Initially, we utilize MedLAM to localize the desired landmark in the query image. We conceptualize the localization task as maneuvering an agent from a randomly initialized position towards the target location. A patch is extracted from a random position within the query image, and simultaneously, a support patch is extracted from the support image, centered around the pre-specified landmark. Upon processing these two patches through the MedLAM model, we obtain a 3D offset that represents the estimated relative spatial displacement between the query and target positions. By updating the agent's location based on this offset, we achieve a coarse localization of the landmark within the query image. For refining the landmark localization, the Multi Scale Similarity (MSS) component of MedLAM is utilized. We extract multi-scale feature maps around the coarsely localized point in the query image and its corresponding point in the support image, perform similarity calculations, and aggregate the similarity maps to pinpoint the location with the highest feature similarity in the query image. This procedure significantly enhances the precision of our landmark localization. After successfully identifying the landmarks, we transition to the segmentation stage. For this, we utilize both SAM and MedSAM, a specialized variant of SAM that has been fine-tuned for medical image datasets. Both models serve as the foundation for our segmentation tasks. The versatility of SAM and the domain-specific adaptations of MedSAM help us provide robust segmentation results, thereby adding to the overall efficacy of the MedLSAM system.

## 3 Experiments

### 3.1 Dataset

Our MedLAM model is trained on an extensive set of 16 datasets, which collectively comprise a total of 14,012 CT scans. These scans encompass various regions of the human body, providing comprehensive anatomical coverage. An overview of the training datasets is provided in Table 1. The diverse and abundant training data ensures the robustness and generalizability of our model across different medical imaging contexts. To validate the effectiveness of our approach, we integrate MedLAM with two segmentation backbones: SAM [14] and MedSAM [20]. We test these combined models on two CT segmentation datasets: 1) StructSeg19 Task1 dataset for the 22 head-and-neck (HaN) organs with 50 scans; 2) the WORD dataset [19] for the 16 abdomen organs with 120 scans. For both datasets, we randomly select five scans as support volumes. For each organ in these scans, we compute the extreme coordinates and average the coordinates and features across the five images. This process generates an average representation of latent coordinates and features for each extreme point of the organ, which are then utilized in the succeeding stages of MedLAM as depicted in Sec. 2.2.

\begin{table} \begin{tabular}{l|c c} \hline Dataset & Number & Anatomical Region \\ \hline GLIA [3] & 1338 & HaN \\ ACRIN 6685 [18] & 260 & HaN \\ OPC-Radiomics [15] & 606 & HaN \\ Head-Neck-PET-CT [24] & 298 & HaN \\ HNSCC [7] & 591 & HaN/Thorax/Abdomen \\ autoPET [6] & 1014 & Whole \\ MELA [4] & 770 & Thorax \\ LIDC-IDRI [2] & 1308 & Thorax \\ STOIC2021 [23] & 2000 & Thorax \\ MSD-Lung [1] & 95 & Thorax \\ CBIS-DDSM [16] & 2620 & Thorax \\ AMOS 2022 [12] & 500 & Thorax/Abdomen \\ Kits19 [10] & 141 & Abdomen \\ MSD-Colon [1] & 190 & Abdomen \\ MSD-Pancreas [1] & 281 & Abdomen \\ FLARE2022 [21] & 2000 & Abdomen \\ \hline Total & 14,012 & Whole \\ \hline \end{tabular} \end{table} Table 1: Detailed information of the 16 CT datasets for MedLAM training.
Figure 4: Structure of our Localization Anything Model (MedLAM). \(\mathbf{x_{s}}\) and \(\mathbf{x_{q}}\) are the support and query patches centered at \(\mathbf{c_{s}}\) and \(\mathbf{c_{q}}\). We use a shared Pnet to transform \(\mathbf{x_{s}}\) and \(\mathbf{x_{q}}\) to 3D latent vectors \(\mathbf{p_{s}}\) and \(\mathbf{p_{q}}\), respectively. The Pnet contains convolution blocks to extract features and fully connected layers for projection. We apply scale factor \(r\) and hyperbolic tangent function \(tanh\) to get the predicted offset \(\mathbf{d_{qs}}\), i.e., relative position from \(\mathbf{x_{s}}\) to \(\mathbf{x_{q}}\).

### 3.2 Implementation Details

Our model was trained using four NVIDIA GTX 3090 Ti GPUs. We utilized the Adam optimizer [13] with a batch size of 8, an initial learning rate of \(10^{-3}\), and a training duration of 250 epochs. In terms of pre-processing for MedLAM's training and testing, we rescaled the voxel spacing to [3, 3, 3] mm and standardized the cropping patch sizes to 64\(\times\)64\(\times\)64 pixels. To ensure that the scanning range was fully covered, we set the parameter \(r\) as [1500, 600, 600]. Upon utilizing SAM and MedSAM, the original images are subjected to a separate preprocessing routine in line with the standard procedures described in the original MedSAM methodology. This includes adjusting the slice resolution to 3 \(\times\) 1024 \(\times\) 1024 and normalizing the intensity. Further, specific handling measures are adopted for different datasets based on their unique characteristics. For abdominal organs in the WORD dataset, in accordance with MedSAM, we exclude segmentation targets that consist of fewer than 100 pixels. For the HaN organs in the StructSeg dataset, which are typically smaller, we adapt the criteria and exclude only those slices that contain fewer than 10 pixels. This adjustment ensures the model's robust performance in identifying and analyzing small yet potentially significant anatomical structures in HaN CT scans. For the 3D bounding box obtained from MedLAM localization, we extended it by [2, 10, 10] pixels in the z, x, and y directions, respectively. This strategy ensured that the targeted organ was completely encapsulated within the box, enabling effective segmentation.
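As a small sketch of the prompt construction just described (our illustration; the (z, y, x) array convention and the function name are assumptions): the six localized extreme points give a 3D box, which is padded by [2, 10, 10] pixels and sliced into per-slice 2D boxes for SAM.

```python
import numpy as np

def sam_prompts_from_extremes(extreme_pts, pad=(2, 10, 10), vol_shape=None):
    """extreme_pts: (6, 3) array of the six localized extreme points in
    (z, y, x) order. Returns the padded 3D box and one SAM-style
    (x_min, y_min, x_max, y_max) prompt box per axial slice it contains."""
    lo = extreme_pts.min(axis=0) - np.asarray(pad)
    hi = extreme_pts.max(axis=0) + np.asarray(pad)
    if vol_shape is not None:                 # clip to the volume boundaries
        lo = np.maximum(lo, 0)
        hi = np.minimum(hi, np.asarray(vol_shape) - 1)
    (z0, y0, x0), (z1, y1, x1) = lo.astype(int), hi.astype(int)
    boxes = {z: (x0, y0, x1, y1) for z in range(z0, z1 + 1)}
    return (lo, hi), boxes
```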
### 3.3 Experiments Results

#### 3.3.1 Evaluation of MedLAM

In this section, we begin by evaluating the localization performance of MedLAM, with the purpose of validating the viability of our universal localization model. We used the Intersection Over Union (IOU) as the metric to evaluate the accuracy of organ localization. The mean IOU of each organ is displayed in Fig. 5. In the WORD dataset, the model achieved excellent localization with the highest IOU for Head of Femur (L), reaching 0.737. Meanwhile, Gallbladder showed the lowest IOU with 0.180, suggesting room for improvement in localizing smaller organs.

Figure 5: The mean IOU score of each organ in the WORD (top) and StructSeg (bottom) dataset.

In the StructSeg dataset, the best localization performance was observed for Mandible R with an IOU of 0.805, while Opt Chiasma, being a relatively small organ, showed the lowest IOU of 0.104. In summary, MedLAM exhibited reliable localization performance, particularly for larger organs. For smaller organs, despite lower IOU scores, the subsequent preprocessing step--expanding the 3D bounding box--ensured that these organs were adequately captured for segmentation, thereby ensuring practical applicability across a wide range of organ sizes. This methodology thus provides a practical solution to the challenge of localizing smaller organs with MedLAM, balancing out the performance across different organ sizes.

#### 3.3.2 Evaluation of MedLSAM

After validating the localization performance of MedLAM, we proceeded to examine the segmentation performance of the proposed MedLSAM framework. In our experiments, the Dice Similarity Coefficient (DSC) was employed as a measure to gauge the accuracy of our method. MedLSAM utilizes the localization information from MedLAM, together with either SAM or MedSAM, to perform segmentation tasks. In addition to the automatic localizations generated by MedLAM, we also included manual bounding boxes in our evaluation. These were simulated based on the ground-truth masks, following the same approach used for generating bounding boxes in MedSAM. During training, a bounding box prompt was generated from each ground-truth mask, with a random perturbation of 0-20 pixels introduced to mimic the potential inaccuracies that would be present in manually-drawn bounding boxes.

\begin{table} \begin{tabular}{c|c c|c c} \hline Localization & \multicolumn{2}{c|}{MedLAM} & \multicolumn{2}{c}{Manual} \\ \hline Organs & SAM & MedSAM & SAM & MedSAM \\ \hline Brain Stem & 53.5 \(\pm\) 5.5 & **64.7 \(\pm\) 6.3** & 65.2 \(\pm\) 3.7 & **72.8 \(\pm\) 3.3** \\ Eye L & **63.9 \(\pm\) 6.1** & 61.1 \(\pm\) 6.1 & **67.6 \(\pm\) 5.0** & 66.8 \(\pm\) 5.6 \\ Eye R & **66.3 \(\pm\) 5.3** & 63.4 \(\pm\) 5.2 & **69.5 \(\pm\) 4.6** & 67.6 \(\pm\) 4.9 \\ Lens L & 22.2 \(\pm\) 7.5 & **16.5 \(\pm\) 3.1** & **21.4 \(\pm\) 9.5** & 15.9 \(\pm\) 2.8 \\ Lens R & 20.6 \(\pm\) 6.9 & **13.6 \(\pm\) 2.8** & **20.7 \(\pm\) 10.8** & 13.8 \(\pm\) 3.5 \\ Opt Nerve L & **31.4 \(\pm\) 9.5** & 29.7 \(\pm\) 13.2 & **32.4 \(\pm\) 12.9** & 22.2 \(\pm\) 17.9 \\ Opt Nerve R & **34.6 \(\pm\) 8.9** & 32.1 \(\pm\) 12.2 & 32.2 \(\pm\) 15.9 & **36.4 \(\pm\) 13.1** \\ Opt Chiasma & **29.0 \(\pm\) 10.0** & 28.9 \(\pm\) 16.0 & **37.9 \(\pm\) 14.9** & 25.3 \(\pm\) 14.7 \\ Temporal Lobes L & 25.4 \(\pm\) 16.3 & **72.3 \(\pm\) 4.8** & 37.7 \(\pm\) 20.2 & **78.2 \(\pm\) 6.4** \\ Temporal Lobes R & 19.9 \(\pm\) 20.2 & **67.7 \(\pm\) 8.6** & 34.4 \(\pm\) 21.2 & **76.5 \(\pm\) 7.6** \\ Pituitary & **36.2 \(\pm\) 21.1** & 28.5 \(\pm\) 16.1 & **36.6 \(\pm\) 17.0** & 29.1 \(\pm\) 15.4 \\ Parotid Gland L & 7.1 \(\pm\) 6.5 & **44.2 \(\pm\) 10.3** & 27.8 \(\pm\) 10.0 & **50.7 \(\pm\) 11.0** \\ Parotid Gland R & 8.1 \(\pm\) 8.6 & **44.7 \(\pm\) 8.5** & 30.2 \(\pm\) 9.9 & **49.4 \(\pm\) 10.5** \\ Inner Ear L & **51.7 \(\pm\) 16.5** & 48.1 \(\pm\) 13.7 & **56.4 \(\pm\) 16.1** & 54.7 \(\pm\) 12.4 \\ Inner Ear R & **63.8 \(\pm\) 10.3** & 43.5 \(\pm\) 18.8 & **60.5 \(\pm\) 15.8** & 44.5 \(\pm\) 16.9 \\ Mid Ear L & **64.1 \(\pm\) 11.8** & 27.7 \(\pm\) 14.1 & **74.4 \(\pm\) 7.7** & 39.5 \(\pm\) 14.4 \\ Mid Ear R & **64.7 \(\pm\) 10.6** & 33.2 \(\pm\) 14.2 & **74.4 \(\pm\) 7.7** & 45.0 \(\pm\) 9.6 \\ TM Joint L & **54.0 \(\pm\) 8.3** & 34.9 \(\pm\) 12.4 & **62.8 \(\pm\) 11.5** & 37.8 \(\pm\) 16.1 \\ TM Joint R & **58.5 \(\pm\) 7.9** & 43.6 \(\pm\) 11.1 & **64.5 \(\pm\) 14.8** & 46.6 \(\pm\) 11.9 \\ Spinal Cord & **9.5 \(\pm\) 3.8** & 9.4 \(\pm\) 3.5 & **40.4 \(\pm\) 6.3** & 32.7 \(\pm\) 6.4 \\ Mandible L & **48.3 \(\pm\) 5.1** & 9.0 \(\pm\) 3.8 & **85.3 \(\pm\) 2.4** & 11.1 \(\pm\) 4.6 \\ Mandible R & **43.5 \(\pm\) 5.3** & 2.6 \(\pm\) 2.9 & **80.4 \(\pm\) 2.5** & 12.9 \(\pm\) 7.8 \\ \hline Average & **39.6 \(\pm\) 7.6** & 37.5 \(\pm\) 7.1 & **50.6 \(\pm\) 6.3** & 42.3 \(\pm\) 7.8 \\ \hline \end{tabular} \end{table} Table 2: DSC (mean\(\pm\)std %) evaluation of 3D head-and-neck organs segmentation in the StructSeg Task1 dataset. The table compares the performance of SAM and MedSAM as a segmentation basis within the MedLSAM framework, along with results from manually assisted localizations.

The DSC scores of the StructSeg Task1 dataset are shown in Table 2; it is clear that MedLAM performed comparably to manual localization in the context of small organs. For example, for organs like the left and right eyes, MedLAM-based SAM and MedSAM have DSC values of around 61-67%, which are close to the DSC values of 67-69% from manual localization. For minute organs such as the left and right lenses, MedLAM shows a similar DSC value to manual localization (around 22% for SAM and 16% for MedSAM). However, MedLAM-based SAM and MedSAM have lower performance in the context of the mandible (left and right), with DSC values of 48.3% and 43.5% for SAM and only 9.0% and 2.6% for MedSAM, compared to manual localization DSC values of 85.3% and 80.4%.
Meanwhile, our evaluation on the WORD dataset for abdominal organ segmentation, as shown in Table 3, revealed that MedLAM demonstrated robust performance for organs such as the left and right kidneys. For instance, when integrated with SAM, the Dice Similarity Coefficient (DSC) reached 78.5% for the left kidney, and escalated to an impressive 83.0% for the right kidney. When combined with MedSAM, the model maintained satisfactory performance with a DSC of 66.2% for the left kidney and 62.3% for the right kidney. However, for smaller organs like the adrenal gland, both SAM and MedSAM yielded lower DSC values under 4% in comparison to a manual localization DSC of 18.7%.

\begin{table} \begin{tabular}{c|c c|c c} \hline Localization & \multicolumn{2}{c|}{MedLAM} & \multicolumn{2}{c}{Manual} \\ \hline Organs & SAM & MedSAM & SAM & MedSAM \\ \hline Liver & 55.5 \(\pm\) 9.5 & **67.1 \(\pm\) 8.7** & **84.5 \(\pm\) 6.3** & 76.3 \(\pm\) 6.7 \\ Spleen & **60.9 \(\pm\) 12.4** & 40.3 \(\pm\) 20.8 & **87.2 \(\pm\) 7.3** & 60.2 \(\pm\) 19.4 \\ Kidney (L) & **78.5 \(\pm\) 12.1** & 66.2 \(\pm\) 13.1 & **92.0 \(\pm\) 4.4** & 72.3 \(\pm\) 7.3 \\ Kidney (R) & **83.0 \(\pm\) 11.0** & 62.3 \(\pm\) 9.7 & **92.9 \(\pm\) 2.3** & 67.1 \(\pm\) 6.7 \\ Stomach & **43.0 \(\pm\) 13.1** & 35.2 \(\pm\) 15.9 & **79.4 \(\pm\) 8.6** & 62.2 \(\pm\) 14.6 \\ Gallbladder & **33.2 \(\pm\) 25.0** & 27.8 \(\pm\) 22.7 & **72.9 \(\pm\) 9.7** & 67.3 \(\pm\) 10.5 \\ Esophagus & **24.6 \(\pm\) 11.2** & 22.1 \(\pm\) 13.8 & **68.2 \(\pm\) 6.8** & 48.2 \(\pm\) 13.5 \\ Pancreas & **34.1 \(\pm\) 12.1** & 28.4 \(\pm\) 11.0 & **64.0 \(\pm\) 11.8** & 56.7 \(\pm\) 9.2 \\ Duodenum & **22.9 \(\pm\) 13.2** & 18.5 \(\pm\) 9.8 & **59.7 \(\pm\) 11.9** & 42.3 \(\pm\) 11.3 \\ Colon & **18.5 \(\pm\) 6.3** & 12.8 \(\pm\) 8.3 & **42.9 \(\pm\) 9.0** & 22.0 \(\pm\) 9.9 \\ Intestine & **35.5 \(\pm\) 9.3** & 20.2 \(\pm\) 8.8 & **60.2 \(\pm\) 7.2** & 31.3 \(\pm\) 9.1 \\ Adrenal & **3.8 \(\pm\) 5.2** & 3.4 \(\pm\) 3.1 & **18.7 \(\pm\) 12.1** & 17.3 \(\pm\) 10.0 \\ Rectum & **37.6 \(\pm\) 11.1** & 29.9 \(\pm\) 12.9 & **74.4 \(\pm\) 5.5** & 53.7 \(\pm\) 12.9 \\ Bladder & **68.4 \(\pm\) 21.3** & 62.4 \(\pm\) 17.8 & **85.8 \(\pm\) 11.6** & 73.7 \(\pm\) 10.5 \\ Head of Femur (L) & **74.5 \(\pm\) 7.7** & 48.2 \(\pm\) 15.6 & **89.3 \(\pm\) 4.7** & 62.1 \(\pm\) 12.8 \\ Head of Femur (R) & **71.5 \(\pm\) 5.3** & 46.6 \(\pm\) 10.1 & **87.9 \(\pm\) 3.6** & 65.5 \(\pm\) 7.0 \\ \hline Average & **45.9 \(\pm\) 11.2** & 37.7 \(\pm\) 12.4 & **72.5 \(\pm\) 3.1** & 54.9 \(\pm\) 6.7 \\ \hline \end{tabular} \end{table} Table 3: DSC (mean\(\pm\)std %) evaluation of 3D abdominal organs segmentation in the WORD dataset. The table compares the performance of SAM and MedSAM as segmentation basis within the MedLSAM framework, along with results from manually assisted localizations.

## 4 Discussion & Conclusions

The presented work introduced MedLSAM, the first completely automated medical adaptation of the SAM model, designed to significantly alleviate the annotation workload in the segmentation of medical images. By cleverly integrating MedLAM, a few-shot localization framework, with SAM, the system was able to achieve comparable performance to SAM and its medical adaptations, yet required only minimal extreme point annotations for the entire dataset.
This endeavor was fueled by the observation that the spatial distributions of organs across different patients maintain strong similarities. Consequently, MedLAM was designed to project every part of the scans onto a shared 3D latent coordinate system, accurately localizing target anatomical parts within the body. Coupling this approach with SAM's segmentation capabilities led to an efficient and accurate process for image segmentation.

Figure 6: Visualization examples of segmentation results on WORD and StructSeg Task1 datasets using pre-trained MedSAM and SAM, post landmark localization with MedLAM.

Moreover, MedLSAM demonstrated its effectiveness across two 3D datasets covering 38 different organs, providing robust evidence of its versatility. Importantly, this automated approach is not burdened by an increased annotation workload as data size increases. It also holds promise for direct integration with potential future 3D SAM models in the medical field, which could further enhance its performance and utility.
2308.13591
Queering the ethics of AI
This book chapter delves into the pressing need to "queer" the ethics of AI to challenge and re-evaluate the normative suppositions and values that underlie AI systems. The chapter emphasizes the ethical concerns surrounding the potential for AI to perpetuate discrimination, including binarism, and amplify existing inequalities due to the lack of representative datasets and the affordances and constraints depending on technology readiness. The chapter argues that a critical examination of the neoliberal conception of equality that often underpins non-discrimination law is necessary and cannot stress more the need to create alternative interdisciplinary approaches that consider the complex and intersecting factors that shape individuals' experiences of discrimination. By exploring such approaches centering on intersectionality and vulnerability-informed design, the chapter contends that designers and developers can create more ethical AI systems that are inclusive, equitable, and responsive to the needs and experiences of all individuals and communities, particularly those who are most vulnerable to discrimination and harm.
Eduard Fosch-Villaronga, Gianclaudio Malgieri
2023-08-25T17:26:05Z
http://arxiv.org/abs/2308.13591v1
# Queering the ethics of AI

###### Abstract

This book chapter delves into the pressing need to "queer" the ethics of AI to challenge and re-evaluate the normative suppositions and values that underlie AI systems. The chapter emphasizes the ethical concerns surrounding the potential for AI to perpetuate discrimination, including binarism, and amplify existing inequalities due to the lack of representative datasets and the affordances and constraints depending on technology readiness. The chapter argues that a critical examination of the neoliberal conception of equality that often underpins non-discrimination law is necessary and cannot stress more the need to create alternative interdisciplinary approaches that consider the complex and intersecting factors that shape individuals' experiences of discrimination. By exploring such approaches centering on intersectionality and vulnerability-informed design, the chapter contends that designers and developers can create more ethical AI systems that are inclusive, equitable, and responsive to the needs and experiences of all individuals and communities, particularly those who are most vulnerable to discrimination and harm.

Keywords: artificial intelligence, intersectionality, queering ethics, discrimination, vulnerability

AI technologies are being rapidly integrated into various sectors of society, from industry to healthcare. These systems can increase productivity and resource efficiency, due in part to the harvesting of vast amounts of data that can be processed extremely fast to predict probable outcomes. However, using past data to generate probable futures may replicate scenarios that society no longer considers desirable. Research shows that AI algorithms display biases towards certain genders, ages, races, and sexual orientations, which can result in harmful outcomes for large parts of society (Xenidis & Senden, 2019; Fosch-Villaronga et al., 2021). For instance, facial recognition systems may struggle to recognize dark-skinned women (Buolamwini and Gebru, 2018; Gebru, 2020), and content moderation tools may incorrectly flag drag queens' language as toxic (Thiago, Marcelo & Gomes, 2021). These biases are often the result of limited datasets that fail to fully represent society, or of systemic configuration biases within the AI scientific community. Like many other fields, the AI community struggles to account for diversity and address inequality. The underrepresentation of women, people of color, and LGBTQ+ individuals in research labs and leadership roles results in a lack of diversity and inclusion considerations in AI development. While much attention has been paid to issues such as how automation may replace the human workforce and privacy and data protection, there is also a need to consider how AI intersects with issues of gender, sexuality, and identity (Fosch-Villaronga & Poulsen, 2022). Queering the ethics of AI requires us to examine how these technologies can perpetuate or challenge discrimination and oppression and to develop strategies for building more inclusive and equitable systems. After this introduction, the second section--"Complex and intersecting factors shaping individuals' experiences of discrimination in AI"--expands on some of the inadvertent challenges of AI, its regulation and the ethical configurations that unfortunately support discrimination and bias.
The third and final section, "Working towards more inclusive AI practices, methods, and approaches," explores how integrating technical anticipatory methodologies with broader social and ethical considerations is crucial for ensuring that AI technologies serve all individuals' diverse needs and aspirations while minimizing harm and fostering a more just and inclusive society.

### Complex and intersecting factors shaping individuals' experiences of discrimination in AI

### The dualistic nature of engineering practice reinforces binarism

Classifying things as opposites has been deeply ingrained in many societies for centuries. For instance, pairs, doubles, and visual oppositions are prominent in several art forms in most cultures in Andean prehistory, e.g., bichrome silver and gold pieces depicting the moon and the sun, representing female and male figures, are widespread in pre-Inca societies. According to Bernier (2009), these representations reflect the importance of symbolic duality in religions, ritual performances, and social order and have impacted our way of thinking throughout history. This dualistic way of thinking has permeated many constructions throughout history, including legal systems, across the globe. In engineering, there is also a general inclination to use conceptual dichotomies, like classified/unclassified, yes/no, male/female, concrete/abstract, and reductionist/holistic. This way of organizing things, Faulkner (2000) explains, is ineradicable from engineering practice given that engineers, in the quest for certainty, usually operate in terms of binary oppositions--i.e., "it works/it does not work," "the light is on/the light is off," etc. For this reason, engineering is based on "a central paradox in which certainty/order/controllable is juxtaposed with uncertainty/chaos/uncontrollable" (Faulkner, 2000, p. 781). The world, however, cannot be (or seems to resist being) simplified into two distinct, often opposite and reductionist, categories (Bucciarelli, 1994; Johnston, 2018). These "false dichotomies" often have negative consequences when applied to AI. For example, if an AI system is trained to classify people as either male or female, the algorithm may not accurately recognize or include people who identify as non-binary or gender non-conforming (Hamidi et al., 2018). It may exclude the intersex community or misclassify the transgender population (Keyes, 2018). Misgendering can have various consequences depending on the application context and the person involved. In social media, for instance, it may lead, at least, to receiving adverts geared to a different audience. However, this practice can also lead to exclusion, discrimination, and harm (Howanski et al., 2021; Fosch-Villaronga et al., 2021). In more sensitive contexts like medicine, an error or misclassification may lead to a misdiagnosis, which can have fatal consequences for the individual (Cirillo et al., 2020; Fosch-Villaronga et al., 2022). Coupled with the practical mindset of engineering, this problem results from the traditional understanding of concepts such as sex and gender, which are typically reduced to a binary opposite: masculine vs. feminine (Nielsen et al., 2021), especially in the so-called gender classifier systems (Rai & Khanna, 2012).
As one can imagine, these dualistic practices can further entrench discrimination and inequality (Buolamwini & Gebru, 2018), especially because there are, with sex, three types (male, female, intersex) and, with gender, many more that may or may not correspond to the sex that had been assigned at birth. More worrisome is the fact that the engineering and medical communities (The EUGenMed et al., 2015) often confuse these two terms even though they have different meanings. Gender is a person's internal sense of their identity; sex is assigned at birth based on medical factors (e.g., genitalia, chromosomes, and hormones). The most crucial difference between these concepts is that sex is assumed to be determined objectively, whereas gender is inherently subjective to the person. Consequently, inferring gender from apparently objective means may be prone to errors (Fosch-Villaronga et al., 2022). Simply put, a system may be wrong if it assumes information that has not been disclosed directly. This means that any form of automated inference about gender is potentially problematic or harmful for the individuals involved. But the problem is not only when AI systems automatically attribute gender to an individual; it is also operative when well-meaning human agents intervene in response to these mistakes. Consider, for example, how transgender people experience security control at airports (Costanza-Chock, 2020). Body scanners are usually based on a binarist design paradigm--man/woman--thus systemically excluding transgender individuals by design. Upon a flagged error or misalignment with such a binary system, the intervention of human police officers often follows. However, these human agents often lack appropriate training in diversity and are, therefore, unprepared to deal with the complexities of gender identity, which can exacerbate feelings of distress and discomfort for the affected individual. Fortunately, there is increasing attention now being paid to diversity and inclusion in police training. However, recent research shows that although such training provides expanded knowledge on these topics, it fails to impact strategic approaches, which may indicate that current diversity training methods are unlikely to bring about significant changes in the behavior of law enforcement personnel at all (Lai & Lisnek, 2023). These examples demonstrate how "human intervention," usually considered an essential ethical safeguard for using and deploying AI, could be ineffective or even harmful. Together with many other cases involving intersectional considerations, these examples indicate the need for a paradigm shift in AI ethics--what we call a "queering" of AI ethics--rather than the typical search for simple and quick safeguards. Queering the ethics of AI means reconceptualizing AI ethics with a more open approach, one which can overcome top-down categorization and be more accessible to diverse subjectivities and layered perspectives.

### Technological affordances and constraints affect communities differently

Both the embodiment of a particular technology (be it, for instance, a robot that can pick a glass up from a table, or a user interface in a software application) and its abilities (to what extent a user can perform a pre-conceived task or a new one) afford and constrain user behavior (Majchrzak & Markus, 2013). A specific area of concern for the ethics of AI is the emergence of socially complex virtual environments based on data-intensive AI systems.
An important example is the metaverse, meant in the broad sense as any possible form of augmented or virtual reality, generally facilitated by wearable technologies (Burrows, 2022). The metaverse has the potential to afford and empower users to self-identify and be seen in ways that could break down social, economic, and physical barriers (Rigotti & Malgieri, 2023). At the same time, user behavior will be constrained by the readiness of the technology, the limitations of the virtual environment, the users involved in the platform, and, of course, the inevitable implications of the usage of such complex environments. As more people engage in the metaverse, questions around the representation of diverse identities and the potential for discrimination within virtual environments become increasingly relevant.

Hackl et al. (2022) explain that the creation of the avatar in the metaverse will soon become extremely valuable to our sense of self and social acceptance. When users embody an avatar, they feel a sense of ownership and agency over the avatar's body and perceive self-location within its boundaries (Kilteni et al., 2012; Lenggenhager et al., 2007; Slater et al., 2009). This phenomenon is called mediated embodiment and refers to the experience of perceiving an avatar's body as one's own through technology (Aymerich-Franch, 2018). Three components define the illusion of mediated embodiment: body ownership, self-location, and agency (Kilteni et al., 2012; Longo et al., 2008). Body ownership pertains to the feeling of possessing the avatar's body, while self-location is the perception of one's position in space, and agency is the subjective sense of controlling the avatar's actions (Aymerich-Franch & Ganesh, 2015; Gallagher, 2000; Tsakiris, 2010; Blanke & Metzinger, 2009). The experience of mediated embodiment is not limited to human-like avatars and has been observed with non-human entities such as animals and robots (Ahn et al., 2016; Aymerich-Franch et al., 2016, 2017a, 2017b).

Because avatar creation matters for our sense of self and social acceptance, people can express themselves freely in the metaverse and break free from socio-physical constraints. Transgender users of the metaverse can create avatars that better reflect their true selves and avoid social marginalization. Similarly, sex workers can separate their online lives from offline work and avoid socio-economic obstacles. However, there might be social pressure for meta-users to "normalize" themselves and fit in with other participants (Burrows, 2022). It could also lead to people conforming to socially accepted norms and experiencing "cosmetic vulnerability" due to aesthetic consumerism (Garcia-Sanchez, 2016). The physical appearance of the human body is closely linked to societal pressure to conform to accepted norms. Consequently, individuals are increasingly attempting to manage their appearance and abilities to enhance social interaction and gain recognition. Their goal is to present themselves as non-marginalized individuals deserving of social acceptance and inclusion in society. Non-white individuals in predominantly white-designed virtual worlds may lighten their avatar's skin to fit in (something called whitewashing) and avoid discrimination or exclusion; women may choose to present as men to prevent harassment or discrimination within virtual spaces. These choices highlight the way dominant social norms can influence how individuals present themselves within virtual environments.
However, it is essential to recognize that these choices are not made in a vacuum and are influenced by larger social forces such as systemic racism, sexism, heteronormativity, homophobia, and transphobia. The creation of conformist avatars perpetuates the societal pressure to conform to accepted norms of appearance and performance (Rigotti & Malgieri, 2023; Narula, 2022). This pressure has led to physical training, dieting, cosmetic surgery, and the crafting of social media profiles to pursue a desirable appearance. The normativity expected to emerge in the avatar creation process is not limited to protected categories such as gender/sex, race, age, and disability. Other personal characteristics fetishized in society, such as breast size, height, and fashion style, could also come into play. As a result, individuals may feel the need to create a conformist avatar to address bodily dissatisfaction and social subordination. However, they may also fear how others will react to a conformist avatar that does not correspond to their physical being. In the offline world, undergoing cosmetic surgery is often shrouded in secrecy, and deception is common in online and offline dating. In the metaverse, this could result in individuals feeling pressure to create avatars that do not truly reflect their physical appearance or personal characteristics. All in all, the metaverse's potential for dismantling barriers is notable. However, it raises worries about conforming to appearance norms and avatar-related vulnerabilities, highlighting how offline societal pressures persist online (Rigotti & Malgieri, 2023).

### Existing datasets lack representative data

Existing datasets have been shown to be limited in terms of providing adequate representation of human diversity. This results from several factors. Data collection may be biased due to inadequate sampling techniques, non-response bias, or lack of representation of certain groups in the population. For example, certain groups, such as low-income households or rural communities, may be underrepresented in surveys or censuses, leading to incomplete data. Most of the time, however, it is the sampling technique chosen by the system's designers that causes data incompleteness. For instance, in a recent review of databases used for affective computing, researchers found that the mean age of subjects falls almost exclusively between 20 and 30 years across datasets, excluding older age groups (50 and up), which are highly underrepresented, probably because many research groups recruit undergraduate or graduate students from their own programs to participate in their studies (Verhoef & Fosch-Villaronga, 2023). Another finding is that general-purpose affective computing datasets do not mention any inclusion of populations of subjects with varying (mental) health conditions. Mental health conditions such as depression or schizophrenia (Gur & Gur, 2022), however, can affect facial expressions and speech, making it difficult to identify and classify emotions accurately. Careful data collection with subjects in the lab is time-consuming and costly (Hox & Boeije, 2005). It is, therefore, no surprise that more recent datasets are often created by scraping data from the web. Using the web has apparent advantages, such as the large volume of data that can be collected in this fashion, which can also increase the inclusion of more diverse data sources.
However, a major disadvantage is that basic demographic information concerning data subjects is often unavailable, making it hard to measure and correct potential biases (Zimmer, 2010). As a result, data collection efforts may unknowingly be focused on specific areas or groups, leading to an incomplete dataset. An increasing number of datasets used in AI contain vast amounts of data collected from the web, such as written reviews on Amazon (Blitzer, Dredze & Pereira, 2007; Dredze, Crammer & Pereira, 2008) or IMDB (Maas et al., 2011) for textual sentiment analysis, or images and movies for bodily gesture and facial expression recognition through Google image search or YouTube (Mollahosseini, Hasani & Mahoor, 2017). Since such datasets do not usually involve recruiting test subjects in a lab, there is no demographic information about the people, making it virtually impossible to assess diversity dimensions for these sources. In addition to addressing problems with data collection, it is equally important to test systems on a diverse set of participants to ensure accuracy in generalizing and equal treatment across demographic differences. This is especially true for datasets that make extensive use of data downloaded from the internet, as it is essential to identify and mitigate probable biases by testing the resultant technology on a diverse set of users. Given that AI is often used in sensitive and protected domains, including the (mental) healthcare industry, it is reasonable to apply similar standards surrounding diversity and inclusion as those used in clinical trials (Verhoef & Fosch-Villaronga, 2023).

In general, the underrepresentation of marginalized communities in the datasets for AI training is often the product of structural, political reasons (Costanza-Chock, 2020). Additionally, the link between AI testing and marginalized communities can also be the product of political decisions, like the alleged choice of the Chinese government to test software on the Uighur Muslim minority to profile them better (Mozur, 2019). Another potential cause of a lack of diversity in the datasets used for AI training results from concerns about data privacy. Many people are hesitant to share their personal data due to worries about privacy and the security of their data, especially for data that might trigger human vulnerabilities in specific contexts (health conditions, racial data, sexual life data, sexual orientation data, etc.). These privacy concerns have also been translated into legal provisions. The legal protection of personal data (especially in the EU) generally prohibits the processing of special categories of personal data, with few circumscribed exceptions accompanied by additional safeguards (Quinn and Malgieri, 2021). The reluctance to reveal sensitive data, coupled with the legal constraints on the processing of these data, can lead to incomplete datasets where already marginalized communities are underrepresented, especially when it comes to sensitive personal information regarding health or ethnicity. As a result, there is a trade-off between privacy and the need for diverse data. Striking a balance between privacy and diversity, then, is essential to address bias and ensure fairness in AI systems (Zliobaite and Custers, 2016). The proposed EU AI Act tries to address this tension.
Article 10(5) states that "to the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may process special categories of personal data" as defined in the GDPR, "subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures". Considering that these initial provisions might not be sufficient to resolve the tensions between non-discrimination and privacy in EU law, the discussion will probably continue to evolve over the next few years (Van Bekkum and Zuiderveen Borgesius, 2022). Addressing these issues will require efforts to improve data collection techniques, address privacy concerns, and allocate sufficient resources to data collection efforts.

### Fixed categories in anti-discrimination law do not account for intersectionality

Anti-discrimination law is generally based on protected categories (race, sex, sexual orientation, political opinion, religion, belief, disability or chronic illness, civil status, age, nationality) and on specific fields (the workplace, access to products and services, welfare state rights, and more). This approach is disputable for practical and theoretical reasons. Looking at the practical reasons, building non-discrimination on fixed groups and fields is elusive, considering that new groups and forms of discrimination emerge daily through AI (Wachter, 2020). Often, people are unaware of being victims of disparate treatment or of belonging to a targeted group (Taylor, Floridi, & van der Sloot, 2017). In addition, it has been noted that, in AI design, there is often an exclusive focus on some forms of discriminatory biases (e.g., ethnicity or gender) rather than others (Peters, 2022). The approach of anti-discrimination law based on closed groups and fields also faces the challenge of intersectionality. Individuals can experience multiple forms of discrimination that intersect and compound each other, resulting in unique experiences of discrimination (Crenshaw, 1990; Noble, 2018). For example, a disabled transgender person may face discrimination not only based on their disability but also their gender, creating what has been called double jeopardy (Williams, 2014). The intersection of these two identities may also create a distinct form of discrimination due to the particular social and cultural connotations surrounding the categories, which are not adequately addressed by traditional anti-discrimination laws (Buchanan et al., 2009; Shaw, Chan, and McMahon, 2012).

From the AI design perspective, preventing AI biases through an intersectional approach is problematic. Computing "intersections" of discrimination sources is indeed quite complex. Recently, computer scientists have tried to address this issue by simplifying intersectional discrimination through a "multi-dimensional" approach (Roy, Horstmann & Ntoutsi, 2023). According to intersectional approaches, the intersection of two or more sources of discrimination (gender, race, disability) gives rise to a new form of discrimination, which is unique and not based on the mere arithmetical addition of single sources of discrimination. This uniqueness of intersectional discrimination is difficult to automate through a de-biasing process.
Generally, and especially in AI fairness, it is not easy to explain and operationalize how "being black and transgender" differs from "being black + being transgender" in each context. That is why, despite some efforts (Wang et al., 2022), the typical solution for AI designers is "multi-dimensionality," where all factors of discrimination are computed in addition to the others (Roy, Horstmann & Ntoutsi, 2023). To address these challenges, some scholars have proposed alternative approaches. For instance, instead of focusing on fixed categories, a more dynamic method recognizing the fluidity of identity and the intersectionality of discrimination could help address related impacts (Collins, 2015). Others have suggested targeting systemic discrimination and addressing the root causes of inequality (Sampson & Wilson, 2020). However, that may involve more significant structural changes in a particular society and entail shaking deep-rooted power structures (Costanza-Chock, 2020).

Focusing on the theoretical reasons, non-discrimination law is often considered to be based on a neo-liberal conception of "equality," where individuals should be treated equally in similar conditions. Scholars have questioned this approach, primarily in recent decades. Fineman (2017) criticizes the traditional approach to equality, affirming that what should be considered is individuals' inherent vulnerability and dependence and the inevitable inequality of humans. This reasoning challenges the traditional liberal focus on individual choice. Instead, it emphasizes all individuals' inherent vulnerability and interdependence, recognizing that everyone is vulnerable to harm and discrimination, but some individuals and communities are more vulnerable than others due to systemic factors such as poverty, racism, and homophobia (Arnold, Rebchook, & Kegeles, 2014). This inevitable inequality should be addressed not through traditional categories and formal or substantive ideas of equality but through an approach to social justice that would challenge the liberal reliance on individual choice. It is in this context that the need to "queer" the ethics of AI arises as one way forward to challenge and rethink the normative suppositions and values underlying AI systems.

### Working towards more inclusive AI practices, methods, and approaches

As we delve into the queering of AI, it becomes evident that inclusivity, diversity, and equality principles should extend to the realm of machines, algorithms, and robots (Poulsen, Fosch-Villaronga, & Soraa, 2020). Embracing a queer perspective in AI can help us challenge normative assumptions, dismantle biases, and foster more inclusive AI applications that respect every individual's fundamental rights. Understanding the potential adverse consequences of AI on specific communities requires developers, researchers, and engineers to employ a combination of different methodologies, both technical and socio-legal (Zaga et al., 2023). These methodologies aim at leveraging several forms of expertise and tools to assess and mitigate risks proactively and as a joint effort.

### Queering Artificial Intelligence

Solving the binarism problem in AI (as opposed to merely discussing it) requires rethinking how we categorize and classify people and data and developing more diverse and representative training datasets for AI systems.
Implementing inclusive sampling strategies, standardizing the documentation of demographic factors, and adopting diversity and inclusion guidelines like those used in clinical trials within the Artificial Intelligence (AI) field is imperative to realize the ideals of diversity, equity, and inclusion (Verhoef & Fosch-Villaronga, 2023). However, actively incorporating data from marginalized and underrepresented communities without a careful thought process may not be sufficient. Understanding the potential adverse consequences of AI on specific communities may require the application of anticipatory methodologies that can help foresee the inadvertent consequences that AI models may have for individual users and their communities. These methodologies may enable stakeholders to identify and address potential harms _before_ they occur. An intersectional analysis, for instance, is a critical framework for understanding the complex interplay of multiple social identities and systems of oppression. Applying this methodology to AI involves examining how AI systems may disproportionately impact marginalized communities based on intersecting identities such as race, gender, sexuality, disability, or socioeconomic status.

From the design perspective, adopting a queer perspective in AI necessitates approaches centering on intersectionality. This involves understanding and accommodating users' diverse needs, preferences, and identities first, via user engagement or familiarity with interdisciplinary literature (Ovalle et al., 2023), and then using this information in subsequent practices like the diversification of training datasets. We call this _vulnerability-sensitive design_. The discussion on the value-sensitive design of digital technologies has increasingly included participatory governance and multi-stakeholder participation in critical decision-making of technological business models (Costanza-Chock, 2020). In sum, designers and engineers should actively involve representatives from marginalized communities in the design process to ensure their perspectives are considered. This means that every design process should ask about potential impacts on individuals and groups of individuals. After pre-assessing the impacts (in terms of risks and severity of interference with fundamental rights), it is possible to identify the most impacted categories and involve their representatives in the critical decisions about the design of those technologies (Gilman, 2022).

Additionally, the field of embodied technologies offers new avenues for queering the field. Since most users experience embodied avatars as representations of themselves, they tend to design their avatars to reflect their physical selves (Freeman & Maloney, 2021). However, people may also want their avatars to reveal a different self in online games other than that presented by their physical appearance (Kafai et al., 2010). In any case, research continuously shows the lack of representation of several communities, such as the disabled population, in avatar design on mainstream social VR platforms (Zhang et al., 2022) or in affective computing (Verhoef and Fosch-Villaronga, 2023). These findings reveal that although technology may offer certain affordances with respect to diversity, there is still a long way to go to make it a reality. Robots have the potential to challenge normative assumptions about gender, sexuality, and identity and embody diverse bodies and selves (Soraa, 2017).
Matsuko Deluxe, a well-known celebrity and advocate for those who are different, such as plus-sized individuals, transgender individuals, and members of the LGBTQIA+ community, serves as a paragon for celebrating uniqueness and fostering inclusivity. Together with Japanese researchers, they created Matsukoroid, a robotic likeness of Matsuko Deluxe that helps reflect on the diversity of human bodies, experiences, and identities. The physical appearance of Matsukoroid and other gendered robots can be intentionally designed to break away from traditional gender stereotypes and offer a more expansive representation of bodies and identities. For example, the robot could have a body shape that is not confined to typical gender norms, with a range of physical attributes that reflect the diversity of human forms (Nomura, 2017). This design flexibility allows individuals, regardless of their gender identity or body type, to relate to and feel represented by the robotic figure. Robots like Matsukoroid can also promote inclusivity through their behavior, interactions, and programming. They can be programmed to use gender-neutral language, respect personal pronouns, and engage in conversations that embrace diverse perspectives, which ensures that the robot's behavior aligns with principles of inclusivity and sensitivity towards different identities.

### Queering the ethics of AI: Intersectionality

The queering of the ethics of AI is essential to promoting a more inclusive and equitable technological landscape. By adopting a queer perspective in AI ethics, we strive to create a world that embraces diversity in all its forms. This may involve rethinking how we categorize and classify people and data and developing more diverse and representative training datasets for AI systems. However, simply incorporating data from marginalized and underrepresented communities is not enough. Understanding the adverse consequences of AI on specific communities requires the application of anticipatory methodologies that can foresee the inadvertent consequences of these technologies. An intersectional analysis can help understand the complex interplay of multiple social identities and systems of oppression (Christensen & Jensen, 2012). Applying this methodology to AI involves examining how AI systems may disproportionately impact marginalized communities based on intersecting identities such as race, gender, sexuality, disability, or socioeconomic status (Ciston, 2019). By considering these communities' unique experiences and vulnerabilities, researchers can proactively identify potential biases, discriminatory outcomes, or exclusionary practices in AI systems. Participatory design and co-creation methodologies also necessitate the active involvement of community members, particularly those most likely to be affected, in the design, development, and evaluation of AI systems. This inclusive approach ensures that community values, needs, and concerns are integrated into the decision-making processes, helping to mitigate the risks and negative impacts on marginalized groups. Ethical impact assessments are systematic frameworks that evaluate emerging technologies' potential implications and consequences. This methodology involves thorough assessments of AI systems to identify potential harms and unintended consequences for communities (Stahl et al., 2022).
By examining issues such as privacy, fairness, accountability, and power dynamics, ethical impact assessments can provide insights into the potentially adverse impacts of AI and inform the development of mitigating strategies and policies. Impact assessments should be comprehensive and include an analysis of the societal and ethical implications of new technologies and any potential impact they may have on fundamental rights (whether individual or collective) (Mantelero, 2022). Unfortunately, traditional non-discrimination discussions tend to focus only on a few fixed categories of discrimination, with restricted ex-post solutions and in limited contexts. The more diverse and complex our society is, the more we should aim at embracing a layered approach to human vulnerabilities (Luna, 2009, 2019). Users of digital technologies might not only be at the intersection of two or more traditional groups but also unaware of being part of some groups that are particularly vulnerable in specific contexts (Malgieri, 2023). A fluid, open, and layered approach to assessing new technology's risks and harms is necessary, even though every practical solution might be highly contextual and difficult to generalize in terms of a specific one-size-fits-all ethical framework of reaction. By employing these anticipatory methodologies, including life-cycle assessments, we can gain a comprehensive understanding of the adverse consequences AI may have on specific communities (Rose et al., 2021; Chiang et al., 2021; Wender et al., 2014). This knowledge allows for proactive measures to address biases, inequalities, and discriminatory outcomes, ensuring that AI technologies are developed and deployed in a way that respects and safeguards all individuals' rights and well-being.

### Queering (non-discrimination, data protection and consumer protection) law

Our proposal to "queer" the ethics of AI also entails queering the different legal frameworks that affect AI, particularly non-discrimination, data protection, and consumer law. Although our call for queering this field may sound like a mere provocation, we aim to seriously open a debate towards a more fluid and layered approach to regulating AI systems and their effects on individuals and society. There are numerous ways to change the paradigm and overcome many of the issues we have identified (binarism in law and computer science, a static approach to discrimination, a fallacious neoliberal paradigm in addressing inequalities, legal inefficiencies in governing social oppressions, and privilege-based conformism on social media). More radical proposals call for a "personalization" of law (Ben-Shahar, 2021) to reconsider the concept of equality. Even if we are not advocating for a personalized law, we call for a fluidity of law, where layered subjective perspectives (especially of marginalized and underrepresented individuals and groups) are increasingly considered through a change of paradigm. To combat power imbalances in the AI ecosystem, we should perhaps look for alternative solutions, like the ex-ante participation of different stakeholders in the design of AI systems and open-minded approaches to impact assessments of AI, e.g., based on vulnerability theories rather than the traditional legal paradigms of non-discrimination law, consumer protection law, etc. Algorithm auditing could also help check whether the training data, among other aspects, are inclusive and diverse.
Though the financial, aviation, chemical, food, and pharmaceutical industries have effectively included audits to control risk, little has been done to do the same for AI (Calleja, Drukarch, & Fosch-Villaronga, 2022). Algorithmic auditing involves systematically analyzing and evaluating AI systems to uncover biases, discriminatory patterns, and other adverse consequences (Raji et al., 2020). Auditing techniques can range from statistical analysis to interpretability methods, helping to shed light on how AI systems might reinforce or exacerbate existing social practices and inequalities (Bartley et al., 2021). Using these techniques, it is possible to identify potential harms and inequities that may disproportionately impact specific communities by examining the underlying algorithms, training data, and decision-making processes. However, it is not enough to identify problematic patterns or biases; auditing processes must lead to tangible and meaningful actions that drive positive change. Actionable algorithmic auditing involves going beyond analysis and actively implementing measures to mitigate harms, promote fairness, and enhance transparency. It requires developing and implementing clear regulations that hold developers of algorithmic systems accountable and creating mechanisms for ongoing monitoring and evaluation. Algorithm auditing thus necessitates collaboration between auditors, developers, policymakers, and affected communities to ensure that recommendations are practical, feasible, and inclusive and effectively prevent adverse outcomes ranging from discrimination and privacy invasion to safety risks (Raji & Buolamwini, 2019). By emphasizing actionable outcomes, algorithmic auditing can pave the way for the responsible, equitable, and just use of automated decision-making systems.

## Conclusion

Technology users are vital in shaping social constructions, relationships, and practices (Douglas, 2012). They engage in activities such as consuming, modifying, domesticating, designing, reconfiguring, and resisting technological advancements. Consequently, when technology is designed solely based on traditional heteronormative perspectives, it risks excluding various minority groups, as it prioritizes the needs and experiences of the majority (Poulsen et al., 2020). Failure to incorporate the intersectional realities, perspectives, and identities of users in the development of AI will result in the persistence of implicit biases. Furthermore, this exclusion will keep many individuals largely invisible, voiceless, marginalized, and unaware of the potential impacts of these technologies on their lives (Criado-Perez, 2019). As AI plays an increasingly significant role in shaping complex virtual spaces, it is essential to consider how these technologies can promote more inclusive and equitable representations of diverse identities. This can include using AI to develop more diverse and representative avatars and implementing anti-discrimination policies and practices within virtual environments. Additionally, it is paramount to acknowledge and challenge dominant social norms and power structures that perpetuate discrimination and exclusion in virtual spaces. The deployment of AI in society will increasingly be an influential factor in shaping individuals' sense of self in the modern world.
However, it is crucial to recognize that efforts toward diversity and inclusion in the AI realm are necessary to avoid perpetuating normative views that deny the existence and experiences of everyone, and in particular of specific collectives that have traditionally been marginalized and, at best, excluded by society, such as the transgender community. Addressing this issue requires holistic inclusion strategies that extend to multiple levels, including how these communities can benefit from or be impacted by AI technology (Poulsen, Fosch-Villaronga & Soraa, 2020). To achieve this, we propose to queer the ethics of AI so that more effort goes into understanding how different communities, including women, LGBTQ+ people, and persons with disabilities, interact with and value AI technologies. This knowledge can inform the design, creation, and implementation processes, ensuring the meaningful inclusion of these communities. Queering the ethics of AI goes beyond technical considerations. It requires fostering diversity and inclusivity within the AI research and development community. Encouraging and supporting individuals from underrepresented backgrounds, including queer individuals, to pursue careers in AI can bring unique perspectives and insights to the field. This diversity of voices is crucial in shaping AI systems that cater to a wide range of human experiences and contribute to a more equitable society. Indeed, since AI is not neutral or objective but reflects and amplifies its creators' and users' biases and values, addressing these issues can contribute to creating AI systems that are truly fair, just, and equitable for all.

## Acknowledgement

This paper is part of the Safe and Sound project, which has received funding from the European Union's Horizon-ERC program under Grant Agreement No. 101076929. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
2307.04668
Quantifying the Echo Chamber Effect: An Embedding Distance-based Approach
The rise of social media platforms has facilitated the formation of echo chambers, which are online spaces where users predominantly encounter viewpoints that reinforce their existing beliefs while excluding dissenting perspectives. This phenomenon significantly hinders information dissemination across communities and fuels societal polarization. Therefore, it is crucial to develop methods for quantifying echo chambers. In this paper, we present the Echo Chamber Score (ECS), a novel metric that assesses the cohesion and separation of user communities by measuring distances between users in the embedding space. In contrast to existing approaches, ECS is able to function without labels for user ideologies and makes no assumptions about the structure of the interaction graph. To facilitate measuring distances between users, we propose EchoGAE, a self-supervised graph autoencoder-based user embedding model that leverages users' posts and the interaction graph to embed them in a manner that reflects their ideological similarity. To assess the effectiveness of ECS, we use a Twitter dataset consisting of four topics - two polarizing and two non-polarizing. Our results showcase ECS's effectiveness as a tool for quantifying echo chambers and shedding light on the dynamics of online discourse.
Faisal Alatawi, Paras Sheth, Huan Liu
2023-07-10T16:11:33Z
http://arxiv.org/abs/2307.04668v2
# Quantifying the Echo Chamber Effect: An Embedding Distance-based Approach

###### Abstract

The rise of social media platforms has facilitated the formation of echo chambers, which are online spaces where users predominantly encounter viewpoints that reinforce their existing beliefs while excluding dissenting perspectives. This phenomenon significantly hinders information dissemination across communities and fuels societal polarization. Therefore, it is crucial to develop methods for quantifying echo chambers. In this paper, we present the Echo Chamber Score (ECS), a novel metric that assesses the cohesion and separation of user communities by measuring distances between users in the embedding space. In contrast to existing approaches, ECS is able to function without labels for user ideologies and makes no assumptions about the structure of the interaction graph. To facilitate measuring distances between users, we propose EchoGAE, a self-supervised graph autoencoder-based user embedding model that leverages users' posts and the interaction graph to embed them in a manner that reflects their ideological similarity. To assess the effectiveness of ECS, we use a Twitter dataset consisting of four topics - two polarizing and two non-polarizing. Our results showcase ECS's effectiveness as a tool for quantifying echo chambers and shedding light on the dynamics of online discourse.

Echo Chamber, Polarization, Social Media, Ideology Detection, User Representation, Graph Auto-Encoder

## I Introduction

In the age of digital communication, social media platforms have revolutionized the way we disseminate and consume information. Nevertheless, this evolution has brought about notable challenges, particularly the emergence of echo chambers and polarization [1, 2, 3]. These phenomena are often characterized by high levels of controversy between members of different groups and homogeneity among members of the same group [4]. This reinforces pre-existing beliefs [1], discourages critical thinking [5], promotes the spread of misinformation [6, 7], and leads to societal divisions. Hence, it is crucial to devise methods for measuring the extent and impact of echo chambers on social media. By quantifying them, we can better understand these phenomena and, consequently, devise strategies to mitigate echo chamber effects and foster more balanced and nuanced discussions. Ultimately, this could contribute to a better-informed, open-minded, and empathetic society. Such efforts are particularly crucial in today's world, where topics such as politics, health, economics, and environmental issues, which are susceptible to echo chambers [4, 8], have far-reaching implications for society. Echo chambers are contingent on two dynamics: the interaction among users and the individual ideological leanings of these users. Numerous measures and metrics have been developed to leverage these dynamics, either separately or in conjunction. One such method is to leverage the interaction graph to compute graph-specific metrics such as modularity [9], or to resort to other techniques like random walkers [10]. However, utilizing the graph introduces a difficulty, as a graph may exhibit modularity without necessarily being polarized or containing an echo chamber [9, 11].
An alternate approach involves assessing the ideological disparity between users and their adjacent nodes within the graph, investigating correlations between a user's ideology and that of their neighbors [1, 12], or observing ideological deviations from the center of an opinion scale after deploying opinion-spreading models [13, 14]. These methodologies, although insightful, are fraught with challenges. Labeling users to ascertain their ideologies or opinions is a laborious task that is susceptible to errors. Similarly, semi-supervised methods that depend on weak labels also present their own unique set of complications. In response to these issues, we introduce the Echo Chamber Score (ECS), a metric that captures the essence of the echo chamber concept by focusing on the dynamic interactions both within and across different user communities. The crux of our approach is to gauge the similarity of users within their respective communities (i.e., cohesion) and across different communities (i.e., separation). Here, an interaction graph can be characterized as having an echo chamber-like structure if it exhibits a low average distance between users of a single community (i.e., high cohesion) and a high average distance between users across different communities (i.e., high separation). This strategy of using the distance allows us to bypass reliance on potentially incidental graph structures and eliminates the need to split the graph into two separate communities, an action that erroneously assumes inherent polarization. Further, our method uses similarity in the embedding space as a proxy for ideological distance, thereby circumventing the arduous and error-prone task of detecting individual users' ideologies.

To facilitate the measurement of ideological distance, we propose EchoGAE, a self-supervised Graph Auto-Encoder (GAE) [15] based user embedding model. EchoGAE is designed to capture the ideological similarities among users through their interactions and shared posts, operating on two core principles: homophily [16], where individuals associate and interact with those similar to themselves, and linguistic homophily [17, 18], the tendency of socially connected users to use language in similar ways. EchoGAE leverages homophilic interactions such as retweets, regarded as endorsements of similar ideologies [10, 19], along with the content of user posts. Both serve as inputs to capture and map these ideological similarities. The model architecture comprises an encoder that positions similar nodes closely together in the embedding space, and a decoder that uses the user embeddings to reconstruct the graph structure in a self-supervised manner. Additionally, it utilizes Sentence-BERT [20], a BERT-based language model, to embed tweets, thus reflecting their semantic similarities. By uniquely combining the interaction graph structure and linguistic information from user posts, EchoGAE generates representations that accurately reflect ideological similarities, establishing it as a robust tool for measuring the effects of echo chambers and polarization.

In this research, we evaluate the ability of the Echo Chamber Score (ECS) to measure echo chamber effects within homophilic social interaction networks. Our experiments are based on real-life Twitter datasets related to four topics: two polarizing and two non-polarizing.
Our findings confirm that the ECS metric accurately identifies polarized interaction graphs and quantifies the echo chamber effect in a manner consistent with existing state-of-the-art methods. Furthermore, ECS proves successful in determining which communities within the interaction graph are more polarized, demonstrating its unique ability to rank communities based on their polarization. We also verify that EchoGAE's user embedding effectively reflects ideological distances between users, showcasing its capacity to detect user ideologies. To promote reproducibility and foster further development in this field, we make our datasets and code available to the public1. Footnote 1: [https://github.com/faalatawi/echo-chamber-score](https://github.com/faalatawi/echo-chamber-score) ## II Related Work Echo chambers and polarization measures can be divided into two main types: graph-based and ideology-based methods. **Graph-based methods** are based on the concept of a graph representing interactions between users on a given topic. These methods operate on the assumption that polarization can be observed within the graph itself. For instance, the modularity of a graph, which quantifies how well a graph can be divided into distinct communities, has been used to measure echo chambers [9]. However, challenges arise from this approach, as modularity and other similar methods may not accurately represent echo chamber phenomena due to the possibility that non-polarized graphs can also exhibit high modularity [9, 11]. To address these limitations, new methods have been developed that scrutinize the interactions between communities within a graph. These improved methods involve dividing the graph into two distinct communities and measuring polarization at the boundaries between them [11]. An alternative approach involves using the Random Walk Controversy [10] (RWC), a popular polarization method [19, 21] that calculates the probability of a random walker starting at one community and ending at another. Nonetheless, these methods have their own drawbacks, such as the necessity of splitting the communities in the graph and making an inherent assumption that the graph is already polarized. This results in difficulties in measuring polarization that may not actually exist. Our novel approach, the Echo Chamber Score (ECS), alleviates these issues. The ECS does not require the division of the graph into two communities and is capable of measuring the effects of echo chambers and polarization across any number of communities, making it a more flexible and accurate method for assessing polarization. **Ideology-based methods** for measuring echo chambers and polarization take a different approach, focusing on a user's ideological leaning and the users they interact with. Two primary approaches exist within this category: (1) measuring the ideological distance between a user and their neighboring users in the graph, and (2) measuring the divergence from an ideological center after applying an opinion-spreading model. In the first approach, the ideological leanings of all users are estimated and then compared to their neighboring users. The fundamental idea here is that an echo chamber is formed when users mostly interact with others who share similar opinions [1, 12, 22]. For instance, the ideology of users can be inferred from the hashtags they share or the content they post [1, 12]. 
The polarization is then quantified by measuring the Pearson correlation between a user's ideological score and the average ideological score of their neighbors [1, 12]. In the second approach, opinion-spreading models such as the Friedkin-Johnsen or DeGroot opinion models are utilized [13, 14, 23, 24]. For instance, the Friedkin-Johnsen model operates by updating a node's opinion through repeatedly averaging the opinions of its neighbors until reaching equilibrium [13, 14]. Polarization is then measured by how much opinions at equilibrium deviate from the average [13, 14]. Alternatively, the DeGroot opinion model is used to construct a Polarization Index (PI) based on the probability density distribution of individuals' opinions [24]. A bimodal distribution would suggest the existence of polarization, while a unimodal distribution would indicate its absence [24]. Both these ideology-based approaches have challenges, such as the laborious and error-prone task of estimating users' ideological leanings from their content or interactions. Therefore, we have opted instead for a model based on similarity in the embedding space as a proxy for ideology, eliminating the need for ideology estimation.

## III Methodology

This section presents our approach to quantifying echo chambers in online conversations. Our objective is to assess whether the discussion surrounding a given topic exhibits polarization and whether the communities formed by users can be characterized as echo chambers or comprise a diverse group of individuals with varying ideologies. To achieve this, we construct a graph \(G=(V,E)\), where \(V\) represents the set of social media users, and \(E\) represents the edges denoting homophilic interactions, such as retweets. Additionally, we obtain a set of communities \(\Omega\) from a community detection algorithm, where each community consists of a group of users. Our primary aim is to measure the level of polarization within the entire graph by computing the Echo Chamber Score (ECS) for each community. Consequently, this section presents our novel ECS metric for quantifying echo chambers. However, as ECS relies on user embeddings, we begin by introducing our user embedding framework, EchoGAE, which enables the representation of users based on their ideological similarity.

### _Embedding Social Media Users_

The EchoGAE model (see Fig. 1) is essential to our methodology for quantifying echo chambers in online conversations. Its purpose is to embed users in a way that reflects their ideological similarity, facilitating the calculation of the Echo Chamber Score (ECS). By placing ideologically similar users closer in the embedding space, EchoGAE enables the measurement of the cohesion and separation of communities in the graph, the two components of ECS, as we will explain later in this section. EchoGAE is an adaptation of the Graph Auto-Encoder (GAE) model [15], tailored for user embedding based on tweets and interactions. As a self-supervised model, EchoGAE eliminates the need for ideological labeling of users. It employs two graph convolutional layers to encode the graph into a latent representation, which is subsequently decoded to reconstruct the graph structure. EchoGAE aims to minimize the binary cross-entropy between the real and reconstructed adjacency matrices.

The EchoGAE model consists of two main components: an Encoder and a Decoder. The Encoder takes both the tweets and the graph as input to create node embeddings, which serve as the user embeddings. The Encoder is divided into two parts. Firstly, the tweets component utilizes Sentence-BERT [20] to embed each user's tweets, and the average of these tweet embeddings is taken to form the content embeddings (represented as the matrix \(\mathbf{X}\) in Fig. 1). Secondly, the network component leverages the adjacency matrix of the graph (\(\mathbf{A}\) in Fig. 1). Together, these components contribute to the creation of node embeddings (i.e., the user embeddings \(\mathbf{Z}\in\mathbb{R}^{n\times d}\), where \(n\) is the number of users in the graph and \(d\) is the dimension of the user embedding) that capture information from both the users' content and their network interactions. The Decoder performs an inner product operation [15] on the node representations (\(\sigma(\mathbf{Z}\mathbf{Z}^{\mathbf{T}})\)) obtained from the Encoder, resulting in a reconstructed adjacency matrix (\(\mathbf{\hat{A}}\)). Subsequently, the binary cross-entropy loss is used to train the model and ensure accurate graph reconstruction.
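To make the architecture concrete, the following is a minimal sketch of an EchoGAE-style model built on PyTorch Geometric's `GAE` wrapper, whose default decoder is exactly the inner-product decoder above. The Sentence-BERT checkpoint, layer sizes, and training schedule are illustrative assumptions rather than the authors' released implementation (see the project repository for the latter).

```python
# A minimal EchoGAE-style sketch; checkpoint and hyperparameters are assumptions.
import numpy as np
import torch
from sentence_transformers import SentenceTransformer
from torch_geometric.nn import GAE, GCNConv


class Encoder(torch.nn.Module):
    """Two GCN layers mapping content features X and edges to embeddings Z."""

    def __init__(self, in_dim, hidden_dim=64, out_dim=32):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        return self.conv2(self.conv1(x, edge_index).relu(), edge_index)


def content_features(user_tweets, model_name="all-MiniLM-L6-v2"):
    """Build X by averaging the Sentence-BERT embeddings of each user's tweets."""
    sbert = SentenceTransformer(model_name)  # checkpoint choice is an assumption
    rows = [sbert.encode(tweets).mean(axis=0) for tweets in user_tweets]
    return torch.tensor(np.stack(rows), dtype=torch.float)


def train_echogae(x, edge_index, epochs=200, lr=0.01):
    """Self-supervised training: reconstruct A from Z, then return embeddings Z."""
    model = GAE(Encoder(x.size(1)))  # default GAE decoder = inner product
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        z = model.encode(x, edge_index)
        loss = model.recon_loss(z, edge_index)  # BCE-style loss on observed edges
        loss.backward()
        opt.step()
    with torch.no_grad():
        return model.encode(x, edge_index)
```

Here `recon_loss` provides the binary cross-entropy reconstruction objective (with sampled negative edges), matching the training objective described above.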
### _Measuring the Echo Chamber Effect_

We introduce ECS (Echo Chamber Score), a measure for quantifying the echo chamber and polarization effects on social media. To measure the echo chamber effect using user embeddings, we assess in-group cohesion and between-group separation [25]. We utilize the distance in the embedding space as a proxy for these factors, reflecting how closely related users within a community are (cohesion) and how distinct a community is from others (separation). Let \(Z\in\mathbb{R}^{n\times d}\) represent the user embeddings, where \(n\) is the number of users and \(d\) is the embedding dimension. Additionally, let \(\Omega=\{\omega_{1},\omega_{2},\ldots,\omega_{M}\}\) denote the set of communities, where \(\omega_{i}\subset V\) represents the \(i^{th}\) community consisting of users. For a user \(u\in\omega\), we compute the cohesion value (\(\lambda_{u}\)) as the average distance between \(u\) and the other users in the same community using Equation 1.

\[\lambda_{u}=\frac{1}{|\omega|}\sum_{\begin{subarray}{c}v\in\omega\\ v\neq u\end{subarray}}dist(u,v) \tag{1}\]

Here, \(|\omega|\) denotes the number of users in the community \(\omega\), and \(dist(u,v)\) represents the distance (e.g., Euclidean) between users \(u\) and \(v\) in the embedding space (\(Z^{(u)}\) and \(Z^{(v)}\), respectively). Similarly, we compute the separation value (\(\Delta_{u}\)) as the average distance between \(u\) and the nearest community other than \(\omega\) using Equation 2.

\[\Delta_{u}=\min_{\begin{subarray}{c}\omega\in\Omega\\ u\notin\omega\end{subarray}}\left[\frac{1}{|\omega|}\sum_{v\in\omega}dist(u,v)\right] \tag{2}\]

To calculate the Echo Chamber Score (ECS) for a community \(\omega=\{u_{1},u_{2},\ldots,u_{N}\}\), we use a formula inspired by the silhouette score [26] (in the appendix, we show how to derive the ECS from the silhouette). Equation 3 produces a score between 0 and 1, with a higher score indicating a greater likelihood of an echo chamber effect within the community.

\[ECS^{*}(\omega)=\frac{1}{|\omega|}\sum_{u\in\omega}\frac{\max(\Delta_{u},\lambda_{u})+\Delta_{u}-\lambda_{u}}{2\max(\Delta_{u},\lambda_{u})} \tag{3}\]
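As an illustration, Equations 1-3 translate directly into a few lines of NumPy. Here, `embeddings` stands for the EchoGAE output \(Z\) and `communities` for the output of any community detection algorithm; the Euclidean distance is one admissible choice of \(dist\), and the graph-level score introduced next is simply the mean of `ecs_star` over all communities.

```python
# A NumPy sketch of Equations 1-3; at least two communities are assumed.
import numpy as np


def ecs_star(embeddings, communities, target):
    """Per-community Echo Chamber Score ECS* (Equation 3)."""
    scores = []
    for u in communities[target]:
        # Cohesion lambda_u (Eq. 1): mean distance to the rest of u's community.
        own = [v for v in communities[target] if v != u]
        lam = np.mean([np.linalg.norm(embeddings[u] - embeddings[v]) for v in own])
        # Separation Delta_u (Eq. 2): mean distance to the nearest other community.
        delta = min(
            np.mean([np.linalg.norm(embeddings[u] - embeddings[v]) for v in comm])
            for i, comm in enumerate(communities)
            if i != target
        )
        m = max(delta, lam)
        scores.append((m + delta - lam) / (2 * m))
    return float(np.mean(scores))
```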
The Echo Chamber Score can be computed for the entire graph using Equation 4, where \(\Omega\) represents the set of communities obtained from a community detection algorithm such as Louvain [27] or Leiden [28].

\[ECS(\Omega)=\frac{1}{|\Omega|}\sum_{\omega\in\Omega}ECS^{*}(\omega) \tag{4}\]

The Echo Chamber Score (ECS) allows for comparison across different graphs representing various controversial topics. A higher ECS indicates a higher degree of echo chamber effect within a conversation. The components of ECS can provide additional insights, such as ranking communities based on their polarization, using Equation 3. Note that our approach does not assume a specific number or size of communities and is independent of the community detection method. Moreover, it does not require prior knowledge of users' internal ideologies, setting it apart from related works [10, 24].

### _Estimating Users' Ideology_

Our embedding model, EchoGAE, aims to position users with similar ideological leanings closer to each other in the embedding space. Therefore, we assume that we can utilize the distance in the embedding space to infer users' ideological leanings. This helps us evaluate whether EchoGAE embeds users in a way that reflects their ideology, which is the core idea behind ECS. After applying the EchoGAE embedding, we employ a clustering algorithm (e.g., KMeans) to detect two communities of users in the embedding space, denoted as \(\omega_{1}\) and \(\omega_{2}\). These communities represent the pro and anti sides of the debate, respectively. We follow similar works [10, 24] that split the ideology spectrum into two sides. The ideology score for each user is calculated using Equation 5. It is determined by the difference between the average distance of the user \(u\) to the users in \(\omega_{1}\) and the average distance to the users in \(\omega_{2}\).

\[I(u)=\frac{1}{|\omega_{1}|}\sum_{\begin{subarray}{c}v\in\omega_{1}\\ v\neq u\end{subarray}}dist(u,v)-\frac{1}{|\omega_{2}|}\sum_{\begin{subarray}{ c}v\in\omega_{2}\\ v\neq u\end{subarray}}dist(u,v) \tag{5}\]

Here, \(dist\) represents any distance function normalized between 0 and 1. In our implementation, we employ the Euclidean distance, but other distance measures can be used. The ideology scores \(I(u)\) range from -1 to +1. Importantly, values of -1 and +1 do not inherently indicate "good" or "bad" ideologies. In Equation 5, the order of the communities (\(\omega_{1}\) and \(\omega_{2}\)) affects the sign of the ideology score. If a user belongs to \(\omega_{1}\), their score is positive when \(\omega_{1}\) is in the first term. Reversing the order of the communities changes the sign but not the magnitude of the score. This introduces an additional layer of complexity in evaluating our method, which we address in the experimental results section.
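A corresponding sketch of Equation 5, with KMeans standing in for the clustering step and pairwise Euclidean distances normalized to \([0,1]\); as noted above, the ordering of the two clusters, and hence the sign of the score, is arbitrary.

```python
# A sketch of Equation 5 on EchoGAE embeddings.
import numpy as np
from sklearn.cluster import KMeans


def ideology_scores(embeddings):
    """I(u): mean normalized distance to cluster 0 minus that to cluster 1."""
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings)
    # Pairwise Euclidean distances, normalized into [0, 1].
    dist = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    dist /= dist.max()
    scores = []
    for u in range(len(embeddings)):
        means = []
        for c in (0, 1):
            members = [v for v in np.flatnonzero(labels == c) if v != u]
            means.append(dist[u, members].mean())
        scores.append(means[0] - means[1])  # lies in [-1, +1]
    return np.array(scores)
```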
## IV Experiments

In this section, we present the experiments we used to assess the effectiveness of our proposed method, the Echo Chamber Score (ECS), in analyzing the echo chamber effect. To evaluate its performance and reliability, we compare ECS with two commonly used methods, Random Walk Controversy (RWC) and Polarization Index (PI). Additionally, we utilize ECS to analyze echo chambers at the community level, examining the distances between users in the embedding space to gain insights into the cohesion and separation of user communities. Furthermore, we conduct an experiment to determine if the distances in the embedding space can predict the ideological leaning of users. Finally, we perform an ablation study to examine the impact of using tweets in measuring the echo chamber effect and predicting user ideology. These experiments provide valuable insights into the performance and applicability of ECS in analyzing echo chambers, predicting user ideology, and assessing the role of tweets in these measurements.

### _Datasets_

To investigate the echo chamber phenomenon, we selected four topics to examine user interactions related to these subjects. Two topics were controversial: the abortion and gun debates, while the other two were non-controversial: the SXSW conference and the Super Bowl. The inclusion of non-controversial topics aimed to assess our method's performance in non-polarized settings. The datasets used in our experiments are outlined in Table I, and we have made them publicly available2 to ensure reproducibility and facilitate further research in the field of echo chamber analysis and detection. Footnote 2: [https://github.com/faalatawi/echo-chamber-score](https://github.com/faalatawi/echo-chamber-score)

**Data collection.** To collect data for each topic, we identified frequently used keywords in discussions (see Table I) and monitored the conversation. We then gathered the retweeters of the most popular tweets associated with these keywords.

Fig. 1: The EchoGAE model comprises two primary components: an Encoder and a Decoder. The Encoder employs both the user content embeddings (\(\mathbf{X}\)) and the adjacency matrix (\(\mathbf{A}\)) to generate the user embeddings (\(\mathbf{Z}\)). The Decoder then reconstructs the adjacency matrix (\(\mathbf{\hat{A}}\)) using the user representations.

This data was used to construct a graph for each topic, where users were represented as nodes, retweet interactions formed the edges, and users' tweets provided node attributes. We collected up to 200 of the users' most recent tweets (excluding retweets) to ensure an adequate amount of user-generated text for analysis. The gun debate dataset was collected during the period of intense debate following the school shooting in Uvalde, Texas, on May 24, 2022. Unfortunately, school shootings in the United States often ignite polarized discussions [11] on gun violence and constitutional gun ownership rights. To capture this discourse, we selected commonly used words from both sides of the debate and monitored the conversation from May to July. We then selected the top 1200 most retweeted tweets and constructed the retweet graph. The resulting graph (shown in the lower left panel of Figure 2) exhibited two communities, as identified by the Louvain algorithm [27], indicating the presence of two polarized communities [10]. Similarly, we collected the retweet graph for the abortion rights debate following the US Supreme Court ruling on abortion issued on June 24, 2022, using relevant keywords. Both the gun debate [11, 1, 29] and abortion [10, 30] have been widely studied as topics for analyzing echo chambers and polarization. On the other hand, for the non-controversial topics, we selected topics that have been used to study echo chambers: the Super Bowl [22] and SXSW [10, 31]. The Super Bowl is an annual sports event in the US, while the SXSW conference is an annual event in Austin, Texas, that combines music, film, and interactive media. We followed the same data collection procedure as with the controversial topics.

**Labeling.** To evaluate the embedding quality of EchoGAE in capturing ideological similarity, we estimated users' ideological leanings. Following previous works that used news URLs to infer political leanings [32, 33, 34, 35], we obtained ideological labels for URLs from the non-partisan media watchdog AllSides3.
To assign labels to users, we utilized the news URLs they post as indicators of their ideology, using AllSides' political leanings for news websites' URLs. A user's political leaning is calculated as the average leaning of the news articles they share. AllSides' ratings consist of five categories: left, center-left, center, center-right, and right, to which we assigned values of -1, -0.5, 0, 0.5, and 1, respectively. It is important to note that these values indicate opposing sides of the debate and do not inherently represent good or bad ideologies. We only used labels for users who shared at least five links. The number of labeled users for each dataset is specified in Table I. Notably, controversial topics tend to have more labeled users, as users are more likely to reveal their ideological leanings when engaging with such topics. Footnote 3: [https://www.allsides.com/media-bias](https://www.allsides.com/media-bias) ### _Measuring the Echo Chamber Effect_ In this experiment, our objective is to evaluate the effectiveness of our proposed method in measuring the echo chamber effect. To accomplish this, we compare our method with commonly used techniques for calculating polarization and echo chamber effects. This comparison aims to demonstrate that our method performs comparably to existing methods and produces reliable results for measuring the echo chamber effect. For our experiments, we utilize two widely used baselines: Random Walk Controversy (RWC) [10] and Polarization Index (PI) [24]. We then compare these baselines with our proposed method, the Echo Chamber Score (ECS). RWC measures the likelihood of transitioning from one community to another in a network, where a value close to one indicates polarization and a value close to zero indicates no polarization. PI, on the other hand, measures the degree of segregation within a population by modeling the propagation of opinions based on the probability density distribution of individuals' opinions. To compute RWC, we partition the graph into two communities using the FluidC [36] algorithm. Subsequently, we calculate the probability of transitioning from one partition to the other. For PI, we employ the DeGroot opinion model [23] with labeled users as seeds to disseminate opinions, and then compute PI for each graph. In contrast to RWC, our proposed method ECS does not require dividing the graph into two communities. The graph may consist of multiple communities, and any community detection method can be employed. In this study, we use the Louvain algorithm [27] to identify the communities, which are then used to compute ECS. Furthermore, unlike PI, our method does not rely on any labeled users, as we utilize the embeddings obtained from EchoGAE. As shown in Table II, our approach effectively assigns higher scores to controversial topics (e.g., the gun debate and abortion) than to non-controversial ones, demonstrating its ability to perform on par with existing methods. Our method aligns with PI, a highly regarded technique that employs ideology labels to gauge polarization. Because PI propagates actual ideology labels, it closely tracks the ground truth, and our method exhibits strong agreement with it, as evidenced by a 0.99 Pearson correlation. In contrast, there are notable differences between our method and RWC. For instance, both ECS and PI indicate that the gun debate is more polarized than the abortion debate, which contradicts the findings of RWC.
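For reference, the sketch below gives a simplified Monte Carlo estimate of RWC. It is only an approximation for illustration: the original formulation of Garimella et al. [10] uses random walks absorbed at high-degree nodes, whereas this version uses fixed-length walks, and it assumes a connected graph with non-isolated nodes.

```python
# Simplified, illustrative RWC estimate with fixed-length random walks
# (the cited metric uses absorption at high-degree nodes instead).
import random

def rwc(G, side_a, side_b, walks=10000, length=10):
    def p_end_in(start_side, target_side):
        hits = 0
        for _ in range(walks):
            node = random.choice(list(start_side))
            for _ in range(length):
                node = random.choice(list(G.neighbors(node)))
            hits += node in target_side
        return hits / walks
    p_aa = p_end_in(side_a, side_a)
    p_bb = p_end_in(side_b, side_b)
    p_ab = p_end_in(side_a, side_b)
    p_ba = p_end_in(side_b, side_a)
    # Close to 1 for two well-separated sides, close to 0 otherwise.
    return p_aa * p_bb - p_ab * p_ba
```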
We posit that the requirement of RWC to partition the graph into only two communities hinders its performance. By relaxing this requirement, our measure ECS can evaluate any number of communities identified by various community detection algorithms. These techniques (RWC, PI, and ECS) enable us to rank topics based on their polarization levels, from highest to lowest. Both PI and our method (ECS) consistently rank the topics in a similar manner. It is worth noting that our method considers the gun debate more polarized than the abortion debate, aligning with opinion polls. According to the Pew Research Center4, in 2022, 61% of Americans supported abortion access, while only 53% advocated for stricter gun laws, suggesting greater disagreement and polarization within the gun debate than within the abortion debate. Footnote 4: [https://www.pewresearch.org/](https://www.pewresearch.org/) ### _Analysing the Echo Chamber Effect at the Community Level_ To showcase ECS's capability in analyzing echo chambers at a more detailed level, we conducted an experiment to examine the insights provided by our measure at the community level. The objective was to determine which community within a topic exhibited a higher level of polarization. For this experiment, we focused on the controversial topics, namely the gun debate and abortion, and explored how the interactions both between and within communities can be investigated. These topics were chosen due to the presence of echo chambers, as identified in the previous experiment. Upon examining the gun dataset, we observed that the debate surrounding guns and school shootings exhibited a higher level of polarization than abortion, as evidenced by an ECS score of 0.714 compared to 0.626 (see Table II). Applying the Louvain algorithm, we identified two communities in the interaction graph, with sizes of 3984 and 2582 nodes, respectively. Computing ECS* (Equation 3) for each community, we obtained echo chamber scores of 0.739 and 0.676, indicating polarization and ideological homogeneity within both communities. Notably, the larger community demonstrated a slightly higher level of polarization. Upon labeling a sample of ten users from each community, we discovered that the larger community aligned with the anti-gun group, while the smaller community represented the pro-gun group. By examining the 2D projection of the EchoGAE embedding of users (refer to Figure 2), we observed that the blue community (anti-gun) appeared to be of a similar size to the pro-gun community, suggesting comparable levels of polarization between the two. However, the anti-gun community's higher ECS* score indicates that this group is more homogeneous than the other, which is surprising. It is possible that the gun debate is not strictly a left-right issue and that more centrist voices are participating in it. This analysis would be challenging to perform using the PI or RWC techniques. However, ECS, being community-independent and not reliant on ideology labels, enables such analysis without prior knowledge of community divisions and ideologies. In the abortion dataset, we identified two communities with sizes of 3933 and 1154. The ECS* scores for these communities were 0.6 and 0.69, respectively. To gain deeper insights, we randomly sampled ten users from each community and manually examined their Twitter accounts. Our analysis revealed that the larger community primarily consisted of supporters of abortion rights.
On the other hand, the anti-abortion community exhibited a higher level of polarization than the pro-abortion community. This finding aligns with the opinion polls mentioned earlier, as the anti-abortion group tends to hold a more fringe position than the pro-abortion group. This alignment can also be observed in the abortion rights vote that took place in Kentucky, which is considered a conservative state: the majority of voters rejected5 the proposal to restrict abortion rights. Footnote 5: [https://www.pbs.org/newshour/politics/kentucky-voters-reject-constitutional-amendment-on-abortion](https://www.pbs.org/newshour/politics/kentucky-voters-reject-constitutional-amendment-on-abortion) ### _Using Ideology Detection to Verify the Embedding Space_ We assume that distances in the embedding space can be used to predict the political leaning of users, i.e., that users with similar ideological leanings lie closer to each other in the embedding space. Validating this assumption justifies using these distances to measure the echo chamber effect, since we rely on them to measure the separation (Equation 2) and cohesion (Equation 1) of communities in order to gauge the echo chamber effect. After labeling users, we split the labeled users into training and validation sets (10% and 90%, respectively). Since our model is unsupervised, the training set is used only by the baseline model, and we use the validation set to evaluate the estimates of both models. For the baseline model, we used the DeGroot opinion model [23], in which a user's ideology is the average ideology of their neighbors. After embedding users with EchoGAE, we employed the KMeans algorithm to detect two communities of users in the embedding space, referred to as \(\omega_{1}\) and \(\omega_{2}\), representing the pro and anti sides of the debate. Lastly, we calculated the ideology score of each user, taking into account their distances to the members of communities \(\omega_{1}\) and \(\omega_{2}\) in the embedding space, as shown in Equation 5.

Fig. 2: The top two panels show the 2D projection (using the t-SNE [37] algorithm) of the user embeddings for each graph obtained with EchoGAE. The lower two panels show the two graphs, plotted using the ForceAtlas2 [38] algorithm. The colors represent the communities discovered by the Louvain algorithm.

In Table III, we present our method's outcomes for estimating ideology compared to the baseline. Both sets of ideology scores were compared to the pseudo-scores obtained from the AllSides labeling using Mean Absolute Error (MAE) and Mean Squared Error (MSE). The results shown in Table III demonstrate that our model performs comparably to the semi-supervised baseline, even though our method is unsupervised (we do not use any labels in our model). Furthermore, as depicted in Figure 3, a high degree of agreement is observed between the distributions of the predicted and actual ideologies. It should be noted that in Equation 5, the order of the communities (i.e., \(\omega_{1}\) and \(\omega_{2}\)) influences the sign of the ideology score.
For instance, if a user belongs to \(\omega_{1}\) (i.e., is more closely associated with users in \(\omega_{1}\)), their ideology score would be positive if \(\omega_{1}\) appeared in the equation's first term. If the order of the communities is reversed, the score's magnitude remains the same, but the sign changes. Consequently, in our measurement we computed the scores with both orderings and report the one yielding the lower error. ### _Ablation Study_ The primary objective of this study is to examine the impact of the components of EchoGAE on the performance of two tasks: measuring the echo chamber effect and predicting the ideology of users. Specifically, the study explores the significance of using textual information, i.e., tweets, in these tasks. Table IV presents the results obtained from this study. It demonstrates that the model's performance is enhanced when tweets are utilized. This finding emphasizes the importance of linguistic similarity in measuring echo chambers and estimating ideology. Therefore, the study suggests that investing more resources in extracting knowledge from tweets could lead to improved accuracy in both tasks. However, the study also shows that good results can be achieved with the graph component alone, in situations where textual information is unavailable. Notably, even in cases where the difference in echo chamber scores between controversial and non-controversial topics is not substantial, the tweet-less model still performs well by assigning higher scores to controversial topics. In conclusion, this study provides empirical evidence supporting the importance of incorporating textual information, such as tweets, in measuring echo chambers and estimating ideology. Nevertheless, it also highlights that satisfactory results can be obtained with graph-only models in the absence of textual data.

Fig. 3: A histogram of the predicted vs. pseudo ideology scores of users for each topic. In all topics, the estimated (predicted) distribution of ideology scores closely matches the ideology scores estimated from the URLs that users share.

## V Conclusion In this paper, we introduced the Echo Chamber Score (ECS), a novel metric for quantifying echo chambers and polarization in social media networks. ECS leverages an embedding space to measure the cohesion and separation of user communities, providing insights into the echo chamber effect. To enable this measurement, we presented EchoGAE, a self-supervised user embedding model that captures ideological similarities among users and generates accurate embeddings. Our evaluation of ECS on a Twitter dataset demonstrated its effectiveness in ranking topics based on echo chamber scores and ordering communities by polarization levels. Compared to existing metrics, ECS showcased unique capabilities in capturing the dynamics of online discourse. Our research contributes to understanding and quantifying echo chambers and polarization, which can inform the development of strategies to mitigate their negative impacts and promote a more informed and open-minded society.
2304.05026
Excitation and voltage-gated modulation of single-mode dynamics in a planar nano-gap spin Hall nano-oscillator
We experimentally study the dynamical modes excited by current-induced spin-orbit torque, and their electrostatic gating, in a 3-terminal planar nano-gap spin Hall nano-oscillator (SHNO) with a moderate interfacial perpendicular magnetic anisotropy (IPMA). Both quasilinear propagating spin-wave and localized "bullet" modes are achieved and controlled by varying the applied in-plane magnetic field and driving current. The minimum linewidth shows a linear dependence on the actual temperature of the active area, confirming single-mode dynamics in agreement with the nonlinear theory of single-mode spin-torque nano-oscillation. The observed gate-voltage tuning of the oscillation frequency arises from voltage-controlled magnetic anisotropy and modification of the SHNO threshold current via changes in the nonlinear damping and/or the interfacial spin-orbit coupling of the magnetic multilayer. In contrast to the previously observed two-mode coexistence degrading the spectral purity in Py/Pt-based SHNOs with a negligible IPMA, a single coherent spin-wave mode with a low driving current can be achieved by selecting a ferromagnetic layer with a suitable IPMA, because the nonlinear mode coupling can be diminished by introducing a PMA field that compensates the easy-plane shape anisotropy. Moreover, the simulations demonstrate that the experimentally observed current and gate-voltage modulation of the auto-oscillation modes is also closely associated with the nonlinear damping and mode coupling, which are determined by the ellipticity of the magnetization precession. The demonstrated nonlinear mode-coupling mechanism and electrical control of spin-wave modes could provide a route toward implementing mutual synchronization maps for neuromorphic computing applications in SHNO array networks.
Lina Chen, Yu Chen, Zhenyu Gao, Kaiyuan Zhou, Zui Tao, Yong Pu, Tiejun Zhou, Ronghua Liu
2023-04-11T07:20:40Z
http://arxiv.org/abs/2304.05026v1
Excitation and voltage-gated modulation of single-mode dynamics in a planar nano-gap spin Hall nano-oscillator ###### Abstract We experimentally study the dynamical modes excited by current-induced spin-orbit torque, and their electrostatic gating, in a 3-terminal planar nano-gap spin Hall nano-oscillator (SHNO) with a moderate interfacial perpendicular magnetic anisotropy (IPMA). Both quasilinear propagating spin-wave and localized "bullet" modes are achieved and controlled by varying the applied in-plane magnetic field and driving current. The minimum linewidth shows a linear dependence on the actual temperature of the active area, confirming single-mode dynamics in agreement with the nonlinear theory of single-mode spin-torque nano-oscillation. The observed gate-voltage tuning of the oscillation frequency arises from voltage-controlled magnetic anisotropy and modification of the SHNO threshold current via changes in the nonlinear damping and/or the interfacial spin-orbit coupling of the magnetic multilayer. In contrast to the previously observed two-mode coexistence degrading the spectral purity in Py/Pt-based SHNOs with a negligible IPMA, a single coherent spin-wave mode with a low driving current can be achieved by selecting a ferromagnetic layer with a suitable IPMA, because the nonlinear mode coupling can be diminished by introducing a PMA field that compensates the easy-plane shape anisotropy. Moreover, the simulations demonstrate that the experimentally observed current and gate-voltage modulation of the auto-oscillation modes is also closely associated with the nonlinear damping and mode coupling, which are determined by the ellipticity of the magnetization precession. The demonstrated nonlinear mode-coupling mechanism and electrical control of spin-wave modes could provide a route toward implementing mutual synchronization maps for neuromorphic computing applications in SHNO array networks. ## I Introduction The spin Hall nano-oscillator [1; 2] is a new alternative to the traditional spin-transfer-torque nano-oscillators (STNOs) [3; 4; 5; 6; 7] based on current-perpendicular-to-plane spin-valve or magnetic tunnel junction (MTJ) structures. SHNOs consist of a bilayer of a single ferromagnet (FM) and a heavy metal (HM) with strong spin-orbit coupling (SOC), and utilize the bulk spin Hall effect (SHE) of the HM [8] and the interfacial Rashba-Edelstein effect (IREE) at the HM/FM interface [9] to generate an out-of-plane spin current when an in-plane electric current is passed through the bilayer [10; 11]. Thanks to this simple planar structure, which allows an easy fabrication process and a flexible, scalable two-dimensional architecture, SHNOs constructed from numerous materials and geometries, including ferromagnetic metals and insulators with in-plane and out-of-plane magnetization, have recently been intensively studied [12; 13; 14; 15; 16; 17; 18; 19; 20]. However, previous reports have shown that the planar nano-gap SHNO based on an extended Pt/Py bilayer with easy-plane magnetization favors the simultaneous excitation of two dynamical modes, significantly degrading the spectral purity of the SHNO due to their mode coupling [21; 22; 2; 12]. Meanwhile, an SHNO based on an extended film with a suitable PMA exhibits a dynamical bubble mode, with a spectrum consisting of a primary peak and two sidebands, at small in-plane magnetic fields, and a mode transition from a propagating mode to a self-localized bullet mode at large in-plane magnetic fields [23; 15].
A very recent study also revealed that nonlinear damping can be controlled by the ellipticity of the magnetization precession, which is determined by the magnetic anisotropy of the device [24]. Therefore, it is essential to experimentally explore the spectral characteristics of SHNOs with a moderate PMA to facilitate their promised applications. Additionally, synchronization of an individual SHNO with an external rf source [25], as well as mutual synchronization between multiple coupled SHNOs in one-dimensional chains and two-dimensional arrays [26; 27], is drawing increasing attention because these mutually coupled nonlinear spin-based oscillators promise to mimic human brain processing functions and enable new high-speed, low-power neuromorphic computing [27; 28; 29]. SHNOs are well suited to neuromorphic applications due to their intrinsically nonlinear behavior and the strongly nonlinear interaction between oscillators or with external stimulation signals. Therefore, exploring a more energy-efficient approach to electrically tune individual oscillators and the mode coupling in SHNO network arrays is important before scaling neuromorphic computing to large nonlinear dynamical neural networks for a wide range of complex, high-dimensional tasks. Previous works in the conventional spintronics field have proved that voltage-controlled magnetic anisotropy is a highly energy-efficient approach to control magnetization [30], e.g., magnetization reversal and precession, compared to current-based approaches. In addition, the in-plane configuration of SHNOs readily allows combined current- and voltage-based control of the nonlinear dynamics in three-terminal SHNOs [31; 32; 33]. Since the [Co/Ni] multilayer has a moderate interfacial PMA, low Gilbert damping, and a large anisotropic magnetoresistance (AMR) comparable to that of Permalloy (Ni\({}_{80}\)Fe\({}_{20}\)) [34; 35; 36], here we adopt a 1.7 nm thick [Co/Ni]\({}_{3}\)/Co multilayer as the ferromagnetic layer to build three-terminal SHNOs and experimentally study the effects of current and electrostatic gating on the SOT-induced magnetization oscillation. In addition to a quasilinear propagating spin-wave mode emerging at low in-plane fields and small currents, a single self-localized "bullet" spin-wave mode with a frequency below the ferromagnetic resonance (FMR) frequency can also be excited at large in-plane fields and large currents. In contrast to Py/Pt-based SHNOs, the two-mode coexistence-induced decoherence phenomenon is not observed in our thin [Ni/Co]/Pt-based SHNO with a moderate PMA. Our micromagnetic simulations reveal that the perpendicular magnetic anisotropy, by diminishing the nonlinear mode coupling and nonlinear damping, can significantly lower the threshold current and suppress the previously observed secondary spin-wave mode localized near the two edges of the central bullet mode in nano-gap SHNOs. Furthermore, the three-terminal SHNO shows a 200 MHz frequency tunability (7 MHz/V) by voltage gating, due to the electric field modulating the threshold current and the IPMA. ## II Experimental Section Figure 1 shows the schematic of our test device structure and the experimental setup. Our device is based on a stacked multilayer Cu(30)/BaTiO\({}_{3}\)(30)/[Co(0.2)/Ni(0.3)]\({}_{3}\)/Co(0.2)/Pt(4) deposited on an annealed sapphire substrate at room temperature (RT). All thicknesses are given in nanometers.
The [Co(0.2)/Ni(0.3)]\({}_{3}\)/Co(0.2)/Pt(4) (abbreviated as [Co/Ni]\({}_{3}\)/Co/Pt) multilayer disk with a 4 \(\mu\)m diameter and its top two triangle-shaped Au electrodes with an approximately 100 nm gap were electrically isolated from the 30 nm thick Cu bottom gating electrode by a 30 nm thick BaTiO\({}_{3}\) dielectric layer [Fig. 1]. The 100 nm thick head-to-head triangular Au electrodes serve as two in-plane point contacts, used to inject current locally into the [Co/Ni]\({}_{3}\)/Co/Pt multilayer disk and achieve a highly localized current density in the Pt layer within the gap area. The 30 nm thick BaTiO\({}_{3}\) dielectric layer is grown by ultrahigh-vacuum pulsed laser deposition in an 80 mTorr oxidant background gas (99% O\({}_{2}\) + 1% O\({}_{3}\)) at RT [37]. A KrF excimer laser (\(\lambda\) = 248 nm) with a repetition rate of 3 Hz and a laser fluence of 1 J/cm\({}^{2}\) was used. The other metal layers are grown at RT by magnetron sputtering with a base pressure of less than 2 \(\times\) 10\({}^{-8}\) Torr. The device was fabricated by a combination of magnetron sputtering and electron beam lithography. In this three-terminal device, when an in-plane electrical current with a high current density (\(\sim\) 10\({}^{8}\) A/cm\({}^{2}\)) passes through the Pt layer within the 100 nm wide nanogap, it generates spin currents, due to the bulk SHE in the Pt layer and the IREE at both the Co/Pt and BaTiO\({}_{3}\)/Co interfaces, which are injected perpendicularly into the [Co/Ni]\({}_{3}\)/Co multilayer. Similar to the previously studied nano-gap SHNOs [2; 15], all the measurements of the microwave spectra described below are performed in an in-plane magnetic-field geometry, with an in-plane angle \(\theta\) between the field \(H\) and the direction of the electrical current \(I\). ## III Results and Discussion ### Dependence of spectral characteristics on current and magnetic field at \(V_{g}\) = 0 To obtain better microwave spectra by suppressing thermal fluctuation broadening, we performed the spectral measurements at a cryogenic temperature \(T\) = 6 K. Spin-current-induced auto-oscillations, indicated by the abrupt emergence of a sharp peak in the microwave spectra, can be achieved above the onset current \(I_{on}\)\(\sim\) 4.5 - 5.0 mA in the studied fields of 0.1 kOe to 2 kOe. The value of \(I_{on}\) is smaller than the 5.7-6.1 mA of the Py(3)/Pt(2)-based SHNO [31], suggesting more energy-efficient operation of this SHNO constructed from the [Ni/Co]\({}_{n}\)(1.7 nm)/Pt(4 nm) multilayer. Figures 2(a) and 2(b) show two representative sets of the generated microwave spectra, obtained at \(I\) = 5.5 mA and 6.5 mA, \(\theta\) = 120\({}^{\circ}\), with the external magnetic field ranging from 200 to 2000 Oe. The central peak frequency \(f_{auto}\) of the auto-oscillation [Fig. 2(c)] can be extracted by fitting the generated spectral peak with a Lorentzian function [the solid curves in Figs. 2(a) and 2(b)]. To obtain the magnetic properties of the [Co/Ni] multilayer and the relationship between the observed auto-oscillation mode and the uniform FMR mode, we measured the field dependence of the FMR of the device using the ST-FMR technique [10; 11].

Figure 1: The cross-sectional view of the voltage-controlled SHNO device structure with the multilayer order, and the experimental setup with the directions of the current flow \(I\) and the applied magnetic field \(H\), the angle \(\theta\), and the electrostatic gating \(V_{g}\). The region of spin-current-induced oscillating magnetization is localized in the [Co/Ni] multilayer under the central nanogap.
Figure 2(c) shows the field dependence of the FMR frequency \(f_{FMR}\) and the auto-oscillation frequencies \(f_{auto}\) obtained at \(I\) = 5.5 mA and 6.5 mA. The \(f_{auto}\) is higher than \(f_{FMR}\) at small fields \(H\)\(\leq\) 1.6 kOe for a low driving current \(I\) = 5.5 mA. Since the auto-oscillation shows a significant redshift with the driving current \(I\), \(f_{auto}\) drops below \(f_{FMR}\) for large in-plane fields \(H\)\(\geq\) 1.0 kOe at \(I\) = 6.5 mA. The ST-FMR data are fitted by the Kittel formula \(f=\gamma\sqrt{H(H+4\pi M_{eff})}\) with a fitting parameter \(4\pi M_{eff}\) = 2.9 kOe [solid curve in Fig. 2(c)]. The effective demagnetizing field can be expressed as \(4\pi M_{eff}\) = \(4\pi M_{s}-\frac{2K_{u}}{M_{s}}\), where \(M_{s}\) is the saturation magnetization and \(K_{u}\) is the uniaxial anisotropy coefficient. The PMA coefficient \(K_{u}\) = 0.25 MJ/m\({}^{3}\) is determined from the FMR resonance frequency \(vs.\) field dispersion curve and the saturation magnetization of the film. Furthermore, we analyze the dependence of the generation spectra on the in-plane angle \(\theta\) formed by the applied field relative to the direction of the current flow. Figure 2(d) shows that the auto-oscillation frequency substantially decreases as the angle approaches \(\theta\) = 90\({}^{\circ}\). Based on the symmetry of the SHE, the maximum excitation efficiency is reached when the spin polarization of the spin currents generated by the Pt layer is antiparallel to the magnetization of the [Co/Ni]\({}_{n}\) multilayer, corresponding to \(\theta\) = 90\({}^{\circ}\)[11]. Therefore, the observed frequency decrease toward \(\theta\) = 90\({}^{\circ}\) is consistent with the excited spin-wave mode having a strong frequency redshift with increasing excitation current, or equivalently with the \(\theta\)-dependent excitation efficiency. It should be noted that the microwave spectral peaks vanish as \(\theta\) approaches 90\({}^{\circ}\). The reason is that the microwave signal is generated by the AMR of [Co/Ni], which depends sinusoidally on the orientation of **H** (or **M**) with a period of 180\({}^{\circ}\)[11; 39]; the magnetization oscillation therefore cannot generate a signal at the fundamental harmonic of the oscillation at angles close to 90\({}^{\circ}\), where \(dR_{AMR}/d\theta\) = 0. To further confirm the nonlinear frequency redshift of the auto-oscillation mode, the current dependencies of the generated microwave spectra are measured at three representative fields \(H\) = 100, 400, and 1000 Oe, at \(\theta\) = 120\({}^{\circ}\) and 6 K, as shown in Fig. 3. At a very small field \(H\) = 100 Oe [Fig. 3(a)], a peak with a frequency higher than \(f_{FMR}\) begins to appear in the microwave spectra at the onset current \(I_{on}\) = 5.0 mA. Its frequency shows a nearly linear redshift with the excitation current [Fig. 3(b)]. The linewidth decreases quasilinearly just above \(I_{on}\) and reaches a minimum value of 50 MHz at \(I\) = 7.2 mA, corresponding to the maximum peak power spectral density (PSD) [Fig. 3(c)-3(d)]. This behavior is consistent with the theoretical model of spin-torque nano-oscillation, in which the thermal linewidth decreases with increasing oscillation power [40]. Above 7.2 mA, the spectral peak begins to broaden, and its magnitude decreases with a more significant frequency redshift. For the medium field \(H\) = 400 Oe [Fig.
3(e)], the oscillation peak shows a noticeable blueshift, a linear decrease of the linewidth, and a rapid increase of the power with increasing current at \(I>I_{on}\) = 4.5 mA [Fig. 3(f)-3(h)]. The linewidth decreases to a minimum value of 10 MHz at the same current \(I\) = 6.2 mA as the maximum peak PSD and the maximum frequency, and then increases as the current increases further, accompanied by a significant redshift and a decrease of the peak PSD. At the relatively large field \(H\) = 1000 Oe [Fig. 3(i)], the dependence of the oscillation peak on the excitation current exhibits similar overall behavior to that at the medium field of 400 Oe, except that the frequency is closer to \(f_{FMR}\) at small currents and the redshift is more significant at larger currents [Fig. 3(j)-3(l)]. The linewidth rapidly increases from its minimum value of 10 MHz at 5.5 mA and exhibits a peak at 6 mA, also correlated with the onset of a large frequency redshift [41]. The increase of the onset current \(I_{on}\) and of the minimum linewidth at small fields likely correlates with the oscillation frequency lying close to the linear spin-wave spectrum, resulting in large damping due to their overlap, or with the magnetic-anisotropy-field-induced inhomogeneity of the magnetic properties at low fields.

Figure 2: Dependence of the microwave generation characteristics of the SHNO on the magnitude of the magnetic field \(H\) and the angle \(\theta\) between \(H\) and the current \(I\) at 6 K. (a - b) Spectra obtained at \(\theta\) = 120\({}^{\circ}\), the labeled \(H\), and \(I\) = 5.5 mA (a) or 6.5 mA (b). (c) Dependence of the uniform ferromagnetic resonance (FMR) frequency \(f_{FMR}\) (solid squares) and the auto-oscillation frequency \(f_{auto}\) (hollow symbols), obtained at 5.5 mA and 6.5 mA, on the applied magnetic field \(H\); these were determined by the spin-torque FMR (ST-FMR) technique and by fitting the PSD spectra of (a) and (b) with a Lorentzian function. The solid curve is the result of fitting the FMR data with the Kittel formula. (d) Pseudocolor maps of the dependence of the generated microwave spectra on the angle \(\theta\) at \(H\) = 500 Oe, \(I\) = 7 mA.

### Temperature effect on current-driven dynamical mode To explore the thermal effects on the spectral coherence of the generated microwave signals in this SHNO with a moderate interfacial magnetic anisotropy, we repeat the microwave-generation measurements at a large field \(H\) = 1.0 kOe and several selected temperatures. Figures 4(a)-4(c) show the microwave-generation spectra acquired at three additional experimental temperatures, \(T\) = 170 K, 190 K and 280 K; they are similar to the behavior at \(T\) = 6 K discussed above [Fig. 3(i)]. We note that the actual temperature of the active device area is higher than the experimental temperatures due to current-induced Joule heating. As in our previous works, we can quantitatively obtain the actual device temperature \(T_{a}\) by directly comparing the \(R(I)\) and \(R(T)\) curves at each experimental temperature [2] or from a COMSOL MULTIPHYSICS simulation of the Joule heating of the device [22]. In addition, as discussed above [Fig. 3], the nonlinearity can dramatically reduce the oscillation coherence and significantly broaden the spectral linewidth in the region of the large current-induced redshift [40; 41]. To avoid these anomalous contributions, we analyze the minimum value of the linewidth, at the current corresponding to the maximum peak PSD and the highest frequency.
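The spectral quantities quoted above (\(f_{auto}\), linewidth, peak PSD) come from Lorentzian fits of the PSD peaks, and \(4\pi M_{eff}\) from a Kittel fit of the ST-FMR dispersion. The following is a minimal sketch of both fits, assuming frequency/PSD arrays as inputs; it only illustrates the procedure and is not the authors' analysis code.

```python
# Illustrative fitting sketch (not the authors' analysis code): a Lorentzian
# fit of a PSD peak and a Kittel fit of the ST-FMR dispersion f(H).
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, fwhm, amp, offset):
    return offset + amp * (fwhm / 2)**2 / ((f - f0)**2 + (fwhm / 2)**2)

def kittel(H, gamma_eff, M_eff4pi):
    # f = gamma_eff * sqrt(H (H + 4*pi*M_eff)); gamma_eff absorbs the 1/(2*pi)
    return gamma_eff * np.sqrt(H * (H + M_eff4pi))

def fit_peak(freq_ghz, psd):
    p0 = [freq_ghz[np.argmax(psd)], 0.05, psd.max() - psd.min(), psd.min()]
    popt, _ = curve_fit(lorentzian, freq_ghz, psd, p0=p0)
    f0, fwhm, amp, _ = popt
    return f0, abs(fwhm), amp        # central frequency, linewidth, peak PSD

def fit_dispersion(H_oe, f_ghz):
    popt, _ = curve_fit(kittel, H_oe, f_ghz, p0=[0.003, 2900.0])
    return popt                      # gamma_eff (GHz/Oe), 4*pi*M_eff (Oe)
```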
Figure 4(d) shows that the minimum linewidth of the oscillation mode approximately follows a linear temperature dependence. This linear dependence is consistent with previous reports on traditional spin-transfer-torque nano-oscillators [42; 43; 44] and on SHNOs with a PMA FM layer [19], but contrasts with the thermal effects in planar nano-gap SHNOs with in-plane magnetized Py without PMA, where the linewidth shows an exponential dependence on temperature due to thermally activated transitions between the primary bullet and secondary edge modes [45; 22]. The nonlinear theory of spin-torque nano-oscillations with a single mode indicates that thermal noise results in a linear broadening of the linewidth with temperature [46; 47], well consistent with the demonstrated single-mode nature of the magnetization dynamics in [Co/Ni]-based SHNOs with PMA.

Figure 4: Dependence of the microwave generation characteristics on current at different temperatures \(T\), \(H\) = 1.0 kOe, and \(\theta\) = 120\({}^{\circ}\). (a-c) Pseudocolor plots of the spectra obtained above \(I_{c}\), with the current increased in 0.2 mA steps, at \(T\) = 170 K (a), 190 K (b), and 280 K (c). (d) The calculated actual temperature \(T_{a}\) vs. the minimum linewidth \(FWHM\) (symbols), corresponding to the highest-intensity peak of the PSD spectra, determined by fitting the spectra of (a-c) and Fig. 3(i) with a Lorentzian function. The solid line is a linear fit.

Figure 3: Dependence of the microwave generation characteristics of the SHNO on current at 6 K, \(\theta\) = 120\({}^{\circ}\), and three different \(H\). (a) Pseudocolor maps of the dependence of the generated microwave spectra obtained at \(H\) = 100 Oe on current. (b)-(d) Dependence of the central generation frequency \(f_{c}\) (b), the full width at half maximum (FWHM) (c), and the peak intensity \(P_{peak}\) (d) on the current \(I\), determined by fitting the power spectra of (a) with a Lorentzian function. (e-h) and (i-l) Same as (a-d), at \(H\) = 400 Oe and 1000 Oe, respectively. The dotted lines represent the corresponding FMR frequencies \(f_{FMR}\) of the device.

### Electric-field effect on current-driven dynamical mode Besides current-induced SOTs, voltage-controlled magnetic anisotropy (VCMA) offers an alternative approach to manipulate the damping constant and the direction of the magnetization [30; 31; 32; 48; 49; 50]. Therefore, we further investigate the combined current- and voltage-based control of nonlinear magnetization oscillations and spin waves in three-terminal SHNOs. Figure 5(a) shows the dependence of the leakage current \(I_{leak}\) between the Pt/[Co/Ni] layer and the Cu gate electrode on the voltage \(V_{g}\) applied to the gate. The leakage does not exceed 0.1 nA at gate voltages of up to \(\pm\) 15 V, indicating the high quality of the 30 nm thick BaTiO\({}_{3}\) insulator, characterized by a breakdown electric field of more than 5 MV/cm. We analyze the dependence of the oscillation characteristics on the gate voltage \(V_{g}\) at a field \(H\) = 1.0 kOe, at which the SHNO shows a significant redshift and a large generation power, as discussed above. Figure 5(b) shows the power spectral density of the oscillation spectra at \(I\) = 6.5 mA for bias voltages \(V_{g}\) ranging from -15 to 15 V. The shift \(\Delta f=f_{c}(V_{g})-f_{c}(0)\) of the central oscillation frequency exhibits a linear dependence on \(V_{g}\) with a slope of 1.5 MHz/V at \(I\) = 6.5 mA, as shown in the inset of Fig. 5(b).
This positive slope is contrary to the negative trend in the previously reported nano-constriction W(5)/CoFeB(1.7)-based SHNO [32]. In that case, the SHNO has a large PMA value \(K_{u}\)\(\simeq\) 0.6 MJ/m\({}^{3}\) and exhibits a considerable frequency blueshift with the driving current at a large external magnetic field with an out-of-plane oblique angle of 60\({}^{\circ}\). The negative trend of the voltage-controlled modulation of the auto-oscillation frequency is caused by the overall effect of two opposite contributions. On the one hand, from the FMR Kittel formula, the linear increase in the interfacial PMA coefficient \(K_{u}\) with negative gate voltage results in a significant increase in the oscillation frequency; on the other hand, the increase of the threshold current with negative gate voltage, combined with the blueshift with driving current, leads to an effective decrease of the frequency. Therefore, to gain insight into the positive voltage modulation of the frequency in our case, we further analyze the current dependence of the oscillation characteristics at \(\pm V_{g}\). Figure 5(c) shows the current dependence of the central oscillation frequency acquired at \(V_{g}\) = -15 and 15 V. One can easily see that the effect of gating on the oscillation frequency can be described as being mostly a shift of the driving current, caused by a voltage-controlled change in the effective damping constant and/or the interfacial Rashba dampinglike torque efficiency, as in the prior W/CoFeB-based nano-constriction SHNO with a large PMA [32] and the Py/Pt-based nano-gap SHNO without PMA [31]. However, it should be noted that the sign of the voltage-controlled modulation of the threshold current in the studied BTO/[Co/Ni]/Co/Pt is opposite to that of those two prior SHNOs. The reason may be related to the different types of excited spin-wave modes (localized bullet and propagating modes) or the different electronic band structures at the different FM/insulator interfaces. The frequency difference \(\Delta f=f_{c}(V_{g}=15V)-f_{c}(V_{g}=-15V)\) vs. the driving current \(I\) curve [left vertical axis of the inset in Fig. 5(c)] almost overlaps with the current-dependent redshift rate \(df/dI\) [right vertical axis of the inset in Fig. 5(c)], which further confirms that the observed voltage-modulated frequency shift mainly comes from the excitation-current shift. Directly comparing these two curves, we obtain a gating-voltage-modulated excitation current shift of \(\pm\) 0.05 mA at \(V_{g}\) = \(\pm\) 15 V.

Figure 5: Effects of voltage gating on the generated microwave spectra of the SHNO at 6 K. (a) Gate leakage current \(I_{leak}\) vs. gate voltage \(V_{g}\) for a gated SHNO device. (b) Symbols: power spectral density (PSD) of the generation spectra at the labeled values of the gate voltage \(V_{g}\), ranging from -15 to 15 V, at \(H\) = 1000 Oe and \(I\) = 6.5 mA. The curves are the results of fitting with a Lorentzian function. Inset: dependence of the central frequency shift \(\Delta f(V_{g})=f_{c}(V_{g})-f_{c}(0)\) of the spectral peak on the gate voltage (symbols), and a linear fit of the data (line). (c) Dependence of the central generation frequency \(f_{c}\) on the current \(I\) at \(V_{g}\) = -15 V (squares) and 15 V (circles). The solid lines are guides to the eye. Inset: dependence of the central frequency shift \(\Delta f(V_{g})=f_{c}(15V)-f_{c}(-15V)\) between \(V_{g}\) = 15 V and -15 V (left vertical axis) and the differential \(df/dI\) (right vertical axis) on the excitation current \(I\).

The maximum of 200 MHz voltage-controlled frequency
tunability is achieved in this three-terminal nano-gap SHNO. Our results demonstrate that a fast and large gating tunability of the oscillation frequency can be achieved by combining the current-dependent redshift with voltage-controlled magnetic anisotropy and the interfacial Rashba dampinglike torque or nonlinear damping. ### Micromagnetic simulations and mechanism for achieving single-mode oscillation To gain a physical understanding of the single dynamical mode observed experimentally in our SHNO with a moderate PMA, we perform micromagnetic simulations using the OOMMF software [51]. The simulated volume is a circular disk with a diameter of 1 \(\mu\)m and a thickness of 2 nm, which is divided into 5 \(\times\) 5 \(\times\) 2 nm\({}^{3}\) cells. The following material parameters are used in the simulations: exchange stiffness \(A\) = 10 pJ/m, saturation magnetization \(M_{s}\) = 760 kA/m, Gilbert damping constant \(\alpha\) = 0.03, effective STT efficiency \(P\) = 0.07, and three typical PMA constants \(K_{u}\) = 0, 0.2 and 0.35 MJ/m\({}^{3}\). To simulate a real SHNO more precisely, we perform the micromagnetic simulation using the actual spin-current and Oersted-field distributions, which are numerically calculated with the COMSOL MULTIPHYSICS package [23]. All micromagnetic simulations are done at \(T\) = 0, and the local Joule heating effect is neglected. To conveniently illustrate the trajectory of the magnetization \(\mathbf{M}\), we select the magnetic field \(H\) perpendicular to the current \(I\) (\(\theta\) = 90\({}^{\mathrm{o}}\)). To directly compare with previous experimental and simulation results for the nano-gap SHNO without PMA, we first calculate the dependence of the spectra on the excitation current \(I\). The calculated power spectrum is obtained by performing the fast Fourier transform (FFT) of the time series of the in-plane magnetization components \(m_{x}\) [Fig. 6(a)] and \(m_{y}\) [Fig. 6(b)]. Figure 6(a) shows the dependence of the power spectra on \(I\) at \(H\) = 200 Oe: the primary low-frequency bullet mode \(f_{1}\), with a frequency far below \(f_{FMR}\) and a significant redshift, appears first, followed by coexistence with the secondary high-frequency mode \(f_{2}\) at large currents. These results are consistent with the previous experimental observations and simulation results at \(\theta\) = 120\({}^{\mathrm{o}}\)[2; 22] and 90\({}^{\mathrm{o}}\)[1; 21]. In addition, for an in-plane SHNO with an effective easy-plane shape anisotropy, the precessing magnetization vector \(\mathbf{M}\) creates a dynamical demagnetizing field antiparallel to the out-of-plane component \(m_{z}\) of the magnetization, which forces \(\mathbf{M}\) into an elliptical precession, with the short axis normal to the film plane, under spin torque at an in-plane magnetic field, as shown in Fig. 7(b). Besides the fundamental frequency, the elliptical precession also manifests itself in the oscillation of the component of \(\mathbf{M}\) parallel to the applied field, \(m_{y}\), at twice the precession frequency, which is well consistent with the FFT spectrum of \(m_{y}\) in Fig. 6(b). In addition to the second harmonics of the primary bullet mode \(f_{1}\) and the secondary mode \(f_{2}\), the strong intermodes \(f^{*}=f_{2}-f_{1}\) and \(2f_{1}\pm f^{*}\) are also observed in the calculated power spectrum of \(m_{y}\) [Fig. 6(b)], indicating strong nonlinear coupling between \(f_{1}\) and \(f_{2}\).

Figure 6: Pseudocolor maps of the dependence of the power spectra on current for an SHNO with \(K_{u}\) = 0 and \(H\) = 200 Oe. The power spectrum was obtained by performing the fast Fourier transform (FFT) of the temporal in-plane components \(m_{x}\) (a) and \(m_{y}\) (b) of the magnetization. The in-plane magnetic field is along the y-axis. The different peaks corresponding to the distinct dynamical modes are labeled by the primary mode \(f_{1}\), its second and third harmonics \(2f_{1}\) and \(3f_{1}\), the secondary mode \(f_{2}\), its second harmonic \(2f_{2}\), and the intermodes \(f^{*}=f_{2}-f_{1}\) and \(2f_{1}\pm f^{*}\). The FMR frequency \(f_{FMR}\) is marked by the dashed line.

Figure 7: (a) Representative FFT power spectra of the three magnetization components \(m_{x}\), \(m_{z}\) and \(m_{y}\) for an SHNO with \(K_{u}\) = 0, obtained at \(H\) = 200 Oe and a small current \(I\) = 9 mA. The vertical dashed line indicates the FMR frequency \(f_{FMR}\). (b) 3D plot of the trajectory of the magnetization \(\mathbf{M}\) (represented by the black arrow) located at the central nanogap region. (c)-(h) Spatial power maps (normalized by the maximum of \(f_{1}(m_{x})\)) of \(m_{x}\) (c, d) and \(m_{z}\) (e, f) at the frequencies \(f_{1}\) = 3.10 GHz and \(f_{2}\) = 3.67 GHz, and of \(m_{y}\) (g, h) at the second harmonic \(2f_{1}\) = 6.2 GHz and the intermode \(f^{*}=f_{2}-f_{1}\) = 0.57 GHz, respectively. Dashed lines show the contours of the two top Au electrodes. The bold arrow indicates the direction of the magnetic field \(H\).

This is also consistent with recent Brillouin light scattering (BLS) experiments and simulations [24], which revealed that the nonlinear coupling in this type of spin Hall nano-device with an extended magnetic film is determined by the ellipticity of the magnetization precession. To inspect the excitation mechanism of the secondary mode \(f_{2}\) and its relation to the primary bullet mode, we need to comprehensively analyze the calculated spectrum, the precession trajectory of \(\mathbf{M}\), and their spatial profiles. At a small current \(I\) = 9 mA, the power spectrum is dominated by the low-frequency mode \(f_{1}\) for \(m_{x}\) and \(m_{z}\), but by the second harmonic \(2f_{1}\) for \(m_{y}\) [Fig. 7(a)], due to the easy-plane-anisotropy-induced elliptical precession [Fig. 7(b)]. The power intensity of the high-frequency mode \(f_{2}\) is less than 2% of that of \(f_{1}\). The spatial power maps of the three components (\(m_{x,y,z}\)) corresponding to the two modes and their intermode \(f_{2}-f_{1}\) are shown in Figs. 7(c)-7(h). We note that the spatial power maps of \(f_{2}\) [Figs. 7(d) and 7(f)] include a certain background signal of \(f_{1}\), due to the tiny power intensity of \(f_{2}\) compared to \(f_{1}\) and the small frequency difference between \(f_{1}\) and \(f_{2}\). To get more insight into the excitation mechanism of the secondary mode, we further analyze the auto-oscillation dynamical characteristics obtained at a large current \(I\) = 12 mA, as shown in Fig. 8. The high-frequency mode \(f_{2}\) is significantly enhanced and has a power intensity comparable to that of the bullet mode \(f_{1}\). As at \(I\) = 9 mA [Fig. 7], the primary bullet mode \(f_{1}\) is localized in the center nano-gap region of the device. In contrast, the secondary mode \(f_{2}\) is much more weakly localized than \(f_{1}\) and exhibits two maxima located at a distance of about 150 nm from the center of the gap, in two opposite directions collinear with the field.
We note that \(m_{x}\) exhibits a larger spatial distribution than \(m_{z}\), which may be related to the in-plane magnetization and the large oscillation amplitude of \(m_{x}\). These characteristics are consistent with prior micro-focused BLS measurements and simulations [1, 21, 22]. Previous simulations infer that the secondary edge mode is stabilized by two effective potential wells created by the dipole field of the primary bullet mode [21, 22]. However, in the outer region of the nanogap, the spin current density \(J_{s}\) is too low to directly excite or maintain the high-frequency edge mode \(f_{2}\), because more than 80% of \(J_{s}\) is localized in the center nanogap [22].

Figure 8: (a) Typical FFT power spectra of \(m_{x}\), \(m_{z}\) and \(m_{y}\) for an SHNO with \(K_{u}\) = 0, obtained at \(H\) = 200 Oe and a large current \(I\) = 12 mA. The vertical dashed line indicates the FMR frequency \(f_{FMR}\). (b) 3D plot of the trajectory of the magnetization \(\mathbf{M}\) (represented by the black arrow) located at the central nanogap region. (c)-(k) Spatial power maps (normalized by the maximum of \(f_{1}(m_{x})\)) of \(m_{x}\) (c-e) and \(m_{z}\) (f-h) at the frequencies \(f_{1}\), \(f_{2}\), \(3f_{1}\), and of \(m_{y}\) (i-k) at the second harmonics \(2f_{1}\), \(2f_{2}\) and the intermode \(f^{*}=f_{2}-f_{1}\), respectively. The blue arrow and dashed lines show the applied field direction and the contours of the electrodes, respectively.

Figure 9: (a) Typical FFT power spectra of \(m_{x}\), \(m_{z}\) and \(m_{y}\) for an SHNO with a moderate \(K_{u}\) = 0.2 MJ/m\({}^{3}\), obtained at \(H\) = 200 Oe and \(I\) = 3.5 mA. The vertical dashed line indicates the FMR frequency \(f_{FMR}\). (b) 3D plot of the trajectory of the magnetization \(\mathbf{M}\) (represented by the black arrow) located at the central nanogap region. (c)-(e) Spatial power maps (normalized by the maximum of \(f_{1}(m_{x})\)) of \(m_{x}\) (c) and \(m_{z}\) (d) at the fundamental frequency \(f\), and of \(m_{y}\) (e) at its second harmonic \(2f\), respectively. Dashed lines and the bold arrow show the contours of the electrodes and the applied field direction, respectively.

There must therefore be an intermediary that transfers energy from the center nanogap region to compensate the energy dissipation of the outer edge spin-wave mode. Numerous nonlinear theories and experiments have revealed that nonlinear spin-wave coupling can enable energy transfer between different modes, resulting in mode coexistence, transitions, hopping and chaos phenomena [52; 53; 54; 55; 24; 40; 56; 22; 25]. Therefore, the formation of the secondary mode coexisting with the center bullet mode is likely due to the nonlinear-coupling-induced energy-transfer mechanism [53; 55; 24], with the mode localized by the effective potential well generated by the spatially inhomogeneous dipole field arising from the center bullet mode [21; 22]. The latter determines the spatial location of the secondary mode \(f_{2}\). How can the nonlinear spin-wave mode coupling be understood? As discussed above [Fig. 6], the oscillation frequency of \(m_{y}\) is twice that of the primary bullet mode \(f_{1}\) of \(m_{x}\) and \(m_{z}\), due to the elliptical precession [Figs. 7(b) and 8(b)]. The frequencies of the second (\(2f_{1}\)) and third (\(3f_{1}\)) harmonics are above \(f_{FMR}\), within the linear spectrum of propagating spin waves.
Therefore, the central local magnetization dynamics at these frequencies, associated with the bullet-mode precession, can be expected to couple to propagating spin waves at the corresponding frequencies and to act as a parametric pump [57; 58; 24] that drives energy transfer from the primary bullet mode \(f_{1}\) into the secondary edge mode \(f_{2}\), resulting in nonlinear damping and a frequency decrease of the former. Indeed, the maps of these harmonics show intensity modulations consistent with spin-wave radiation from the central bullet region. The strong intermode \(f^{*}=f_{2}-f_{1}\) observed at large currents also confirms the nonlinear coupling between the nonlinear bullet mode \(f_{1}\) and the high-frequency edge mode \(f_{2}\). As is well known, the effective PMA field can counteract the demagnetizing field. Therefore, one can expect to diminish the ellipticity of the magnetization precession with the help of the PMA field, and to suppress the nonlinear damping of the dominant spin-wave mode discussed above. Furthermore, the secondary edge mode, supported by the nonlinear-coupling-induced energy-transfer mechanism, is expected to become suppressed in our [Ni/Co]/Pt-based SHNO, in which the PMA compensates the shape anisotropy. Additionally, the PMA field can also reduce the negative nonlinearity coefficient \(\aleph\) of the in-plane magnetized thin film, and drive the nonlinearly localized mode toward a propagating spin-wave mode by elevating the oscillation frequency [59; 18; 32]. To verify the above argument, we introduce different PMA constants \(K_{u}\) in our simulations of the SHNO. First, we choose a relatively small value of 0.2 MJ/m\({}^{3}\), which does not fully compensate the demagnetizing field. In the same way as before, the calculated spectrum, the precession trajectory and their spatial profiles are analyzed, as illustrated in Fig. 9. There are several differences from the case of \(K_{u}=0\). First, the power intensity of the magnetization component \(m_{y}\) at twice the frequency, \(2f\), is reduced by more than half compared to the case without PMA [Fig. 7(a) and Fig. 8(a)], consistent with the decreased ellipticity of the magnetization precession in the trajectory plot [Fig. 9(b)]. Second, stable auto-oscillation can be achieved at a small driving current \(I\) = 3.5 mA, far below the 8 mA for \(K_{u}=0\) [Fig. 6], consistent with the reduction of the nonlinear damping discussed above. Third, in contrast to the two-mode coexistence, only a single dynamical mode is observed at the small current of 3.5 mA, and its frequency is above \(f_{FMR}\), consistent with the character of a propagating spin-wave mode with intensity modulations in the profile mapping [Fig. 9(c) - 9(e)]. We note that, in this case of \(K_{u}=0.2\) MJ/m\({}^{3}\), the auto-oscillation still exhibits a redshift, with a negative \(\aleph\) and elliptical precession. Consequently, the frequency of the propagating mode can be driven below \(f_{FMR}\) by a large current, and the dynamics become a localized mode or a two-mode coexistence at large currents and high in-plane fields. We further analyze the case with \(K_{u}=0.35\) MJ/m\({}^{3}\), which compensates the demagnetizing field. Because the PMA and demagnetizing fields compensate each other, the magnetization precession becomes circular, consistent with the simulated trajectory shown in Fig. 10(b). In contrast to the small-\(K_{u}\) cases [Figs. 7(a), 8(a) and 9(a)], the power spectrum associated with \(m_{y}\) becomes dominated by the fundamental frequency [Fig. 10(a)].
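The ellipticity argument can be illustrated with a single-spin (macrospin) toy model: integrating the Landau-Lifshitz-Gilbert equation for an in-plane-magnetized film with a tunable PMA field shows the out-of-plane demagnetizing term squeezing the precession into an ellipse, with the precession becoming circular once the PMA compensates it. The sketch below uses illustrative parameters (only \(M_{s}\), \(\alpha\) and \(H\) = 200 Oe are taken from the simulations above) and is not the OOMMF setup used in this work.

```python
# Macrospin LLG toy model (illustrative; not the OOMMF setup used above):
# the precession ellipticity m_z/m_x approaches 1 as the PMA field
# compensates the easy-plane demagnetizing field 4*pi*Ms.
import numpy as np
from scipy.integrate import solve_ivp

gamma = 1.76e7    # gyromagnetic ratio (rad s^-1 Oe^-1)
alpha = 0.03      # Gilbert damping
Ms4pi = 9550.0    # 4*pi*Ms in Oe for Ms = 760 kA/m
H_app = 200.0     # in-plane applied field along y (Oe)

def llg(t, m, H_pma):
    m = m / np.linalg.norm(m)
    # Effective field: applied field plus the net out-of-plane anisotropy
    # (PMA minus demagnetizing), both acting on m_z.
    H_eff = np.array([0.0, H_app, (H_pma - Ms4pi) * m[2]])
    prec = -gamma * np.cross(m, H_eff)
    return prec + alpha * np.cross(m, prec)   # Landau-Lifshitz damping term

def ellipticity(H_pma):
    sol = solve_ivp(llg, (0.0, 2e-9), [0.2, 0.96, 0.2],
                    args=(H_pma,), max_step=1e-12)
    mx, mz = sol.y[0], sol.y[2]
    return np.ptp(mz) / np.ptp(mx)            # ~1 means circular precession

for H_pma in (0.0, 0.5 * Ms4pi, Ms4pi):       # no, partial, full compensation
    print(f"H_pma = {H_pma:6.0f} Oe -> ellipticity ~ {ellipticity(H_pma):.2f}")
```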
As follows from the discussion above, the nonlinear damping due to nonlinear-coupling-induced energy transfer from the central dominant mode is further minimized, which is supported by the calculated auto-oscillation exhibiting a well-defined single mode and an even lower threshold current. In addition to a high oscillation frequency above \(f_{FMR}\) [Fig. 10(a)], the spatial power maps of the three magnetization components show a large, asymmetric, elongated spatial profile with clear intensity modulations, with the direction of elongation (propagation) along the in-plane magnetic field [Figs. 10(c)-10(f)]. These characteristics are well consistent with a linear propagating spin-wave mode. Figure 10(g) shows that the auto-oscillation exhibits a noticeable redshift with increasing excitation current and becomes a localized spin-wave mode at large currents, due to the negative nonlinearity coefficient and the local Oersted field. We note that an obliquely applied out-of-plane magnetic field is also expected to compensate the shape anisotropy, modulate the nonlinearity coefficient \(\aleph\) and the nonlinear damping, and achieve a single dynamical mode with high coherence in SHNOs without PMA by effectively suppressing the secondary edge mode induced by nonlinear mode coupling. ## IV Conclusions To summarize, in a three-terminal spin Hall nano-oscillator based on BaTiO\({}_{3}\)/[Co/Ni]/Co/Pt trilayers with a moderate interfacial PMA, we achieve coherent single-mode dynamics with a current-modulated oscillation-frequency rate of 15%/mA at \(f_{auto}\) = 5.5 GHz (or 0.85 GHz/mA) and a voltage-controlled frequency tunability of \(\sim\) 200 MHz. The current modulation of the frequency is related to the intrinsic nonlinearity of the nano-oscillator and to the nonlinear damping arising from nonlinear coupling between the fundamental auto-oscillation mode and its higher-order harmonics, caused by the elliptical precession, which lie within the linear spectrum of propagating spin waves. In comparison, the voltage control mainly originates from a gate-voltage-induced threshold-current shift due to the effects of electrostatic gating on the interfacial Rashba dampinglike torque and/or the effective damping constant. Furthermore, the simulations of the PMA-dependent auto-oscillation demonstrate that the secondary high-frequency mode, usually observed in in-plane magnetized nano-gap SHNOs without PMA, arises from the combination of the nonlinear mode coupling and the spatially inhomogeneous dipole field generated by the center bullet mode. The nonlinear mode coupling, determined by the ellipticity of the magnetization precession, can be diminished by utilizing the effective PMA field to compensate the demagnetizing field induced by the shape anisotropy. Additionally, the PMA field can also drive the nonlinearly self-localized bullet mode toward a quasilinear propagating mode by suppressing the negative nonlinearity coefficient \(\aleph\) and the nonlinear damping. The simulation results support our experimentally observed coherent single-mode spin-wave dynamics in an SHNO with a moderate interfacial PMA. Closely associated with the modulation of nonlinear damping and mode coupling, the energy-efficient gate-voltage and current control of the oscillator demonstrated here can significantly facilitate the development of SHNO-based on-chip microscale microwave generators and neuromorphic computing. **Acknowledgements** This work was supported by the National Natural Science Foundation of China (Grant Nos.
12074178, 12004171 and 11874135), the Applied Basic Research Programs of Science and Technology Commission Foundation of Jiangsu Province, China (Grant No. BK20200309), the Key Research and Development Program of Zhejiang Province (Grant No. 2021C01039), the Open Research Fund of Jiangsu Provincial Key Laboratory for Nanotechnology, and the Postgraduate Research & Practice Innovation Project of Jiangsu Province (Grant No. KYCX210699).
2308.12857
Fast Adversarial Training with Smooth Convergence
Fast adversarial training (FAT) is beneficial for improving the adversarial robustness of neural networks. However, previous FAT work has encountered a significant issue known as catastrophic overfitting when dealing with large perturbation budgets, i.e. the adversarial robustness of models declines to near zero during training. To address this, we analyze the training process of prior FAT work and observe that catastrophic overfitting is accompanied by the appearance of loss convergence outliers. Therefore, we argue that a moderately smooth loss convergence process will be a stable FAT process that solves catastrophic overfitting. To obtain a smooth loss convergence process, we propose a novel oscillatory constraint (dubbed ConvergeSmooth) to limit the loss difference between adjacent epochs. The convergence stride of ConvergeSmooth is introduced to balance convergence and smoothing. Likewise, we design weight centralization without introducing additional hyperparameters other than the loss balance coefficient. Our proposed methods are attack-agnostic and thus can improve the training stability of various FAT techniques. Extensive experiments on popular datasets show that the proposed methods efficiently avoid catastrophic overfitting and outperform all previous FAT methods. Code is available at https://github.com/FAT-CS/ConvergeSmooth.
Mengnan Zhao, Lihe Zhang, Yuqiu Kong, Baocai Yin
2023-08-24T15:28:52Z
http://arxiv.org/abs/2308.12857v1
# Fast Adversarial Training with Smooth Convergence ###### Abstract Fast adversarial training (FAT) is beneficial for improving the adversarial robustness of neural networks. However, previous FAT work has encountered a significant issue known as catastrophic overfitting when dealing with large perturbation budgets, _i.e_. the adversarial robustness of models declines to near zero during training. To address this, we analyze the training process of prior FAT work and observe that catastrophic overfitting is accompanied by the appearance of loss convergence outliers. Therefore, we argue that a moderately smooth loss convergence process will be a stable FAT process that solves catastrophic overfitting. To obtain a smooth loss convergence process, we propose a novel oscillatory constraint (dubbed ConvergeSmooth) to limit the loss difference between adjacent epochs. The convergence stride of ConvergeSmooth is introduced to balance convergence and smoothing. Likewise, we design weight centralization without introducing additional hyperparameters other than the loss balance coefficient. Our proposed methods are attack-agnostic and thus can improve the training stability of various FAT techniques. Extensive experiments on popular datasets show that the proposed methods efficiently avoid catastrophic overfitting and outperform all previous FAT methods. Code is available at [https://github.com/FAT-CS/ConvergeSmooth](https://github.com/FAT-CS/ConvergeSmooth). ## 1 Introduction Recent breakthroughs in deep learning [21, 32] have aroused researchers' interest in the security of neural networks [45, 48, 39]. In particular, recent research demonstrates the vulnerability of deep models to adversarial attacks [8, 11, 27]. For instance, tiny crafted perturbations can fool models in various fields into making wrong decisions [15, 50, 35, 24]. Considering the security risks brought by adversarial attacks [18, 46, 33, 44], there is a quickly growing body of work [43, 46, 18] on improving the adversarial robustness of neural networks. Among them, adversarial training is widely applied by practitioners [31, 12]. In recent years, projected gradient descent based adversarial training (PGD-AT) [25, 34] has been widely employed for its stability and effectiveness. However, this mechanism is computationally expensive. It requires multiple gradient descent steps to generate the adversarial training data [38]. An alternative to PGD-AT is fast adversarial training (FAT) [28], which only adopts a single-step fast gradient sign method (FGSM) [11] to generate training data. Compared to PGD-AT, FAT can efficiently train models, but easily falls into catastrophic overfitting [40, 29]. A number of FAT methods have been proposed to mitigate catastrophic overfitting. For example, Wong et al. [40] use randomly initialized perturbations to enhance the diversity of adversarial perturbations. Based on it, Andriushchenko et al. [2] propose a complementary regularizer named GradAlign to explicitly maximize the gradient alignment between benign and adversarial examples. Similarly, NuAT [36] and FGSM-MEP [42] adopt the nuclear norm or a p-norm to regularize the adversarial training, thereby increasing the prediction alignment between benign and adversarial examples. However, the above methods can only resolve catastrophic overfitting within a limited perturbation budget (\(\xi\leq\) 8/255). \(\xi\) specifies the perturbation degree of the adversarial training data generated by various attacks. 
Besides, models trained with small perturbations are vulnerable to adversarial attacks with a large \(\xi\), _e.g_. the models trained by NuAT and FGSM-MEP at \(\xi\) = 8/255 achieve 53% and 54% robustness against the PGD-50 attack with \(\xi\) = 8/255, but only 22% and 20% robustness against the same attack with \(\xi\) = 16/255, respectively. Therefore, we aspire to prevent catastrophic overfitting to improve the adversarial robustness of neural models at larger perturbation budgets. By analyzing the adversarial training processes of representative work, we observe that catastrophic overfitting is usually accompanied by a slight fluctuation in the classification loss for benign samples and a sharp drop in the classification loss for adversarial examples. This motivates us to question whether a smooth loss convergence process is also a stable FAT process. Moreover, we find that an oscillating adversarial training phase may restart the FAT process after catastrophic overfitting. Fig. 1 shows the details. According to these observations, we introduce an oscillatory constraint that limits the difference in loss between adjacent training epochs, called ConvergeSmooth. A dynamic convergence stride of ConvergeSmooth is designed considering the nonlinear decay rate of loss functions. Inspired by the smoothness of loss convergence, we further verify the effect of the proposed weight centralization on model stability. Weight centralization refers to taking the average of the previously trained model weights as the convergence center of the current model weights. Our proposed methods are attack-agnostic and thus can be combined with existing adversarial strategies in FAT, such as FGSM-RS and FGSM-MEP, to evaluate their performance. The contributions are summarized in four aspects: **(1)** We verify that previous FAT works still suffer from catastrophic overfitting at a large \(\xi\) and then study catastrophic overfitting from the perspective of convergence instability of loss functions; **(2)** We propose a smooth convergence constraint, ConvergeSmooth, and design a dynamic convergence stride for it, to help various FAT methods avoid catastrophic overfitting on different perturbation budgets; **(3)** The weight centralization is proposed without introducing extra hyperparameters other than the loss balance coefficient to stabilize FAT; **(4)** Extensive experiments show that the proposed methods outperform the state-of-the-art FAT techniques in terms of efficiency, robustness, and stability. ## 2 Related Work **Adversarial attacks**. Adversarial attacks are usually used to deceive deep-learning models. Goodfellow et al. [11] first discuss the adversarial attack (FGSM) within the classification task. They prove that adversarial examples \(x^{\prime}\) generated by a single gradient step can fool the model \(f(\cdot;\theta)\) with high confidence. \(\theta\) denotes the fixed model weights. \(x^{\prime}\) is generated by \[x^{\prime}=x+\xi\cdot\text{sgn}(\nabla_{x}\mathcal{L}(f(x;\theta),y)), \tag{1}\] where \(x\) is an input image, \(\xi\) represents the perturbation budget, \(\text{sgn}(\cdot)\) denotes the sign function, \(\mathcal{L}(\cdot)\) is usually the cross-entropy loss, \(\nabla_{x}\mathcal{L}(\cdot)\) calculates the gradient of the loss at \(x\), and \(y\) denotes the ground truth labels of \(x\). Following the FGSM [11], researchers propose a series of attack methods based on iterative gradient steps, _e.g_. I-FGSM [22], MIM [8], and PGD [25]. 
Taking the PGD attack as an example, the adversarial example \(x^{\prime}_{t+1}\) produced in iteration \(t\)+1 can be formulated as \[x^{\prime}_{t+1}=\text{clip}_{\xi}(x^{\prime}_{t}+\epsilon\cdot\text{sgn}( \nabla_{x^{\prime}_{t}}\mathcal{L}(f(x^{\prime}_{t};\theta),y))), \tag{2}\] where \(\epsilon\) denotes the single-step stride and \(\text{clip}_{\xi}\) refers to projecting adversarial perturbations to a \(\xi\)-ball. **Adversarial training**. Madry et al. [25] formalize adversarial training as a min-max optimization problem, \[\min_{\theta}\mathbb{E}_{(x,y)\sim D}[\max_{\delta\in[-\xi,\xi]}\mathcal{L}(f (x^{\prime};\theta),y)], \tag{3}\] where \(x^{\prime}\) = \(x\) + \(\delta\), \(\delta\) represents the adversarial perturbations generated by various attacks such as PGD and FGSM. \(D\) is the data generator. The internal maximization maximizes the classification loss to generate adversarial perturbations with fixed model weights. The external minimization minimizes the classification loss on the generated adversarial examples when optimizing the model weights. Actually, there is a trade-off between computational efficiency and adversarial robustness in recent adversarial training methods. Compared to PGD-AT [25], FGSM-based fast adversarial training (FAT) accelerates the training process but damages the robustness of models due to the problem of catastrophic overfitting [19, 17]. To mitigate this issue, Wong et al. [40] demonstrate that FGSM with a random start strategy (FGSM-RS) can achieve performance comparable to PGD-AT. ZeroGrad [10] zeroes the elements of the gradient that are too small to craft the perturbations and enforces the model loss to increase with the perturbation size. Besides, GradAlign [2] prevents catastrophic overfitting by maximizing the alignment between gradients of benign samples and adversarial examples. Similarly, NuAT [36] introduces a nuclear norm regularization between logits for benign and adversarial examples and uses the Bernoulli noise as the initial perturbation. ATAS [16] learns an instance-adaptive step size that is inversely proportional to the gradient norm. It applies the adversarial perturbations from the previous epoch as the initialization of FGSM in the current training phase. In addition, Jia et al. [42] propose several prior-guided initialization methods to replace the random start strategy of FGSM-RS. Specifically, FGSM-BP adopts the adversarial perturbations from the previous batch as the attack initialization in the current batch. FGSM-MEP employs a momentum mechanism to combine all adversarial perturbations from the previous epochs to yield adversarial initialization. Although these FAT methods have resolved catastrophic overfitting at small perturbation budgets, they still suffer from catastrophic overfitting under larger perturbation budgets (_e.g_. \(\xi\) = 16/255). Unlike these methods, we revisit the catastrophic overfitting problem from the perspective of loss convergence instability and prevent this exception by limiting the magnitude of loss fluctuations during training. ## 3 Proposed Method In this section, we first study the performance of previous FAT methods when subjected to a large perturbation budget \(\xi\). Then, the training processes of these methods are analyzed for a better understanding of catastrophic overfitting. Finally, we detail the proposed methods. 
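For concreteness, Eqs. (1)-(3) amount to only a few lines of code. The sketch below is our own illustration in PyTorch (function and variable names are ours, not taken from the authors' released repository), assuming inputs normalized to [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, xi):
    """Single-step FGSM, Eq. (1): x' = x + xi * sgn(grad_x L(f(x; theta), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + xi * grad.sign()).clamp(0.0, 1.0).detach()

def pgd(model, x, y, xi, eps, n_iter):
    """Iterative PGD, Eq. (2): FGSM steps of stride eps, each followed by
    clip_xi, the projection of the perturbation back onto the xi-ball."""
    delta = torch.zeros_like(x)
    for _ in range(n_iter):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + eps * grad.sign()).clamp(-xi, xi).detach()
    return (x + delta).clamp(0.0, 1.0)
```

Adversarial training in the sense of Eq. (3) then simply feeds the output of `fgsm` (FAT) or `pgd` (PGD-AT) to the standard training loop in place of the clean batch.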
### Performance of FAT methods on a large \(\xi\) Previous FAT techniques avoid catastrophic overfitting at small \(\xi\) (\(\xi\leq\) 8/255). Here, we investigate their performance on a large \(\xi\) (\(\xi\) = 16/255 by default). **FGSM-RS [40]:** Based on Eq. (3), this method adopts the samples with uniformly random perturbations \(\delta_{0}\sim\mathcal{U}(-\xi,\xi)\) as the attack initialization of FGSM, \[\min_{\theta}\mathbb{E}_{(x,y)\sim D}[\max_{\delta_{0}+\delta\in[-\xi,\xi]}\mathcal{L}(f(x^{\prime};\theta),y)], \tag{4}\] \[x^{\prime}=x+\delta_{0}+\delta\quad s.t.\quad x^{\prime}\in[0,1],\quad\delta_{0}\sim\mathcal{U}(-\xi,\xi).\] **GradAlign [2]:** This approach increases the gradient alignment between benign samples \(x\) and perturbed samples \(x+\delta_{0}\), which is denoted as \[\begin{split}\mathbb{E}_{(x,y)\sim D}[1-\cos(\nabla_{x}\mathcal{L}(x,\theta),\nabla_{x+\delta_{0}}\mathcal{L}(x+\delta_{0},\theta))],\\ \mathcal{L}(x,\theta)=\mathcal{L}(f(x;\theta),y).\end{split} \tag{5}\] \(\cos(\cdot)\) computes the cosine similarity between two matrices. **FGSM-MEP [42]:** Different from FGSM-RS, this method generates the initialization perturbations \(\delta_{0}\) based on all historical adversarial perturbations from the previous epochs and introduces a regularization expressed as \[\mathbb{E}_{(x,y)\sim D}[||f(x^{\prime};\theta)-f(x+\delta_{0};\theta)||_{2}^{2}], \tag{6}\] where \(||\cdot||_{2}^{2}\) denotes the squared \(\mathcal{L}_{2}\) distance. Fig. 1 shows the detailed adversarial training processes of these methods for ResNet18 [13] on CIFAR-10 [20]. We find that they fall into catastrophic overfitting during the 5\(\sim\)20\({}_{\text{th}}\), 40\(\sim\)65\({}_{\text{th}}\) and 10\(\sim\)15\({}_{\text{th}}\) epochs, respectively. Graphical analysis of various models and datasets is given in the supplement. Figure 1: The training process of previous FAT methods with ResNet18 on CIFAR-10. Each FAT method is trained 3 times. \(\xi\) = 16/255. ADV-Loss and BEN-Loss denote the classification loss of models to adversarial and benign examples during training, respectively. ADV-Acc and BEN-Acc represent the classification accuracy of models to adversarial and benign examples during testing, respectively. ### Analysis of training process From Fig. 1, we discover several typical phenomena of catastrophic overfitting: 1) Change from a smooth convergence state to an irregular fluctuation state; 2) A slight change (increase or decrease) in the standard classification loss \(\mathcal{L}(x,\theta)\) and a rapid descent of the adversarial classification loss \(\mathcal{L}(x^{\prime},\theta)\); 3) Rapid decline in the classification accuracy of adversarial examples. After catastrophic overfitting, the methods depicted in Fig. 1 are capable of restarting a stable FAT process, even though catastrophic overfitting may occur again. The observed phenomena during the FAT restart are: 1) Change from an irregular fluctuation state to a smooth convergence state; 2) Rapid increase in both \(\mathcal{L}(x,\theta)\) and \(\mathcal{L}(x^{\prime},\theta)\) after a period of decline; 3) Rapid decrease in the classification accuracy of benign samples, while the classification accuracy of adversarial examples experiences a rapid increase. On this basis, we can make the following conclusions. 1) \(\mathcal{L}(x,\theta)\) is more stable than \(\mathcal{L}(x^{\prime},\theta)\) and models are prone to overfitting to adversarial perturbations; 2) Catastrophic overfitting is closely correlated to the convergence insta
bility of adversarial training; 3) Exceptions in \(\mathcal{L}(x,\theta)\) and \(\mathcal{L}(x^{\prime},\theta)\) occur simultaneously; 4) An oscillating adversarial training phase may trigger the FAT process to restart. ### Smooth convergence for the stable FAT Next, we describe the proposed method in detail. **Why did previous methods fail to prevent catastrophic overfitting?** Despite the improvement in diversity achieved through random initialization in FGSM-RS, models are still susceptible to overfitting to adversarial perturbations. GradAlign and FGSM-MEP enhance the stability of adversarial training through the constraints in Eqs. (5) and (6), respectively. However, Eq. (5) may reduce the stability of \(\mathcal{L}(x,\theta)\). Meanwhile, the prediction probability \(f(x+\delta_{0};\theta)\) in Eq. (6) is not the appropriate quantity for keeping FAT stable. Unlike these methods, we ensure the convergence stability of both \(\mathcal{L}(x,\theta)\) and \(\mathcal{L}(x^{\prime},\theta)\). **How can catastrophic overfitting be solved?** Since catastrophic overfitting is usually accompanied by a slight change in \(\mathcal{L}(x,\theta)\) and a sharp decline in \(\mathcal{L}(x^{\prime},\theta)\), we consider a smooth loss convergence process to be a stable FAT process that resolves this issue. To this end, a complementary constraint \(\mathcal{L}_{CS}\) for Eq. (3) is proposed, \[\min_{\theta_{t}}\mathbb{E}_{(x,y)\sim D}[\mathcal{L}(x^{\prime}_{t},\theta_{t})+\mathcal{L}_{CS}(t)], \tag{7}\] which can limit the difference in losses between adjacent epochs, expressed as \[\begin{split}\mathcal{L}_{CS}(t)=w_{1}\cdot|\mathcal{L}(x^{\prime}_{t},\theta_{t})-\mathcal{L}(x^{\prime}_{t-1},\theta_{t-1})|+\\ w_{2}\cdot|\mathcal{L}(x,\theta_{t})-\mathcal{L}(x,\theta_{t-1})|,\quad s.t.\quad\mathcal{C}(x)=1,\end{split} \tag{8}\] where \(\theta_{t}\) represents the model weights of the \(t_{\text{th}}\) training epoch. \(w_{1}\) and \(w_{2}\) are hyper-parameters, \(w_{1},w_{2}\in[0,1]\). \(|\cdot|\) calculates the absolute value. \(x^{\prime}_{t}\) = \(x\) + \(\delta_{0,t}\) + \(\delta_{t}\). \(\delta_{0,t}\) can be replaced by various attack initialization perturbations and \(\delta_{t}\) is generated by \[\max_{\delta_{t}\in[-\xi,\xi]}\mathcal{L}(x^{\prime}_{t},\theta_{t-1}), \tag{9}\] where \(\theta_{t-1}\) is kept fixed while generating \(\delta_{t}\). In the practical implementation of Eq. (8), storing and computing \(\mathcal{L}(x^{\prime}_{t-1},\theta_{t-1})\) and \(\mathcal{L}(x,\theta_{t-1})\) can consume a significant amount of memory. To overcome this challenge, we introduce \(u^{\prime}_{t-1}\) and \(u_{t-1}\) to replace these terms, respectively. \(u^{\prime}_{t-1}=\mathbb{E}_{(x,y)\sim D}\mathcal{L}(x^{\prime}_{t-1},\theta_{t-1})\) and \(u_{t-1}=\mathbb{E}_{(x,y)\sim D}\mathcal{L}(x,\theta_{t-1})\) denote the mathematical expectation of the loss in the \(t\)-\(1_{\text{th}}\) training epoch. We observe that training instability is primarily caused by overfitting or underfitting of small amounts of data. Thus, in Eq. (8), the additional loss term is only applied to partial data selected by the crafted condition \(\mathcal{C}(\cdot)\). Notably, exceptions in \(\mathcal{L}(x,\theta)\) and \(\mathcal{L}(x^{\prime},\theta)\) occur simultaneously during training. Hence, \(\mathcal{L}(x^{\prime}_{t},\theta_{t})\) and \(\mathcal{L}(x,\theta_{t})\) share the same condition. \(\mathcal{C}\) is constructed from the distance between the pointwise loss \(\mathcal{L}(x,\theta_{t})\) and the mean value \(u_{t-1}\). 
This is because over-fitted or under-fitted data often yield excessively high or low classification losses. \[\mathcal{C}(x)=(|\mathcal{L}(x,\theta_{t})-u_{t-1}|\geq\gamma_{t}), \tag{10}\] where \(\gamma_{t}\) (\(\gamma_{t}\) \(\geq\) 0) represents the convergence stride used to select abnormal data. It is crucial to choose an appropriate value for \(\gamma_{t}\) to ensure effective training. When \(\gamma_{t}=0\), the adversarial training process may fail to converge as it forces the predictions of all samples to remain unchanged. On the other hand, catastrophic overfitting occurs when \(\gamma_{t}=\infty\), since the constraint is then never triggered. Considering that the loss difference \(d_{t-1}\) (\(d_{t-1}\) = \(|u_{t-1}-u_{t-2}|\)) between two adjacent epochs tends to decrease non-linearly, \(\gamma_{t}\) should vary during the training process. \[\gamma_{t}=\min(\max(d_{t-1},\gamma_{min}),\gamma_{max}), \tag{11}\] where \(\gamma_{min}\) and \(\gamma_{max}\) are hyper-parameters. \(\gamma_{max}\) controls the maximum convergence speed. \(\gamma_{min}\) ensures that the training process is not too smooth when \(d_{t-1}\) \(\rightarrow\) 0. **Example-based ConvergeSmooth.** This approach adds the constraint individually to each sample, described as \[\begin{split}\mathcal{L}_{CS}^{E}(t)=w_{1}\cdot|\mathcal{L}(x^{\prime}_{t},\theta_{t})-u^{\prime}_{t-1}|+w_{2}\cdot|\mathcal{L}(x,\theta_{t})-u_{t-1}|,\\ s.t.\quad|\mathcal{L}(x,\theta_{t})-u_{t-1}|>\gamma_{t}.\end{split} \tag{12}\] **Batch-based ConvergeSmooth.** Likewise, we can apply the complementary constraint to a data batch, \[\begin{split}\mathcal{L}_{CS}^{B}(t)=w_{1}\cdot|u^{\prime}_{B}-u^{\prime}_{t-1}|+w_{2}\cdot|u_{B}-u_{t-1}|,\\ s.t.\quad|u_{B}-u_{t-1}|>\gamma_{t},\end{split} \tag{13}\] where \(u^{\prime}_{B}=\mathbb{E}_{(x,y)\sim B}\mathcal{L}(x^{\prime}_{t},\theta_{t})\) and \(u_{B}=\mathbb{E}_{(x,y)\sim B}\mathcal{L}(x,\theta_{t})\) are averaged over the current batch. **Weight centralization.** To mitigate the problem of manual parameter tuning, we introduce the weight centralization without requiring extra hyperparameters other than the coefficient \(w_{3}\). It is motivated by the observation that \(|\theta_{t}-\theta_{t-1}|=0\Rightarrow|\mathcal{L}(x,\theta_{t})-\mathcal{L}(x,\theta_{t-1})|=0\): \[\mathcal{L}_{CS}^{W}(t)=w_{3}\cdot||\theta_{t}-\frac{1}{len(\phi)}\cdot\sum_{j\in\phi}\theta_{j}||_{p}, \tag{14}\] where \(\|\cdot\|_{p}\) denotes the p-norm function (_p_ = 2), and \(\phi\) denotes a set of previous model weights. The model weights \(\theta_{t}\) are restricted to the center \(\frac{1}{len(\phi)}\cdot\sum_{j\in\phi}\theta_{j}\). The reasons behind Eq. (14) are: 1) The initial training process is stable, as indicated in Fig. 1; 2) Models at different training epochs tend to have similar weights after convergence [37]. 
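Before moving to the experiments, we summarize how Eqs. (11), (13) and (14) fit together in code. The following PyTorch sketch is our reading of the method, with our own names and simplified bookkeeping (the epoch means \(u_{t-1}\), \(u^{\prime}_{t-1}\) and \(u_{t-2}\) are kept as plain floats refreshed at the end of each epoch); it is an illustration, not the authors' reference implementation:

```python
import torch

def gamma_t(u_prev, u_prev2, gamma_min, gamma_max):
    """Dynamic convergence stride, Eq. (11): clamp d_{t-1} = |u_{t-1} - u_{t-2}|."""
    return min(max(abs(u_prev - u_prev2), gamma_min), gamma_max)

def batch_converge_smooth(loss_adv, loss_ben, u_adv_prev, u_ben_prev,
                          gamma, w1, w2):
    """Batch-based ConvergeSmooth, Eq. (13): the penalty is active only for
    batches whose mean benign loss drifts from u_{t-1} by more than gamma."""
    if abs(loss_ben.item() - u_ben_prev) <= gamma:  # condition C not triggered
        return loss_adv.new_zeros(())
    return (w1 * (loss_adv - u_adv_prev).abs()
            + w2 * (loss_ben - u_ben_prev).abs())

def weight_centralization(model, prev_state_dicts, w3, p=2):
    """Weight centralization, Eq. (14): p-norm distance from theta_t to the
    average of the model weights stored from previous epochs (the set phi)."""
    diffs = []
    for name, param in model.named_parameters():
        center = torch.stack([sd[name] for sd in prev_state_dicts]).mean(0)
        diffs.append((param - center).reshape(-1))
    return w3 * torch.cat(diffs).norm(p)
```

Per Eq. (7), whichever regularizer is chosen is simply added to the adversarial classification loss of the current batch before the backward pass.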
## 4 Experimental Results ### Experimental settings **Details.** To demonstrate the effectiveness of the proposed method, we conduct comprehensive experiments on several benchmark datasets, _i.e_. CIFAR-10 [20], CIFAR-100 [20], and Tiny ImageNet [7]. Following previous works [40, 23, 49, 42], we adopt ResNet18 [13] as the backbone on CIFAR-10 and CIFAR-100, and choose PreActResNet18 [14] on Tiny ImageNet. In all experiments, models are optimized using the SGD optimizer with a batch size of 128, weight decay of 5e-4, and momentum of 0.9. The initial learning rates on CIFAR-10, CIFAR-100, and Tiny ImageNet are set as 0.1, 0.1, and 0.01, respectively. Then, we optimize models for a total of 110 training epochs and decay the learning rate at the 100\({}_{\text{th}}\) and 105\({}_{\text{th}}\) epochs by a factor of 0.1. We apply the proposed ConvergeSmooth in combination with two attack initialization methods on CIFAR-10 and CIFAR-100, FGSM-RS [40] and FGSM-MEP [42]. Additionally, for Tiny ImageNet, we use FGSM-BP [42] as the initialization method. As mentioned in [42], FGSM-MEP requires considerable memory to store the previous adversarial perturbations, which limits its application on large datasets. Regarding the hyperparameters, we set \(\gamma_{max}\) in Eq. (11) as 0.03, 0.06, and 0.03 for CIFAR-10, CIFAR-100, and Tiny ImageNet, respectively. \(\gamma_{max}\) = 1.5 \(\cdot\)\(\gamma_{min}\) and \(\xi\) = 16/255. Specific details of hyperparameter settings for \(w_{1\sim 3}\) are given in the supplement. All experiments are conducted on a single GeForce RTX 3090 GPU. **Baselines.** We include advanced FAT methods as baselines, namely FGSM-RS [40], GradAlign [2], ZeroGrad [10], NuAT [36], ATAS [16], and FGSM-MEP [42]. **Evaluation metrics.** We follow [40, 16] and adopt FGSM [11], PGD-10 [25], PGD-20 [25], PGD-50 [25], C\(\&\)W [3], APGD-CE [6] and Autoattack (AA) [6] to evaluate the adversarial robustness of models. PGD-n represents the PGD attack with n iterations. Autoattack combines APGD-CE and APGD-T (targeted APGD) as well as two complementary attacks (FAB [9] and Square attack [26]). The training easily exhibits fluctuations, and thus individual results cannot objectively reflect the performance of methods. For each method, we repeat the training process three times and report the average evaluation results of the best model across the three runs (_mbest_). Additionally, the supplement includes the best results (_best_) and the average evaluation of the final model from the three runs (_mfinal_). In the following section, 'W-\({}^{*}\)', 'E-\({}^{*}\)', and 'B-\({}^{*}\)' mean 'weight centralization', 'example-based ConvergeSmooth', and 'batch-based ConvergeSmooth', respectively. ### Significance of the Results Before evaluating the adversarial robustness of trained models, we demonstrate the significance of the results [1]. Specifically, we show that adversarial attacks with \(\xi\) = 16/255 rarely change the true label of the input. 1) We generate adversarial examples for the FGSM-RS, FGSM-MEP, and our proposed methods on CIFAR10 and CIFAR100 datasets with \(\xi\)=16/255 using the non-targeted attacks such as FGSM, PGD, C\(\&\)W, APGD-CE, and the targeted attack APGD-T. The majority of the generated examples retain their true labels; 2) Tab. 1 examines the robustness of models against an ensemble of the Square (SQ) [26] and Ray-S [5] attacks, as these attacks generate strong oracle-invariant examples [1]; 3) We generate adversarial training data with \(\xi\) = 16/255 and test the robustness on various levels of perturbation. Tab. 2 provides comparative experiments between our proposed method and other AT techniques, including OAAT [1], ExAT [30], ATES [4] and AWP [41]. It is important to note that we re-implement these AT methods and apply them in the context of FAT. Overall, our methods achieve optimal performance on both oracle-invariant attacks and classical evaluation attacks. 
Namely, the classical evaluation attacks used at \(\xi\) = 8/255 remain reliable and significant even when the value of \(\xi\) is increased to 16/255. \begin{table} \begin{tabular}{c c|c c c c} CIFAR10 & \(\xi\) & FGSM-RS & FGSM-MEP & B-RS & B-MEP \\ \hline \hline SQ\(\uparrow\) & 16/255 & 19.74 & 22.15 & 28.83 & **29.97** \\ SQ+Rays\(\uparrow\) & & 18.72 & 20.96 & 27.08 & **28.45** \\ \end{tabular} \end{table} Table 1: Quantitative results of various FAT methods against the Square and Ray-S attacks on CIFAR-10 with the backbone ResNet18. The number in bold and \(\bullet\) indicate the best and second-best results, respectively. \begin{table} \begin{tabular}{c|c c c c c c} CIFAR10 & Clean & \(\frac{8\sharp}{255}\) & \(\frac{10\sharp}{255}\) & \(\frac{12\sharp}{255}\) & \(\frac{16\sharp}{255}\) & SQ+RayS & Time \\ \hline OAAT[1] & 71.30 & 44.29 & 38.56 & 33.68 & 28.48 & 21.06 & 183 \\ AWP[41] & 78.86 & 35.91 & 32.70 & 29.89 & 26.95 & 19.36 & 134 \\ ATES[4] & 74.52 & 43.61 & 38.94 & 35.31 & 30.89 & 22.16 & - \\ ExAT[30] & 80.78 & 49.21 & 42.15 & 36.98 & 31.26 & 23.08 & 70 \\ E-MEP & 69.84 & **53.54** & **49.26** & **45.89** & **40.78** & 23.78 & 104 \\ B-MEP & 63.84 & 50.31 & 46.77 & 43.97 & 40.13 & **28.45** & 101 \\ \end{tabular} \end{table} Table 2: Quantitative results of methods against various levels of perturbation with ResNet18 as the backbone and CIFAR-10 as the dataset. \(\sharp\) denotes the PGD-10 attack. Figure 2: The training process of various FAT methods (PGD-10, \(\xi\) = 16/255) on CIFAR-10. Notably, our methods are plugged into FGSM-RS and FGSM-MEP, which are fitted to the distribution of adversarial examples. Instead, methods such as OAAT are fitted to the distribution of benign samples and perform better on clean accuracy. Therefore, we use the settings as in OAAT and then apply OAAT and our B-OAAT to the FAT task. B-OAAT realizes 74.12% clean accuracy (+2.82% over OAAT) and 25.39% on SQ+RayS. ### Results on CIFAR-10 We conduct our initial experiments on CIFAR-10 using the ResNet18 backbone. The default \(\xi\) is set to 16/255. Tab. 3 presents the main comparisons. The observations are as follows: (1) Compared with previous FAT methods, the proposed approaches achieve optimal adversarial robustness against different attacks, _e.g_. B-RS outperforms all RS-based methods and B-MEP is superior to all other methods. Meanwhile, our methods exhibit similar performance to prior work in terms of clean classification accuracy. (2) B-MEP realizes adversarial robustness approaching PGD-AT, _e.g_. B-MEP achieves 32.95% robustness against the PGD-50 attack, only 0.97% lower than PGD-AT; (3) As for time consumption, B-RS takes less time (75 minutes) than GradAlign (135 minutes) and NuAT (101 minutes). This is because ConvergeSmooth only requires the additional regularization when \(|u_{B}-u_{t-1}|{>}\gamma_{t}\) instead of adding constraints on all iterations. In addition, B-MEP (102 minutes) incurs slightly more computational cost than FGSM-MEP (92 minutes) but much less than PGD-AT (370 minutes). PGD-AT takes significantly longer than FAT works; (4) B-RS and B-MEP successfully prevent catastrophic overfitting, _e.g_. _mfinal_ (in the supplement) shows only a slight reduction compared to _mbest_ in terms of adversarial robustness. Fig. 
2 visualizes the training process of various FAT approaches. Compared with other methods, the proposed B-RS achieves optimal training stability and adversarial robustness, _e.g_. the classification accuracy of our method fluctuates only slightly during the three adversarial training sessions. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Budgets (\(\xi\)) & Methods & Clean\(\uparrow\) & FGSM\(\uparrow\) & PGD-10\(\uparrow\) & PGD-20\(\uparrow\) & PGD-50\(\uparrow\) & C\&W\(\uparrow\) & APGD-CE\(\uparrow\) & AA \(\uparrow\) & Time (min)\(\downarrow\) \\ \hline \multirow{4}{*}{12/255} & GradAlign [2] & 66.52 & 43.06 & 31.66 & 27.95 & 26.04 & 27.05 & 25.99 & 21.65 & 135 \\ & NuAT [36] & 72.79 & 51.80 & 41.75 & 38.60 & 37.54 & 35.99 & 36.71 & 32.01 & 101 \\ & FGSM-MEP [42] & **74.71** & 52.05 & 38.45 & 36.35 & 35.52 & 33.05 & 33.4 & 27.23 & 92 \\ & B-MEP (Ours) & 72.63 & **54.40** & **45.23** & **42.85** & **42.14** & **36.81** & **41.62** & **33.26** & 101 \\ \hline \multirow{4}{*}{10/255} & GradAlign [2] & 83.10 & 55.23 & 36.58 & 30.47 & 28.64 & 31.01 & 26.51 & 23.84 & 135 \\ & NuAT [36] & 75.82 & 55.94 & 45.54 & 43.92 & 43.42 & **41.39** & 42.91 & 38.85 & 101 \\ \cline{1-1} & FGSM-MEP [42] & **83.43** & **59.51** & 42.76 & 39.28 & 37.33 & 37.26 & 35.91 & 32.52 & 92 \\ \cline{1-1} & B-MEP (Ours) & 75.96 & 57.26 & **47.25** & **45.98** & **45.66** & 41.00 & **45.26** & **39.20** & 101 \\ \hline \hline \end{tabular} \end{table} Table 4: Quantitative results of FAT methods on various \(\xi\) with ResNet18 as the backbone and CIFAR-10 as the dataset. Models are trained and evaluated under the same \(\xi\). \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Methods & Clean\(\uparrow\) & FGSM\(\uparrow\) & PGD-10\(\uparrow\) & PGD-20\(\uparrow\) & PGD-50\(\uparrow\) & C\&W\(\uparrow\) & APGD-CE\(\uparrow\) & AA \(\uparrow\) & Time (min)\(\downarrow\) \\ \hline PGD-AT [29] & 65.30 & 46.31 & 40.73 & 35.08 & 33.92 & 30.84 & 33.08 & 26.29 & 370 \\ \hline \hline FGSM-RS [40] & 50.12 & 38.28 & 26.13 & 21.55 & 20.43 & 18.96 & 19.40 & 14.84 & 67 \\ GradAlign [2] & 58.17 & 39.87 & 33.12 & 26.81 & 24.99 & 22.63 & 23.98 & 17.02 & 135 \\ ZeroGrad [10] & 74.16 & 43.96 & 32.67 & 21.98 & 18.37 & 20.76 & 16.44 & 12.07 & 67 \\ W-RS & 70.66 & 45.51 & 36.50 & 27.51 & 24.75 & 23.97 & 23.38 & 17.14 & 71 \\ Ours & E-RS & 62.38 & 42.07 & 36.78 & **30.80** & **28.90** & 23.64 & **27.71** & 17.55 & 77 \\ & B-RS & 65.42 & **45.94** & **37.54** & 30.01 & 27.85 & **26.28** & 26.52 & **19.43** & 75 \\ \hline NuAT [36] & 74.62 & 44.92 & 35.22 & 25.93 & 23.67 & 24.07 & 22.37 & 18.43 & 101 \\ ATAS\({}^{*}\)[16] & 64.11 & - & 31.39 & - & 28.15 & - & - & 21.09 & - \\ \hline FGSM-MEP [42] & 53.32 & 36.24 & 31.85 & 27.28 & 26.56 & 22.10 & 26.08 & 18.98 & 92 \\ Ours & E-MEP & 69.84 & **47.18** & **40.90** & 34.17 & 32.72 & 22.69 & 31.12 & 17.74 & 104 \\ & B-MEP & 63.84 & 45.48 & 40.13 & **34.21** & **32.95** & **28.19** & **32.04** & **23.68** & 101 \\ \hline \hline \end{tabular} \end{table} Table 3: Quantitative results of various methods (\(\xi\) = 16/255) on the CIFAR-10 with ResNet18 as the backbone. ‘ATAS\({}^{*}\)’ is the result of ATAS in [16], which is superior to our reproduction. We train each method three times. The results represent the evaluation average between the best models of three training processes. Weight centralization and regularization in MEP do not work together. 
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline \(\xi\) = 16/255 & CIFAR10 & CIFAR100 & Time \\ \hline Methods & Clean & AA & Clean & AA & hours \\ \hline GradAlign [2] & 55.58 & 13.45 & 35.93 & 7.13 & 8.79 \\ NuAT [36] & 74.25 & 13.19 & 20.48 & 7.30 & 7.66 \\ FGSM-MEP [42] & 65.56 & 15.43 & 20.69 & 6.73 & 6.15 \\ B-MEP (Ours) & **69.94** & **24.27** & **48.64** & **12.05** & 6.72 \\ \hline \hline \end{tabular} \end{table} Table 5: The adversarial accuracy of various FAT methods with WideResNet34-10 as the backbone. **Various budgets.** Similar experiments are performed for budgets 10/255 and 12/255. Details are given in Tab. 4. It is evident that the proposed method achieves optimal adversarial robustness across various perturbation budgets. **Various networks.** We then adopt WideResNet34 with a width factor of 10 [47] as the backbone. The results are given in Tab. 5. Our proposed B-MEP also prevents wider architectures from catastrophic overfitting. ### Results on CIFAR-100 The results on CIFAR-100 with the backbone ResNet-18 are presented in Tab. 6. (1) FAT methods show similar training cost on CIFAR-10 and CIFAR-100, as the two datasets contain the same number and size of images; (2) All proposed methods can realize stable adversarial training, as evidenced by the comparable results of _mbest_ and _mfinal_ (in the supplement); (3) Among the RS-based FAT works, B-RS and E-RS achieve the best and second-best adversarial robustness against different attacks; (4) B-MEP outperforms all other FAT methods; (5) Although B-MEP performs slightly worse than PGD-AT, its training process is approximately 3.5 times faster than this competitor. ### Results on Tiny ImageNet We conduct experiments on the Tiny ImageNet using the PreActResNet18 backbone to demonstrate the scalability of the proposed method to large datasets. The results are given in Tab. 7. B-BP achieves the highest robustness among the FAT methods and robustness comparable to PGD-AT. As for training efficiency, B-BP (15.4 hours) requires slightly more computational cost than FGSM-BP (14.3 hours), but significantly less time than PGD-AT (67.2 hours). ### Ablation studies In the following, B-RS is selected as the backbone. Figs. 3 and 4 present ablation studies on \(\gamma_{max}\) and \(\gamma_{max}/\gamma_{min}\), respectively. In Fig. 3, the classification accuracy of both benign and adversarial examples increases with \(\gamma_{max}\). However, the model trained with a large \(\gamma_{max}\) (\(>\)0.09) suffers from catastrophic overfitting. Fig. 4 shows that the model with \(\gamma_{max}/\gamma_{min}\) = 1.5 achieves optimal robustness. \(\gamma_{max}/\gamma_{min}\) = 1 denotes the static convergence stride. All models can be trained stably when \(\gamma_{max}\) = 0.06. As reported in Tab. 8, all settings of \(w_{1}\) accomplish stable FAT. According to Eqs. (7) and (8), if \(w_{1}\)\(\neq\)0, for the data \(x\) which satisfies \(\mathcal{C}(x)\) = 1, \(\mathcal{L}(x_{t}^{\prime},\theta_{t})\) is assigned a higher weight (1+\(w_{1}\)) or a lower weight (1-\(w_{1}\)), causing the model to excessively prioritize or neglect this portion of the data. This may weaken the performance of models. However, for Tiny ImageNet, models suffer catastrophic overfitting at \(w_{1}\) = 0. 
Then, we gradually increase \(w_{1}\) from 0.3 with a stride of 0.2 until the FAT process becomes stable. In addition, our weight centralization method only introduces a balance coefficient \(w_{3}\). \(w_{3}\) is set to 0.1 since catastrophic overfitting happens when \(w_{3}\) = 0 and classification performance is degraded when \(w_{3}\) = 0.2. ## 5 Conclusion In this paper, we tackle the issue of catastrophic overfitting by focusing on the convergence stability of loss functions. Through experimental analysis, we find that this issue is accompanied by abnormal convergence behavior of losses. Motivated by this phenomenon, we propose complementary constraints, namely E-ConvergeSmooth and B-ConvergeSmooth, based on the adversarial training constraint. To alleviate the burden of parameter tuning, weight centralization is designed utilizing priors from previous epochs. Extensive experiments on different network architectures and datasets show that the proposed methods effectively solve catastrophic overfitting and exhibit superior robustness against various adversarial attacks. **Acknowledgments.** This work was supported by the National Key R&D Program of China #2018AAA0102000 and the National Natural Science Foundation of China #62276046 and #U19B2039. Figure 4: Ablation study of \(\gamma_{max}/\gamma_{min}\) (\(\gamma_{max}\) = 0.06). We provide the classification accuracy of models (ResNet-18) to benign and adversarial examples (PGD-10, \(\xi\) = 16/255) on CIFAR-100. Figure 3: Ablation study of \(\gamma_{max}\) (\(\gamma_{max}/\gamma_{min}\) = 1.5). We provide the classification accuracy of models (ResNet-18) to benign and adversarial examples (PGD-10, \(\xi\) = 16/255) on CIFAR-100. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Dataset & \(w_{1}\) & Clean\(\uparrow\) & FGSM\(\uparrow\) & PGD-10\(\uparrow\) & PGD-20\(\uparrow\) & PGD-50\(\uparrow\) & C\(\&\)W\(\uparrow\) & APGD-CE\(\uparrow\) & AA \(\uparrow\) & Stability \\ \hline \multirow{8}{*}{CIFAR100} & 0.0 & **48.13** & **32.13** & **24.25** & **22.67** & **22.21** & **19.04** & **21.29** & **15.28** & \(\star\star\star\) \\ & 0.3 & 46.08 & 30.58 & 23.06 & 21.62 & 21.22 & 17.79 & 20.44 & 15.22 & \(\star\star\star\) \\ \cline{1-1} & 0.5 & 44.04 & 29.96 & 22.64 & 21.34 & 20.99 & 17.55 & 19.16 & 13.84 & \(\star\star\star\) \\ \cline{1-1} & 0.7 & 43.07 & 28.61 & 22.01 & 20.84 & 20.32 & 16.87 & 18.98 & 13.47 & \(\star\star\star\) \\ \cline{1-1} & 0.9 & 40.45 & 21.05 & 21.62 & 20.54 & 20.15 & 16.33 & 18.91 & 13.53 & \(\star\star\star\) \\ \hline \hline \end{tabular} \end{table} Table 8: Quantitative results of the proposed method on various \(w_{1}\) with ResNet18 as the backbone and the perturbation budget 12/255. ‘Stability’ represents the number of times the model is stable in three training repetitions. \(\gamma_{max}\) and \(w_{2}\) are set to 0.03 and 1, respectively. 
\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Dataset & \(w_{2}\) & Clean\(\uparrow\) & FGSM\(\uparrow\) & PGD-10\(\uparrow\) & PGD-20\(\uparrow\) & PGD-50\(\uparrow\) & C\(\&\)W\(\uparrow\) & APGD-CE\(\uparrow\) & AA \(\uparrow\) & Stability \\ \hline \multirow{8}{*}{CIFAR100} & 0.5 & 36.79 & 23.97 & 18.60 & 17.51 & 17.23 & 14.65 & 16.28 & 12.23 & - \\ & 0.7 & 37.31 & 24.36 & 18.56 & 17.37 & 17.26 & 14.46 & 15.96 & 11.54 & - \\ \cline{1-1} & 1.0 & **48.13** & **32.13** & **24.25** & **22.67** & **22.21** & **19.04** & **21.29** & **15.28** & \(\star\star\star\) \\ \cline{1-1} & 1.3 & 44.84 & 30.42 & 23.31 & 21.72 & 21.31 & 18.36 & 19.64 & 14.72 & \(\star\star\star\) \\ \hline \hline \end{tabular} \end{table} Table 9: Quantitative results on various \(w_{2}\) (\(\gamma_{max}\) = 0.03 and \(w_{1}\) = 0) with ResNet18 as the backbone and \(\xi\) = 12/255.
2306.12323
Robustness and uniqueness of equilibrium states for certain partially hyperbolic systems
We prove that if $f$ is a $C^{1+}$ partially hyperbolic diffeomorphism satisfying certain conditions then there is a $C^1$-open neighborhood $\mathcal{A}$ of $f$ so that every $g\in \mathcal{A}\cap \operatorname{Diff}^{1+}(M)$ has a unique equilibrium state.
Juan Carlos Mongez, Maria José Pacifico
2023-06-21T15:06:37Z
http://arxiv.org/abs/2306.12323v1
# Robustness and uniqueness of equilibrium states for certain partially hyperbolic systems ###### Abstract. We prove that if \(f\) is a \(C^{1+}\) partially hyperbolic diffeomorphism satisfying certain conditions then there is a \(C^{1}\)-open neighborhood \(\mathcal{A}\) of \(f\) so that every \(g\in\mathcal{A}\cap\mathrm{Diff}^{1+}(M)\) has a unique equilibrium state. MJP and JCM were partially supported by CAPES-Finance Code 001. JCM was partially supported by FAPERJ Grant (Bolsa Nota 10) No. E-26/202.301/2022(276542). MJP was partially supported by FAPERJ Grant CNE No. E-26/202.850/2018(239069), Grant CNPq-Brazil No. 307776/2019-0 and PRONEX Dynamical Systems E-26/010.001252/2016. The topological pressure \(P(f,\phi)\) of a continuous potential \(\phi\) is linked, through the Variational Principle, to the metric entropy of the system, which measures the complexity of the system from the measure-theoretical point of view. In other words, the Variational Principle provides a way to calculate the topological pressure of a dynamical system by considering its invariant measures and their corresponding entropies. To be more precise, denoting by \(\mathcal{M}_{f}\) the set of \(f\)-invariant probability measures, the variational principle establishes the following relation: \[P(f,\phi)=\sup\{h_{\mu}(f)+\int\phi d\mu:\mu\in\mathcal{M}_{f}\} \tag{1.1}\] An invariant probability \(\mu\) is an _equilibrium state_ (for short, EE) for the potential \(\phi\) if it achieves the supremum in the above equation, that is, if \(h_{\mu}(f)+\int\phi d\mu=P(f,\phi)\). In particular, since the topological pressure coincides with the topological entropy when the potential \(\phi\) is identically zero, the variational principle establishes \[h_{top}(f)=\sup\{h_{\mu}(f):\mu\in\mathcal{M}_{f}\}. \tag{1.2}\] We say that a measure \(\mu\) achieving the supremum in equation (1.2) is a _measure of maximal entropy_ (for short, MME). Maximal entropy measures and equilibrium states are key concepts in thermodynamic formalism and dynamical systems theory, since they provide a way to quantify the degree of chaos in a system and predict its long-term behavior. Thus, measures of maximal entropy and equilibrium states are essential tools to understand the behavior of complex systems and predict their evolution over time. In particular, a maximal entropy measure has a deep connection with statistical physics and stochastic processes (such as Markov chains) and is closely related to various geometric properties such as the growth rate of closed orbits, as observed by Margulis in his pioneering work [13]. The applicability of these tools to deduce the asymptotic behavior of the system depends on the number of such measures; the optimal case is when there is only one. It is not difficult to construct examples of partially hyperbolic systems with positive and finite entropy which have an infinite number of measures of maximal entropy. In these cases, studying the system through its measures of maximal entropy becomes much more difficult. Therefore, determining whether a system has a unique measure of maximal entropy has been one of the challenges in dynamical systems theory. The study of equilibrium states was initiated by Sinai, Ruelle and Bowen in the 1970s. Sinai was a pioneer in investigating the existence and finiteness of equilibrium states for Anosov diffeomorphisms for Hölder continuous potentials [10]. Later, Ruelle and Bowen expanded this approach to include uniformly hyperbolic (Axiom A) systems [12, 14, 15]. 
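To fix ideas, here is the simplest situation in which the supremum in (1.1) can be identified by hand; this classical computation is our illustration and is not taken from the references. Let \(f\) be the full shift on \(\{0,1\}^{\mathbb{Z}}\) and let \(\phi(x)=\varphi(x_{0})\) depend only on the zeroth coordinate. Then \[P(f,\phi)=\log\bigl(e^{\varphi(0)}+e^{\varphi(1)}\bigr),\] and the supremum is attained exactly at the Bernoulli measure that assigns the symbol \(i\) probability \(e^{\varphi(i)}/(e^{\varphi(0)}+e^{\varphi(1)})\); in particular, the equilibrium state is unique, and for \(\phi=0\) it reduces to the \((\tfrac{1}{2},\tfrac{1}{2})\)-Bernoulli measure of maximal entropy \(\log 2\).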
Bowen, in [12], provides a criterion to determine whether a dynamical system has a unique equilibrium state for a given potential function: for homeomorphisms \(f\) defined on a compact metric space \(\mathbb{X}\), if \((\mathbb{X},f)\) is an expansive system with specification and the potential satisfies a certain regularity condition (known as the Bowen property), then the system has a unique equilibrium state. This criterion is easily applied to Axiom A diffeomorphisms but it is not always applicable to partially hyperbolic systems [17]. If an Anosov diffeomorphism has a unique measure of maximal entropy, then it is possible to prove that diffeomorphisms close to it also have a unique measure of maximal entropy. However, this is not always possible for partially hyperbolic systems, for instance, for a skew product of an Anosov diffeomorphism by an irrational rotation. Climenhaga and Thompson improved the Bowen criterion in [18] and showed that the same conclusion holds using weaker non-uniform versions of specification, expansivity, and the Bowen property over a significant set of orbit segments. This criterion applies to certain systems like the Bonatti-Viana family of diffeomorphisms in [10] and Mañé's derived-from-Anosov diffeomorphisms on \(\mathbb{T}^{3}\) in [10]. This criterion does not apply to flows with singularities, as happens for Lorenz-type attractors. To address this issue, Pacifico et al. developed an improved version of the Climenhaga-Thompson (CT) criterion [14], and using it they were able to prove that Lorenz-like attractors in any dimension have a unique maximal entropy measure [14]. Determining which partially hyperbolic systems have a unique measure of maximal entropy is a challenging task. However, using their criterion, Climenhaga and Thompson were able to prove that partially hyperbolic systems with one-dimensional center such that every MME has negative central Lyapunov exponent and the unstable foliation is minimal have a unique measure of maximal entropy [10]. Since the above assumption about minimality of the unstable foliation is not open, it does not follow directly from this work that the uniqueness of the maximal entropy measure is a robust property. Here we address the problem of establishing conditions under which the existence and uniqueness of equilibrium states for partially hyperbolic systems with a one-dimensional center is a robust property. To do so, we explore the properties of the notions of unstable and stable entropy, \(h^{u}(f)\) and \(h^{s}(f)\) respectively, introduced in [11, 12], see Definition 2.10. Replacing the aforementioned assumption about the central Lyapunov exponent by \(h^{u}(f)-h^{s}(f)>0\), we were able to prove the following result: **Theorem A**.: _Let \(f:M\to M\) be a \(C^{1+}\) partially hyperbolic diffeomorphism of a compact manifold \(M\) with \(TM=E^{u}\oplus E^{c}\oplus E^{s}\) and \(\phi:M\to\mathbb{R}\) a Hölder continuous potential. Assume that \(\dim E^{c}=1\) and the unstable foliation \(\mathcal{F}^{u}(f)\) is minimal. If \(h^{u}(f)-h^{s}(f)>\sup\phi-\inf\phi\geq 0\) then there exists a \(C^{1}\) neighborhood \(\mathcal{U}\) of \(f\) so that \((g,\phi)\) has a unique equilibrium state for every \(C^{1+}\) diffeomorphism \(g\in\mathcal{U}\)._ _Organization of the paper._ In Section 2 we provide the readers with preliminaries on a criterion for uniqueness of measures of maximal entropy and set the notations and definitions, which are taken from [10, 14]. 
In particular, in this section, we define the stable \(h^{s}(f)\) and unstable \(h^{u}(f)\) entropies for partially hyperbolic systems, taken from [11, 12]. In Section 3 we establish the consequences of \(h^{u}(f)-h^{s}(f)>0\). Finally, in Section 4 we provide the proof of Theorem A, showing that every \(C^{1+}\) diffeomorphism \(g\) in a \(C^{1}\) neighborhood of a partially hyperbolic diffeomorphism \(f\) with a one-dimensional central bundle, such that the unstable foliation \(\mathcal{F}^{u}\) is minimal and \(h^{u}(f)-h^{s}(f)>0\), satisfies all the requirements of Theorem 2.5, finishing the proof. ## 2. Preliminaries ### A criterion for the uniqueness of MME In this section, we provide a criterion developed in [14] to obtain the uniqueness of MME. For the convenience of those who are familiar with [10] or [14], the notations here are the same. So it is safe to skip most of this Section and move on to Section 2.2. #### 2.1.1. Topological pressure Let \(M\) be a compact metric space and \(f:M\to M\) a homeomorphism. The \(n\)-th Bowen metric associated with \(f\) is defined by \[d_{n}(x,y):=\sup\{d(f^{k}(x),f^{k}(y))\mid 0\leq k<n\}.\] The \(n\)-Bowen ball with radius \(\varepsilon>0\) centred at \(x\in M\) is given by \[B_{n}(x,\varepsilon):=\{y\in M:d_{n}(x,y)<\varepsilon\}.\] Given \(\delta>0\) and \(n\in\mathbb{N}\), a set \(E\subset M\) is \((n,\delta)\)-separated if for every pair of distinct points \(x,y\in E\) it holds that \(d_{n}(x,y)>\delta\). Given a continuous potential \(\phi:M\to\mathbb{R}\), write \(\Phi_{\varepsilon}(x,n)=\sup_{y\in B_{n}(x,\varepsilon)}\sum_{k=0}^{n-1}\phi(f^{k}(y))\). In particular, \(\Phi_{0}(x,n)=\sum_{k=0}^{n-1}\phi(f^{k}(x))\). We identify \(M\times\mathbb{N}\) with the space of finite orbit segments by identifying \((x,n)\) with \(\{x,f(x),\cdots,f^{n-1}(x)\}\). Given \(\mathcal{C}\subset M\times\mathbb{N}\) and \(n\in\mathbb{N}\) we write \(\mathcal{C}_{n}:=\{x\in M:(x,n)\in\mathcal{C}\}\). Fixing \(\varepsilon,\delta>0\) and \(n\in\mathbb{N}\) we consider the _partition function_ \[\Lambda(\mathcal{C},f,\phi,\delta,\varepsilon,n):=\sup\left\{\sum_{x\in E}e^{\Phi_{\varepsilon}(x,n)}:E\subset\mathcal{C}_{n}\text{ is }(n,\delta)\text{-separated }\right\},\] and when \(\varepsilon=0\) we write \(\Lambda(\mathcal{C},f,\phi,\delta,n)\) instead of \(\Lambda(\mathcal{C},f,\phi,\delta,0,n)\). Note that \(\Lambda\) is monotonic in both \(\delta\) and \(\varepsilon\), although in different directions: if \(\delta_{1}<\delta_{2}\) and \(\varepsilon_{1}<\varepsilon_{2}\) then \[\Lambda(\mathcal{C},f,\phi,\delta_{1},\varepsilon,n)\geq\Lambda(\mathcal{C},f,\phi,\delta_{2},\varepsilon,n)\text{ and }\Lambda(\mathcal{C},f,\phi,\delta,\varepsilon_{1},n)\leq\Lambda(\mathcal{C},f,\phi,\delta,\varepsilon_{2},n).\] The pressure of \(\phi\) on \(\mathcal{C}\) at scales \(\delta,\varepsilon\) is \[P(\mathcal{C},f,\phi,\delta,\varepsilon):=\limsup_{n\to\infty}\frac{1}{n}\log\Lambda(\mathcal{C},f,\phi,\delta,\varepsilon,n).\] Note also that the monotonicity of \(\Lambda\) is naturally translated to \(P\). When \(\varepsilon=0\) we simplify the notation and write \(P(\mathcal{C},f,\phi,\delta)\) instead of \(P(\mathcal{C},f,\phi,\delta,0)\), and the pressure of \(\phi\) on \(\mathcal{C}\) is \[P(\mathcal{C},f,\phi):=\lim_{\delta\to 0}P(\mathcal{C},f,\phi,\delta).\] Given \(C\subset M\), we define \(P(C,f,\phi,\delta,\varepsilon):=P(C\times\mathbb{N},f,\phi,\delta,\varepsilon)\); observe that \(P(C,f,\phi)\) agrees with the usual notion of topological pressure on \(C\). 
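As a sanity check on these definitions, consider the following elementary computation, which is our illustration and is not taken from [14]. Let \(M=\{0,1\}^{\mathbb{N}}\) be the full one-sided 2-shift (only forward iterates enter the definitions above) with the metric \(d(x,y)=2^{-\min\{k\,:\,x_{k}\neq y_{k}\}}\), and take \(\phi=0\). For \(\delta=2^{-m}\), two points are \((n,\delta)\)-separated exactly when they first differ at a coordinate \(k<n+m-1\), so a maximal \((n,\delta)\)-separated set contains one point for each word of length \(n+m-1\), and \[\Lambda(M,f,0,\delta,n)=2^{\,n+m-1},\qquad P(M,f,0,\delta)=\limsup_{n\to\infty}\frac{1}{n}\log 2^{\,n+m-1}=\log 2\] for every \(\delta\), so the limit as \(\delta\to 0\) is \(\log 2\), as expected.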
When \(\phi=0\), we obtain the topological entropy of \(\mathcal{C}\), that is, \[h_{top}(\mathcal{C},f,\delta):=P(\mathcal{C},f,0,\delta)\quad\text{and}\quad h_{top}(\mathcal{C},f)=\lim_{\delta\to 0}h_{top}(\mathcal{C},f,\delta).\] Finally, we define \[P(f,\phi):=P(M,f,\phi)\quad\text{and}\quad h_{top}(f):=h_{top}(M,f),\] which are the usual notions of pressure and entropy. Write \(\mathcal{M}(M)\) for the set of Borel probability measures on \(M\), \(\mathcal{M}_{f}(M)\) for the set of \(f\)-invariant Borel probability measures and \(\mathcal{M}_{f}^{e}(M)\) for the set of ergodic measures in \(\mathcal{M}_{f}(M)\). The variational principle states that \[P(f,\phi)=\sup_{\mu\in\mathcal{M}_{f}(M)}\left\{h_{\mu}(f)+\int\phi d\mu\right\}=\sup_{\mu\in\mathcal{M}_{f}^{e}(M)}\left\{h_{\mu}(f)+\int\phi d\mu\right\}.\] A measure achieving the supremum is called an equilibrium state (abbrev. EE). When the potential is zero, the equilibrium state is called a measure of maximal entropy (abbrev. MME). Additionally, for \(\mu\in\mathcal{M}_{f}(M)\) we define \[P_{\mu}(f,\phi):=h_{\mu}(f)+\int\phi d\mu.\] #### 2.1.2. Obstruction to expansivity We start by defining the bi-infinite Bowen ball around \(x\in M\) of size \(\varepsilon>0\) as the set \[\Gamma_{\varepsilon}(x):=\left\{y\in M:d\left(f^{k}x,f^{k}y\right)<\varepsilon\text{ for all }k\in\mathbb{Z}\right\}.\] If there exists \(\varepsilon>0\) for which \(\Gamma_{\varepsilon}(x)=\{x\}\) for all \(x\in M\), we say that \(f\) is expansive. When there exists \(\varepsilon\) such that \(h_{top}(\Gamma_{\varepsilon}(x))=0\) for every \(x\in M\) we say that \(f\) is entropy expansive, \(h\)-expansive for short. The set of non-expansive points at scale \(\varepsilon\) is \[\mathrm{NE}(\varepsilon):=\left\{x\in M:\Gamma_{\varepsilon}(x)\neq\{x\}\right\}.\] An \(f\)-invariant measure \(\mu\) is almost expansive at scale \(\varepsilon\) if \(\mu(\mathrm{NE}(\varepsilon))=0\). **Definition 2.1**.: _Given a potential \(\phi\), the pressure of obstructions to expansivity at scale \(\varepsilon\) is_ \[P^{\perp}_{\exp}(f,\phi,\varepsilon) =\sup_{\mu\in\mathcal{M}^{e}_{f}(M)}\left\{h_{\mu}(f)+\int\phi d\mu:\mu(\mathrm{NE}(\varepsilon))>0\right\}\] \[=\sup_{\mu\in\mathcal{M}^{e}_{f}(M)}\left\{h_{\mu}(f)+\int\phi d\mu:\mu(\mathrm{NE}(\varepsilon))=1\right\}.\] #### 2.1.3. Weak specification The specification property plays a central role in the work of Bowen [1] and Climenhaga-Thompson [16]. In [16] a weak specification definition is introduced for a set of finite orbit segments. **Definition 2.2**.: _We say that \(\mathcal{G}\subset M\times\mathbb{N}\) has weak specification at scale \(\delta>0\) if there exists \(\tau\in\mathbb{N}\) such that for every \(\{(x_{i},n_{i})\}_{i=1}^{k}\subset\mathcal{G}\) there exist a point \(y\) and a sequence of "gluing times" \(\tau_{1},\cdots,\tau_{k-1}\subset\mathbb{N}\) with \(\tau_{i}\leq\tau\) such that, writing \(N_{j}=\sum_{i=1}^{j}n_{i}+\sum_{i=1}^{j-1}\tau_{i}\) and \(N_{0}=\tau_{0}=0\), we have_ \[d_{n_{j}}(f^{N_{j-1}+\tau_{j-1}}(y),x_{j})<\delta\text{ for every }1\leq j\leq k.\] The specification property implies the existence of a point \(y\) whose orbit closely shadows the orbit of \(x_{1}\) for time \(n_{1}\), then transitions to shadow the orbit of \(x_{2}\) for time \(n_{2}\), and so on, with bounded gaps no greater than \(\tau\) between consecutive transitions. 
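On the full shift, Definition 2.2 can be verified by hand, since gluing orbit segments amounts to concatenating words; the following standard illustration is ours and is not taken from the references. With \(M=\{0,1\}^{\mathbb{N}}\) and the metric of the previous example, given \(\delta>0\) choose \(m\) with \(2^{-(m+1)}<\delta\) and set every gluing time \(\tau_{i}=m\). Let \(y\) be the point whose coordinate sequence concatenates the words \(x_{1}[0,n_{1}+m),\,x_{2}[0,n_{2}+m),\ldots\); then \(f^{N_{j-1}+\tau_{j-1}}(y)\) agrees with \(x_{j}\) in its first \(n_{j}+m\) coordinates, so that \[d_{n_{j}}\bigl(f^{N_{j-1}+\tau_{j-1}}(y),x_{j}\bigr)\leq 2^{-(m+1)}<\delta\quad\text{for every }j,\] and therefore \(\mathcal{G}=M\times\mathbb{N}\) has weak specification at every scale \(\delta>0\).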
#### 2.1.4. Bowen's property The bounded distortion property, also referred to as Bowen's property, was initially introduced by Bowen in [1]. **Definition 2.3**.: _Given \(\mathcal{G}\subset M\times\mathbb{N}\), a potential \(\phi\) has the Bowen property on \(\mathcal{G}\) at scale \(\varepsilon>0\) if there exists \(K>0\) so that_ \[\sup\left\{\left|\Phi_{0}(x,n)-\Phi_{0}(y,n)\right|:(x,n)\in\mathcal{G},y\in B_{n}(x,\varepsilon)\right\}\leq K.\] #### 2.1.5. Dynamic Decompositions The most important observation in [16] is that a unique equilibrium state can be obtained if the specification and Bowen property established in [1] are satisfied only on a significant set of orbit segments rather than on the whole space. **Definition 2.4**.: _A decomposition for \((M,f)\) consists of three collections \(\mathcal{P},\mathcal{G},\mathcal{S}\subset M\times(\mathbb{N}\cup\{0\})\) and three functions \(p,g,s:M\times\mathbb{N}\to\mathbb{N}\cup\{0\}\) such that for every \((x,n)\in M\times\mathbb{N}\), the values \(\hat{p}=\hat{p}(x,n),\hat{g}=\hat{g}(x,n)\), and \(\hat{s}=\hat{s}(x,n)\) satisfy \(n=\hat{p}+\hat{g}+\hat{s}\), and_ \[(x,\hat{p})\in\mathcal{P},\quad\left(f^{\hat{p}}(x),\hat{g}\right)\in\mathcal{G},\quad\left(f^{\hat{p}+\hat{g}}(x),\hat{s}\right)\in\mathcal{S}.\] Note that the symbol \((x,0)\) denotes the empty set, and the functions \(\hat{p},\hat{g},\hat{s}\) are permitted to take the value zero. The main result in [11] establishes the existence and uniqueness of equilibrium states for systems with the Bowen property, weak specification, and a small obstruction to expansivity on a specific set of "good" orbits. However, this criterion is not applicable in our specific scenario due to its stringent assumptions on the specification. To overcome this challenge, we will use the improvement of the Climenhaga-Thompson criterion by Pacifico et al. established in [14]. **Theorem 2.5**.: _[_14_, Theorem A]_ _Let \(M\) be a compact metric space and \(f:M\to M\) a Lipschitz homeomorphism. Let \(\phi:M\to\mathbb{R}\) be a continuous potential function. Suppose there are \(\varepsilon,\delta>0\) with \(\varepsilon>L_{f}\delta\), where \(L_{f}>0\) is a constant that depends on \(f\). Suppose that \(P^{\perp}_{\exp}(f,\phi,\varepsilon)<P(f,\phi)\) and that \((M,f)\) admits a decomposition \((\mathcal{P},\mathcal{G},\mathcal{S})\) with the following properties:_ 1. \(\mathcal{G}\) _has (W)-specification at scale_ \(\delta\)_;_ 2. \(\phi\) _has the Bowen property at scale_ \(\varepsilon\) _on_ \(\mathcal{G}\)_;_ 3. \(P(\mathcal{P}\cup\mathcal{S},f,\phi,\delta,\varepsilon)<P(f,\phi)\)_._ _Then there is a unique equilibrium state for \((M,f,\phi)\)._ The constant \(L_{f}\) depends continuously on the Lipschitz constant of \(f\). ### Partially hyperbolic systems The concept of partially hyperbolic systems is a natural generalization of uniform hyperbolicity, and the research in this area dates back to the early 1970s; see, for instance, [10]. **Definition 2.6**.: _A diffeomorphism \(f\) defined on a Riemannian compact manifold \(M\) is partially hyperbolic (abbrev. 
PH) if it admits a non-trivial \(Df\)-invariant splitting of the tangent bundle \(TM=E^{s}\oplus E^{c}\oplus E^{u}\) such that all unit vectors \(v^{\sigma}\in E^{\sigma}_{x}\) (\(\sigma=s,c,u\)) with \(x\in M\) satisfy_ \[\|Df_{x}v^{s}\|<\|Df_{x}v^{c}\|<\|Df_{x}v^{u}\|\] _for some appropriate Riemannian metric; \(f\) must also satisfy \(0<\|Df_{|E^{s}}\|<\xi<1\) and \(0<\|Df_{|E^{u}}^{-1}\|<\xi<1\)._

Throughout this paper, we will work with partially hyperbolic diffeomorphisms with one-dimensional central direction, that is, \(\dim E^{c}=1\).

#### 2.2.1. Minimal foliations

One of the fundamental results in the theory of partially hyperbolic dynamical systems is the existence of foliations \(\mathcal{F}^{\sigma}_{f}\) tangent to the stable and unstable distributions \(E^{\sigma}_{f}\) of \(f\) (\(\sigma=s,u\)), called the stable and unstable foliations, respectively. Specifically, for any \(x\in M\), the leaf \(\mathcal{F}^{\sigma}_{f}(x)\) containing \(x\) corresponds to the classical stable or unstable manifold \(W^{\sigma}(x,f)\) (\(\sigma=s,u\)), as shown in [10, 12].

**Definition 2.7**.: _Consider a partially hyperbolic diffeomorphism \(f:M\to M\). The foliation \(\mathcal{F}^{\sigma}_{f}\) is minimal if \(W^{\sigma}(x)\) is dense in \(M\) for all \(x\in M\) (\(\sigma=s,u\))._

**Definition 2.8**.: _Let \(f:M\to M\) be a partially hyperbolic diffeomorphism. The unstable foliation \(\mathcal{F}^{u}_{f}\) of \(f\) is \(\varepsilon\)-minimal if there exists \(R>0\) such that if \(D\) is a disk contained in an unstable leaf of \(\mathcal{F}^{u}_{f}\) with an internal radius larger than \(R\) then \(D\) is \(\varepsilon\)-dense in \(M\)._

It is well known that if \(f:M\to M\) is a partially hyperbolic diffeomorphism whose unstable foliation is minimal, then for every \(\varepsilon>0\) there exists a \(C^{1}\) neighborhood \(\mathcal{U}\) of \(f\) such that for any \(g\in\mathcal{U}\), the unstable foliation is \(\varepsilon\)-minimal.

#### 2.2.2. Existence of equilibrium states

In general, it can be difficult to prove that a partially hyperbolic system has equilibrium states. However, in some cases, we can solve this task using the pressure function \[P_{\cdot}(f,\phi):\mathcal{M}_{f}(M)\to\mathbb{R},\quad\mu\mapsto P_{\mu}(f,\phi),\] where \(\phi\) is a continuous observable. Since \(\mathcal{M}_{f}(M)\) is a compact set, if the pressure function is upper semicontinuous then it achieves a maximal value and so \((f,\phi)\) has an equilibrium state. For homeomorphisms, Bowen showed [1] that the metric entropy is upper semicontinuous whenever \((M,f)\) is an \(h\)-expansive system. In particular, the pressure \(P_{\mu}(f,\phi)\) is also upper semicontinuous and then \((f,\phi)\) admits an equilibrium state for any continuous potential \(\phi\). Hence, since partially hyperbolic systems with one-dimensional center bundle are \(h\)-expansive [1, 10, 11], they have equilibrium states.

### Unstable entropy

In this section, we recall the notions of unstable \(h^{u}_{\mu}(f)\) and stable \(h^{s}_{\mu}(f)\) metric entropy for a partially hyperbolic diffeomorphism \(f\), introduced in [14, 15].

#### 2.3.1. Unstable metric entropy

Here we follow closely [15]. Consider a partially hyperbolic diffeomorphism \(f\) such that \(\dim(E^{u}_{f})\geq 1\). Let \(\alpha\) be a partition of \(M\). We denote by \(\alpha(x)\) the element of \(\alpha\) containing the point \(x\).
If we have two partitions \(\alpha\) and \(\beta\) such that \(\alpha(x)\subset\beta(x)\) for all \(x\in M\), we write \(\alpha\geq\beta\) or \(\beta\leq\alpha\). If a partition \(\xi\) satisfies \(f^{-1}(\xi)\geq\xi\) we say that the partition is increasing. For a measurable partition \(\beta\), we use the notation \(\beta_{n}^{m}=\bigvee_{i=m}^{n}f^{-i}(\beta)\). In particular, \[\beta_{n-1}^{0}=\bigvee_{i=0}^{n-1}f^{-i}(\beta).\] Take \(\varepsilon_{0}>0\) small and let \(\mathcal{P}=\mathcal{P}_{\varepsilon_{0}}\) represent the set of finite measurable partitions of \(M\) whose elements have diameter smaller than or equal to \(\varepsilon_{0}\). For each \(\beta\in\mathcal{P}\), we can define a finer partition \(\eta\) such that \(\eta(x)=\beta(x)\cap W^{u}_{\mathrm{loc}}(x)\) for every \(x\in M\). Here, \(W^{u}_{\mathrm{loc}}(x)\) represents the local unstable manifold at \(x\), whose size is greater than the diameter \(\varepsilon_{0}\) of \(\beta\). Note that \(\eta\) is a measurable partition that satisfies \(\eta\geq\beta\). We denote the set of such partitions by \(\mathcal{P}^{u}=\mathcal{P}^{u}_{\varepsilon_{0}}\).

A measurable partition \(\xi\) of \(M\) is subordinate to the unstable manifold of \(f\) with respect to a measure \(\mu\) if, for \(\mu\)-almost every \(x\), \(\xi(x)\) is a subset of \(W^{u}(x)\) and contains an open neighborhood of \(x\) within \(W^{u}(x)\). If \(\alpha\in\mathcal{P}\) and \(\mu(\partial\alpha)=0\), where \(\partial\alpha:=\bigcup\limits_{A\in\alpha}\partial A\), then the corresponding \(\eta\) given by \(\eta(x)=\alpha(x)\cap W^{u}_{\mathrm{loc}}(x)\) is a partition subordinate to the unstable manifold of \(f\).

Let us recall that, given a measurable partition \(\eta\) of a measure space \(X\) and a probability measure \(\nu\) defined on \(X\), the canonical system of conditional measures for \(\nu\) and \(\eta\) is a collection of probability measures \(\{\nu_{x}^{\eta}:x\in X\}\) satisfying \(\nu_{x}^{\eta}(\eta(x))=1\). These measures have the property that, for any measurable set \(B\subset X\), the function \(x\mapsto\nu_{x}^{\eta}(B)\) is measurable and \[\nu(B)=\int_{X}\nu_{x}^{\eta}(B)d\nu(x).\] (See e.g. [14] for reference.) Recall that the information function of \(\alpha\in\mathcal{P}\) is defined as \[I_{\mu}(\alpha)(x):=-\log\mu(\alpha(x))\] and the entropy of the partition \(\alpha\) as \[H_{\mu}(\alpha):=\int_{M}I_{\mu}(\alpha)(x)d\mu(x)=-\int_{M}\log\mu(\alpha(x))d\mu(x).\] The conditional information function of \(\alpha\in\mathcal{P}\) with respect to a measurable partition \(\eta\) of \(M\) is given by \[I_{\mu}(\alpha\mid\eta)(x):=-\log\mu_{x}^{\eta}(\alpha(x)).\] Then the conditional entropy of \(\alpha\) with respect to \(\eta\) is defined as \[H_{\mu}(\alpha\mid\eta):=\int_{M}I_{\mu}(\alpha\mid\eta)(x)d\mu(x)=-\int_{M}\log\mu_{x}^{\eta}(\alpha(x))d\mu(x).\] We now introduce the notion of unstable metric entropy presented in [10], which is similar to the classical metric entropy but incorporates the use of a conditional partition \(\eta\) to exclude the influence of the central direction.
**Definition 2.9**.: _The conditional entropy of \(f\) with respect to a measurable partition \(\alpha\) given \(\eta\in\mathcal{P}^{u}\) is defined as_ \[h_{\mu}(f,\alpha\mid\eta)=\limsup_{n\to\infty}\frac{1}{n}H_{\mu}\left(\alpha_{0}^{n-1}\mid\eta\right).\] _The conditional entropy of \(f\) given \(\eta\in\mathcal{P}^{u}\) is defined as_ \[h_{\mu}(f\mid\eta)=\sup_{\alpha\in\mathcal{P}}h_{\mu}(f,\alpha\mid\eta)\] _and the unstable metric entropy of \(f\) is defined as_ \[h_{\mu}^{u}(f)=\sup_{\eta\in\mathcal{P}^{u}}h_{\mu}(f\mid\eta).\]

It is possible to prove that \(h_{\mu}(f\mid\eta)\) is independent of \(\eta\), as long as \(\eta\in\mathcal{P}^{u}\). Hence, we have \(h_{\mu}^{u}(f)=h_{\mu}(f\mid\eta)\) for any \(\eta\in\mathcal{P}^{u}\). If the dimension of the stable bundle \(E_{f}^{s}\) is greater than or equal to 1, we can define the stable metric entropy for any \(\mu\in\mathcal{M}_{f}(M)\) as \(h_{\mu}^{s}(f):=h_{\mu}^{u}(f^{-1})\).

#### 2.3.2. Unstable topological entropy and the variational principle

We now define the unstable topological entropy, introduced in [11]. We denote by \(d^{u}\) the metric induced by the Riemannian structure on the unstable manifold and let \(d_{n}^{u}(x,y)=\max_{0\leq j\leq n-1}d^{u}\left(f^{j}(x),f^{j}(y)\right)\). Let \(W^{u}(x,\delta)\) be the open ball inside \(W^{u}(x)\) centered at \(x\) of radius \(\delta\) with respect to the metric \(d^{u}\). Let \(N^{u}(f,\varepsilon,n,x,\delta)\) be the maximal number of points in \(\overline{W^{u}(x,\delta)}\) with pairwise \(d_{n}^{u}\)-distances at least \(\varepsilon\). We call such a set an \((n,\varepsilon)\) u-separated set of \(\overline{W^{u}(x,\delta)}\).

**Definition 2.10**.: _The unstable topological entropy of \(f\) on \(M\) is defined by_ \[h_{\text{top}}^{u}\left(f\right)=\lim_{\delta\to 0}\sup_{x\in M}h_{\text{top}}^{u}\left(f,\overline{W^{u}(x,\delta)}\right),\] _where_ \[h_{\text{top}}^{u}\left(f,\overline{W^{u}(x,\delta)}\right)=\lim_{\varepsilon\to 0}\limsup_{n\to\infty}\frac{1}{n}\log N^{u}(f,\varepsilon,n,x,\delta).\]

We can also define the unstable topological entropy using \((n,\varepsilon)\) u-spanning sets or open covers to obtain equivalent definitions. Analogously, if the dimension of the stable bundle \(E^{s}_{f}\) is greater than or equal to 1, we can define the stable topological entropy as \(h^{s}(f):=h^{u}(f^{-1})\). As in the case of the usual definition of entropy, we can relate the unstable metric entropy with the unstable topological entropy by a variational principle. Indeed, [16, Theorem D] states that if \(f:M\to M\) is a \(C^{1}\) partially hyperbolic diffeomorphism then \[h^{u}_{top}(f)=\sup\{h^{u}_{\mu}(f):\mu\in\mathcal{M}_{f}(M)\}\;\;\text{and}\;\;h^{u}_{top}(f)=\sup\{h^{u}_{\nu}(f):\nu\in\mathcal{M}^{e}_{f}(M)\}.\] An alternative definition of the unstable topological entropy can be derived from the unstable volume growth given by Hua, Saghin, and Xia ([11]), reminiscent of the works of Yomdin and Newhouse ([14, 15]). [16, Theorem C] shows that the unstable topological entropy as defined here coincides with the unstable volume growth.

## 3. Consequences of \(h^{u}(f)-h^{s}(f)>0\)

Let \(\mathrm{Diff}^{1+}(M)\) be the set of \(C^{1+}\) diffeomorphisms defined on \(M\).
From now on, \(f\) will be a \(C^{1+}\) partially hyperbolic diffeomorphism with \(1\)-dimensional central direction and non-trivial stable and unstable bundles, that is, \(\dim(E^{\sigma})>0\) (\(\sigma=s,u\)), and \(\phi\) a Hölder continuous potential. If \(\mu\) is an ergodic \(f\)-invariant measure, we write \(\lambda^{c}(\mu,f)\) for the Lyapunov exponent of \(f\) in the central direction. An \(f\)-invariant ergodic measure \(\mu\) is _hyperbolic_ if the central Lyapunov exponent satisfies \(\lambda^{c}(\mu,f)\neq 0\). For \(f\in\mathrm{Diff}^{2}(M)\) and \(\mu\) ergodic such that the central Lyapunov exponent is non-positive, applying the definition of the unstable metric entropy \(h^{u}_{\mu}\) and a formula given by Ledrappier and Young [11, 12], we get that \(h^{u}_{\mu}(f)=h_{\mu}(f)\). For \(f\in\mathrm{Diff}^{1+}(M)\), it was proved in [1] that the same relation holds, that is, \[h^{u}_{\mu}(f)=h_{\mu}(f). \tag{3.1}\]

**Lemma 3.1**.: _Let \(\mu\) be an ergodic equilibrium state for \((M,f,\phi)\). If \(h^{u}(f)-h^{s}(f)>\sup\phi-\inf\phi\geq 0\), then \(\lambda^{c}(\mu,f)<0\)._

Proof.: Suppose that \(\mu\) is an ergodic equilibrium state for \(f\) with a non-negative center Lyapunov exponent \(\lambda^{c}(\mu,f)\geq 0\). Then the central exponent of \(f^{-1}\) with respect to \(\mu\) is non-positive, so applying (3.1) to \(f^{-1}\) gives \(h_{\mu}(f)=h_{\mu}(f^{-1})=h^{u}_{\mu}(f^{-1})=h^{s}_{\mu}(f)\), and hence \[P_{\mu}(f,\phi)=h_{\mu}(f)+\int\phi d\mu=h^{s}_{\mu}(f)+\int\phi d\mu\leq h^{s}(f)+\sup\phi.\] By hypothesis, \(h^{u}(f)+\inf\phi>h^{s}(f)+\sup\phi\), and since \(h^{u}(f)\leq h_{top}(f)\), the variational principle yields \(h^{u}(f)+\inf\phi\leq h_{top}(f)+\inf\phi\leq P(f,\phi)\). Thus we get \(P_{\mu}(f,\phi)<h^{u}(f)+\inf\phi\leq P(f,\phi)\), which contradicts that \(\mu\) is an ergodic equilibrium state. This finishes the proof.

**Definition 3.2**.: _The unstable pressure is defined as_ \[P^{u}(f,\phi)=\sup\left\{h^{u}_{\mu}(f)+\int\phi d\mu:\mu\in\mathcal{M}_{f}(M)\right\}.\]

**Remark 3.3**.: _By equation (3.1), if \(\mu\) is an ergodic measure with \(\lambda^{c}(\mu,f)<0\), we obtain \(P_{\mu}(f,\phi)=P^{u}_{\mu}(f,\phi)=h^{u}_{\mu}(f)+\int\phi d\mu\). If in addition \(\mu\) is an ergodic EE and \(\phi\) satisfies \(h^{u}(f)-h^{s}(f)>\sup\phi-\inf\phi\geq 0\), Lemma 3.1 implies that \(\lambda^{c}(\mu,f)<0\) and thus we obtain \(P(f,\phi)=h^{u}_{\mu}(f)+\int\phi d\mu=P^{u}(f,\phi)\)._

In the remainder of this section, we prove that the property \[h^{u}(f)>h^{s}(f)\] persists for diffeomorphisms \(g\in\operatorname{Diff}^{1+}(M)\) in a \(C^{1}\) open neighborhood of \(f\). In particular, if \(h^{u}(f)>\sup\phi>\inf\phi>h^{s}(f)\), the same holds for \(C^{1+}\) diffeomorphisms in a \(C^{1}\) neighborhood of \(f\). For this, we proceed as follows.

**Lemma 3.4**.: _If \(h^{u}(f)-h^{s}(f)>\sup\phi-\inf\phi\geq 0\), the map \(g\mapsto P^{u}(g,\phi)\), \(g\in\operatorname{Diff}^{1+}(M)\), is continuous at \(f\) in the \(C^{1}\) topology._

Proof.: Without loss of generality (replacing \(\phi\) by \(\phi+c\) for a suitable constant \(c\), which changes neither the equilibrium states nor the hypothesis), we may assume that \(h^{u}(f)>\sup\phi>\inf\phi>h^{s}(f)\). We start by proving the following result:

**Claim 3.5**.: _If \(\mu\) is an ergodic equilibrium state for \((f,\phi)\) then \(h_{\mu}(f)>0\)._

To achieve this, it is enough to verify that an ergodic measure of maximal entropy has higher pressure than any measure with zero entropy. For this, we proceed as follows. Let \(\nu,\mu\in\mathcal{M}_{f}(M)\) be such that \(h_{\nu}(f)=0\) and \(\mu\) is an ergodic measure of maximal entropy. Thus \(P_{\nu}(f,\phi)=\int\phi d\nu\leq\sup\phi\) and by the assumption above we get \[P_{\nu}(f,\phi)\leq\sup\phi<h^{u}(f). \tag{3.2}\] By Remark 3.3 applied to the potential identically zero, \(h^{u}(f)=h_{top}(f)\). Since \(\mu\) is a measure of maximal entropy we get \(h^{u}(f)=h_{\mu}(f)\).
Thus, using (3.2) and the fact that \(\inf\phi>h^{s}(f)\geq 0\), we get \[P_{\mu}(f,\phi)=h_{\mu}(f)+\int\phi d\mu\geq h_{\mu}(f)+\inf\phi\geq h_{\mu}(f)=h^{u}(f)>P_{\nu}(f,\phi).\] This finishes the proof of Claim 3.5.

Returning to the proof of Lemma 3.4, we will prove that the map \(g\mapsto P^{u}(g,\phi)\) is continuous at \(f\) when restricted to \(C^{1+}\) diffeomorphisms. For this, since the map \(g\mapsto P^{u}(g,\phi)\) is upper semicontinuous at \(f\) in the \(C^{1}\) topology, it is enough to show that it is lower semicontinuous at \(f\). Recall that \(h^{u}(f)>\sup\phi>\inf\phi>h^{s}(f)\). Let \(\varepsilon>0\) and suppose that \(\mu\) is an ergodic equilibrium state. By Lemma 3.1 and Remark 3.3, we have that \(P(f,\phi)=h^{u}_{\mu}(f)+\int\phi d\mu=P^{u}(f,\phi)\) and \(\lambda^{c}(\mu,f)<0\). Since \(\lambda^{c}(\mu,f)<0\), \(\mu\) is a hyperbolic measure, and by classical results of Katok [12, 13] (see also [1, Theorem 1]) we conclude that there exists a hyperbolic set \(\Lambda_{\varepsilon}\subset M\) such that \[P^{u}(f,\phi)-\frac{\varepsilon}{2}\leq P(f_{|\Lambda_{\varepsilon}},\phi_{|\Lambda_{\varepsilon}}). \tag{3.3}\] As \(\Lambda_{\varepsilon}\) is a hyperbolic set for \(f\), the map \(f_{|\Lambda_{\varepsilon}}\) is expansive and therefore there exists an ergodic measure \(\mu_{\varepsilon}\) such that \(P_{\mu_{\varepsilon}}(f,\phi)=P(f_{|\Lambda_{\varepsilon}},\phi_{|\Lambda_{\varepsilon}})\). Thus \[P^{u}(f,\phi)-\frac{\varepsilon}{2}\leq P_{\mu_{\varepsilon}}(f,\phi).\] Since hyperbolic systems are structurally stable, there exists a \(C^{1}\) neighborhood \(\mathcal{U}(f)\) of \(f\) such that if \(g\in\mathcal{U}(f)\), there exists a \(g\)-invariant hyperbolic set \(\Lambda_{g}\subset M\) such that \(g_{|\Lambda_{g}}\) and \(f_{|\Lambda_{\varepsilon}}\) are topologically conjugate by a homeomorphism \(h_{g}:M\to M\) satisfying \[\|h_{g}-I\|<\delta, \tag{3.4}\] where \(\delta>0\) is such that \(\|h_{g}-I\|<\delta\) implies \[\|\phi-\phi\circ h_{g}\|<\frac{\varepsilon}{2}. \tag{3.5}\] Since \(h_{g}\) conjugates \(f_{|\Lambda_{\varepsilon}}\) and \(g_{|\Lambda_{g}}\), for every \(g\in\mathcal{U}(f)\) we obtain \[P(f_{|\Lambda_{\varepsilon}},\phi_{|\Lambda_{\varepsilon}})=P(g_{|\Lambda_{g}},(\phi\circ h_{g})_{|\Lambda_{g}}), \tag{3.6}\] and by (3.5), we obtain \[P(g_{|\Lambda_{g}},(\phi\circ h_{g})_{|\Lambda_{g}})-\frac{\varepsilon}{2}\leq P(g_{|\Lambda_{g}},\phi_{|\Lambda_{g}}). \tag{3.7}\] Now, since \(g_{|\Lambda_{g}}\) is conjugate to \(f_{|\Lambda_{\varepsilon}}\) and \(\Lambda_{g}\) is a hyperbolic set for \(g\), we obtain that \((g_{|\Lambda_{g}},\phi_{|\Lambda_{g}})\) has an ergodic equilibrium state \(\mu_{g}\) with \(\lambda^{c}(\mu_{g},g)<0\). Thus, by (3.1), we have that \(P(g_{|\Lambda_{g}},\phi_{|\Lambda_{g}})=P^{u}_{\mu_{g}}(g,\phi)\leq P^{u}(g,\phi)\) for every \(g\in\mathcal{U}(f)\cap\mathrm{Diff}^{1+}(M)\). Therefore, using (3.6) and (3.7) we obtain \[P(f_{|\Lambda_{\varepsilon}},\phi_{|\Lambda_{\varepsilon}})-\frac{\varepsilon}{2}\leq P^{u}(g,\phi),\] and by (3.3) we conclude that \(P^{u}(f,\phi)-\varepsilon\leq P^{u}(g,\phi)\) for every \(g\in\mathcal{U}(f)\cap\mathrm{Diff}^{1+}(M)\), which implies that \(g\mapsto P^{u}(g,\phi)\) is lower semicontinuous at \(f\).
**Corollary 3.6**.: _If there are positive numbers \(a\) and \(b\) such that \(h^{u}(f)>a>b>h^{s}(f)\), then there exists a \(C^{1}\)-neighborhood \(\widetilde{\mathcal{V}}\) of \(f\) such that for every \(g\in\widetilde{\mathcal{V}}\cap\mathrm{Diff}^{1+}(M)\), \(h^{u}(g)>a>b>h^{s}(g)\)._

Proof.: Note that in the case \(h^{u}(f)>\sup\phi>\inf\phi>h^{s}(f)\) it is enough to take \(a=\sup\phi\) and \(b=\inf\phi\). Assume there are such positive numbers \(a\) and \(b\) satisfying \(h^{u}(f)>a>b>h^{s}(f)\). Then \(h^{u}(f)-h^{s}(f)>0\), and applying Lemma 3.4 with the potential identically zero, we obtain that the unstable entropy \(h^{u}:\mathrm{Diff}^{1+}(M)\to\mathbb{R}\) is continuous at \(f\) in the \(C^{1}\) topology. Hence, given \(\varepsilon>0\), there is a \(C^{1}\)-neighborhood \(\widetilde{\mathcal{V}}_{1}\) of \(f\) such that if \(g\in\widetilde{\mathcal{V}}_{1}\cap\mathrm{Diff}^{1+}(M)\) then \(|h^{u}(f)-h^{u}(g)|<\varepsilon\). Since the stable entropy \(h^{s}:\mathrm{Diff}^{1+}(M)\to\mathbb{R}\) is upper semicontinuous, we can take a neighborhood \(\widetilde{\mathcal{U}}\subset\mathrm{Diff}^{1}(M)\) of \(f\) such that if \(g\in\widetilde{\mathcal{U}}\) then \(h^{s}(g)-h^{s}(f)<\varepsilon\). Now, let \(0<\varepsilon<\frac{\min\{h^{u}(f)-a,\,b-h^{s}(f)\}}{2}\) and \(\widetilde{\mathcal{V}}=\widetilde{\mathcal{V}}_{1}\cap\widetilde{\mathcal{U}}\). Then, for all \(g\in\widetilde{\mathcal{V}}\cap\mathrm{Diff}^{1+}(M)\) we have \(h^{u}(g)>a>b>h^{s}(g)\), finishing the proof.

**Corollary 3.7**.: _If \(h^{u}(f)-h^{s}(f)>\sup\phi-\inf\phi\geq 0\), the map \(P(\cdot,\phi):\mathrm{Diff}^{1+}(M)\to\mathbb{R}\) is continuous at \(f\) in the \(C^{1}\) topology._

Proof.: Without loss of generality, assume that \(h^{u}(f)>\sup\phi>\inf\phi>h^{s}(f)\). Let \(\varepsilon>0\). Lemma 3.4 and Corollary 3.6 imply that there exists a \(C^{1}\) neighborhood \(\mathcal{U}\) of \(f\) such that, for every \(g\in\mathcal{U}\), \(|P^{u}(f,\phi)-P^{u}(g,\phi)|<\varepsilon\) and \(h^{u}(g)>\sup\phi>\inf\phi>h^{s}(g)\). Therefore, by Remark 3.3, \(P^{u}(g,\phi)=P(g,\phi)\) and thus \[|P(f,\phi)-P(g,\phi)|<\varepsilon\] for every \(g\in\mathcal{U}\), ending the proof.

## 4. Proof of Theorem A

### Choosing the decomposition

In this section, we follow closely [10]. The aim is to provide a large collection of orbit segments with a decomposition \((\mathcal{P},\mathcal{G},\mathcal{S})\) as in Section 2.1.5 in such a way that \(\mathcal{G}\) captures all the "hyperbolicity" of every \(g\) close to \(f\) in the \(C^{1}\) topology, in order to overcome the lack of hyperbolicity outside \(\mathcal{G}\). Let \(\widetilde{\mathcal{V}}\) be a \(C^{1}\) neighborhood of \(f\) as in Corollary 3.6. We can assume, without loss of generality, that for \(g\in\widetilde{\mathcal{V}}\) it holds that \(\|Dg_{|E^{s}}\|<\xi\) and \(\|Dg_{|E^{u}}^{-1}\|<\xi\), where \(\xi\) is given in Definition 2.6. For every \(g\in\widetilde{\mathcal{V}}\cap\operatorname{Diff}^{1+}(M)\) we set \(\varphi^{c}(x):=\log\|Dg_{|E^{c}(x)}\|\). If \(\mu\in\mathcal{M}_{g}(M)\) we set \(\lambda_{c}(\mu,g):=\int\varphi^{c}d\mu\). When \(\mu\) is ergodic and the central direction of a \(C^{1+}\) partially hyperbolic diffeomorphism is \(1\)-dimensional, \(\lambda_{c}(\mu,g)\) coincides with the central Lyapunov exponent of \(g\).
Let \(P^{+}(g,\phi)\) and \(P^{-}(g,\phi)\) be defined as follows: \[P^{+}(g,\phi)=\sup\{P_{\mu}(g,\phi):\mu\in\mathcal{M}_{g}^{e}(M),\lambda_{c}(\mu,g)\geq 0\}\] and \[P^{-}(g,\phi)=\sup\{P_{\mu}(g,\phi):\mu\in\mathcal{M}_{g}^{e}(M),\lambda_{c}(\mu,g)\leq 0\}.\] Since, by Corollary 3.6 and Lemma 3.1, there is no ergodic equilibrium state \(\mu\) with \(\lambda_{c}(\mu,g)\geq 0\), and the pressure is upper semicontinuous, we have that \[P^{+}(g,\phi)<P^{-}(g,\phi)\text{ for every }g\in\widetilde{\mathcal{V}}\cap\operatorname{Diff}^{1+}(M),\] and hence, by the ergodic decomposition theorem, we have \[\sup\{P_{\mu}(g,\phi):\mu\in\mathcal{M}_{g}(M),\lambda_{c}(\mu,g)\geq 0\}<P(g,\phi)\text{ for every }g\in\widetilde{\mathcal{V}}\cap\operatorname{Diff}^{1+}(M).\]

**Lemma 4.1**.: _There exist \(r>0\) and a \(C^{1}\) neighborhood \(\mathcal{V}^{\prime}\subset\widetilde{\mathcal{V}}\) such that_ \[\sup\{P_{\mu}(g,\phi):\mu\in\mathcal{M}_{g}(M),\lambda_{c}(\mu,g)\geq-r\}<P(g,\phi),\text{ for every }g\in\mathcal{V}^{\prime}\cap\operatorname{Diff}^{1+}(M).\]

Proof.: Suppose, by contradiction, that there exist sequences \(g_{n}\in\widetilde{\mathcal{V}}\cap\operatorname{Diff}^{1+}(M)\) and \(r_{n}>0\) with \(r_{n}\to 0\) and \(g_{n}\to f\) as \(n\to\infty\) such that \[\sup\{P_{\mu}(g_{n},\phi):\mu\in\mathcal{M}_{g_{n}}(M),\lambda_{c}(\mu,g_{n})\geq-r_{n}\}=P(g_{n},\phi).\] Since \(\mu\mapsto\lambda_{c}(\mu,g_{n})\) is continuous in the weak* topology and the pressure is upper semicontinuous, there is an invariant measure \(\nu_{n}\in\mathcal{M}_{g_{n}}(M)\) such that \(P_{\nu_{n}}(g_{n},\phi)=P(g_{n},\phi)\) and \(\lambda_{c}(\nu_{n},g_{n})\geq-r_{n}\). Since \(\mathcal{M}(M)\) is compact, we can suppose that \(\nu_{n}\) converges to \(\nu\); then \(\nu\) is \(f\)-invariant and satisfies \(\lambda_{c}(\nu,f)\geq 0\). Therefore, by Corollary 3.7 and [11, Theorem A] we obtain \[P(f,\phi)=\lim_{n}P(g_{n},\phi)=\lim_{n}P^{u}_{\nu_{n}}(g_{n},\phi)\leq P^{u}_{\nu}(f,\phi)\leq P_{\nu}(f,\phi),\] implying that \(\nu\) is an equilibrium state with \(\lambda_{c}(\nu,f)\geq 0\), which contradicts Lemma 3.1.

**Corollary 4.2**.: _Consider \(r>0\) and \(\mathcal{V}^{\prime}\subset\operatorname{Diff}^{1}(M)\) as in Lemma 4.1. There exist \(a>0\) and a \(C^{1}\) neighborhood \(\mathcal{V}\subset\mathcal{V}^{\prime}\) of \(f\) such that, for every \(g\in\mathcal{V}\cap\operatorname{Diff}^{1+}(M)\),_ \[\sup\{P_{\mu}(g,\phi):\mu\in\mathcal{M}_{g}(M),\lambda_{c}(\mu,g)\geq-r\}<a<P(g,\phi). \tag{4.1}\]

Proof.: Suppose, by contradiction, that there exists a sequence \(g_{n}\in\mathcal{V}^{\prime}\cap\operatorname{Diff}^{1+}(M)\) such that \(g_{n}\to f\) as \(n\to\infty\) and \[P(g_{n},\phi)-\sup\{P_{\mu}(g_{n},\phi):\mu\in\mathcal{M}_{g_{n}}(M),\lambda_{c}(\mu,g_{n})\geq-r\}\leq\frac{1}{n}.\] Since \(\mu\mapsto\lambda_{c}(\mu,g_{n})\) is continuous in the weak* topology and the pressure is upper semicontinuous, there is an invariant measure \(\nu_{n}\in\mathcal{M}_{g_{n}}(M)\) such that \(P_{\nu_{n}}(g_{n},\phi)=P(g_{n},\phi)\) and \(\lambda_{c}(\nu_{n},g_{n})\geq-r\). As \(\mathcal{M}(M)\) is compact, we can suppose that \(\nu_{n}\) converges to \(\nu\); then \(\nu\) is \(f\)-invariant and satisfies \(\lambda_{c}(\nu,f)\geq-r\). Therefore, by Corollary 3.7 and [21, Theorem A] we obtain \[P(f,\phi)=\lim_{n}P(g_{n},\phi)=\lim_{n}P^{u}_{\nu_{n}}(g_{n},\phi)\leq P^{u}_{\nu}(f,\phi)\leq P_{\nu}(f,\phi),\] implying that \(\nu\) is an equilibrium state with \(\lambda_{c}(\nu,f)\geq-r\), which contradicts Lemma 4.1.
Fix \(r>0\), \(a>0\) and \(\mathcal{V}\subset\operatorname{Diff}^{1}(M)\) as in Corollary 4.2. We use (4.1) to define the decomposition \((\mathcal{P}_{g},\mathcal{G}_{g},\mathcal{S}_{g})\) for each \(g\in\mathcal{V}\cap\operatorname{Diff}^{1+}(M)\). We put \(\mathcal{S}_{g}=\emptyset\), and define \(\mathcal{P}_{g}\) and \(\mathcal{G}_{g}\) as \[\mathcal{P}_{g}:=\{(x,n)\in M\times\mathbb{N}:S_{n}\varphi^{c}(x)\geq-rn\} \tag{4.2}\] and \[\mathcal{G}_{g}:=\{(x,n)\in M\times\mathbb{N}:S_{j}\varphi^{c}(x)<-rj\ \ \forall\,0\leq j\leq n\},\quad\text{where }S_{n}\varphi^{c}(x)=\sum_{k=0}^{n-1}\varphi^{c}(g^{k}(x)).\] Take an arbitrary orbit segment \((x,n)\in M\times\mathbb{N}\) and let \(\hat{p}=\hat{p}(x,n)\) be the maximal integer with the property that \((x,\hat{p})\in\mathcal{P}_{g}\), and \(\hat{g}=\hat{g}(x,n)=n-\hat{p}\).

### Specification on \(\mathcal{G}_{g}\)

Choose \(\delta_{1}>0\) sufficiently small so that \(|\varphi^{c}(y)-\varphi^{c}(z)|<r/2\) whenever \(d(y,z)<\delta_{1}\). Then, if \((y,n)\in\mathcal{G}_{g}\) and \(z\in B_{n}(y,\delta_{1})\) it follows that \[\|Dg^{j}_{|E^{cs}(z)}\|\leq e^{-rj/2}\text{ for all }0\leq j\leq n.\] For \(\theta>0\) and \(x\in M\) consider the center-stable cone \[K^{cs}_{\theta}(g,x):=\{v+w:v\in E^{cs}_{x},w\in E^{u}_{x},\|w\|<\theta\|v\|\}\subset T_{x}M.\] Fix \(\theta_{0}>0\) such that \(\theta_{0}\|Dg_{|E^{u}}(x)\|<1-e^{-r/2}\) for every \(x\in M\). Therefore, there exists \(0<\beta<1\) such that for \(u\in K^{cs}_{\theta_{0}}(g,z)\) it holds that \[\|Dg^{j}(z)(u/\|u\|)\|<\beta^{j}\text{ for all }0\leq j\leq n. \tag{4.3}\] Take any manifold \(W\) so that \(g^{n}(y)\in W\), \(T_{x}(W)\subset K^{cs}_{\theta_{0}}(g,x)\) for all \(x\in W\), and set \[N^{cs}(g,(y,n)):=g^{-n}(W)\cap B(y,\delta_{1}). \tag{4.4}\] Now (4.3) implies \(N^{cs}(g,(y,n))\subset B_{n}(y,\delta_{1})\) and that there exists \(\beta_{0}\), \(\beta<\beta_{0}<1\), so that for every \(x\in N^{cs}(g,(y,n))\), \[d(g^{j}(y),g^{j}(x))<\beta_{0}^{j}\text{ for all }0\leq j\leq n. \tag{4.5}\] Now, we will prove that if a diffeomorphism \(g\) is close to \(f\) then \(g\) has specification on \(\mathcal{G}_{g}\) at a small scale. To do so we proceed as follows. Observe that, by the compactness of \(M\) and the continuity of the dominated splitting \(E^{cs}_{x}(f)\oplus E^{u}_{x}(f)\) in \(x\) and \(f\), there exists \(\delta_{3}>0\) such that the angle \(\sphericalangle(E^{cs}_{x}(g),E^{u}_{x}(g))>\delta_{3}\) for every \(x\in M\) and \(g\in\mathcal{V}\), where \(\mathcal{V}\) is as in Corollary 4.2. Furthermore, there exists \(\hat{\varepsilon}>0\) such that for any \(g\in\mathcal{V}\cap\operatorname{Diff}^{1+}(M)\) and any \(0<\varepsilon\leq\hat{\varepsilon}\), if \((x_{1},n_{1})\in\mathcal{G}_{g}\) and \(N^{cs}(g,(x_{1},n_{1}))\) is a manifold as in (4.4) with a positive internal radius \(R_{cs}\), then for any \(y\in M\) with \(W^{u}(y,g)\cap B_{\varepsilon}(x_{1})\neq\emptyset\) it follows that \[W^{u}(y,g)\cap N^{cs}(g,(x_{1},n_{1}))\neq\emptyset. \tag{4.6}\] Observe that \(\hat{\varepsilon}\) depends on \(R_{cs}\), \(\delta_{3}\) and \(\theta_{0}\).

**Proposition 4.3**.: _Suppose that \(g\in\mathcal{V}\cap\mathrm{Diff}^{1+}(M)\) has \(\varepsilon\)-minimal unstable foliation. If \(\varepsilon\) is small enough, then \(\mathcal{G}_{g}\) has specification at scale \(R_{cs}+\frac{\varepsilon}{1-\xi}\), where \(\xi\) is as in Definition 2.6._

Proof.: Let \(0<\varepsilon<\hat{\varepsilon}\) and \((x_{1},n_{1}),(x_{2},n_{2})\in\mathcal{G}_{g}\).
As \(\mathcal{F}^{u}\) is \(\varepsilon\)-minimal, there exists \(R_{u}>0\) such that if \(D\) is a disk in an unstable manifold of \(g\) with an internal radius larger than \(\frac{R_{u}}{2}\) then \(D\cap B_{\varepsilon}(x)\neq\emptyset\) for every \(x\in M\). For this \(R_{u}>0\), we fix \(M_{u}\) such that for every unstable disk \(D\) centered at \(z\in M\) with internal radius less than \(R_{u}\), if \(y\in D\) then \(d(g^{-M_{u}}(z),g^{-M_{u}}(y))\leq\varepsilon\). Let \(D\subset W^{u}_{2R_{u}/3}(g^{n_{1}+M_{u}}(x_{1}))\) be a disk with an internal radius larger than \(R_{u}/2\) centered at \(g^{n_{1}+M_{u}}(x_{1})\). By \(\varepsilon\)-minimality \(D\cap B_{\varepsilon}(x_{2})\neq\emptyset\), and by (4.6) we can choose \(\hat{y}\in D\cap N^{cs}(g,(x_{2},n_{2}))\). Since \(\hat{y}\in D\), we have \[d(g^{-M_{u}}(\hat{y}),g^{n_{1}}(x_{1}))\leq\varepsilon.\] Analogously, \[d(g^{n_{1}-k}(x_{1}),g^{-M_{u}-k}(\hat{y}))\leq\varepsilon\xi^{k}\text{ for every }k\geq 0.\] Therefore, \[d(g^{j}(x_{1}),g^{j}(g^{-M_{u}-n_{1}}(\hat{y})))\leq\varepsilon\xi^{n_{1}-j}\text{ for every }0\leq j\leq n_{1}.\] Moreover, as \(\hat{y}\in N^{cs}(g,(x_{2},n_{2}))\), \[d_{n_{2}}(\hat{y},x_{2})\leq R_{cs}.\] Therefore, by [2, Lemma 5.10], choosing \(\tau=M_{u}\), we get that \(\mathcal{G}_{g}\) has specification at scale \(R_{cs}+\frac{\varepsilon}{1-\xi}\).

**Corollary 4.4**.: _Given \(\delta>0\) there exists a \(C^{1}\) neighborhood \(\mathcal{U}\) of \(f\), \(\mathcal{U}\subset\mathcal{V}\), such that if \(g\in\mathcal{U}\) then \(g\) has specification on \(\mathcal{G}_{g}\) at scale \(\delta\)._

Proof.: Given \(\delta>0\), let \(R_{cs}\) and \(\varepsilon\) be sufficiently small such that \(R_{cs}+\frac{\varepsilon}{1-\xi}<\delta\). Since \(\mathcal{F}^{u}(f)\) is minimal, we can choose a \(C^{1}\) neighborhood \(\mathcal{U}\) of \(f\), \(\mathcal{U}\subset\mathcal{V}\), such that if \(g\in\mathcal{U}\) then \(g\) has \(\varepsilon\)-minimal unstable foliation. Therefore, by Proposition 4.3 we conclude that every \(g\in\mathcal{U}\) has specification on \(\mathcal{G}_{g}\) at scale \(\delta\).

### Obstruction to expansivity

**Proposition 4.5**.: _There exists \(\varepsilon_{1}>0\) such that if \(g\in\mathcal{V}\), then \(P^{\perp}_{\exp}(g,\phi,\varepsilon_{1})<P(g,\phi)\)._

Proof.: Let \(r>0\) be as in inequality (4.1). By [2, Lemma 7.1] there exists \(\varepsilon_{1}>0\) such that every \(\mu\in\mathcal{M}^{e}_{g}(M)\) with \(\lambda_{c}(\mu,g)<-r\) is almost expansive at scale \(\varepsilon_{1}\). Therefore, by (4.1), we obtain that \(P^{\perp}_{\exp}(g,\phi,\varepsilon_{1})<P(g,\phi)\).

### The set \(\mathcal{P}_{g}\) has small pressure

Now, to use Theorem 2.5 we need to prove that \(\mathcal{P}_{g}\) has small pressure. This is the content of the next result.

**Proposition 4.6**.: _There exists \(\varepsilon_{0}>0\) such that \(P(\mathcal{P}_{g},g,\phi,\delta,\varepsilon)<P(g,\phi)\) for every \(0<\varepsilon<\varepsilon_{0}\) and \(g\in\mathcal{V}\)._

Proof.: Let \(E_{n}\subset\mathcal{P}_{g,n}:=\{x\in M:(x,n)\in\mathcal{P}_{g}\}\) be an \((n,\delta)\)-separated set such that \(\Lambda(\mathcal{P}_{g},g,\phi,\delta,n)=\sum_{x\in E_{n}}e^{\Phi_{0}(x,n)}\). Consider \[\nu_{n}=\frac{1}{\sum_{x\in E_{n}}e^{\Phi_{0}(x,n)}}\sum_{x\in E_{n}}e^{\Phi_{0}(x,n)}\delta_{x},\text{ and }\mu_{n}=\frac{1}{n}\sum_{k=0}^{n-1}g_{*}^{k}\nu_{n}.\] Reasoning as in [20, Theorem 9.10], we obtain that any limit point \(\mu\) of \(\mu_{n}\) is \(g\)-invariant and \(P(\mathcal{P}_{g},g,\phi,\delta)\leq P_{\mu}(g,\phi)\).
Since \(\mathcal{P}_{g}\) is as in (4.2), we get \(\lambda_{c}(\mu,g)=\int\varphi^{c}d\mu\geq-r\). Thus, taking \(a\) as in (4.1), we obtain \[P(\mathcal{P}_{g},g,\phi,\delta)<a<P(g,\phi). \tag{4.7}\] Since \(\phi\) is uniformly continuous, there exists \(\varepsilon_{0}\) such that \[P(\mathcal{P}_{g},g,\phi,\delta,\varepsilon)<a<P(g,\phi)\text{ for every }\varepsilon<\varepsilon_{0}\text{ and }g\in\mathcal{V}.\] This finishes the proof.

### Bowen's property

Here we prove that if \(\phi\) is Hölder continuous, then for every \(g\in\mathcal{V}\), \(\phi\) satisfies the Bowen property on \(\mathcal{G}_{g}\).

**Proposition 4.7**.: _There exists \(\varepsilon_{2}>0\) such that for every \(g\in\mathcal{V}\cap\operatorname{Diff}^{1+}(M)\), any Hölder continuous potential \(\phi\) has the Bowen property at any scale less than \(\varepsilon_{2}\) on \(\mathcal{G}_{g}\)._

Proof.: Given \(g\in\mathcal{V}\cap\operatorname{Diff}^{1+}(M)\), for every \((x,n)\in\mathcal{G}_{g}\) consider \(N^{cs}(g,(x,n))\) as in (4.4). Choose \(\varepsilon_{2}>0\) small enough such that, given any \((x,n)\in\mathcal{G}_{g}\) and \(y\in M\) with \(d(x,y)<\varepsilon_{2}\), the intersection of \(N^{cs}(g,(x,n))\) with \(W^{u}_{\varepsilon_{2}}(y)\) is a single point, denoted \([x,y]\).

**Claim 4.8**.: \(d(g^{k}(x),g^{k}(y))\leq 2\max\{\beta_{0}^{k}\varepsilon_{2},\xi^{n-k}\varepsilon_{2}\}\) _for every \(0\leq k\leq n-1\), where \(\beta_{0}\) is as in (4.5)._

Proof.: As \([x,y]\in N^{cs}(g,(x,n))\), for every \(k\in\{0,1,\cdots,n-1\}\) we have \[d(g^{k}(x),g^{k}([x,y]))\leq\beta_{0}^{k}\varepsilon_{2}\] and, since \([x,y]\in W^{u}(y)\), we get \[d(g^{k}(y),g^{k}([x,y]))\leq\xi^{n-k}\varepsilon_{2},\quad k\in\{0,1,\cdots,n-1\}.\] By the triangle inequality, we get \[d(g^{k}(x),g^{k}(y))\leq 2\max\{\beta_{0}^{k}\varepsilon_{2},\xi^{n-k}\varepsilon_{2}\}.\]

Now, writing \(C\) for the Hölder constant of \(\phi\) and \(\gamma\) for the Hölder exponent, we obtain \[\left|\phi\left(g^{k}x\right)-\phi\left(g^{k}y\right)\right|\leq C\cdot d\left(g^{k}x,g^{k}y\right)^{\gamma}\leq C\cdot(2\varepsilon_{2})^{\gamma}\max\{\beta_{0}^{\gamma k},\xi^{\gamma(n-k)}\},\] and summing over \(0\leq k<n\) gives \[\left|\Phi_{0}(x,n)-\Phi_{0}(y,n)\right|\leq\sum_{k=0}^{n-1}C\cdot(2\varepsilon_{2})^{\gamma}\max\{\beta_{0}^{\gamma k},\xi^{\gamma(n-k)}\}\leq C\cdot(2\varepsilon_{2})^{\gamma}\sum_{k=0}^{\infty}\left(\beta_{0}^{\gamma k}+\xi^{\gamma k}\right)=:K.\] Since \(\beta_{0},\xi<1\), the constant \(K\) is finite and independent of \(x,y,n\), which finishes the proof.

### Proof of Theorem A

Let \(f\) and \(\phi\) be as in Theorem A. Let \(\mathcal{V}\subset\operatorname{Diff}^{1}(M)\) and \(a>0\) be as in Corollary 4.2. Consider \(\varepsilon_{0}>0\) as in Proposition 4.6. Now, take \(\varepsilon<\min\{\varepsilon_{0},\varepsilon_{1},\varepsilon_{2}\}\), where \(\varepsilon_{1}\) and \(\varepsilon_{2}\) are as in Propositions 4.5 and 4.7, respectively. Therefore, applying successively Propositions 4.6, 4.5 and 4.7 we get \[P(\mathcal{P}_{g},g,\phi,\delta,\varepsilon)<P(g,\phi)\text{ for every }g\in\mathcal{V}\cap\operatorname{Diff}^{1+}(M)\text{ and }\delta>0, \tag{4.8}\] \[P^{\perp}_{\exp}(g,\phi,\varepsilon)<P(g,\phi)\text{ for every }g\in\mathcal{V}\cap\operatorname{Diff}^{1+}(M), \tag{4.9}\] and \[\phi\text{ has the Bowen property at scale }\varepsilon\text{ on }\mathcal{G}_{g}\text{ for every }g\in\mathcal{V}\cap\operatorname{Diff}^{1+}(M).
\tag{4.10}\] Since the constant \(L_{f}\) given in Theorem 2.5 depends continuously on \(f\) in the \(C^{1}\) topology, by Corollary 4.4 there exists \(\mathcal{U}\subset\mathcal{V}\) such that every \(g\in\mathcal{U}\cap\operatorname{Diff}^{1+}(M)\) has specification on \(\mathcal{G}_{g}\) at a scale \(\delta\) with \(\varepsilon>L_{g}\delta\). Therefore, by (4.8), (4.9), (4.10) and Theorem 2.5 we conclude that \((g,\phi)\) has a unique equilibrium state for every \(g\in\mathcal{U}\cap\operatorname{Diff}^{1+}(M)\), finishing the proof.

_Acknowledgements._ We are thankful to Jiagang Yang for helpful conversations on this work.
2303.17485
Edge Ranking of Graphs in Transportation Networks using a Graph Neural Network (GNN)
Many networks, such as transportation, power, and water distribution, can be represented as graphs. A crucial challenge in graph representations is identifying the importance of graph edges and their influence on overall network efficiency and information flow performance. For example, important edges in a transportation network are those roads that, when affected, will significantly alter the network's overall efficiency. A commonly used approach to finding such important edges is ``edge betweenness centrality'' (EBC), an edge ranking measure to determine the influential edges of the graph based on connectivity and information spread. Computing the EBC utilizing the common Brandes algorithm involves calculating the shortest paths for every node pair, which can be computationally expensive and restrictive, especially for large graphs. Changes in the graph parameters, e.g., in the edge weight or the addition and deletion of nodes or edges, require the recalculation of the EBC. As the main contribution, we propose an approximate method to estimate the EBC using a Graph Neural Network (GNN), a deep learning-based approach. We show that it is computationally efficient compared to the conventional method, especially for large graphs. The proposed method of GNN-based edge ranking is evaluated on several synthetic graphs and a real-world transportation data set. We show that this framework can estimate the approximate edge ranking much faster compared to the conventional method. This approach is inductive, i.e., training and testing are performed on different sets of graphs with varying numbers of nodes and edges. The proposed method is especially suitable for applications on large-scale networks when edge information is desired, for example, in urban infrastructure improvement projects, power and water network resilience analyses, and optimizing resource allocations in engineering networks.
Debasish Jana, Sven Malama, Sriram Narasimhan, Ertugrul Taciroglu
2023-03-25T20:45:30Z
http://arxiv.org/abs/2303.17485v1
# Edge Ranking of Graphs in Transportation Networks using a Graph Neural Network (GNN)

###### Abstract

Many networks, such as transportation, power, and water distribution, can be represented as graphs. A crucial challenge in graph representations is identifying the importance of graph edges and their influence on the overall performance in terms of network efficiency and information flow. For example, important edges in a transportation network are those roads that, when affected, will significantly alter the network's overall efficiency. A commonly used approach to finding such important edges is "edge betweenness centrality" (EBC)--an edge ranking measure to determine the influential edges of the graph based on connectivity and information spread. Computing the EBC utilizing the common Brandes algorithm [1] involves calculating the shortest paths for _every_ node pair, which can be computationally expensive and restrictive, especially for large graphs. Changes in the graph parameters, e.g., in the edge weight or the addition and deletion of nodes or edges, require the recalculation of the EBC. As the main contribution, we propose an approximate method to estimate the EBC using a Graph Neural Network (GNN), a deep learning-based approach. We show that it is computationally efficient compared to the conventional method, especially for large graphs. The proposed method of GNN-based edge ranking is evaluated on several synthetic graphs and a real-world transportation data set. We show that this framework can estimate the approximate edge ranking much faster compared to the conventional method introduced by Brandes [1]. This approach is inductive--i.e., training and testing are performed on different sets of graphs with varying numbers of nodes and edges. The proposed method is especially suitable for applications on large-scale networks when edge information is desired, for example, in urban infrastructure improvement projects, power and water network resilience analyses, and optimizing resource allocations in engineering networks.

_Keywords:_ edge importance ranking; edge betweenness centrality; graph neural network; transportation network; resource allocation

## 1 Introduction

### Motivation

Transportation networks play a crucial role in the economy and the well-being of citizens by enabling the smooth movement of people and goods and as arteries for evacuations during catastrophes and natural disasters. A healthy transportation network offers significant benefits to its citizens, e.g., through the mobility of capital and labor, diffusion of population, or national defense [2]. Natural hazard events can severely impact transportation networks, leading to direct losses, such as repair costs of infrastructure, and indirect losses, such as a decrease in network efficiency [3]. In 2005, Hurricane Katrina severely impacted the U.S. highway system, especially in the area along and to the south of the I-10/I-12 corridor. Whereas some elements of the highway system were repaired and re-initiated in weeks, other elements remained out of service for many months [4]. Even two years after the disaster, basic services in New Orleans, such as public transportation and libraries, had not regained half of their pre-Katrina capacity [5]. A good understanding of a transportation network's performance, capacity, and critical road segments is essential in decision-making during such catastrophic events.
One of the critical tasks related to such an understanding is to objectively identify which road segments are crucial to the system's performance as a whole. A powerful means to answer this is to treat the network as a graph comprised of nodes and edges, representing road junctions and road segments, respectively, and optimizing for key measures such as travel-time cost or distance using this graph by associating weights to edges. Such nodes or edges can be deleted, or edge weights modified, to simulate loss or decrease of functionality in local regions of the network. Node ranking finds a natural application in social networks [6]; on the other hand, edge importance ranking is more suited for engineered network systems, such as transportation networks. Identifying critical edges in transportation networks can inform preemptive strategies to address deficiencies in essential segments of the system, thereby making the overall system more robust and resilient to failures. The central objective of this paper is to propose a novel and computationally efficient way to estimate the importance of edges in transportation networks. This approximate method based on a Graph Neural Network (GNN) is shown to outperform conventional methods in terms of speed while also achieving a comparable level of performance.

### Literature Review

Much of the current literature focuses on finding important nodes rather than edges in a graph. Node ranking is relevant in applications such as identifying vulnerable populations for infectious disease spread [7, 8], or in social networks [9]. For urban infrastructures, edge ranking can be very important, e.g., for street sections represented as edges in a transportation network. A relatively large amount of literature on the ranking of graph components (nodes and edges) can be found in post-disaster recovery research, where optimal sequencing for the repair of components is necessary to maximize the efficiency/resilience of the network. Vugrin et al. [10] presented a bi-level optimization algorithm for optimal recovery sequencing in transportation networks. Gokalp et al. [11] proposed a bidirectional search heuristic strategy for post-disaster recovery sequencing of bridges for road networks. Bocchini and Frangopol [12] presented a model for recovery planning for networks of highway bridges damaged in an earthquake. This model identifies bridge restoration activities that maximize resilience, minimize the time required to return the network to a targeted functionality level, and minimize the cost of restoration. Network recovery has been studied for other networks such as electrical power restoration [13], airline system recovery [14], post-earthquake water distribution network recuperation [15], and internet protocol (IP) network rehabilitation [16]. These studies generally cover small search sets of edges in a graph for optimization purposes, assuming only a few roads/bridges are damaged (or, in graph terms, modified) after a disaster, which is a reasonable assumption. Dealing with a complete network can be computationally exhaustive but might be necessary in case of large-scale events like Hurricane Katrina, which impacted 15,947 lane miles of highway in Alabama, 60,727 in Louisiana, and 28,889 in Mississippi [17]. The centrality measure is a metric used for ranking nodes or edges [18]. This metric represents a quantitative view of how a component's absence or presence affects the whole graph.
The well-studied importance metrics for nodes are: the betweenness centrality [19, 20], closeness centrality [21, 22], and PageRank centrality [23, 24, 25, 26]. The aforementioned centrality measures are mainly designed for node ranking. For example, betweenness centrality is a measure of the amount of network information flow being controlled by a specific node [27]. Bröhl et al. [28] modified the formula so that centralities can be computed for edges. The most commonly used metric for importance estimation of edge components is edge betweenness centrality (EBC) [1, 29, 19, 30]. EBC is based on how the edges expedite the flow of information in a graph. Edges with higher EBC are considered high-importance edges; the removal of an edge with large EBC significantly disrupts the information flow in a graph [27]. Applications of node betweenness centrality can be found in the study of biological graphs [31], contingency analysis in power grids [32], knowledge network analysis [33], and traffic monitoring in transportation networks [34]. Betweenness centrality calculation for both nodes and edges requires the shortest path estimation from each node to every other node in the graph. Therefore, the calculation of the betweenness centrality score is computationally expensive, especially for large graphs. An approximate calculation of the centrality score based on sampling techniques can overcome this challenge. Geisberger et al. [35] proposed a method that randomly samples \(k\) nodes, computes shortest paths from them, and uses these to estimate the approximate betweenness centrality of all nodes. Riondato et al. [36] chose \(k\) shortest paths between randomly sampled source-target node pairs and evaluated the betweenness centrality for all nodes. Borassi [37] proposed adaptive sampling techniques for sampling shortest paths to compute betweenness centralities faster. Mahmoody et al. [38] studied centrality maximization problems for graphs and proposed an efficient randomized algorithm to approximate the node centrality score for larger graphs. Yoshida [39] proposed a hypergraph-based approach to estimate adaptive betweenness centrality for dynamic graphs. However, these random-sampling-based approximate algorithms lead to sub-optimal ranking accuracy and an increase in execution time for large and dynamically changing networks [40]. Recent advancements in computing and the availability of large data sets have resulted in powerful machine learning and deep learning methods. Mendonça et al. [41] proposed a simple neural network with graph embedding to estimate the approximate node betweenness centrality. A Graph Neural Network (GNN) is a deep learning architecture that leverages the graph structure and feature information to perform various tasks including node/edge/graph classification [42]. For transportation networks, GNNs have been used in traffic demand prediction [43] and traffic speed forecasting [44]. Maurya et al. [45, 46] proposed a GNN-based node ranking framework, and Fan et al. [47] used GNNs to determine high-importance nodes. Park et al. [48] adopted GNNs to estimate node importance in knowledge graphs. The aforementioned GNN-based methods exclusively work on node centrality ranking on unweighted directed and undirected graphs.

### Contributions

Current literature primarily focuses on approximate node betweenness centrality for unweighted graphs using GNNs and other sampling methods; approximate estimation of EBC for large graphs is lacking.
Edge importance ranking is essential in dealing with problems relevant to urban infrastructure networks, such as transportation or utility distribution networks. The primary contribution of this paper is a fast and accurate GNN-based approximate edge ranking approach for weighted graphs. Changes in edge weights, restructuring of nodes and edge formation, and node/edge failures can lead to differences in edge importance rankings. Recalculating these edge importance rankings using the conventional method is time-consuming, especially for large-scale graphs. The proposed GNN-based approach reduces the computational time significantly by exploiting both the inherent parallelism of neural networks and GPUs for large-scale matrix operations, along the same lines as deep learning techniques [49, 50]. The main principle of GNNs is to aggregate node features of the neighboring connected nodes in the graph. In multi-layer GNNs, such repetitive aggregation captures the overall structural and neighborhood information of the node of interest. The proposed method modifies the conventional GNN architecture to work on edges instead of nodes. Specifically, the proposed method uses a modified edge-adjacency matrix of the graph that is assimilated using the node-degree and edge-weight information to estimate the edge ranking accurately. This modification to the original edge-adjacency matrix leads to a unique representation. To the authors' knowledge, this is the first work where a GNN is used to approximate the EBC in transportation networks. The trained GNN model can perform edge ranking approximation for static and dynamic graphs (time-varying graph systems). Its performance is demonstrated on synthetically generated graphs and a real-world transportation network.

### Organization

The remainder of this paper is organized as follows. First, in Section 2, we present the basics of graph theory, EBC, and the computation of edge feature representation. Next, the working principles of GNNs and information propagation in the learning stage are described. Subsequently, in Section 3, we present the new GNN-based framework, which forms the core of the contributions claimed in this paper. In Section 4, we evaluate the performance of the proposed approach on synthetic graphs. Next, we demonstrate the performance on a real-world transportation network in Section 4. Finally, the conclusions are presented in Section 5.

## 2 Preliminaries

This section introduces the background on the concepts and terminologies necessary to follow the material presented in this paper. The basics of graph theory are explained along with an introduction to the edge adjacency matrix, which describes the spatial connection between edges in a graph. Then, a brief introduction to the conventional method of computing EBC, including its drawbacks, follows. Next, we briefly introduce edge feature representation in the graph topology. Finally, the basic concepts of Graph Neural Networks (GNNs), including how the information of edges is exchanged and accumulated, are described. The original GNN algorithm [51] proposes message passing on nodes, whereas here, the message passing is on edges. Through the edge adjacency matrix and the edge feature vectors, the GNN learns to predict an approximate edge rank.
### Basics of Graph theory

A graph \(\mathcal{G}\) is defined as \((\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) denotes the set of nodes or vertices of the graph and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) symbolizes the edges [52]. The neighbor set of node \(i\in\mathcal{V}\) is defined as \(\mathcal{N}_{i}:=\{j\in\mathcal{V}:(i,j)\in\mathcal{E}\}\). The graph edges are weighted by \(w_{ij}\) associated with \((i,j)\) for \(i,j\in\mathcal{V}\), where \(w_{ij}>0\) if \((i,j)\in\mathcal{E}\) and \(w_{ij}=0\) otherwise. The vertex adjacency matrix (commonly known as the adjacency matrix) \(\mathcal{A}^{\mathcal{V}}=[a_{ij}^{v}]\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}\) of \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) is defined as [53]: \[a_{ij}^{v}=\begin{cases}0,&\text{if }i=j\text{ or there is NO edge present between }i\text{ and }j\,,\\ w_{ij},&\text{if }i\neq j\text{ and there is one edge present between }i\text{ and }j\,.\end{cases} \tag{1}\] Here, the graph \(\mathcal{G}\) is undirected, so that \((j,i)\in\mathcal{E}\) and \(w_{ij}=w_{ji}\) for all \((i,j)\in\mathcal{E}\). \(|\cdot|\) refers to the cardinality, i.e., the number of elements in the set. For unweighted graphs, all the weight values are 1, i.e., \(w_{ij}=1\;\forall i,\;j\). In the vertex adjacency matrix, non-zero values sparsely appear where an edge exists between two nodes. The edge adjacency matrix \(\mathcal{A}^{\mathcal{E}}=[a_{mn}^{e}]\in\mathbb{R}^{|\mathcal{E}|\times|\mathcal{E}|}\) is determined by the adjacency of edges [54, 55]: \[a_{mn}^{e}=\begin{cases}1,&\text{if edges }m\text{ and }n\text{ are adjacent}\,,\\ 0,&\text{otherwise}\,.\end{cases} \tag{2}\] Figure 1 shows an example of the vertex adjacency matrix and the edge adjacency matrix for the same graph.

Figure 1: Vertex adjacency matrix and edge adjacency matrix of a sample graph network.

### Basics of EBC

Edge ranking depends on the edge's ability to control the information flow (the term information flow is contextual) between other nodes and edges of the graph and is highly correlated with the edge weights. The edge weights greatly influence the shortest paths calculated using the graph. Edge ranking based on this criterion is called edge betweenness centrality (EBC) [1, 29]. The EBC score of an edge will be high if that edge carries many shortest paths, making the information flow more accessible and faster throughout the whole graph. The edges with high betweenness centrality are called 'bridges'. Removing bridges from the graph can be disruptive, and in some cases, one graph can segregate into several smaller isolated graphs. Therefore, it is vital to ensure the safety and functionality of such edges in many engineering application contexts (transportation, power, etc.). For a given graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), the EBC of an edge \(e\) is the sum of the fractions of all-pairs shortest paths that pass through \(e\) and is given by [29]: \[c_{B}(e)=\sum_{s,t\in\mathcal{V}}\frac{\sigma(s,t|e)}{\sigma(s,t)} \tag{3}\] where \(\mathcal{V}\) and \(\mathcal{E}\) are the set of nodes and edges, respectively, \(s\) and \(t\) are the source and terminal nodes while calculating the shortest paths, \(\sigma(s,t)\) is the number of shortest \((s,t)\) paths, and \(\sigma(s,t|e)\) is the number of those paths passing through edge \(e\in\mathcal{E}\).
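To make Eqs. (1)-(3) concrete, the following Python sketch builds the two adjacency matrices and computes the exact EBC for a small weighted graph. It is an illustration only: the four-node graph, its weights, and the variable names are hypothetical, and it relies on NetworkX's built-in `edge_betweenness_centrality` (an implementation of the Brandes-type algorithm discussed next).

```python
import itertools

import networkx as nx
import numpy as np

# Small hypothetical weighted, undirected graph.
G = nx.Graph()
G.add_weighted_edges_from([(1, 2, 1.0), (2, 3, 2.0), (1, 3, 1.5), (3, 4, 1.0)])

# Vertex adjacency matrix A^V, Eq. (1): entry (i, j) holds the weight w_ij.
A_v = nx.to_numpy_array(G, weight="weight")

# Edge adjacency matrix A^E, Eq. (2): two edges are adjacent (entry 1)
# exactly when they share an endpoint; the matrix is binary.
edges = list(G.edges())
A_e = np.zeros((len(edges), len(edges)))
for m, n in itertools.combinations(range(len(edges)), 2):
    if set(edges[m]) & set(edges[n]):
        A_e[m, n] = A_e[n, m] = 1.0

# Exact EBC, Eq. (3); weight="weight" makes the shortest paths use the
# edge weights as distances.
ebc = nx.edge_betweenness_centrality(G, weight="weight")
ranking = sorted(ebc, key=ebc.get, reverse=True)  # edges ranked by EBC
```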
The conventional method to calculate this betweenness centrality is through Brandes's algorithm [1]. This algorithm has a space complexity of \(\mathcal{O}(|\mathcal{V}|+|\mathcal{E}|)\) and time complexity \(\mathcal{O}(|\mathcal{V}||\mathcal{E}|)\) for unweighted networks. For weighted networks, the time complexity increases to \(\mathcal{O}(|\mathcal{V}||\mathcal{E}|+|\mathcal{V}|^{2}\mathrm{log}|\mathcal{V}|)\) [56]. This algorithm is computationally intensive on large-scale networks (examples shown later in Section 4). Additionally, this algorithm is sensitive to small perturbations in the network, such as changes in edge weights or regional node or edge failures. As a result, EBC is recalculated every time there is a change in the graph, which makes its practical implementation in applications such as disaster recovery planning very cumbersome. To address this issue, we pose the estimation of EBC as a learning problem and develop a deep learning-based framework whose time complexity is \(\mathcal{O}(|\mathcal{V}|)\) [57].

### Node and Edge Feature Embeddings

The adjacency matrices represent the connection information between the nodes and edges; however, they do not capture neighborhood information beyond the immediate neighbors. The feature representation for nodes and edges embeds the knowledge of \(k\)-hop neighbors, and hence the information is more exhaustive. Feature representation of the graph components is a way to represent the notion of similarity between graph components. Such embeddings capture the network's topology in a vector format, which is crucial for numerical computations and learning. The most popular method for node embedding is Node2Vec [58]. Node2Vec [58] is a graph embedding algorithm that transforms a graph into a numerical representation. This algorithm generates a feature representation for each node that portrays the whole graph structure, such as node connectivity, weights of the edges, etc. Two similar nodes in the graph will have similar numerical representations in the Node2Vec algorithm. This representation is obtained through second-order biased random walks, and this process is executed in three stages:

1. _First-order random walk:_ A random walk is a graph traversal procedure along the edges of the graph, best understood by imagining the movement of a walker. First-order random walks sample the nodes on the graph along the graph edges depending on the current state. In each step/hop, the walker transitions from the current state to the next state, referred to as a 1-hop transition. In Figure 2(a), the walker is at node \(v\) and the three neighboring nodes are \(u_{1}\), \(u_{2}\), and \(u_{3}\) with the respective edge weights \(w(v,u_{1})\), \(w(v,u_{2})\), and \(w(v,u_{3})\). These weights determine the probability of the walker transitioning to the next node. The transition probability for the first step is given as \[p(u_{i}|v)=\frac{w(u_{i},v)}{\sum_{u_{i}\in\mathcal{N}_{v}}w(u_{i},v)}=\frac{w(u_{i},v)}{\text{weighted degree of node }v};\quad\mathcal{N}_{v}\text{ is the set of neighboring nodes of }v.\] (4) One random walk is generated by performing multiple one-hop transitions; this process is repeated to generate multiple random walks, each transition being a function of the current state only.

2. _Second-order biased walk:_ In the second-order biased walk, the edge weight selection differs from the first-order random walk. A new bias factor term \(\alpha\) is introduced to reweigh the edges. The value of \(\alpha\) depends on the current state, the previous state, and the potential future state, as shown in Figure 2(b).
If the previous and future states are not connected, then \(\alpha=\frac{1}{q}\), where \(q\) is the in-out parameter. If the previous and future states are identical, then \(\alpha=\frac{1}{p}\), where \(p\) is the return parameter. If the two states (the previous state and the future state) are connected but not identical, then \(\alpha=1\). Considering the bias factors, the second-order transition probability is given as:

\[p(u_{i}|v,t)=\frac{\alpha_{pq}(t,u_{i})w(u_{i},v)}{\sum_{u_{i}\in\mathcal{N}_{v}}\alpha_{pq}(t,u_{i})w(u_{i},v)} \tag{5}\]

3. _Node embeddings from random walks:_ Repeated generation of random walks from every node in the graph results in a large corpus of node sequences. The Word2Vec [59] algorithm takes this large corpus as input to generate the node embeddings. Specifically, Node2Vec uses the skip-gram model with negative sampling. The main idea of the skip-gram is to maximize the probability of predicting the correct context node given the center node. The skip-gram process for the node embedding is shown in Figure 3.

From the node embeddings, an edge embedding is obtained using the average operator: the embedding of edge \(e(i,j)\) is \(\frac{f(i)+f(j)}{2}\), where the edge endpoints are nodes \(i\) and \(j\), and the node embeddings of \(i\) and \(j\) are \(f(i)\) and \(f(j)\), respectively. A minimal code sketch of this embedding pipeline is given below.

Figure 3: Illustration of the skip-gram model for node embedding. For a sample random walk of length 7, a sliding window of length 3 is used to prepare the inputs and outputs for training the Word2Vec model. The embedding of the trained Word2Vec model is the node feature embedding.

Figure 2: Conceptual representation of Node2Vec: (a) parameters for the transition probability calculation of the 1st-order random walk, and (b) parameters for the transition probability calculation of the 2nd-order biased walk.

Node2Vec [58] can also be used for edge feature representation. The original implementation is found to be slow and memory inefficient [60]. Hence, a fast and memory-efficient version of Node2Vec called PecanPy (Parallelized, memory efficient and accelerated node2vec in Python) [60, 61] is utilized in this paper. PecanPy makes the Node2Vec implementation efficient on the following three fronts:

1. **Parallelism**: The estimation of the transition probabilities and the random walk generation are independent processes, but they are not parallelized in the original Node2Vec. PecanPy parallelizes the walk generation process, which makes the operation much faster.
2. **Data structure**: The original implementation of Node2Vec uses NetworkX [62] to store graphs, which is inefficient for large-scale computation. PecanPy instead uses the Compressed Sparse Row (CSR) format for sparse graphs - which have sparsity properties similar to the transportation networks addressed in this paper. CSR-formatted graphs are stored more compactly in memory and run faster as they can utilize the cache more efficiently.
3. **Memory**: The original version of Node2Vec pre-processes and stores the 2nd-order transition probabilities, which leads to significant memory usage. PecanPy eliminates the pre-processing stage and computes the transition probabilities whenever they are required, without saving them.

### Details of GNN

Neural network models for graph-structured data are known as GNNs [63, 24, 64, 65]. These models exploit the graph's structure to aggregate the feature information/embeddings of the edges and nodes [66].
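Before the GNN steps are detailed, the embedding pipeline of Section 2.3 referenced above can be made concrete with a minimal sketch; for brevity it uses plain first-order walks (Eq. (4) with unit weights) in place of PecanPy's second-order biased walks (Eq. (5)), and gensim's Word2Vec for the skip-gram step. The toy graph, walk counts, and dimensions are illustrative:

```python
import numpy as np
import networkx as nx
from gensim.models import Word2Vec

G = nx.gnm_random_graph(50, 80, seed=0)  # toy unweighted graph
rng = np.random.default_rng(0)

def random_walk(G, start, length):
    # Plain first-order walk; uniform over neighbors for unit weights.
    walk = [start]
    for _ in range(length - 1):
        nbrs = list(G.neighbors(walk[-1]))
        if not nbrs:
            break
        walk.append(int(rng.choice(nbrs)))
    return [str(v) for v in walk]  # Word2Vec expects token sequences

walks = [random_walk(G, v, 10) for v in G.nodes() for _ in range(20)]

# Skip-gram (sg=1) with negative sampling, as in Node2Vec.
model = Word2Vec(walks, vector_size=64, window=3, sg=1, negative=5,
                 min_count=1, epochs=5)
f = {v: model.wv[str(v)] for v in G.nodes()}

# Edge embedding by the average operator of Section 2.3.
edge_emb = {(i, j): (f[i] + f[j]) / 2 for i, j in G.edges()}
```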
Feature aggregation from the structured pattern of the graph enables the GNN to predict the probability of edge existence or to predict node labels. The graph structure information is assimilated from the adjacency matrix and the feature matrices of the nodes and edges, which form the inputs, and the network is trained using a loss function. Message passing occurs in each GNN layer, where each node aggregates the features of its neighbors. The node feature vector is updated by combining its own feature vector with the aggregated features from its adjacent nodes. In the first layer, the GNN combines the features of the immediate neighbors, and with an increasing number of layers, the depth of assimilation of the neighboring features increases accordingly. Analogously, the edge feature vector is updated with the aggregated features from its adjacent edges, and this procedure repeats in each GNN layer. There are three steps in a GNN, elaborated as follows:

* _Step 1: Message passing:_ Each edge is first represented by a feature vector - a latent-dimensional representation. A popular algorithm to obtain such a representation is Node2Vec [58], previously discussed in Section 2.3. The framework presented in this paper contains a modified version of the message-passing concept - edge features are aggregated and passed to the neighboring edges. In this way, the GNN learns the structural information. An example of the message passing step is shown in Figure 4. While conventional implementations of GNNs perform message passing on the nodes, here such passing is performed on the edges, which is the novelty.

* _Step 2: Aggregation:_ Messages are aggregated after all the messages from the adjacent edges are passed to the edge of interest. Some popular aggregation functions are: \[\mathrm{Sum}=\sum_{j\in\mathcal{N}_{i}}F(h_{j});\quad\mathrm{Mean}=\frac{\sum_{j\in\mathcal{N}_{i}}F(h_{j})}{|\mathcal{N}_{i}|};\quad\mathrm{Max}=\max_{j\in\mathcal{N}_{i}}F(h_{j});\quad\mathrm{Min}=\min_{j\in\mathcal{N}_{i}}F(h_{j}) \tag{6}\] where \(\mathcal{N}_{i}\) is the set of edges neighboring edge \(i\) (the edge of interest). Considering 'AGGREGATE' as the aggregation function (sum, mean, max, or min of the transformed neighboring edge messages), the aggregated message \(\mu\) at layer \(k\) can be expressed as: \[\mu_{i}^{(k)}=\text{AGGREGATE}^{(k)}(\{h_{j}^{(k)}:\;j\in\mathcal{N}_{i}\}) \tag{7}\]

* _Step 3: Update:_ The aggregated messages update the features of the edge of interest in the GNN layer. In this update step, the edge feature vector is combined with the aggregated messages by simple addition or concatenation: \[\text{Addition: }h_{i}^{(k+1)}=\sigma(\Gamma(\Omega(h_{i}^{(k)})+\mu_{i}^{(k)}));\qquad\text{Concatenation: }h_{i}^{(k+1)}=\sigma(\Gamma(\Omega(h_{i}^{(k)})\oplus\mu_{i}^{(k)})) \tag{8}\] where \(\sigma\) is the activation function, \(\Omega\) is a simple multi-layer perceptron (MLP), and \(\Gamma\) is another neural network that projects the added or concatenated vectors to another dimension. In short, the update from the previous layer can be summarized as follows: \[h_{i}^{(k+1)}=\text{COMBINE}^{(k)}(h_{i}^{(k)},\mu_{i}^{(k)}) \tag{9}\]

The output of each GNN layer is forwarded as the input to the next GNN layer. After \(k\) GNN layers/iterations, the edge embedding vector at the final layer captures the edge feature information and the graph structure information of all adjacent edges from 1-hop distance up to \(k\)-hop distance. The edge feature vectors of the 1st layer are obtained using Node2Vec [58] as described in Section 2.3.
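A simplified PyTorch sketch of one such edge-level layer follows; it uses sum aggregation via the edge adjacency matrix (the neighbor transform \(F\) is taken as the identity for brevity) and the additive update of Eq. (8). Shapes and dimensions are illustrative:

```python
import torch
import torch.nn as nn

class EdgeGNNLayer(nn.Module):
    """One message-passing layer over edges: sum-aggregate the features of
    adjacent edges (Eqs. (6)-(7)), then combine with the edge's own
    transformed features by addition (Eq. (8))."""
    def __init__(self, dim):
        super().__init__()
        self.omega = nn.Linear(dim, dim)  # Ω: transform of own features
        self.gamma = nn.Linear(dim, dim)  # Γ: projection after combining
        self.act = nn.LeakyReLU()

    def forward(self, H, A_edge):
        # H: (|E|, dim) edge features; A_edge: (|E|, |E|) edge adjacency.
        mu = A_edge @ H                   # aggregated messages μ (sum)
        return self.act(self.gamma(self.omega(H) + mu))

# Toy usage.
H = torch.randn(5, 16)
A = (torch.rand(5, 5) > 0.5).float()
out = EdgeGNNLayer(16)(H, A)              # (5, 16) updated edge features
```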
## 3 Proposed GNN Framework

Building on the GNN concepts presented previously, the proposed GNN framework for estimating an approximate edge ranking is presented next. The main reason for choosing a GNN over a GCN (Graph Convolutional Network) [64] is that, in a GCN, the aggregated feature vector of a particular edge is dominated by the feature vector of the edge itself, since a GCN adds self-loops to the graph. This section introduces the algorithm along with the proposed architecture. A description of the modified adjacency matrices and their use in edge betweenness ranking is given, followed by the details of the edge feature aggregation process in the GNN module. Finally, the details of the ranking loss function are presented.

### Algorithm and the GNN Architecture

Figure 5 shows the overall process of calculating the approximate EBC. This framework takes the graph structure--specifically the edge adjacency matrix--and the feature matrix as inputs to estimate the EBC ranking vector depicting the importance of each edge in the graph structure. The GNN module is at the core of this procedure; its inputs are the edge feature matrix and the two variants of the edge adjacency matrix. Starting with initial weights in the GNN module, the EBC ranking vector of the model is calculated by backpropagating the errors through the GNN layers and updating the weights iteratively.

Figure 4: Schematic diagram of the message passing of the GNN: (a) a sample graph with 4 nodes and 5 edges, where each edge has its own embedding vector, shown as \(h_{i}\) for the \(i\)th edge, \(i\in\{a,b,c,d,e\}\); (b) message passing procedure of edge \(d\) using the edge vectors of the neighboring edges \(b\), \(c\), and \(e\), transforming them and finally "passing" them to the edge of interest. This process is repeated, in parallel, for all the edges in the graph. The transformation function can be a simple neural network (RNN or MLP) or an affine transform, \(F(h_{i})=\mathbf{W}_{i}h_{i}+b_{i}\).

Figure 5: Proposed framework for approximate edge betweenness centrality ranking

#### 3.1.1 Edge Adjacency matrix

The edge adjacency matrix is not unique to a graph structure. For instance, a pair of non-isomorphic graphs - the three-point star graph \(S_{3}\) and the cycle graph on three vertices \(C_{3}\) - have identical edge adjacency matrices, as shown in Figure 7. Hence, we introduce two variants of this matrix: the modified edge adjacency matrix based on node degree, \(\tilde{\mathcal{A}}^{\mathcal{E}}\), and the modified edge adjacency matrix based on edge weight, \(\hat{\mathcal{A}}^{\mathcal{E}}\). The modified edge adjacency matrices are obtained from the edge adjacency matrix using the functions \(\psi_{d}\) and \(\psi_{w}\), respectively - shown in detail in Algorithm 1, lines 13-32, with a corresponding example in Figure 6. The edge weights of edges \(a\), \(b\), \(c\), \(d\), and \(e\) are \(\Omega_{a}\), \(\Omega_{b}\), \(\Omega_{c}\), \(\Omega_{d}\), and \(\Omega_{e}\), respectively. The matrix \(\tilde{\mathcal{A}}^{\mathcal{E}}\) is unique to each graph; \(\hat{\mathcal{A}}^{\mathcal{E}}\) retains features similar to the original edge adjacency matrix and is non-unique to graph structures.
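A minimal numpy rendering of the two modifications (mirroring \(\psi_{d}\) and \(\psi_{w}\) of Algorithm 1 below) is given here; `shared_degree` is an assumed precomputed matrix holding, for each pair of adjacent edges, the degree of their common node (0 otherwise), and `w` is the vector of edge weights \(\Omega\), assumed positive:

```python
import numpy as np

def psi_d(A_edge, shared_degree):
    """Degree-based modification: divide entry (i, j) by the degree of the
    node shared by edges i and j; pairs without a shared node stay 0."""
    return np.divide(A_edge, shared_degree,
                     out=np.zeros_like(A_edge, dtype=float),
                     where=shared_degree > 0)

def psi_w(A_edge, w):
    """Weight-based modification: divide entry (i, j) by the average
    weight of edges i and j."""
    avg_w = (w[:, None] + w[None, :]) / 2.0
    return A_edge / avg_w
```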
Figure 6: Modified edge adjacency matrices obtained from the edge adjacency matrix of a sample graph network

Figure 7: Non-uniqueness of the edge adjacency matrix and the corresponding modifications proposed in the new framework

#### 3.1.2 GNN Module

The pseudo-code for the GNN framework and the GNN architecture are shown in Algorithm 1 and Figure 8, respectively. The initial feature matrix, denoted as \(\mathcal{H}^{0}\), is obtained from the edge embeddings as discussed in Section 2.3. The features of the \(k\)-hop neighbors of an edge are aggregated at the \(k\)-th layer; a simple summation of the edge feature vectors is used here for aggregation. At each layer, the feature matrix from the previous layer is multiplied with the modified adjacency matrices, i.e., \(\tilde{\mathcal{A}}^{\mathcal{E}}\) and \(\hat{\mathcal{A}}^{\mathcal{E}}\). Then, for each edge, the features of the adjacent edges are summed, as shown in Figure 8 and Algorithm 1, lines 5-6, with the Leaky-ReLU activation function. The choice of this activation function is not arbitrary and was the outcome of an extensive exercise with different activation functions (details omitted here for the sake of brevity). Subsequently, the aggregated edge features from each GNN layer are mapped by a Multilayer Perceptron (MLP) unit to a vector in \(\mathbb{R}^{|\mathcal{E}|}\), each entry of which corresponds to an edge of the network, as shown in Figure 9 and Algorithm 1, lines 7-8. During the training phase, the MLP learns to predict a single score per edge based on the input edge features and the graph connectivity. MLP units are implemented in all layers to output the scores, which are then summed separately as \(\tilde{\mathcal{S}}\) and \(\hat{\mathcal{S}}\) for the modified adjacency matrices \(\tilde{\mathcal{A}}^{\mathcal{E}}\) and \(\hat{\mathcal{A}}^{\mathcal{E}}\), respectively. The MLP unit comprises three fully connected layers with the hyperbolic tangent as the nonlinearity. The two scores \(\tilde{\mathcal{S}}\) and \(\hat{\mathcal{S}}\) are multiplied to obtain the final score for each edge, as shown in Algorithm 1, line 12. In this architecture, the weights of all the hidden units are initialized using Xavier initialization [67] - a standard weight initialization technique that keeps the variance of the activations approximately equal across layers, which mitigates the exploding and vanishing gradient problems.

Figure 8: Proposed Graph Neural Network (GNN) architecture. This module takes the edge adjacency matrix and the edge feature/embedding matrix obtained from Node2Vec/PecanPy as inputs and calculates the edge importance ranking.
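A minimal sketch of such a scoring head (the hidden width is illustrative):

```python
import torch.nn as nn

class ScoreMLP(nn.Module):
    """Per-edge scoring head: three fully connected layers with tanh
    nonlinearities, mapping a d-dimensional edge feature to one scalar."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, H):                 # H: (|E|, dim)
        return self.net(H).squeeze(-1)    # (|E|,) one score per edge
```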
```
Input:  number of edges |E|, edge weight list Ω, unweighted edge adjacency
        matrix A^E, feature matrix H^0, GNN depth K, GNN weight matrices W^(k)
Output: edge betweenness centrality score vector S_(EdgeBet)
 1: Ã^E ← ψ_d(A^E)                      ▷ ψ_d modifies A^E based on node degree
 2: Â^E ← ψ_w(A^E)                      ▷ ψ_w modifies A^E based on edge weight
 3: H̃^(0) ← H^0;  Ĥ^(0) ← H^0
 4: for k = 1, …, K do
 5:     H̃^(k) ← φ(Ã^E H̃^(k−1) W^(k))    ▷ φ is the activation function
 6:     Ĥ^(k) ← φ(Â^E Ĥ^(k−1) W^(k))
 7:     S̃^(k) ← MLP(H̃^(k))              ▷ MLP is the multi-layer perceptron
 8:     Ŝ^(k) ← MLP(Ĥ^(k))
 9: end for
10: S̃ ← Σ_{k=1,…,K} |S̃^(k)|
11: Ŝ ← Σ_{k=1,…,K} |Ŝ^(k)|
12: S_(EdgeBet) ← S̃ ⊙ Ŝ                 ▷ element-wise product of the two scores
13: function ψ_d(A^E)
14:     Ã^E ← zeros(|E|, |E|)
15:     for i = 1, …, |E| do
16:         for j = i+1, …, |E| do
17:             Ã^E(i,j) ← A^E(i,j) / (degree of the node shared by edges i and j)
18:             Ã^E(j,i) ← Ã^E(i,j)
19:         end for
20:     end for
21:     return Ã^E
22: end function
23: function ψ_w(A^E)
24:     Â^E ← zeros(|E|, |E|)
25:     for i = 1, …, |E| do
26:         for j = i+1, …, |E| do
27:             Â^E(i,j) ← A^E(i,j) / ((Ω_i + Ω_j)/2)
28:             Â^E(j,i) ← Â^E(i,j)
29:         end for
30:     end for
31:     return Â^E
32: end function
```
**Algorithm 1** GNN-based edge betweenness ranking algorithm (forward propagation)

### Loss Function

A ranking loss function is used to estimate the loss due to differences between the ranking predicted by the proposed model and the target EBC ranking presented in Section 2.2. Such ranking loss functions have previously been used in recommendation systems to rank or rate products or users [68, 69]. The margin ranking loss function is defined as follows:

\[\mathscr{L}(S_{i}^{\text{(model)}},S_{i}^{\text{(true)}},y)=\max(0,-y\cdot(S_{i}^{\text{(model)}}-S_{i}^{\text{(true)}})+\text{Margin}) \tag{10}\]
\[y=\begin{cases}1,&\text{if }S_{i}^{\text{(model)}}\text{ should be ranked higher than }S_{i}^{\text{(true)}}\\ -1,&\text{if }S_{i}^{\text{(true)}}\text{ should be ranked higher than }S_{i}^{\text{(model)}}\end{cases}\]

where \(S_{i}^{\text{(model)}}\) is the predicted ranking score and \(S_{i}^{\text{(true)}}\) is the EBC score obtained using the conventional method, i.e., Brandes's algorithm [29], as shown in Equation 3. In this study, the margin value is set to 1 to allow some flexibility.
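Equation (10) corresponds to the margin ranking loss shipped with PyTorch; in the pairwise reading used during training (cf. Section 4), \(y=+1\) indicates that the first score of a pair should be ranked above the second according to the target EBC. A minimal sketch with illustrative numbers:

```python
import torch

# PyTorch computes loss = max(0, -y * (s1 - s2) + margin), matching Eq. (10).
loss_fn = torch.nn.MarginRankingLoss(margin=1.0)

s1 = torch.tensor([0.9, 0.2])   # model scores of the first edge in each pair
s2 = torch.tensor([0.4, 0.7])   # model scores of the second edge in each pair
y = torch.tensor([1.0, -1.0])   # +1: first should rank higher; -1: second
loss = loss_fn(s1, s2, y)
```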
## 4 Results

Experimental results for both synthetic and real-world cases are presented in this section. First, details of training the proposed architecture are discussed. Then, the network performance on synthetic graphs and on actual transportation network data is presented. In terms of computing resources, the experiments were conducted on a dedicated computer; the hardware and software information is shown in Table 1. The graph datasets are divided into training (\(\approx\)90%) and testing (\(\approx\)10%) datasets - which is typical in the machine learning literature [70]. The EBC ranking is calculated using the conventional method for all graphs in the training and testing datasets; these rankings are used as target vectors for training the GNN. The test graphs are not used for training; the model learns to map edge features to importance ranking scores via the MLP. Training and testing graphs contain variable numbers of nodes and edges, and the GNN is trained and tested on the same type of synthetic graph. The evaluation metrics used are Kendall's tau rank correlation coefficient and Spearman's rank correlation coefficient (details are provided in Appendix A). The model size is determined by the graph, precisely by the number of edges contained in the largest graph. A model size of 10000 is used here to accommodate a typical medium-sized urban area in the US. While the size of the edge adjacency matrix (input) is fixed at 10000 edges (Figure 8), smaller graphs can also be accommodated by populating only the upper-left portion of this matrix, with the remaining elements of this input matrix set to zeros (zero-padding).

\begin{table}
\begin{tabular}{|l|l|}
\hline
CPU model and speed & Intel Core i9-10940X CPU @ 3.30 GHz \\
Available hard disk & 1 TB \\
Available RAM & 64 GB \\
GPU type and specification & NVIDIA GeForce RTX 3090 – 32 GB \\
Programming & Python 3.7, Matlab 2022a \\
Deep learning framework & PyTorch, Numpy, Scipy, CUDA 11.6 \\
GNN framework & NetworkX, Node2Vec, PecanPy, Gensim \\
\hline
\end{tabular}
\end{table}
Table 1: Hardware and software information for the deep learning framework

Figure 9: Multi-Layer Perceptron (MLP) module

The network is trained using the ADAM (Adaptive Moment Estimation) [71] optimizer, a variant of the stochastic gradient descent (SGD) algorithm commonly used in deep learning. Training was performed with a learning rate of 0.0005 and a dropout ratio of 0.3. The number of epochs used for training is 50, and the number of hidden neurons (embedding dimension) and the number of GNN layers were optimized (Appendix B). The experiments use an embedding dimension of 256 and 5 GNN layers. The edge features are obtained using PecanPy, as discussed in Section 2.3, with feature vectors of length 256 and a BFS-like neighborhood sampling strategy with \(p=1\) and \(q=2\). The calculation of the ranking loss function requires comparisons of edge pair rankings. However, ranking all \(\binom{|\mathcal{E}|}{2}\) possible combinations of edge pairs for \(|\mathcal{E}|\) edges is cumbersome, so edge pairs are randomly sampled, with the number of sampled pairs set to 20 times the number of edges - which is a common practice [46].
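A minimal sketch of one training step with this pair-sampling scheme; `model`, `H0`, `A_deg`, `A_wgt`, and `true_ebc` are assumed placeholders for the GNN module, its inputs, and the Brandes-computed target scores:

```python
import torch

def sample_pairs(num_edges, gen, factor=20):
    """Randomly sample factor * |E| edge index pairs for the ranking loss."""
    n_pairs = factor * num_edges
    i = torch.randint(num_edges, (n_pairs,), generator=gen)
    j = torch.randint(num_edges, (n_pairs,), generator=gen)
    return i, j

gen = torch.Generator().manual_seed(0)
opt = torch.optim.Adam(model.parameters(), lr=5e-4)   # lr = 0.0005

scores = model(H0, A_deg, A_wgt)           # predicted per-edge scores, (|E|,)
i, j = sample_pairs(scores.numel(), gen)
y = torch.sign(true_ebc[i] - true_ebc[j])  # target ordering of each pair
keep = y != 0                              # drop tied pairs
loss = torch.nn.MarginRankingLoss(margin=1.0)(scores[i][keep],
                                              scores[j][keep], y[keep])
opt.zero_grad(); loss.backward(); opt.step()
```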
### Performance on synthetic networks

First, three types of synthetic random graphs are used to evaluate the performance of the proposed method: (a) Erdős–Rényi variant-I [72], i.e., \(G_{np}\), an undirected graph containing a fixed number \(n\) of nodes in which each edge \((u,v)\) appears independently with probability \(p\); (b) Erdős–Rényi variant-II [72], i.e., \(G_{nm}\), an undirected graph containing a fixed number \(n\) of nodes and \(m\) edges, where the edges connect uniformly sampled random node pairs. Unlike in variant-I, the number of edges in variant-II is fixed; both variants have a fixed number of nodes. (c) The Watts–Strogatz model [73], a random graph generation model that produces graphs with small-world properties such as local clustering and short average path lengths. Small-world random graphs have been used in applications such as electric power grids, networks of brain neurons, airport networks, etc. [74].

#### 4.1.1 Graph generation parameters

The synthetic graph generation parameters are shown in Table 2. Here, \(\mathcal{U}\{a,b\}\) and \(\mathcal{U}[a,b]\) represent discrete and continuous uniform distributions between \(a\) and \(b\), respectively. With the graph generation parameters chosen for this case study, we generate 1000 training graphs, 100 validation graphs, and 100 test graphs for each case. Table 3 summarizes the statistics of the generated synthetic graphs. The average shortest path length is defined as:

\[a=\sum_{s,t\in\mathcal{V}}\frac{d(s,t)}{|\mathcal{V}|\cdot(|\mathcal{V}|-1)} \tag{11}\]

where \(\mathcal{V}\) is the set of nodes of the weighted/unweighted graph \(\mathcal{G}\) with \(|\mathcal{V}|\) nodes (cardinality) and \(d(s,t)\) is the shortest-path distance from \(s\) to \(t\). The clustering coefficient measures how strongly the nodes cluster together and is based on the geometric average of the subgraph edge weights [75]:

\[c_{s}=\frac{\sum_{tu}(\hat{w}_{st}\hat{w}_{tu}\hat{w}_{us})^{1/3}}{\deg(s)(\deg(s)-1)};\qquad C=\frac{1}{|\mathcal{V}|}\sum_{s\in\mathcal{V}}c_{s} \tag{12}\]

where \(c_{s}\) is the clustering coefficient of node \(s\), \(C\) is the average clustering coefficient of the graph, \(\deg(s)\) is the degree of node \(s\), and nodes \(s\), \(t\), and \(u\) form triangles in the graph. The edge weights \(\hat{w}_{st}\) are normalized by the maximum weight in the network, \(\hat{w}_{st}=w_{st}/\max(w)\).

\begin{table}
\begin{tabular}{|c|c|c|}
\hline
Synthetic graph type & \multicolumn{2}{c|}{Generation parameters} \\
\hline
\multirow{3}{*}{Erdős–Rényi-I (GNP)} & Number of nodes & \(\mathcal{U}\{1000,5000\}\) \\
 & Probability of edge creation & 1.2/(number of nodes \(-1\)) \\
 & Edge weights & \(\mathcal{U}[0,100]\) \\
\hline
\multirow{3}{*}{Erdős–Rényi-II (GNM)} & Number of nodes & \(\mathcal{U}\{1000,5000\}\) \\
 & Number of edges & \(\mathcal{U}[1.4,1.6]\times\) number of nodes \\
 & Edge weights & \(\mathcal{U}[0,100]\) \\
\hline
\multirow{4}{*}{Watts–Strogatz} & Number of nodes & \(\mathcal{U}\{2000,4000\}\) \\
 & Mean degree & 4 \\
 & Probability of edge rewiring & 0.5 \\
 & Edge weights & \(\mathcal{U}[0,100]\) \\
\hline
\end{tabular}
\end{table}
Table 2: Generation parameters of the synthetic graphs

#### 4.1.2 Training time of the model and speed for inference (latency)

Figure 10(a) shows the evolution of the training margin ranking loss over the epochs.
The ranking scores (Kendall's tau and Spearman's correlation) for the training and validation data are also shown in Figure 10(b). Given the asymptotic behavior of the evaluation metrics, the training was stopped at 50 epochs. Each epoch took approximately 175 seconds to train on average - hence the total training time for the graphs containing 10000 edges was \(\approx 2.43\) hours. The inference speed combines the latencies associated with the GNN and PecanPy. While the computational overhead of the inference part of the GNN is of the order of milliseconds, PecanPy has a relatively large computational overhead. Figure 11 compares, in a semi-log plot, the results of the proposed GNN-based approach and the conventional method of Brandes [1]. Beyond a graph size of about 2000 nodes, the proposed GNN method outperforms the conventional method. For a graph with 20000 nodes, the traditional process takes approximately 5033 seconds to compute, while the GNN takes a fraction of that time, 197 seconds. These results underscore the advantage of the proposed GNN method for large graphs compared to the conventional method.

#### 4.1.3 Ranking scores

The evaluation metrics (Kendall tau [76] and Spearman correlation [77]) are calculated for both the training and testing datasets for all three types of synthetic graphs and are shown in Tables 4 and 5, respectively. Details about these evaluation metrics are discussed in Appendix A. The ranking scores are sufficiently high, indicating that the proposed framework predicts the target rankings very well. Detailed ranking score statistics in the form of Box and Whisker plots are also shown in Figure 12. The standard deviations of the estimated scores are minimal, denoting the robustness of the proposed method even though the graph sizes differ substantially in the testing dataset.

\begin{table}
\begin{tabular}{|c|c c c|}
\hline
Synthetic graph types & Erdős–Rényi-I & Erdős–Rényi-II & Small-world network \\
 & GNP random & GNM random & Watts–Strogatz model \\
\hline
Number of nodes (range) & Up to 5000 & Up to 5000 & Up to 5000 \\
Number of edges (range) & Up to 10000 & Up to 10000 & Up to 10000 \\
Avg. shortest path lengths & \(277.5\pm 15.7\), 307.5 & \(290.6\pm 20.6\), 338.6 & \(249.4\pm 7.4\), 266.6 \\
Avg. clustering coeff. (\(\mu\pm\sigma\)) & \(9.04\times 10^{-4}\pm 7.91\times 10^{-4}\) & \(1.02\times 10^{-4}\pm 7.97\times 10^{-4}\) & \(0.0681\pm 0.004\) \\
Average degree of nodes (\(\mu\pm\sigma\)) & \(3.198\pm 0.031\) & \(3.182\pm 0.102\) & \(4\pm 0\) \\
\hline
\end{tabular}
\end{table}
Table 3: Statistics of the synthetic graphs used for training and testing the proposed GNN framework

Figure 10: Evolution in the learning phase - (a) training loss over epochs and (b) evaluation metrics (for training and validation data) over epochs.

#### 4.1.4 Comparison with other variants of the edge adjacency matrix

Two variants of the edge adjacency matrix are used in the proposed framework, as shown in Figure 8. This section compares the proposed framework with cases where only one variant of the edge adjacency matrix is used, for Erdős–Rényi-II type graph networks. Table 6 shows that combining \(\tilde{\mathcal{A}}^{\mathcal{E}}\) and \(\hat{\mathcal{A}}^{\mathcal{E}}\) produces better outcomes (scores) than using the edge adjacency matrix, or either modified edge adjacency matrix, individually.
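The three generators and the statistics of Eqs. (11)-(12) are available in networkx; a minimal sketch follows (graph size reduced so it runs quickly, other parameters following Table 2). networkx's weighted clustering coefficient implements the geometric-average formula of Eq. (12):

```python
import random
import networkx as nx

n = 500  # reduced size; Table 2 uses 1000-5000 nodes
graphs = {
    "GNP": nx.gnp_random_graph(n, 1.2 / (n - 1), seed=0),
    "GNM": nx.gnm_random_graph(n, int(1.5 * n), seed=0),
    "WS":  nx.watts_strogatz_graph(n, k=4, p=0.5, seed=0),
}
rnd = random.Random(0)
for name, G in graphs.items():
    for u, v in G.edges():
        G[u][v]["weight"] = rnd.uniform(0, 100)   # weights ~ U[0, 100]
    # Eq. (11) requires a connected graph; keep the largest component.
    H = G.subgraph(max(nx.connected_components(G), key=len))
    a = nx.average_shortest_path_length(H, weight="weight")   # Eq. (11)
    C = nx.average_clustering(G, weight="weight")             # Eq. (12)
    print(f"{name}: avg shortest path {a:.1f}, avg clustering {C:.2e}")
```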
\begin{table}
\begin{tabular}{|c|c c|}
\hline
Type of adjacency matrix used as input & Kendall tau & Spearman's rho \\
\hline
\(\mathcal{A}^{\mathcal{E}}\) & 0.329 & 0.476 \\
\(\tilde{\mathcal{A}}^{\mathcal{E}}\) & 0.339 & 0.489 \\
\(\hat{\mathcal{A}}^{\mathcal{E}}\) & 0.745 & 0.908 \\
Both \(\tilde{\mathcal{A}}^{\mathcal{E}}\) and \(\hat{\mathcal{A}}^{\mathcal{E}}\) & 0.759 & 0.916 \\
\hline
\end{tabular}
\end{table}
Table 6: Ranking scores in GNM for different adjacency matrices (testing data)

Figure 11: Comparison of edge-ranking computing time between the conventional method and the proposed GNN-based approach

\begin{table}
\begin{tabular}{|c|c c c|}
\hline
Synthetic graph types & Erdős–Rényi-I & Erdős–Rényi-II & Small-world network \\
 & GNP random & GNM random & Watts–Strogatz model \\
\hline
Kendall tau score (\(\mu\pm\sigma\)) & \(0.791\pm 0.004\) & \(0.786\pm 0.009\) & \(0.795\pm 0.005\) \\
Spearman score (\(\mu\pm\sigma\)) & \(0.936\pm 0.005\) & \(0.934\pm 0.006\) & \(0.938\pm 0.003\) \\
\hline
\end{tabular}
\end{table}
Table 4: Ranking scores on synthetic graphs (training data)

### Application to the Minnesota transportation network

We validated the GNN-based framework on the transportation network of the state of Minnesota, USA. The network information is obtained from the network repository [78] and contains 2640 nodes (road junctions) and 3302 edges (streets). This network offers a good balance between size and computational overhead for demonstrating the performance aspects of the proposed framework. Figure 13 shows the EBC scores for the network. In this figure, the streets highlighted in red denote the most critical roads, i.e., bridges, in the graph as determined by the EBC measure - any modification to these streets (change in edge weights, addition or deletion of roads and junctions) will impact the network significantly. This study simulates the change in edge importance ranking due to a dynamic change in the parameters of the network - such as new road construction or inoperative roads, or changes in other factors such as travel time, travel distance, or traffic flow.

Figure 12: Training and testing ranking score distributions for the synthetic graphs; the Box and Whisker plots show the median, the lower and upper quartiles, any outliers (calculated using the interquartile range), and the minimum and maximum values that are not outliers - (a) Kendall tau score on training data, (b) Kendall tau score on testing data, (c) Spearman correlation coefficient on training data, (d) Spearman correlation coefficient on testing data.

Figure 13: Edge betweenness centrality scores for the Minnesota state transportation network

The experiments consider two cases: (a) a change in edge weights, say arising from varying traffic volumes, and (b) a change in edge weights coupled with a permanent change (deletion or addition) of edges, say due to different traffic volumes combined with some inoperative roads or the addition of new roads. In both cases, the edge weights are determined from the Euclidean distance (2-norm) between the node coordinates, i.e., the road segment length. These cases simulate post-catastrophic events such as a major earthquake or flood: the first case corresponds to pre-event conditions or minor events where all the streets are still operating but their properties have changed, while the latter may result from a significant event where some segments are taken out of service and the remaining ones are functional with significantly modified properties.
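A minimal sketch of this weighting and perturbation step; the node attribute names `x` and `y` for the junction coordinates are illustrative assumptions:

```python
import numpy as np

def set_euclidean_weights(G):
    """Edge weight = 2-norm distance between the endpoint coordinates."""
    for u, v in G.edges():
        pu = np.array([G.nodes[u]["x"], G.nodes[u]["y"]])
        pv = np.array([G.nodes[v]["x"], G.nodes[v]["y"]])
        G[u][v]["weight"] = float(np.linalg.norm(pu - pv))

def perturb_weights(G, rng, low=0.8, high=1.2):
    """Multiply each weight by r ~ U[low, high] to emulate changed travel
    conditions; rng is, e.g., np.random.default_rng(0)."""
    for u, v in G.edges():
        G[u][v]["weight"] *= rng.uniform(low, high)
```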
#### 4.2.1 Case I: Change in edge weights

For this case, the edge weights are randomly altered according to \(r\times w_{i}\), where \(w_{i}\) is the weight of edge \(i\) and the values of \(r\) are sampled from the continuous uniform distribution \(\mathcal{U}[0.8,1.2]\). The numbers of nodes and edges of the graphs remain unchanged. The graph statistics are shown in Table 7. From this graph generation model, we generate 1000 training graphs and 200 testing graphs; the scores are shown in Table 8 and Figure 14. As for the synthetic graphs, the high scores and relatively small standard deviations indicate that the model is fairly robust.

#### 4.2.2 Case II: Change in edge weights with addition and deletion of edges

For this case, in addition to the random alteration of the edge weights as in Case I, some edges (streets) are deleted from the network. This simulation also includes the addition of new roads. The graph statistics are shown in Table 9. After training with 1000 training graphs, the score is evaluated on 200 test graphs as well as on the training data, and is shown in Table 10 and Figure 15. Similar to Case I, the standard deviation of the ranking scores is relatively small, indicating the proposed method's robustness.

## 5 Conclusions

This paper presented a GNN-based framework that provides a fast, learned alternative to exact edge betweenness computation for finding important graph components in the field of network systems. From the wide range of graphs used here to validate the approach, we can conclude that there is significant promise for using this approach towards maintenance, recovery time, and resilience estimation for a wide range of networked infrastructure systems.

## CRediT authorship contribution statement

**Debasish Jana**: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data curation, Writing - original draft, Visualization. **Sven Malama**: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data curation, Writing - review & editing, Visualization. **Sriram Narasimhan**: Conceptualization, Methodology, Analysis, Investigation, Resources, Writing - review & editing, Supervision, Project Administration, Funding acquisition. **Ertugrul Taciroglu**: Conceptualization, Investigation, Resources, Writing - review & editing, Supervision, Project Administration, Funding acquisition.

## Declaration of Competing Interest

The authors declare that there are no conflicts of interest regarding financial or personal relationships that could influence the work reported in this paper.

## Funding Information

We gratefully acknowledge funding support provided by the City of Los Angeles's Bureau of Engineering (BOE).

## Appendix A Evaluation Metrics

### Kendall tau rank correlation coefficient

The Kendall tau rank correlation coefficient [76] is a popular ranking measure. Consider a set of observations \((x_{1},y_{1}),\cdots,(x_{n},y_{n})\) of the joint random variables \(X\) and \(Y\). Any pair of observations \((x_{i},y_{i})\) and \((x_{j},y_{j})\), where \(i<j\), is said to be concordant if the sort order of the elements agrees across the two pairs, and discordant otherwise. For two given lists with \(n\) items each, if the numbers of concordant and discordant pairs are \(N_{c}\) and \(N_{d}\), respectively, then Kendall's ranking coefficient is calculated as

\[\tau=\frac{\text{number of concordant pairs}-\text{number of discordant pairs}}{\text{number of ways to choose two items from }n\text{ items}}=\frac{N_{c}-N_{d}}{\frac{n(n-1)}{2}}. \tag{13}\]

Normalizing by the number of pair combinations constrains the coefficient to the range \(-1\leq\tau\leq 1\). The value of \(\tau\) is 1, -1, or 0 if all the pairs are concordant, all are discordant, or the lists are uncorrelated, respectively.
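Both coefficients used throughout this paper are available in scipy (Spearman's \(\rho_{s}\) is defined in the next subsection); a minimal sketch with illustrative scores:

```python
from scipy.stats import kendalltau, spearmanr

s_true = [0.9, 0.1, 0.4, 0.7]    # EBC scores from Brandes's algorithm
s_pred = [0.8, 0.2, 0.3, 0.9]    # scores predicted by the GNN

tau, _ = kendalltau(s_true, s_pred)   # Eq. (13)
rho, _ = spearmanr(s_true, s_pred)    # Spearman's rho
print(tau, rho)
```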
### Spearman's rank correlation coefficient

Spearman's rank correlation coefficient [77] is the covariance of two rank variables divided by the product of their standard deviations. With \(n\) observations, the raw scores of variables \(x_{i}\) and \(y_{i}\) are transformed to the ranks \(R(x_{i})\) and \(R(y_{i})\) for the joint random variables \(X\) and \(Y\). Spearman's rank correlation coefficient \(\rho_{s}\) is then expressed as follows:

\[\rho_{s}=\frac{\text{cov}(R(X),R(Y))}{\sigma_{R(X)}\sigma_{R(Y)}} \tag{14}\]

where \(\text{cov}(R(X),R(Y))\) is the covariance of the rank variables, and \(\sigma_{R(X)}\) and \(\sigma_{R(Y)}\) are the standard deviations of the rank variables. If all \(n\) ranks are distinct integers, Spearman's rank correlation can be simplified as:

\[\rho_{s}=1-\frac{6\sum d_{i}^{2}}{n(n^{2}-1)} \tag{15}\]

where \(d_{i}=R(x_{i})-R(y_{i})\) is the difference between the two ranks of each observation, and \(n\) is the number of observations. The range of Spearman's coefficient is \(-1\leq\rho_{s}\leq 1\). As for the Kendall tau, \(\rho_{s}=1\), \(-1\), and \(0\) denote perfectly positive, perfectly negative, and no correlation, respectively. While the Kendall tau and Spearman's rank correlation coefficients lead to similar results, in practice Spearman's \(\rho_{s}\) is more popular as a ranking measure; however, Spearman's \(\rho_{s}\) is more sensitive to errors and discrepancies in the data. On the other hand, if the data is Gaussian distributed, the Kendall tau has less gross error sensitivity and less asymptotic variance than Spearman's \(\rho_{s}\). Therefore, both error metrics are reported for all synthetic and experimental simulations in this paper.

## Appendix B Ablation Study

We performed a suite of experiments on synthetic graphs to study the effect of hyper-parameters on the GNN model's performance. The main hyper-parameters of the model are the number of GNN layers and the embedding dimension; we vary these hyper-parameters and observe the performance of the model, using Erdős–Rényi variant-I (GNP random) graphs for this study.

### Varying the number of layers

The number of GNN layers in the model influences the amount of information any given edge can accumulate from its neighboring edges. With an increasing number of GNN layers, the edges have access to information from multi-hop adjacent edges. In this study, we vary the number of GNN layers from 1 to 5, keeping the embedding dimension fixed (256), and present the model performance in Figure 16. Both evaluation metrics, i.e., the Kendall tau and Spearman's correlation coefficient, show that models with small numbers of GNN layers perform poorly, as the feature aggregation reach of each edge is limited. Increasing the number of GNN layers yields better ranking performance. Therefore, we fix the number of GNN layers at 5 for all the numerical and experimental studies; increasing this number further comes at the cost of higher training time with only a marginal improvement in accuracy.

### Varying embedding dimensions

For any neural network (shallow or deep), the embedding dimension (number of neurons in the hidden layers) determines the number of learnable model parameters. Under-parameterized models (low embedding dimension) cannot approximate complex functions, whereas over-parameterized models (high embedding dimension) generalize poorly.
In this experiment, we change the embedding dimension to 32, 64, 128, 256, and 512, with five GNN layers (as obtained from Section B.1). We evaluate the performance of all these trials and present it in Figure 17. The results show that an embedding dimension of 256 is optimal - performance is lower for both smaller and larger embedding dimensions.
2302.06879
Heavy baryons in the Chiral Quark-Soliton Model
We review applications of the Chiral Quark Soliton Model to heavy baryons and to doubly heavy tetraquarks.
Michal Praszalowicz
2023-02-14T08:01:20Z
http://arxiv.org/abs/2302.06879v1
# Heavy baryons in the Chiral Quark-Soliton Model

###### Abstract

We review applications of the Chiral Quark Soliton Model to heavy baryons and to doubly heavy tetraquarks.

## 1 Introduction

The Chiral Quark Soliton Model (\(\chi\)QSM) was initially designed to describe the spectra of light baryons [1]. Nevertheless, the possibility of applying it to heavy baryons (with one heavy quark) was already discussed by D.I. Diakonov in 2010 [2]. Six years later the present author with collaborators [3] revived the idea of Ref. [2], extending it to putative exotic states [4, 5, 6], negative parity excited baryons [7], and even to doubly heavy tetraquarks [8, 9]. The aim of the present paper is to briefly summarize these developments.

The \(\chi\)QSM is based on Witten's argument [10] that in the limit \(N_{\rm val}=N_{\rm c}\to\infty\) relativistic valence quarks generate chiral mean fields represented by a distortion of the Dirac sea, which in turn interacts with the valence quarks and modifies the sea, until a stable configuration is reached. This configuration is called a _Chiral Quark Soliton_ (\(\chi\)QS). The \(\chi\)QS corresponds to the solution of the Dirac equation for the constituent quarks, with a fully occupied Dirac sea; at this stage the model is therefore fully relativistic. For large \(N_{\rm c}\) the \(\chi\)QS is heavy, and the quantization of the zero modes of the underlying _hedgehog_ symmetry leads to the correct light baryon spectrum (see _e.g._ [11]). The effective collective Hamiltonian represents a non-relativistic rigid rotation of the \(\chi\)QS in the SU(3) group space, and is characterized by the soliton mass \(M_{\rm sol}\), two moments of inertia \(I_{1,2}\) and parameters describing chiral symmetry breaking due to the nonzero strange quark mass. The collective baryon wave function is given in terms of the Wigner matrix \(D_{B,S}^{(\mathcal{R})*}\), where \(B=(Y,T,T_{3})\) corresponds to the SU(3) quantum numbers of the baryon in representation \({\cal R}\), and \(S=(Y^{\prime},T^{\prime},T^{\prime}_{3})\) is related to the soliton spin \(J\). Here \(Y^{\prime}=N_{\rm val}/3\) selects the allowed SU(3) representations. For light ground state baryons \({\cal R}={\bf 8}\) or \({\bf 10}\) with \(T^{\prime}=J\) and \(T^{\prime}_{3}=-J_{3}\), where \(T^{\prime}=1/2\) for the octet and \(3/2\) for the decuplet.

It was first observed in [2] that removing one valence quark, leading to \(N_{\rm val}=N_{\rm c}-1\), hardly changes the chiral mean fields in the limit \(N_{c}\to\infty\); however, the quantization rule related to \(S\) selects \({\cal R}=\overline{\bf 3}\) with \(J=0\) or \({\bf 6}\) with \(J=1\), see Fig. 1. Assuming that a heavy baryon consists of such a soliton and a heavy quark, one reproduces the quark-model SU(3) pattern of ground state heavy baryons. In what follows we shall explore the resulting phenomenology.

## 2 Positive parity heavy baryons

States in the multiplets of Fig. 1 are degenerate in the SU(3) symmetry limit. The collective Hamiltonian has to be supplemented by the perturbation:

\[H_{\rm sb}=\alpha\,D_{88}^{(8)}+\beta\,\hat{Y}+\frac{\gamma}{\sqrt{3}}\sum_{i=1}^{3}D_{8i}^{(8)}\;\hat{J}_{i}, \tag{1}\]

where \(\alpha\), \(\beta\), and \(\gamma\) are proportional to \(m_{s}-m_{u,d}\) and are given in terms of the moments of inertia and the pion-nucleon sigma term; see Ref. [3] for their explicit form. Since we know the collective wave functions, it is rather straightforward to compute the mass splittings in the first order of the perturbative expansion.

Figure 1: Rotational band of the \(\chi\)QS with one valence quark stripped off. The soliton spin is related to the isospin \(T^{\prime}\) of states on the quantization line \(Y^{\prime}=2/3\) (green thick horizontal line). On the right hand side we display the particle names used in the present paper. Figure from Ref. [6].
Since we know the collective wave functions, it is rather straightforward to compute the mass splittings in the first order of the perturbative expan Figure 1: Rotational band of the \(\chi\)QS with one valence quark stripped off. Soliton spin is related to the isospin \(T^{\prime}\) of states on the quantization line \(Y^{\prime}=2/3\) (green thick horizontal line). On the right hand side we display particle names used in the present paper. Figure from Ref. [6]. sion. The result reads \[M^{Q}_{\overline{\bf 3},J=0} = m_{Q}+M_{\rm sol}+\frac{1}{2I_{2}}+\delta_{\overline{\bf 3}}Y,\] \[M^{Q}_{{\bf 6},J=1} = M^{Q}_{\overline{\bf 3}}+\frac{1}{I_{1}}+\delta_{\bf 6}Y, \tag{2}\] where \(\delta_{\overline{\bf 3}}\) and \(\delta_{\bf 6}\) are known functions of \(\alpha\), \(\beta\) and \(\gamma\)[3]. Since the soliton in \({\bf 6}\) is quantized as spin \(J=1\), we have to add spin-spin interaction between the heavy quark and the soliton, leading to the hyperfine splitting \[\delta^{\rm(h.f.)}_{6}=\!\frac{\varkappa}{m_{Q}}\left\{\begin{array}{ll}-2/ 3&\mbox{for}\quad s=1/2\\ \\ +1/3&\mbox{for}\quad s=3/2\end{array}\right. \tag{3}\] where \(s\) stands for heavy baryon spin and \(\varkappa\) is a new free parameter. Formulae (2) and (3) imply Gell-Mann-Okubo equal spacing mass relations and one relation allowing to compute \(\Omega^{*}_{Q}(s=3/2)\) mass: \[M_{\Omega^{*}_{Q}}=2M_{\Xi^{\prime}_{Q}}+M_{\Sigma^{*}_{Q}}-2M_{\Sigma_{Q}}. \tag{4}\] Equation (4) yields \((2764.5\pm 3.1)\) MeV for \(M_{\Omega^{*}_{c}}\), which is 1.4 MeV below the experiment, and predicts [3] \[M_{\Omega^{*}_{b}}=6076.8\pm 2.25\;{\rm MeV}. \tag{5}\] In order to compute masses in a model independent way (in a sense of [12]) one can try to extract the parameters from the light baryon spectra [3] or entirely from the heavy quark sector [6]. In both cases the description of the data is very good, although in the first case some modification of the parameters proportional to \(N_{\rm val}\) is required [3] in rough agreement with model calculations [13, 14]. The model allows to compute decay widths \(B_{1}\to B_{2}+\varphi\) in no recoil approximation [5]. The decay operator can be computed via the Goldberger-Treiman relation \[{\cal O}_{\varphi}=\frac{1}{2F_{\varphi}}\left[-\tilde{a}_{1}D^{(8)}_{\varphi \,i}-a_{2}\,d_{ibc}D^{(8)}_{\varphi\,b}\hat{J}_{c}-a_{3}\frac{1}{\sqrt{3}}D^{( 8)}_{\varphi\,8}\hat{J}_{i}\right]\,p_{i}. \tag{6}\] Constants \(\tilde{a}_{1}\), \(a_{2,3}\) that enter Eq. (6) can been extracted from the semileptonic decays of the baryon octet [5]. Here \(\varphi\) stands for a pseudoscalar meson, \(F_{\varphi}\) for the pertinent decay constant and \(p_{i}\) for meson momentum. Again the results for the decay widths are in very good agreement with data [5]. The model has been also applied to compute other quantities: magnetic moments [15], form factors [16, 17, 18, 19], isospin splittings [19], nuclear matter effects on heavy baryon masses [20, 21] and spin content of heavy baryons [22]. ## 3 Exotica Quantization condition \(Y^{\prime}=N_{\rm val}/3=2/3\) selects not only \({\cal R}=\overline{\bf 3}\) and \({\bf 6}\) but also exotic \(\overline{\bf 15}\) pentaquarks, which can be quantized as \(J=0\) or 1, see Fig. 1. It turns out that \(J=1\) multiplet is lighter [4]. Adding a heavy quark leads to two hyperfine split multiplets of \(s=1/2\) and \(3/2\). It has been proposed in Ref. 
[4] that the two narrowest \(\Omega_{c}\) baryons discovered by the LHCb Collaboration in 2017 [23], namely \(\Omega_{c}^{0}(3050)\) and \(\Omega_{c}^{0}(3119)\), belong to \(\overline{\bf 15}_{J=1}\). This assignment was motivated by the fact that their hyperfine splitting is equal to that of the ground state sextet, which is in the same rotational band, and it has been further reinforced by the calculation of their widths [5], which vanish for \(N_{\rm c}\to\infty\) [24]. Introducing new exotic multiplets, in itself very attractive, is nevertheless a phenomenological challenge, as we have to explain why 43 new exotic states have not been observed so far. Recently, in Ref. [6], we have shown that the states in \(\overline{\bf 15}_{J=1}\) are in fact very narrow, while, on the contrary, \(\overline{\bf 15}_{J=0}\) is very broad. The identification of exotica requires dedicated experiments: multipurpose searches can easily miss narrow or wide exotic states. Interestingly, the lightest nucleon-like pentaquark in \(\overline{\bf 15}_{J=0}\) (see Fig. 1) decays only to the \(N_{c}\) state in \(\overline{\bf 15}_{J=1}\), which is semi-stable.

## 4 Negative parity heavy baryons

In Sec. 3 we have argued that two out of the five \(\Omega_{c}\) states discovered by the LHCb can be interpreted as pentaquarks. In Refs. [4, 7] the remaining three have been interpreted as members of negative parity sextets. Negative parity baryons appear in the \(\chi\)QSM as rotational bands of the excited soliton [2, 25], and are therefore analogous to the diquark excitations (the so-called \(\rho\) modes) in the quark language. Diquark-heavy-quark excitations, referred to as \(\lambda\) modes, are in the present approach suppressed in the large \(N_{c}\) limit [7].

In the excited \(\chi\)QS the empty valence level in the light sector can be occupied by a quark excited from one of the filled sea levels. If such a state has negative parity, the soliton itself is parity odd. The rotational band corresponds to the same SU(3) representations as the ground state; however, the soliton spin is no longer equal to \(T^{\prime}\). The situation is more complicated [25]: namely, the soliton spin \(J\) coupled to \(T^{\prime}\) has to be equal to the _grand spin_ \(K=T+S\). Here \(T\) and \(S\) denote the isospin and the spin of the quark level in question. This condition follows from the _hedgehog_ symmetry. For the ground state \(K=0\), and we recover the quantization of Sec. 1.

We assume that the first sea level has \(K^{P}=1^{-}\). The SU(3) \(\overline{\bf 3}\) has \(T^{\prime}=0\), and therefore the negative parity soliton has \(J=1\). Adding a heavy quark we get two hyperfine-split antitriplets with spin 1/2 and 3/2. This pattern is clearly seen in the data [7, 26]. The sextet has \(T^{\prime}=1\), hence \(J=0\), 1 or 2. Therefore heavy negative parity baryons come in three spin submultiplets: (\(J=0,s=1/2\)), (\(J=1,s=1/2,3/2\)) and (\(J=2,s=3/2,5/2\)). In Ref. [4] the remaining three LHCb \(\Omega_{c}^{0}\) states have been interpreted as members of the \(J=0\) and 1 submultiplets, and it has been argued that the \(J=2\) states are very broad. The phenomenology of the charm and bottom sextets has been discussed in detail in Ref. [7], where several scenarios of possible assignments of existing states, and predictions for the remaining ones, have been presented. Within these scenarios possible two-body decay patterns have been discussed, and arguments have been given why some states have not been seen in the two-body mass distributions.
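As a quick consistency check on the relations of Section 2, note that Eq. (4) follows directly from Eqs. (2) and (3): writing each sextet mass as a common constant \(M_{0}\) plus a term linear in the hypercharge plus the hyperfine shift,

\[M_{\Sigma_{Q}}=M_{0}+\delta_{\mathbf{6}}Y_{\Sigma}-\tfrac{2}{3}\tfrac{\varkappa}{m_{Q}},\quad M_{\Sigma^{*}_{Q}}=M_{0}+\delta_{\mathbf{6}}Y_{\Sigma}+\tfrac{1}{3}\tfrac{\varkappa}{m_{Q}},\quad M_{\Xi^{\prime}_{Q}}=M_{0}+\delta_{\mathbf{6}}Y_{\Xi}-\tfrac{2}{3}\tfrac{\varkappa}{m_{Q}},\]

one finds

\[2M_{\Xi^{\prime}_{Q}}+M_{\Sigma^{*}_{Q}}-2M_{\Sigma_{Q}}=M_{0}+\delta_{\mathbf{6}}\left(2Y_{\Xi}-Y_{\Sigma}\right)+\tfrac{1}{3}\tfrac{\varkappa}{m_{Q}}=M_{\Omega^{*}_{Q}},\]

where the last equality uses the equal spacing of the hypercharges, \(2Y_{\Xi}-Y_{\Sigma}=Y_{\Omega}\).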
## 5 Doubly heavy tetraquarks

Following Ref. [27], it was already observed in [8] that one can replace the heavy quark by a heavy antidiquark without modifying the soliton. In this case one obtains a family of doubly heavy tetraquarks \(q_{1}q_{2}\bar{Q}\bar{Q}\) in a color singlet, since a symmetric heavy antidiquark is in a color triplet, as is a single quark. A charm tetraquark infinitesimally below the threshold has recently been observed by the LHCb [28, 29]. Because of the Pauli principle, \(\bar{Q}\bar{Q}\) has spin 1. The model admits a spin-1 flavor antitriplet and a \({\bf 6}\) of spin 0, 1 and 2. Whether such a system is bound depends on the \(\bar{Q}\bar{Q}\) dynamics, which has to be modelled separately. Assuming that the diquark mass is equal to the sum of the heavy quark masses, one gets overbinding [8]. Calculating the diquark mass from the Schrödinger equation with the Cornell potential, treating the Coulomb part as a perturbation, one finds that the charm \(\overline{\bf 3}\) tetraquark is approximately 70 MeV above the \(D^{*}D\) threshold, while the bottom one is bound. For details we refer the reader to Ref. [8].

## Acknowledgments

This work was supported by the Polish NCN grant 2017/27/B/ST2/01314. The author acknowledges stimulating discussions with the late Maxim V. Polyakov, who participated in this project for many years until his premature death on August 25, 2021.
2304.02991
Exploiting the Complementarity of 2D and 3D Networks to Address Domain-Shift in 3D Semantic Segmentation
3D semantic segmentation is a critical task in many real-world applications, such as autonomous driving, robotics, and mixed reality. However, the task is extremely challenging due to ambiguities coming from the unstructured, sparse, and uncolored nature of the 3D point clouds. A possible solution is to combine the 3D information with others coming from sensors featuring a different modality, such as RGB cameras. Recent multi-modal 3D semantic segmentation networks exploit these modalities relying on two branches that process the 2D and 3D information independently, striving to maintain the strength of each modality. In this work, we first explain why this design choice is effective and then show how it can be improved to make the multi-modal semantic segmentation more robust to domain shift. Our surprisingly simple contribution achieves state-of-the-art performances on four popular multi-modal unsupervised domain adaptation benchmarks, as well as better results in a domain generalization scenario.
Adriano Cardace, Pierluigi Zama Ramirez, Samuele Salti, Luigi Di Stefano
2023-04-06T10:59:43Z
http://arxiv.org/abs/2304.02991v1
Exploiting the Complementarity of 2D and 3D Networks to Address Domain-Shift in 3D Semantic Segmentation

###### Abstract

3D semantic segmentation is a critical task in many real-world applications, such as autonomous driving, robotics, and mixed reality. However, the task is extremely challenging due to ambiguities coming from the unstructured, sparse, and uncolored nature of the 3D point clouds. A possible solution is to combine the 3D information with others coming from sensors featuring a different modality, such as RGB cameras. Recent multi-modal 3D semantic segmentation networks exploit these modalities relying on two branches that process the 2D and 3D information independently, striving to maintain the strength of each modality. In this work, we first explain why this design choice is effective and then show how it can be improved to make the multi-modal semantic segmentation more robust to domain shift. Our surprisingly simple contribution achieves state-of-the-art performances on four popular multi-modal unsupervised domain adaptation benchmarks, as well as better results in a domain generalization scenario.

## 1 Introduction

3D semantic segmentation is a critical task in many real-world applications, such as autonomous driving and robotics. It involves assigning labels to 3D points in a point cloud based on their semantic meaning. However, this task can be extremely challenging due to ambiguities coming from the unstructured, sparse, and uncolored nature of 3D point clouds. Fortunately, combining 3D information with information coming from sensors of a different modality, such as RGB cameras, can help to address these shortcomings. Indeed, by combining multi-modal data, we can leverage the strengths of each modality to produce more comprehensive and accurate segmentations. For example, in autonomous driving scenarios, RGB cameras and LiDARs are commonly used together. RGB cameras provide dense, colored, and structured information, but they may fail in dark lighting conditions. On the other hand, LiDARs are robust to lighting conditions, but their point clouds present the problems highlighted above. By combining these two modalities, we can obtain a richer understanding of the environment and make more robust and precise 3D segmentations.

Several recent approaches for multi-modal 3D semantic segmentation [24, 25, 51, 68, 53] leverage a peculiar two-branch 2D-3D architecture, in which images are processed by a 2D convolutional network, e.g., ResNet [18], while point clouds are processed by a 3D convolutional backbone, e.g., SparseConvNet [15]. By processing each modality independently, each of the two branches focuses on extracting features from its specific signal (RGB colors or 3D structure information), which can be fused effectively due to their inherent complementarity in order to produce a better segmentation score. Indeed, averaging logits from the two branches often provides an improvement in performance, e.g., a mIoU gain from 2% to 4% in almost all experiments in [24]. Although we agree that each modality embodies specific information, such as color for images and 3D coordinates for point clouds, we argue that the complementarity of the features extracted by the two branches is also tightly correlated with the different information processing machinery, i.e., 2D and 3D convolutions, which makes the networks focus on different areas of the scene with different receptive fields. Indeed, in Fig.
1, given a point belonging to the red car, we visualize the effective receptive field [34] of the 2D and 3D networks (red ellipses). As we can clearly see from the receptive fields in the right part of the figure, the features extracted by the 3D network mainly leverage points in a 3D neighborhood, i.e., they include points of the car surface. In contrast, the features extracted by the 2D network look at a neighborhood in the 2D projected space, and thus they depend also on pixels of the building behind the car, which are close in image space but far in 3D. We argue that this is one of the main reasons why the features from the two branches can be fused so effectively.

Based on the above intuition, we propose to feed the 3D and RGB signals to both networks, as this should not hinder the complementarity of their predictions, with the goal of making the network more robust to the change of distributions between the training and the test scenarios. This problem is typically referred to as _domain shift_ in the literature. Feeding both branches with both modalities makes: i) the 2D network more robust to domain shifts, as depth information (the z coordinates of point clouds projected into image space) is more similar across different domains, as shown in several papers [6, 65, 44, 48, 9]; ii) the 3D network more capable of adapting to new domains thanks to the RGB information associated with each point, which allows learning better semantic features for the target domain, when this is available, using Unsupervised Domain Adaptation (UDA) approaches. Thus, we propose a simple architecture for multi-modal 3D semantic segmentation consisting of a 2D-3D architecture with each branch fed with both RGB and 3D information. Despite its simplicity, our proposal achieves state-of-the-art results on multi-modal UDA benchmarks, surpassing competitors by large margins, as well as significantly better domain generalization compared to a standard 2D-3D architecture [25]. Code available at [https://github.com/CVLAB-Unibo/WM2D3D](https://github.com/CVLAB-Unibo/WM2D3D).

Our contributions are:

* shining a light on the intrinsic complementarity of recent multi-modal 3D semantic segmentation networks based on 2D-3D branches;
* proposing a simple yet remarkably effective baseline that injects depth cues into the 2D branch and RGB colors into the 3D branch while preserving the complementarity of predictions;
* our network achieves state-of-the-art results on popular UDA benchmarks for multi-modal 3D semantic segmentation and surpasses standard 2D-3D architectures in domain generalization.

## 2 Related works

**Point Cloud Semantic Segmentation.** 3D data can be represented in several ways, such as point clouds, voxels, and meshes, each with its pros and cons. Similarly to pixels in 2D, voxels represent 3D data on a discrete grid of the 3D space. This representation allows using convolutions as done for images. However, performing a convolution over the whole 3D space is memory-intensive, and it does not take into account that many voxels are usually empty. Some 3D CNNs [45, 54] rely on octrees [35] to reduce the memory footprint, but without addressing the problem of manifold dilation. SparseConvNet [15] and similar implementations [11] address this problem by using hash tables to convolve only on active voxels, allowing the processing of high-resolution point clouds with only one point per voxel. Aside from cubic discretization, some approaches [75, 77] employ cylindrical voxels.
Other methods address the problem with sparse point-voxel convolutions [53]. Differently, point-based networks directly process each point of a point cloud. PointNet++ [42] extracts features from each point and then aggregates global and local features by means of max-pooling in a hierarchical way. Many improvements have been proposed in this direction, such as continuous convolutions [55], deformable kernels [55] or lightweight alternatives [23]. In this work, we select SparseConvNet [15] as our 3D network, as done by other works in the field [24, 39, 51, 68], since it is suitable for 3D semantic segmentation of large scenes.
**Multi-Modal Learning.** Exploiting multiple modalities to learn more robust and performant networks is a well-studied field in the literature [1, 38]. Among them, several approaches address the problem of semantic segmentation exploiting RGB and 3D structure information, either with the final goal of segmenting images, e.g., RGB-D networks [17, 58], or point clouds, e.g., LiDAR + RGB approaches [16, 28, 68]. To speed up research in this promising field, several datasets have been collected [13, 14, 3, 5] with 3D point clouds, images, and annotations for tasks such as 3D object detection or 3D semantic segmentation. Recently, some multi-modal methods [24, 39, 51, 68] have shown that a framework composed of a 2D and a 3D network can obtain very good performance in popular 3D segmentation benchmarks when averaging the scores coming from the two branches. This result is ascribed to the complementarity of the predictions due to the different modalities processed by each branch (either RGB or point clouds). In this paper, we analyze the improvement obtained by fusing the scores, and we argue that it mainly depends on the fact that the two networks extract complementary features because of the different receptive fields of the 2D and 3D networks. Based on this intuition, we propose a simple yet effective modification of the 2D-3D framework, which consists of providing both modalities as input to both branches.
**Unsupervised Domain Adaptation.** Unsupervised Domain Adaptation is the research field that investigates how to transfer knowledge learned from an annotated source domain to an unlabelled target domain [63]. In the last few years, several UDA approaches have been proposed for 2D semantic segmentation, using strategies such as style transfer [19, 27, 31, 37, 71, 73, 8, 10, 74, 75], adversarial training to learn domain-invariant representations [57, 56, 59, 62, 64, 69, 74, 40] or self-training [72, 21, 78, 22, 7]. Recently, some works demonstrated the effectiveness of using depth information to boost UDA for 2D semantic segmentation [6, 43, 44, 48, 61, 65]. In our work, we take inspiration from these findings, and we also feed the projected point cloud as input to the 2D network, considering depth as a rich source of information robust to the domain shift. Recently, some works address UDA also for semantic segmentation of point clouds [26, 49, 40, 46, 70, 29, 76, 2]. Very recently, some works have addressed the challenging multi-modal 3D semantic segmentation task [24, 25, 39, 51]. xMUDA [24] is the first work that focuses on UDA in the above setting: it defines a new benchmark and a baseline approach to adapt to a new target domain with an unsupervised cross-modal loss. [25] extends it by proposing a more solid and comprehensive benchmark. DsCML [39] also extends xMUDA by deploying adversarial training to align features across modalities and domains.
In our work, we address the same multi-modal UDA scenarios introduced in [25], and we propose a simple yet effective architecture that is more robust to domain shift and can be adapted to new unlabelled target domains. Our framework is depicted in Fig. 2.
## 3 Method
**Setup and Notation.** We define input source samples \(\{\mathbf{x}_{s}^{2D},\mathbf{x}_{s}^{3D}\}\in\mathcal{S}\) and target samples \(\{\mathbf{x}_{t}^{2D},\mathbf{x}_{t}^{3D}\}\in\mathcal{T}\), with \(\mathbf{x}^{2D}\) being the 2D RGB image and \(\mathbf{x}^{3D}\) the corresponding point cloud, with 3D points in the camera reference frame. Note that \(\mathbf{x}^{3D}\) contains only points visible from the RGB camera, assuming that the calibration of the two sensors is available for both domains and does not change over time. We assume the availability of annotations \(\mathbf{y}_{s}^{3D}\) for each 3D point only for the source domain. When tackling the UDA scenario, we also have at our disposal the unlabeled samples from the target domain. Our goal is to obtain a point-wise prediction of size \(N\times C\) for \(\mathbf{x}_{t}^{3D}\), with \(N\) and \(C\) being the number of points of the target point cloud and the number of classes, respectively.
### Base 2D/3D Architecture
We build our contributions upon the two independent branches (2D and 3D) architecture proposed in [25].
Figure 2: **Framework overview. The RGB image and the sparse depth map obtained from the projection of the corresponding point cloud are fed to a custom 2D architecture to extract point-wise features. The same point cloud and sampled colors from the RGB image are given in input to the 3D Network. Then, two main classifiers output the main predictions to be used at test time. Moreover, two auxiliary classifiers are used at training time only to allow the exchange of information across branches.**
The 2D branch processes images to obtain a pixel-wise prediction given \(\mathbf{x}^{2D}\), and it consists of a standard 2D U-Net [47]. On the other hand, the 3D branch takes point clouds as input to estimate the class of each point of \(\mathbf{x}^{3D}\), and it is implemented as a 3D sparse convolutional network [15]. Thanks to the fact that 2D-3D correspondences are known, 3D points can be projected into the image plane to supervise the 2D branch, as supervision is provided only for the sparse 3D points. We denote the 3D semantic labels projected into 2D with the symbol \(\mathbf{y}^{3D\to 2D}\). As argued by [25], such a design choice allows one to take advantage of the strengths of each input modality, and final predictions can be obtained by averaging the outputs of the two branches to achieve an effective ensemble. In our work, we adopt the same framework, and we give an intuitive explanation of why this design choice is particularly effective. In particular, we reckon that the two predictions are complementary not only because the input signals are different, but also because the two branches focus on different regions to determine their final predictions. Indeed, 3D convolutions produce features by looking at points that are close in the 3D space, while the 2D counterparts focus on neighboring pixels in the 2D image plane. Therefore, given corresponding 2D and 3D points, the two mechanisms implicitly produce features containing complementary information. In the right part of Fig. 1 we visualize the Effective Receptive Fields (ERF) [34] of a 2D U-Net with backbone ResNet34 [18] and of a 3D U-Net with backbone SparseConvNet [15].
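To make the notion concrete, the following minimal PyTorch sketch (our own simplification, not the code used to produce Fig. 1) estimates an ERF map at one output location by backpropagating a unit gradient from that location and reading the gradient magnitude on the input; large magnitudes mark the inputs that actually influence the prediction.

```python
import torch

def effective_receptive_field(model, x, out_index):
    """Estimate the ERF of `model` at a single output location.

    A unit gradient is seeded at one element of the output tensor and
    backpropagated; the absolute input gradient, summed over channels,
    gives a per-pixel influence map (the ERF in the sense of Luo et al.).
    """
    x = x.clone().requires_grad_(True)
    out = model(x)
    seed = torch.zeros_like(out)
    seed[out_index] = 1.0                 # unit gradient at one location
    out.backward(seed)
    return x.grad.abs().sum(dim=1)        # (batch, H, W) influence map

# Toy usage with a small CNN; the real 2D/3D branches would replace it.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(8, 1, 3, padding=1),
)
erf = effective_receptive_field(model, torch.randn(1, 3, 64, 64),
                                (0, 0, 32, 32))
```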
It is worth highlighting that we focus not on the theoretical but on the effective receptive field, which is computed by analyzing the real contribution of each input point to the final prediction (the hotter the color intensity in the visualization, the larger the point contribution). Comparing the 2D ERF re-projected into 3D and the 3D ERF, we can clearly appreciate that the 2D network focuses on sparse 3D regions, i.e., from the car to the building in the background, while the 3D counterpart reasons on a local 3D neighborhood (only car points). With this intuition in mind, we argue that by feeding the RGB signal to the 3D network, and the 3D information to the 2D backbone, we would still obtain complementary features that can be effectively fused together. Moreover, it is well-known that employing depth information as input to 2D segmentation networks can make them more robust to domain shift [6, 17]. At the same time, we posit that the 3D network with RGB information may be able to extract better semantic features. Differently from previous approaches that employ two independent architectures, based on the above considerations, we propose our multi-modal, two-branch framework named MM2D3D. In Sec. 3.2 we show how a point cloud can be used to obtain a stronger and more suitable input signal for the 2D network. Similarly, in Sec. 3.3 we describe our multi-modal 3D network.
### Depth-based 2D Encoder
In this section, we focus on how we can use point clouds to make a 2D segmentation architecture more robust to domain shift. Inspired by [6, 17], we propose to use depth maps as an input signal that is less influenced by the domain gap. As we can observe from the two depth maps in Fig. 3, it is hard to tell which one was captured during the day and which at night. At the same time, some objects such as the car can be distinguished by only looking at depths (bottom right of the second depth map). Thus, depth maps provide useful hints to solve the task of semantic segmentation. Given these considerations, we argue that exploiting such invariant information may alleviate the domain shift and can be used to extract discriminative features for the segmentation task. At first glance, injecting 3D cues into the 2D branch may seem redundant, as the 3D network already has the capability to reason on the full 3D scene. However, given that the two networks have very different receptive fields, we can exploit such additional and useful information without the risk of hindering the complementarity of the two signal streams. Assuming point clouds expressed in the camera reference frame and the availability of the intrinsic camera matrix, we can project the original 3D point cloud to obtain a sparse depth map. In practice, the value of the \(z\) axis is assigned to the pixel coordinate \((u,v)\) obtained by projecting a 3D point into the image plane.
Figure 3: Depth comparison during daylight or night. Differently from the RGB image (left column), a sparse depth map obtained by projecting a LiDAR scan into the image plane is not affected by the light conditions.
Figure 4: 2D Network of our framework. It is composed of a depth encoder and an RGB encoder to process the two inputs independently. The segmentation decoder leverages the multi-scale features of both encoders to predict semantic segmentation labels.
Similarly to [17], to process both inputs, we modify the 2D encoder of the 2D U-Net architecture by including an additional encoder to process the sparse depth maps obtained from the point cloud.
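The projection just described can be sketched as follows; this is a minimal illustration assuming a pinhole camera with intrinsic matrix \(K\) and points already in the camera reference frame (all variable names are ours):

```python
import numpy as np

def project_to_sparse_depth(points, K, height, width):
    """Project a camera-frame point cloud into a sparse depth map.

    `points` is (N, 3) with z > 0 along the optical axis; each pixel
    receives the z value of the point projecting onto it, with nearer
    points overwriting farther ones when several collide.
    """
    depth = np.zeros((height, width), dtype=np.float32)  # 0 = no return
    z = points[:, 2]
    u = (K[0, 0] * points[:, 0] / z + K[0, 2]).astype(int)
    v = (K[1, 1] * points[:, 1] / z + K[1, 2]).astype(int)
    ok = (z > 0) & (u >= 0) & (u < width) & (v >= 0) & (v < height)
    order = np.argsort(-z[ok])            # far-to-near, so near wins
    depth[v[ok][order], u[ok][order]] = z[ok][order]
    return depth
```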
As can be seen in Fig. 4, the two streams, i.e., one for the RGB image and the other for the sparse depth map, are processed independently. Then, the concatenated depth and RGB features are processed by a decoder, composed of a series of transposed convolutions and convolutions, in order to obtain semantic predictions of the same size as the input image. Moreover, features from layers of \(\frac{1}{2}\) to \(\frac{1}{16}\) of the input resolution are concatenated using skip connections with the corresponding layer of the decoder. This simple design choice allows semantic predictions to be conditioned also on the input depth signal, without altering the RGB encoder that provides useful classification features. Furthermore, this also lets us take advantage of an RGB encoder pre-trained on ImageNet [12], as done by our competitors.
### RGB-Based 3D Network
In this work, we focus on the 3D convolutional network SparseConvNet [15], as it can segment large scenes efficiently. In this network, the initial point cloud is first voxelized such that each 3D point is associated with only one voxel. Then, rather than processing the entire voxel grid, these models work with a sparse tensor representation, ignoring empty voxels for the sake of efficiency. The network associates a feature vector to each voxel, and convolutions calculate their results based on these features. A standard choice for the voxel features is to simply assign to each voxel a constant value, i.e., 1. Although these strategies have been shown to be effective [50, 68], the feature vector can be enriched to make it even more suitable for semantic segmentation. Based on our intuition of the different receptive fields, we can borrow information from the other modality to improve the performance of each branch, still preserving 2D-3D feature complementarity. Thus, we use RGB colors directly as features for each voxel of the SparseConvNet. Moreover, we design a simple yet effective strategy to let the 3D network decide whether or not to use this information. More specifically, the original RGB pixel values are fed to a linear layer that predicts a scalar value \(\alpha\) to be multiplied by the color vector. For instance, learning this scaling could be useful in the UDA scenario, where we can train on unlabelled target samples, to discard RGB colors in case they do not provide any useful information, e.g., dark pixels in images acquired at night time.
### Learning Scheme
**Supervised Learning.** Given the softmax predictions of the 2D and 3D networks, \(P_{\text{2D}}\) and \(P_{\text{3D}}\), we supervise both branches using the cross-entropy loss on the source domain: \[\mathcal{L}_{\text{seg}}(\mathbf{x}_{s},\mathbf{y}_{s})=-\frac{1}{N}\sum_{n=1}^{N}\sum_{c=1}^{C}\mathbf{y}_{s}^{(n,c)}\log\mathbf{P}_{\mathbf{x}_{s}}^{(n,c)} \tag{1}\] with \((\mathbf{x}_{s},\mathbf{y}_{s})\) being either \((\mathbf{x}_{s}^{\text{2D}},\mathbf{y}_{s}^{\text{3D}\to 2D})\) or \((\mathbf{x}_{s}^{\text{3D}},\mathbf{y}_{s}^{\text{3D}})\).
**Cross-Branch Learning.** To allow an exchange of information between the two branches, [25] and [39] add an auxiliary classification head to each branch. The objective of these additional classifiers is to mimic the other branch's output. The two auxiliary heads estimate the other modality's output: 2D mimics 3D (\(P_{\text{2D}\to\text{3D}}\)) and 3D mimics 2D (\(P_{\text{3D}\to\text{2D}}\)).
In practice, this is achieved with the following objective: \[\mathcal{L}_{\text{xM}}(\mathbf{x})=\frac{1}{N}\sum_{n=1}^{N}\mathbf{D}_{\text{KL}}(\mathbf{P}_{\mathbf{x}}^{(n)}\,||\,\mathbf{Q}_{\mathbf{x}}^{(n)})=\frac{1}{N}\sum_{n=1}^{N}\sum_{c=1}^{C}\mathbf{P}_{\mathbf{x}}^{(n,c)}\log\frac{\mathbf{P}_{\mathbf{x}}^{(n,c)}}{\mathbf{Q}_{\mathbf{x}}^{(n,c)}} \tag{2}\] with \((\mathbf{P},\mathbf{Q})\in\{(\mathbf{P}_{\text{2D}},P_{\text{3D}\to\text{2D}}),(\mathbf{P}_{\text{3D}},P_{\text{2D}\to\text{3D}})\}\), where \(\mathbf{P}\) is the distribution from the main classification head, which has to be estimated by \(\mathbf{Q}\). Note that in Eq. (2), \(\mathbf{x}\) can belong to either \(\mathcal{T}\) or \(\mathcal{S}\). This means that, in the UDA scenario, Eq. (2) can also be optimized for \(\mathcal{T}\), forcing the two networks to have consistent behavior across the two modalities for the target domain as well, without any labels.
**Self-Training.** Only in the UDA scenario, where unlabelled target samples are available, as done by [25], we perform one round of self-training [78] using pseudo-labels [30]. Specifically, after training the model with Eq. (1) for the source domain and Eq. (2) on both domains, we generate predictions on the unlabeled target-domain dataset to be used as pseudo ground truths, \(\mathbf{\hat{y}}_{t}\). Following [25], we filter out noisy pseudo-labels by considering only the most confident predictions for each class. Then, we retrain the framework from scratch, minimizing the following objective function: \[\mathcal{L}=\mathcal{L}_{\text{seg}}(\mathbf{x}_{s},\mathbf{y}_{s})+\lambda_{t}\mathcal{L}_{\text{seg}}(\mathbf{x}_{t},\mathbf{\hat{y}}_{t})+\lambda_{xs}\mathcal{L}_{\text{xM}}(\mathbf{x}_{s})+\lambda_{xt}\mathcal{L}_{\text{xM}}(\mathbf{x}_{t}) \tag{3}\]
## 4 Experiments
### Datasets
To evaluate our method, we follow the benchmark introduced in [25] because it comprises several interesting domain-shift scenarios. The datasets used in the benchmark are nuScenes [5], A2D2 [14], SemanticKITTI [3], and VirtualKITTI [13], in which LiDAR point clouds and cameras are synchronized and calibrated so that the projection between a 3D point and its corresponding 2D image pixel can always be computed. It is important to note that only 3D points visible from the camera are used for both training and testing. NuScenes consists of 1000 driving scenes in total, each of 20 seconds, with 40k annotated point-wise frames taken at 2 Hz, and it is used to implement two adaptation scenarios: day-to-night and country-to-country. The former exhibits severe light changes between the source and the target domain, while the latter covers changes in the scene layout. In both settings adaptation is performed on six classes: _vehicle, driveable surface, sidewalk, terrain, manmade, vegetation_. The third challenging benchmark involves adaptation from synthetic to real data, and it is implemented by adapting from VirtualKITTI to SemanticKITTI. Since VirtualKITTI only provides depth maps, we use the same simulated LiDAR scans as our competitor [25] for a fair comparison. Note also that to accommodate the different classes in the two datasets, a class mapping is required, and we use the same one defined in [25]. The last adaptation scenario involves A2D2 and SemanticKITTI. The A2D2 dataset is composed of 20 drives, with a total of 28,637 frames. As each LiDAR sensor is very sparse (16 layers), all three front LiDARs are used.
All frames of all sequences are used for training, except for the sequence 20180807_145028, which is left out for testing. The SemanticKITTI dataset features a large-angle front camera and a 64-layer LiDAR. Sequences 0, 1, 2, 3, 4, 5, 6, 9, and 10 are used for training, sequence 7 for validation, and sequence 8 as the test set. In this case, only the ten classes that are in common between the two datasets are used: _car, truck, bike, person, road, parking, sidewalk, building, nature, other-objects_.
### Implementation details
We use the same data augmentation pipeline as our competitors, which is composed of random horizontal flipping and color jittering for 2D images, while vertical-axis flipping, random scaling, and random 3D rotations are used for the 3D scans. It is important to note that augmentations are done independently for each branch. We implement our framework in PyTorch using two NVIDIA 3090 GPUs with 24 GB of memory each. We train with a batch size of 16, alternating batches of the source and target domain in the UDA case and source only in DG. The smaller dataset is repeated to match the length of the other. We rely on the AdamW optimizer [33] and the One Cycle Policy as a learning rate scheduler [52]. We train for 50, 35, 15, and 30 epochs for USA \(\rightarrow\) Singapore, Day \(\rightarrow\) Night, v. KITTI \(\rightarrow\) Sem. KITTI, and A2D2 \(\rightarrow\) Sem. KITTI, respectively. As regards the hyper-parameters, we follow [25] and set \(\lambda_{s}=0.8,\lambda_{t}=0.1,\lambda_{xs}=0.1,\lambda_{xt}=0.01\) in all settings without performing any fine-tuning of these values.
### UDA results
Following previous works in the field [25, 39], we evaluate the performance of a model on the target test set using the standard Intersection over Union (IoU) and select the best checkpoint according to a small validation set on the target domain. In Tab. 1, we report our results on the four challenging UDA benchmarks described in Sec. 4.1. For each experiment, we report two reference methods: a model trained only on the source domain, named _Baseline (Source Only)_, and a model trained only on the target data using annotations, representing the upper bound that can be obtained with real ground truth, namely _Oracle_. We note that these two models employ the two-independent-streams architecture of [25]. In the columns _Avg_, we report the results obtained by the mean of the 2D and 3D outputs after softmax, which is the final output of our multi-modal framework. For the sake of completeness, we also report the results of each individual branch (2D and 3D only). We compare our method with both uni-modal and multi-modal approaches. In particular, we mainly focus on a comparison with xMUDA [25] and DsCML [39], as they are the current state-of-the-art methods for UDA in our multi-modal setting. In particular, for the latter, we use the official code provided by the authors 1 to retrain the model on the new, more exhaustive benchmark defined by [25]. Overall, we note how our contributions largely improve results over competitors across all settings and modalities.
\begin{table} \begin{tabular}{c l c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Modality} & \multirow{2}{*}{Method} & \multicolumn{3}{c}{**USA \(\rightarrow\) Singapore**} & \multicolumn{3}{c}{**Day \(\rightarrow\) Night**} & \multicolumn{3}{c}{**v.KITTI \(\rightarrow\) Sem.KITTI**} & \multicolumn{3}{c}{**A2D2 \(\rightarrow\) Sem.KITTI**} \\ \cline{3-14} & & 2D & 3D & Avg & 2D & 3D & Avg & 2D & 3D & Avg & 2D & 3D & Avg \\ \hline \multirow{4}{*}{Uni-modal} & Baseline (Source only) & 58.4 & 62.8 & 68.2 & 47.8 & 68.8 & 63.3 & 26.8 & 42.0 & 42.2 & 34.2 & 35.9 & 40.4 \\ & MinEnt [60] & 57.6 & 61.5 & 66.0 & 47.1 & 68.8 & 63.6 & 39.2 & 43.3 & 47.1 & 37.8 & 39.6 & 42.6 \\ & Deep logCORAL [36] & 64.4 & 63.2 & 69.4 & 47.7 & 68.7 & 63.7 & 41.4 & 36.8 & 47.0 & 35.1 & 41.0 & 42.2 \\ & PL [30] & 62.0 & 64.8 & 70.4 & 47.0 & 69.6 & 63.0 & 21.5 & 44.3 & 35.6 & 34.7 & 41.7 & 45.2 \\ \hline \multirow{3}{*}{Multi-modal} & xMUDA [24] & 64.4 & 63.2 & 69.4 & 55.5 & 69.2 & 67.4 & 42.1 & 46.7 & 48.2 & 38.3 & 46.0 & 44.0 \\ & DsCML* [39] & 52.9 & 52.3 & 56.9 & 51.2 & 61.4 & 61.8 & 31.8 & 32.8 & 34.8 & 25.4 & 32.6 & 33.5 \\ & MM2D3D (Ours) & **71.7** & **66.8** & **72.4** & **70.5** & **70.2** & **72.1** & **53.4** & **50.3** & **56.5** & **42.3** & **46.1** & **46.2** \\ \hline \hline & Oracle & 75.4 & 76.0 & 79.6 & 61.5 & 69.8 & 69.2 & 66.3 & 78.4 & 80.1 & 59.3 & 71.9 & 73.6 \\ \hline \hline \end{tabular} \end{table} Table 1: **Results for UDA for 3D semantic segmentation with both uni-modal and multi-modal adaptation methods.** We report performance for each network stream in terms of mIoU. The ‘Avg’ column denotes the results obtained by taking the mean of the 2D and 3D predictions. * indicates trained by us using official code.
In USA \(\rightarrow\) Singapore, we observe a large boost in both branches, and on average we report a +3% (third row of the Multi-modal section). The large improvement (+7.3%) for the 2D model suggests that the depth cues injected into a common 2D decoder can be quite useful even if the light conditions are similar across domains. In Day \(\rightarrow\) Night, we observe a remarkable +15% for the 2D branch, which in turn raises the average score to +4.7% when compared with the previous best model. We attribute this boost in performance to the depth encoder, which is able to provide useful hints when the RGB encoder has to deal with large changes in light conditions. Remarkably, our network surpasses even the performance of the two-independent-streams _Oracle_. Indeed, as discussed in Sec. 3.2, the sparse depth is able to give useful details for the task of semantic segmentation. Moreover, thanks to the fact that the cross-modal loss of Sec. 3.4 is optimized for both domains, the network learns to use both encoders to make the final predictions, leading to more robust performance when the encoder receives a less informative RGB signal. In the challenging synthetic-to-real case (v. KITTI \(\rightarrow\) Sem. KITTI), we also notice consistent improvements in both branches. We highlight that even though RGB colors are here likely the main source of the domain gap, they are still useful to obtain a stronger 3D model (+3.6%). In the A2D2 \(\rightarrow\) Sem. KITTI setting, where the sensor setup is different, we still benefit from the depth hints provided to the 2D network, and on average, our method surpasses xMUDA by 2.2%. In general, we highlight that though we employed both modalities in the 2D and 3D branches, the Avg performances are better than those of each individual branch, supporting our core intuition. In Fig. 5, we report some qualitative results obtained with our framework.
### Domain Generalization results
In this section, we test our contributions in the Domain Generalization setting, in which the target data cannot be used at training time. For this study we consider xMUDA [25] as our baseline two-branch 2D-3D method, and we show that our simple contribution can boost generalization performance. Results are reported in Tab. 2. To implement this experiment we keep the same hyper-parameters as used in the UDA scenario. We retrain [24] using the official code, but without the target data.
\begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{**USA \(\rightarrow\) Singapore**} & \multicolumn{3}{c}{**Day \(\rightarrow\) Night**} & \multicolumn{3}{c}{**v.KITTI \(\rightarrow\) Sem.KITTI**} & \multicolumn{3}{c}{**A2D2 \(\rightarrow\) Sem.KITTI**} \\ \cline{2-13} & 2D & 3D & Avg & 2D & 3D & Avg & 2D & 3D & Avg & 2D & 3D & Avg \\ \hline xMUDA* [24] & 58.7 & **62.3** & 68.6 & 43.0 & **68.9** & 59.6 & 25.7 & 37.4 & 39.0 & 34.9 & **36.7** & 41.6 \\ MM2D3D (Ours) & **69.7** & **62.3** & **70.9** & **65.3** & 63.2 & **68.3** & **37.7** & **40.2** & **44.2** & **39.6** & 35.9 & **43.6** \\ \hline \hline \end{tabular} \end{table} Table 2: **Results for 3D semantic segmentation in the Domain Generalization setting.** We report performance for each network stream in terms of mIoU. The ‘Avg’ column denotes the results obtained by taking the mean of the 2D and 3D predictions. * indicates trained by us using official code.
Figure 5: **Qualitative examples of the proposed framework in the UDA scenario.** From left to right: RGB images, followed by the point-cloud segmentations (projected into 2D for visualization purposes) of the baseline source-only model, of our method, and the ground truth, respectively. From top to bottom: the four different adaptation scenarios. Comparisons are provided for the target domain.
Also in this setting, we observe overall large improvements. We believe that this can be ascribed especially to the introduction of the depth encoder, which helps to achieve a better generalization. Evidence of this is well observable in Day \(\rightarrow\) Night, where the 2D performance increases from 43% to 65.3% in terms of mIoU, but also for USA \(\rightarrow\) Singapore and v. KITTI \(\rightarrow\) Sem. KITTI, where we achieve +11% and +12%, respectively. In the Day \(\rightarrow\) Night scenario, the 3D branch experiences a drop in performance. We think that it is related to the large domain shift of RGB images. Differently from the adaptation scenario, in which we can train directly on the unlabeled target data to counteract this problem, in the generalization scenario it badly affects the 3D performance. However, we note that our final Avg prediction still outperforms xMUDA.
### Ablation Studies
**Modality-wise analysis.** In Tab. 3, we ablate our contributions starting from the model proposed by [24] in the UDA scenario. We start by activating our depth-based network, introduced in Sec. 3.2.
The performance boost given by our proposal is remarkable across all settings. In cases such as Day \(\rightarrow\) Night, where the RGB gap is larger, the depth cues injected with skip connections into the semantic decoder greatly enhance performances in the target domain (+15.8% for 2D and +5.4% in Avg). We note also a consistent improvement for the remaining settings; in particular, we highlight a +10.5% for the 2D scores on the challenging synthetic-to-real adaptation benchmark (v. KITTI \(\rightarrow\) Sem. KITTI). Furthermore, when feeding RGB colors to the 3D network (last row of Tab. 3), we observe improved performances in almost all settings. The largest improvement is observed in the synthetic-to-real setting, where we achieve a +10% in terms of mIoU for the 3D branch, which in turn increases the average score from 53.7% to 56.5%. Better performance is also achieved for both the 3D network and the average score for A2D2 \(\rightarrow\) Sem. KITTI.
**Self-Training.** In this section, we compare different self-training strategies and report results in Tab. 4. As explained in Sec. 3.4, for the self-training protocol we first need a model trained on the source domain to produce the pseudo-labels for the target domain in the second round. We report in the first row of Tab. 4 the performance of this starting model to better appreciate the effectiveness of self-training. First, we note how, thanks to our contributions, for USA \(\rightarrow\) Singapore, Day \(\rightarrow\) Night, and v. KITTI \(\rightarrow\) Sem. KITTI we already surpass xMUDA [24] on the _Avg_ column even without the usage of pseudo-labels. When pseudo-labels from the 2D and the 3D branches are used to supervise the 2D and the 3D network, respectively, we establish new state-of-the-art performances for all four settings in the average predictions (third row). Furthermore, in the fourth row of Tab. 4, we deploy the strategy proposed in [24], where point-wise features from the two networks are concatenated and used to train a unique classifier. In this case, we observe mixed results, indicating that this self-training strategy is not necessarily better across all settings when compared to the standard self-training protocol.
## 5 Conclusions
In this paper, we shed light on the complementarity of recent and emerging 2D-3D architectures for 3D semantic segmentation. We provide an intuitive explanation, based on the notion of effective receptive field, of why processing data with these two networks grants orthogonal predictions that can be effectively fused together. Based on this, we propose to feed both modalities to both branches. Despite the simplicity of our approach, we establish new state-of-the-art results in four common UDA scenarios and demonstrate superior generalization performance over the baseline 2D-3D architecture. A limitation of our work is that our method is purely multi-modal, and it requires both modalities and a valid calibration across sensors at test time. An interesting future direction is to investigate how our approach may generalize to other multi-modal 2D-3D architectures for semantic segmentation.
\begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{USA \(\rightarrow\) Singapore} & \multicolumn{3}{c}{Day \(\rightarrow\) Night} & \multicolumn{3}{c}{v.KITTI \(\rightarrow\) Sem.KITTI} & \multicolumn{3}{c}{A2D2 \(\rightarrow\) Sem.KITTI} \\ \cline{2-13} & 2D & 3D & Avg & 2D & 3D & Avg & 2D & 3D & Avg & 2D & 3D & Avg \\ \hline MM2D3D (Ours) & 71.7 & 66.8 & 72.4 & 70.5 & **70.2** & 72.1 & 53.4 & 50.3 & 56.5 & 42.3 & 46.1 & 46.2 \\ \hline xMUDA [25] + PL & 67.0 & 65.4 & 71.2 & 57.6 & 69.6 & 64.4 & 45.8 & 51.4 & 52.0 & 41.2 & **49.8** & 47.5 \\ MM2D3D (Ours) + PL & **74.3** & **68.3** & **74.9** & **71.3** & 69.6 & **72.2** & **55.4** & **55.0** & 59.7 & **46.4** & 48.7 & **50.7** \\ MM2D3D (Ours) + Fusion & x & x & 74.0 & x & x & 71.0 & x & x & **60.4** & x & x & 48.8 \\ \hline \hline \end{tabular} \end{table} Table 4: **Self-training analysis.** Results with different self-training strategies in the UDA scenario.
\begin{table} \begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Depth} & \multirow{2}{*}{RGB} & \multicolumn{3}{c}{USA \(\rightarrow\) Singapore} & \multicolumn{3}{c}{Day \(\rightarrow\) Night} & \multicolumn{3}{c}{v.KITTI \(\rightarrow\) Sem.KITTI} & \multicolumn{3}{c}{A2D2 \(\rightarrow\) Sem.KITTI} \\ \cline{4-15} & & & 2D & 3D & Avg & 2D & 3D & Avg & 2D & 3D & Avg & 2D & 3D & Avg \\ \hline xMUDA [25] & & & 64.4 & 63.2 & 69.4 & 55.5 & 69.2 & 67.4 & 42.1 & 46.7 & 48.2 & 38.3 & 46.0 & 44.0 \\ MM2D3D (Ours) & ✓ & & 69.5 & 64.0 & 69.6 & **71.3** & 69.9 & **72.8** & 52.6 & 40.3 & 53.7 & 41.7 & 44.8 & 45.9 \\ MM2D3D (Ours) & ✓ & ✓ & **71.7** & **66.8** & **72.4** & 70.5 & **70.2** & 72.1 & **53.4** & **50.3** & **56.5** & **42.3** & **46.1** & **46.2** \\ \hline \hline \end{tabular} \end{table} Table 3: **Modality-wise ablation of the proposed framework in the UDA scenario.** _Depth_ indicates the usage of the additional sparse depth encoder, while _RGB_ denotes the introduction of the RGB information in the 3D network.
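As a concrete illustration of the cross-branch objective of Eq. (2), the following minimal PyTorch sketch shows one possible per-branch KL term. It is an illustration only: detaching the main prediction so that it acts as the target distribution, as well as the tensor shapes, are assumptions rather than a description of our released code.

```python
import torch
import torch.nn.functional as F

def cross_modal_loss(logits_main, logits_mimic):
    """KL(P || Q) between the main head of one branch (P) and the
    auxiliary mimicking head of the other branch (Q), averaged over
    the N points; both inputs are (N, C) logits."""
    p = F.softmax(logits_main, dim=-1).detach()   # target distribution P
    log_q = F.log_softmax(logits_mimic, dim=-1)   # estimate Q
    kl = (p * (p.clamp_min(1e-8).log() - log_q)).sum(dim=-1)
    return kl.mean()

# Toy shapes: N points, C classes; one term per branch as in Eq. (2).
N, C = 2048, 10
loss_2d = cross_modal_loss(torch.randn(N, C), torch.randn(N, C))
loss_3d = cross_modal_loss(torch.randn(N, C), torch.randn(N, C))
```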
2303.07696
Shadoks Approach to Convex Covering
We describe the heuristics used by the Shadoks team in the CG:SHOP 2023 Challenge. The Challenge consists of 206 instances, each being a polygon with holes. The goal is to cover each instance polygon with a small number of convex polygons. Our general strategy is the following. We find a big collection of large (often maximal) convex polygons inside the instance polygon and then solve several set cover problems to find a small subset of the collection that covers the whole polygon.
Guilherme D. da Fonseca
2023-03-14T08:19:49Z
http://arxiv.org/abs/2303.07696v2
# Shadoks Approach to Convex Covering
###### Abstract
We describe the heuristics used by the Shadoks team in the CG:SHOP 2023 Challenge. The Challenge consists of 206 instances, each being a polygon with holes. The goal is to cover each instance polygon with a small number of convex polygons. Our general strategy is the following. We find a big collection of large (often maximal) convex polygons inside the instance polygon and then solve several set cover problems to find a small subset of the collection that covers the whole polygon.
Set cover, covering, polygons, convexity, heuristics, enumeration, simulated annealing, integer programming, computational geometry
[MISSING_PAGE_POST]
which is returned as the solution. Figure 1 shows three small solutions, and we can observe that most convex polygons are maximal and often much larger than necessary. Our approach is different from that of the winning team DIKU (AMW), which uses clique cover [1]. To construct the collection \(\mathcal{C}\) in phase 1, we used either a modified version of the Bron-Kerbosch algorithm or a randomized bloating procedure starting from a constrained Delaunay triangulation (Section 2). To solve the set cover problem in phase 2, we used integer programming and simulated annealing. The key element for the efficiency of phase 2 is to iteratively generate constraints, as detailed in Section 3. Generally speaking, the initial constraints ensure that all input vertices are covered, and supplementary constraints ensure that a point in each uncovered area is covered in the following iteration. In fact, to obtain our best solutions, we repeat phase 2 using the union of the solutions from independent runs of the first two phases as the collection \(\mathcal{C}\). Our results are discussed in Section 4.
## 2 Collections
We now describe phase 1 of our strategy: building a collection. Throughout, the instance is a polygon with holes \(P\) with vertex set \(V\). Formally speaking, a _collection_ \(\mathcal{C}\) is defined exactly as a _solution_ \(\mathcal{S}\): a finite set of convex polygons whose union is \(P\). However, while we want a solution \(\mathcal{S}\) to have as few elements as possible, the most important aspect of a collection \(\mathcal{C}\) is that it contains a solution \(\mathcal{S}\subseteq\mathcal{C}\) with few elements. Ideally, \(|\mathcal{C}|\) is also not too big, so that the second-phase solver is not overloaded, but the size of \(\mathcal{C}\) is of secondary importance. Given a set of points \(S\), a convex polygon \(C\subseteq P\) is _\(S\)-maximal_ if the vertices of \(C\) are in \(S\) and there exists no point \(s\in S\) with \(\operatorname{conv}(C\cup\{s\})\subseteq P\). Next, we show how to build a collection with all \(S\)-maximal convex polygons.
**Bron-Kerbosch.** The Bron-Kerbosch algorithm [2] is a classic algorithm to enumerate all maximal cliques in a graph (in our case the visibility graph) with good practical performance [6]. The algorithm recursively keeps three sets:
* \(\mathtt{R}\): vertices in the current maximal clique. Initially, \(\mathtt{R}=\emptyset\).
* \(\mathtt{S}\): vertices that may be added to the current maximal clique (these must be adjacent to all vertices in \(\mathtt{R}\)). Initially, \(\mathtt{S}=S\).
* \(\mathtt{X}\): vertices that may not be added to the current maximal clique because otherwise the same clique would be reported multiple times. Initially, \(\mathtt{X}=\emptyset\).
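The recursion over these three sets can be rendered in a few lines of Python; the sketch below is our illustration (not the original Listing 1). The pairwise adjacency test of the classic algorithm is replaced by a hypothetical geometric predicate `compatible(R, s)` checking that \(\operatorname{conv}(R\cup\{s\})\subseteq P\), as needed for the convex-polygon adaptation described next.

```python
def maximal_convex_polygons(S, compatible):
    """Bron-Kerbosch-style enumeration of S-maximal convex polygons.

    R is the current polygon, cand the candidate points that may still
    be added, and X the exclusion set that prevents reporting the same
    maximal polygon twice.
    """
    results = []

    def bk(R, cand, X):
        if not cand and not X:
            results.append(list(R))   # nothing can extend R: maximal
            return
        for s in list(cand):
            R2 = R + [s]
            bk(R2,
               {t for t in cand if t != s and compatible(R2, t)},
               {t for t in X if compatible(R2, t)})
            cand.remove(s)            # move s from the candidates
            X.add(s)                  # to the exclusion set
    bk([], set(S), set())
    return results
```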
If the polygon \(P\) has no holes, then, for any \(S\), there is a bijection between the maximal cliques in the visibility graph of \(S\) on \(P\) and the \(S\)-maximal convex polygons. While this is no longer true in the version with holes, we can adapt the Bron-Kerbosch algorithm to enumerate all \(S\)-maximal convex polygons, as shown in Listing 1. Figure 3 shows how the number of \(V\)-maximal convex polygons grows for different instances, and that we can compute all \(V\)-maximal convex polygons quickly for instances with around 10 thousand vertices.
Figure 2: Definitions of \(V\), \(S_{1}\), and \(S_{2}\).
Let \(S_{1}\) be the set of the endpoints of the largest segments inside \(P\) that contain each edge of \(P\) (Figure 2). It is easy to see that \(|S_{1}|\leq 2|V|\). However, as shown in Figure 4, we are only able to compute all \((V\cup S_{1})\)-maximal convex polygons for instances with fewer than one thousand vertices. It is possible that a modified version of the Bron-Kerbosch algorithm gives better results, either by using a pivot or by choosing a particular order for the points, but we have not succeeded in obtaining significant improvements. Another natural set of points is the set \(S_{2}\supseteq S_{1}\), defined as the intersection points (inside \(P\)) of the lines containing the edges of \(P\) (Figure 2). The set \(S_{2}\) may, however, have size roughly \(|V|^{2}\). Hence, computing all \((V\cup S_{2})\)-maximal convex polygons is only feasible for very small instances.
**Random Bloating.** As a \(V\)-maximal convex polygon \(C\) is generally not \(P\)-maximal, we also grow \(C\) with an operation we call bloating. Given a convex polygon \(C\) and a set of points \(S\), we construct an _\(S\)-bloated_ convex polygon \(C^{\prime}\) by iteratively trying to add a random point from \(S\) to \(C\) and taking the convex hull, verifying at each step that \(C^{\prime}\) lies inside the instance polygon \(P\). There are two sets of points that may compose \(S\). First, \(S_{1}(C)\) is the set of endpoints of the largest segment in \(P\) that contains each edge of \(C\). Second, \(S_{2}(C)\) is the union of \(S_{1}(C)\) and the intersection points of the lines containing the edges of \(C\), if the points are inside \(P\). Notice that \(|S_{1}(C)|=O(|C|)\), but \(|S_{2}(C)|=O(|C|^{2})\). To start the bloating operation, we need a convex polygon \(C\). One approach is to use the \(V\)-maximal convex polygons produced by Bron-Kerbosch. A much faster approach for large instances is to use a constrained Delaunay triangulation of the instance polygon. In this case, we start by \(V\)-bloating the triangles into a convex polygon \(C\), and then possibly \(S_{1}(C)\)-bloating or \(S_{2}(C)\)-bloating the polygon \(C\). Since the procedure is randomized, we can _replicate_ the triangles multiple times to obtain larger collections of large convex polygons.
## 3 Set Cover
Given a collection \(\mathcal{C}\) of convex polygons that covers \(P\), our covering problem consists of finding a small subset of \(\mathcal{C}\) that still covers \(P\). In contrast to the classic set cover problem, in our case \(P\) is an infinite set of points. Nevertheless, it is easy to create a finite set of _witnesses_ \(W\) satisfying that \(W\) is covered by a subset \(\mathcal{S}\) of \(\mathcal{C}\) if and only if \(P\) is. To do that, we place a point inside each region (excluding holes) of the arrangement of the line segments defining the boundaries of the polygons in \(\mathcal{C}\) and \(P\) (Figure 5(a)).
The size of such a set \(W\) is however very large in practice, and potentially quadratic in the number of segments of \(\mathcal{C}\). Producing small sets of witnesses has been studied in the context of art gallery problems [3]. However, we do not know if small sets of witnesses exist for our problem. Hence, we use a loose definition of witness as any finite set of points \(W\subset P\). In practice, we want \(W\) to be such that if \(W\) is covered, then most of \(P\) is covered. Next, we show how to build such a set of witnesses, and afterwards we describe how we solved the finite set cover problem.
**Witnesses.** A set of witnesses \(W\) that gave very good results, which we call _vertex witnesses_, consists of one witness inside each cell of the arrangement that contains a vertex of the instance polygon \(P\), as shown in Figure 5(b). This set guarantees that if \(W\) is covered, then all points that are arbitrarily close to the _vertices_ of \(P\) are covered. However, trivially computing \(W\) requires building the arrangement of the collection \(\mathcal{C}\), which is too slow and memory-consuming for large \(\mathcal{C}\). A set of witnesses \(W\) that also gives excellent results and is much faster to compute is called _quick vertex witnesses_. For each vertex \(v\) of \(P\), we consider all edges in \(\mathcal{C}\) and in \(P\) that are adjacent to \(v\). We order these edges around \(v\), starting and ending with the edges of \(P\). For each pair of consecutive edges, we add a point \(w\) to \(W\) that is between the two consecutive edges and infinitely close to \(v\). Notice that the number of vertex witnesses is linear in the number of edges of \(\mathcal{C}\), and it can also be built in near-linear time, avoiding the construction of the whole arrangement of \(\mathcal{C}\). If \(P\) has no collinear points, then the quick vertex witnesses give the same vertex-coverage guarantee as the vertex witnesses. We represent points that are arbitrarily close to \(v\) implicitly as a point and a direction. Given a set \(\mathcal{S}^{\prime}\) of convex polygons that cover \(W\), there are two natural options to produce a valid solution \(\mathcal{S}\). The first option is to make \(\mathcal{S}=\mathcal{S}^{\prime}\cup\mathcal{R}\) for a set \(\mathcal{R}\) built as follows. The _uncovered region_ \(P\setminus\cup_{C\in\mathcal{S}^{\prime}}C\) consists of a set \(\mathcal{U}\) of disjoint polygons, possibly with holes (Figure 6(a)). However, most of the time the polygons in \(\mathcal{U}\) are in fact convex. For each polygon \(U\in\mathcal{U}\), if \(U\) is convex, then we add \(U\) to \(\mathcal{R}\). Otherwise, we triangulate \(U\) and add the triangles to \(\mathcal{R}\). Furthermore, we can greedily merge convex polygons in \(\mathcal{R}\) to reduce their number, as long as the convex hull of the union remains inside \(P\), which works very well for the SoCG logo solution shown in Figure 6(b).
Figure 5: All the 82 \(V\)-maximal convex polygons for the socg_fixed60 instance with (a) the 1009 arrangement witnesses and (b) the 200 vertex witnesses.
Figure 6: (a) A solution that covers all vertex witnesses of the socg_fixed60 instance but not the whole polygon. Uncovered regions are marked in striped red. (b) The optimal solution obtained from the previous one by merging the uncovered regions.
A second option is normally preferable and is based on the constraint generation technique, widely used in integer programming.
We build the set \(\mathcal{R}\) as before, but for each convex polygon \(R\in\mathcal{R}\) we add to \(W\) a point inside \(R\). Then, we run the solver again and repeat until a valid solution is found (or one with very few uncovered regions). It is perhaps surprising how few iterations are normally needed, as shown in Figure 7.
Figure 7: Number of iterations to find a valid solution starting from quick vertex witnesses using IP as the solver, setting \(\mathcal{C}\) as (a) all \(V\)-maximal convex polygons and (b) 2 times triangulation \((V\cup S_{2}(C))\)-bloated convex polygons.
**Set Cover Solver.** A simple and often efficient way to solve a set cover problem \((W,\mathcal{C})\) is to model the problem as _integer programming (IP)_ and then use the CPLEX solver [4]. Each set in \(\mathcal{C}\) becomes a binary variable and each witness point \(w\in W\) becomes a constraint forcing the sum of the variables of the sets that contain \(w\) to be at least \(1\). As discussed in the next section, this approach can optimally solve fairly large problems in seconds and give good approximation guarantees for some extremely large problems. However, for some large problems the solution found is extremely bad (sometimes worse than a greedy algorithm). Another solver we used is based on _simulated annealing_. We start from a greedy solution, obtained by adding to \(\mathcal{S}\) the convex polygon that covers the most uncovered witnesses at each step, breaking ties randomly. If a previously added convex polygon in \(\mathcal{S}\) becomes unnecessary, we remove it from \(\mathcal{S}\). At each step, we remove \(3\) random convex polygons from \(\mathcal{S}\) and use the same greedy approach to make the solution cover all of \(W\). A larger solution is accepted with a certain probability that depends on the size difference and decreases as we advance in the annealing procedure. This simple procedure normally produces solutions that are close to the IP solutions, and sometimes produces much better solutions.
## 4 Results
We now discuss the quality of the solutions obtained with each technique. Our C++ code uses CGAL [8] and CPLEX [4] and is run on Fedora Linux on a Dell Precision 7560 laptop with an Intel Core i7-11850H and 128GB of RAM. All times refer to a single-core execution with scheduling coordinated by GNU Parallel [7]. Our plots for a solution \(\mathcal{S}\) use the _relative solution size_, defined as \(|\mathcal{S}^{*}|/|\mathcal{S}|\), where \(\mathcal{S}^{*}\) is the best solution submitted among all teams. This corresponds to the square root of the Challenge _score_ of \(\mathcal{S}\). Figure 8 compares the different techniques to obtain \(V\)-maximal convex polygons before bloating them. As the figure shows, using \(4\) replications of each constrained Delaunay triangle
To produce the previous plots, we performed 3 independent runs for each settings (showing the best result found). Figure 11 shows the solution sizes obtained by using the best \(k\) solutions from these runs as the collection. Figure 8: Solution sizes relative to the best Challenge solution. Data is based on a triangulation replicated 1, 2, or 4 times and randomly bloated using \(V\). Alternatively, we use Bron-Kerbosch (BK) to obtain all \(V\)-maximal polygons, when the running time is not too long. Afterwards, all collections have bloated again using \(V\cup S_{2}(C)\) and solved using IP. Figure 9: Solution sizes relative to the best Challenge solution. A solid line is used for the IP solver and a dashed line for simulated annealing. Data is based on a triangulation randomly bloated using \(V\) (red) and then bloated again using \(V\cup S_{1}(C)\) (green), or \(V\cup S_{2}(C)\) (blue). Figure 11: Relative solution sizes merging the \(k\) best solutions for different values of \(k\). Figure 12: Running times in seconds to find the solutions of Figure 11. Figure 10: Running times in seconds of the set cover solvers used to find the solutions of Figure 9.
2303.16943
Soft Gamma-Ray Spectral and Time evolution of the GRB 221009A: prompt and afterglow emission with INTEGRAL/IBIS-PICsIT
The gamma-ray burst (GRB) 221009A, with its extreme brightness, has provided the opportunity to explore GRB prompt and afterglow emission behavior on short time scales with high statistics. In conjunction with detection up to very high-energy gamma-rays, studies of this event shed light on the emission processes at work in the initial phases of GRBs emission. Using INTEGRAL/IBIS's soft gamma-ray detector, PICsIT (200-2600 keV), we studied the temporal and spectral evolution during the prompt phase and the early afterglow period. We found a "flux-tracking" behavior with the source spectrum "softer" when brighter. However the relationship between the spectral index and the flux changes during the burst. The PICsIT light curve shows afterglow emission begins to dominate at ~ T0 + 630s and decays with a slope of 1.6 +/- 0.2, consistent with the slopes reported at soft X-rays.
James Rodi, Pietro Ubertini
2023-03-29T18:05:45Z
http://arxiv.org/abs/2303.16943v1
Soft Gamma-Ray Spectral and Time evolution of the GRB 221009A: prompt and afterglow emission with _Integral_/IBIS-PICsIT
###### Abstract
Context:
Aims: The gamma-ray burst (GRB) 221009A, with its extreme brightness, has provided the opportunity to explore GRB prompt and afterglow emission behavior on short time scales with high statistics. In conjunction with detection up to very high-energy gamma-rays, studies of this event shed light on the emission processes at work in the initial phases of GRB emission.
Methods: Using _INTEGRAL_/IBIS's soft gamma-ray detector, PICsIT (200\(-\)2600 keV), we studied the temporal and spectral evolution during the prompt phase and the early afterglow period.
Results: We found a "flux-tracking" behavior, with the source spectrum "softer" when brighter. However, the relationship between the spectral index and the flux changes during the burst. The PICsIT light curve shows that afterglow emission begins to dominate at \(\sim T_{0}+630\) s and decays with a slope of \(1.6\pm 0.2\), consistent with the slopes reported at soft X-rays.
Conclusions:
## 1 Introduction
The long gamma-ray burst GRB 221009A was likely the brightest GRB ever detected (Burns et al., 2023). The observation of the prompt emission was initially reported by _Fermi_/GBM (\(T_{0}=13\):17:00 UTC) (Veres et al., 2022). In view of the extreme GRB flux, spanning from low-energy X-rays to very high-energy gamma-rays, detections were reported by numerous instruments, including detectors not built to detect GRBs like GAIA, SOHO (ESA, 2022), Solar Orbiter (Xiao et al., 2022), _CSES_ HPPL (Battiston et al., 2023) and others. Most of the GRB-devoted telescopes and observatories were obviously triggered, many of them with severe problems from detector dead-time, pile-up and telemetry saturation: _Swift_/BAT (Dichiara et al., 2022); MAXI (Negoro et al., 2022); _Fermi_/LAT (Bissaldi et al., 2022); _AGILE_ (Piano et al., 2022; Ursi et al., 2022); _INTEGRAL_ (Gotz et al., 2022); _Konus-Wind_ (Frederiks et al., 2022); _Insight_-HXMT (Tan et al., 2022); _STPSat-6_/SIRI (Mitchell et al., 2022); _GECAM_ (Liu et al., 2022); _SRG_/ART-XC (Lapshov et al., 2022); _GRBAlpha_ (Ripa et al., 2022). Nonetheless, the usable data from these instruments will enable studies of GRB prompt-emission evolution at high time resolution and with high statistics. Combined with multi-wavelength afterglow detections up to TeV energies, GRB 221009A is a unique opportunity to explore numerous aspects of GRB behavior, with particular regard to the initial phase of the transition from prompt to afterglow gamma-ray emission. In this work, we analyse the soft gamma-ray evolution of GRB 221009A in the \(200-2600\) keV energy range using the spectral-timing data provided by the IBIS/PICsIT gamma-ray telescope aboard _INTEGRAL_ to study how the prompt emission varies throughout the burst. Additionally, we explored the characteristics of the afterglow emission during the prompt phase and shortly after it.
## 2 Observations and analysis
The _INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL)_ was launched in October 2002 from Baikonur, Kazakhstan (Jensen et al., 2003) into a high Earth orbit providing long uninterrupted observations thanks to the \(\sim 2.5\)-day elliptical orbit, resulting in all-sky coverage for more than 85% of the operation time. With its on-board suite of instruments, _INTEGRAL_ (Winkler et al., 2003) spans \(3\) keV \(-\) 10 MeV with fields-of-view (FoV) ranging from \(\sim 100-1000\) deg\({}^{2}\).
Also, the active veto shields of SPI (Vedrenne et al., 2003) and IBIS (Ubertini et al., 2003) feature large sensitive areas exposed to high-energy photons and can detect impulsive events outside the FoVs of the imaging instruments with high sensitivity (von Kienlin et al., 2003; Savchenko et al., 2017). _INTEGRAL_ was observing XTE J1701\(-\)462 at the trigger time of GRB 221009A (2022-10-09 13:17:00.0 UTC), with an angle of \(\sim 65.8^{\circ}\) off-axis from the pointing direction. The prompt emission lasted more than 600 s, including the small precursor peak, and spanned _INTEGRAL_ observations 255800290010\(-\)255800300010. Beginning at 2022-10-10 13:27:56 UTC, _INTEGRAL_ began pointed observations of the afterglow emission. Analysis and interpretation of these afterglow observations are covered in Savchenko et al. (2023). While the SPI-ACS has a higher sensitivity as a GRB monitor due to its large effective area (0.7 m\({}^{2}\)), it has only a single energy channel (\(>75\) keV; von Kienlin et al., 2003). PICsIT, IBIS's soft gamma-ray detector, covers energies from 170 keV to 10 MeV with two commonly used data types: spectral-timing and spectral-imaging (Labanti et al., 2003). The spectral-timing data have 7.8-ms time resolution in 8 broad energy bands (\(200-2600\) keV). This data type sums all the counts in all the detector pixels and thus does not have any position resolution. In contrast, the spectral-imaging data type sums the counts in each pixel over the span of an _INTEGRAL_ pointing (\(\sim 1800\) s). Using the spectral-timing data, PICsIT is able to detect impulsive events both inside and outside the FoV (Bianchin et al. 2009, 2011; Savchenko et al. 2017), though with reduced sensitivity for those outside due to shielding by the IBIS walls. It is possible to account for the absorbing materials to also produce spectra (Bianchin et al. 2009; Rodi et al. 2021). For the above-mentioned reasons, in this paper we focus on IBIS-PICsIT data, which provide 8 broad energy channels and a higher time resolution (7.8 ms vs the 50 ms of SPI-ACS). We analyzed the observations \(255800290010-255800300010\) as part of an accepted _INTEGRAL_ proposal to analyze the PICsIT spectral-timing data for reported GRBs. _INTEGRAL_ telemeters its data to the ground in real time (Winkler et al., 2003). However, the size of the on-board buffer was optimized for 'standard' observations of fields like the Galactic Centre, the Crab, etc., and thus is not ideal for coping with impulsive, high-flux events, due to the limited bandwidth available at the time of the satellite's design and construction. Thus, periods of very high count rate (e.g. GRBs in the FoV) can result in buffer overflows and data gaps for on-board instruments. To remove such bad time intervals (BTIs), we removed periods with 3 or more time bins missing. Additionally, we searched for detector saturation. No time bins were found at or near the maximum possible value allowed by the dedicated telemetry space, indicating that PICsIT did not suffer from pile-up effects during the burst.
## 3 Results
### Temporal evolution
GRB 221009A began with a precursor at 13:17:00 UTC. Figure 1 shows the \(200-1200\) keV PICsIT light curve at a 500-ms time resolution. A dashed line denotes \(T_{0}\) (\(T_{0}=13\):17:00 UTC). The precursor is shown in an inset to better display its behavior. The feature shows a fast rise with an exponential decay lasting for approximately 7 s. No other significant emission was detected in PICsIT until \(\sim T_{0}+177\) s.
Subsequently, Pulse 1 (\(\sim T_{0}+177-205\) s) starts, peaks around 11,000 cts/s, decays to \(\sim 5,000\) cts/s, after which a sub-flare begins (peak \(\sim 7,000\) cts/s), then further decays to a low inter-pulse flux at \(\sim T_{0}+210\) s. Then Pulse 2 (\(\sim T_{0}+210-252\) s) begins, increasing from an inter-pulse level of \(\sim 1,000\) cts/s to \(\sim 21,000\) cts/s in the span of \(\sim 8\) s, after which the PICsIT data suffer from telemetry issues from \(\sim T_{0}+220\) s. Thus the peak of the flare is not detected with PICsIT. The gap ends at approximately \(T_{0}+243\) s with a PICsIT count rate of \(\sim 19,000\) cts/s, after which the GRB flux rapidly decreases to \(\sim 15,000\) cts/s before a more gradual decrease until roughly \(T_{0}+252\) s. Pulse 3 (\(\sim T_{0}+252-320\) s) begins with a fast increase (\(\sim 8,500\) to \(\sim 20,000\) cts/s). Much of the Pulse 3 rise overlaps with the decay of Pulse 2 and thus is not observed. PICsIT again has telemetry issues at \(\sim T_{0}+255\) s and is unable to monitor the peak behavior until \(T_{0}+268\) s, when the flux begins to decrease from a level of \(\sim 19,500\) cts/s to a few hundred cts/s. This is followed by an inter-pulse period until approximately \(T_{0}+380\) s. Pulse 4 (\(\sim T_{0}+380-600\) s) commences with a comparatively low-flux, shoulder-like feature that slowly varies between \(\sim 500-2,000\) cts/s until roughly \(T_{0}+495\) s. On top of this feature several sub-flares occur, lasting \(\sim 5-10\) s with count rates near 4,000 cts/s at their peaks. After \(\sim T_{0}+495\) s, the flux dramatically increases to a peak of approximately 20,000 cts/s in \(\sim 15\) s, with multiple sub-flares present during the rise. The pulse peaks, after which the flux decreases to roughly 10,000 cts/s in approximately 5 s and then decays gradually until \(\sim T_{0}+550\) s, after which the shoulder-like feature continues for roughly 50 s more. The results after \(T_{0}+1000\) s are discussed in Savchenko et al. (2023) as part of the _INTEGRAL_ afterglow follow-up observations. We investigated the hardness ratio (HR) evolution, defined as the count-rate ratio (\(200-312\) keV)/(\(570-1200\) keV), during the four pulses. Their evolution and corresponding HR are shown in Figure 2, where a dashed line is drawn at HR = 0.5 for reference. The evolution of the normalization at 300 keV and the photon index (\(\Gamma\)) are also shown and will be discussed in detail in Section 3.2. GRB 221009A shows an evolution of hardness throughout the pulses, with generally higher HR values ("softer") at higher fluxes and lower HR values ("harder") at lower fluxes. However, some variability is present during the inter-pulse periods, possibly due to statistical fluctuations generated by the relatively low count rate. For Pulse 1, much of the data during the rise phase are missing due to a telemetry gap. Thus the existing data show predominantly the peak and decay behavior, with the hardness evolving from \(\sim 1.1\) at the peak to \(\sim 0.3\) just before the inter-pulse phase. Pulse 2 shows a more complicated behavior. During the rise, the hardness begins at roughly 0.5 and increases to nearly 1.2 as the flux increases before the data gap. In contrast, the dip shows a near-constant hardness at \(\sim 0.5\) as the flux decreases, followed by a sharp increase in both hardness and flux (the beginning of Pulse 3) before flattening at HR \(\sim 0.6\) prior to the next data gap.
When the decay phase of Pulse 3 starts, the hardness is approximately 0.7 and decreases to a value of roughly 0.5 before the inter-pulse period between Pulses 3 and 4 begins. Pulse 4 behaves differently from the prior three. Starting from \(T_{0}+400\) s, the HR varies from \(\sim 0.2-0.4\). Afterwards, the HR increases from \(\sim 0.4\) to \(\sim 0.6\) and reaches nearly 0.75 at the peak of the pulse. During the decay phase, the hardness decreases to roughly 0.6 and remains nearly constant until \(T_{0}+540\) s, when the HR continues to decrease. However, there are large variations in the behavior as the flux decreases, until the afterglow begins at roughly \(T_{0}+600\) s.
### Spectral evolution
Expanding on the HR evolution analysis, we fitted the PICsIT data in 7 channels spanning \(250-2600\) keV using 0.5-s integrations throughout to study the changes in the spectral parameters. We found that the spectra are adequately fit with a power-law model and did not require a high-energy cutoff. The evolutions of the normalization at 300 keV in ph/cm\({}^{2}\)/s/keV and of the photon index (\(\Gamma\)) are shown in Figure 2, as mentioned above. In agreement with the results from the HR evolution, there is a general trend of increasing photon index with increasing flux. Pulse 1 shows an increase from \(\Gamma\sim 1.7\) at \(\sim T_{0}+180\) s, prior to the gap, to indexes of approximately 2.3 at the peak of the pulse. Afterwards, \(\Gamma\) decreases to roughly 1.4 during the inter-pulse period at \(\sim T_{0}+207\) s. Subsequently, the photon index during the Pulse 2 rise rapidly increases to \(\sim 2.4\) prior to the gap at \(\sim T_{0}+220\) s, where the values flatten while the normalization shows an increase, though they are consistent with a constant value. Post-gap, \(\Gamma\) gradually decreases before increasing to \(\Gamma\sim 2.2\) during the rise of Pulse 3, when the next gap begins. When the data restart, the photon index is at a similar value and gradually decreases to roughly 2. Following the next inter-pulse period at \(\sim T_{0}+400\) s, \(\Gamma\sim 1.6\) until approximately \(T_{0}+450\) s, though with a large amount of scatter due to the low flux. Next, \(\Gamma\) slowly varies between \(\sim 1.6\) and \(\sim 1.9\) until roughly \(T_{0}+480\) s, after which the photon index is constant at approximately 1.75 for nearly 20 s. Then, as the flux increases rapidly, \(\Gamma\) increases to \(\sim 2.1\) at approximately \(T_{0}+511\) s, when the pulse peaks. During the decay phase, the normalization drops by a factor of roughly 3 with only a small decrease in photon index (\(\Gamma\sim 2\)). After \(\sim T_{0}+515\) s the flux plateaus for roughly 10 s with a constant photon index, before decreasing exponentially until \(\sim T_{0}+560\) s while \(\Gamma\) decreases from roughly 1.9 to 1.5. After this, the normalization and photon index behave similarly to the period during \(\sim T_{0}+400-450\) s.
## 4 Discussion
### Prompt emission
As seen in Figure 2, the spectral behavior and the flux are correlated. Similar behavior was reported by An et al. (2023) during \(T_{0}+180-300\) s (though with a Band model) and has been found in \(\sim 2/3\) of the multi-pulse GRBs studied by Li et al. (2021). In the case of GRB 221009A, we found that the relationship between the flux and photon index changes throughout and between the pulses. Figure 3 shows the different behavior in the rise and decay of the pulses, with power-law fits to each phase.
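Such per-phase power-law fits reduce to a linear regression in log-log space. The exact functional form is not spelled out above, so the sketch below assumes \(\Gamma = A\,F^{s}\) with \(F\) the 300-keV normalization, which is one plausible reading of the quoted slopes and normalizations:

```python
import numpy as np

def fit_index_vs_flux(flux, gamma):
    """Fit Gamma = A * flux**s for one rise or decay phase by linear
    regression in log-log space (an assumed form, for illustration)."""
    s, logA = np.polyfit(np.log10(flux), np.log10(gamma), deg=1)
    return s, 10.0 ** logA

# Example with made-up numbers of the right order of magnitude:
# s, A = fit_index_vs_flux(np.array([0.1, 0.3, 1.0]),
#                          np.array([1.7, 2.0, 2.3]))
```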
The fit to the Pulse 1 decay found a slope of \(0.14\pm 0.02\) and a normalization of \(1.00\pm 0.03\). However, the lack of rise data means it is not possible to compare the two phases. Pulse 2 shows similar, though significantly different, slopes between the rise and decay, with values of \(0.07\pm 0.01\) and \(0.09\pm 0.01\), respectively, and the two have noticeably different normalizations (\(0.89\pm 0.01\) vs \(0.79\pm 0.01\)). Both slopes are significantly lower than the slope and normalization of the Pulse 1 decay. Due to the brevity of the Pulse 3 rise, we studied only the decay phase. Its behavior is more complicated than that of the previous two pulses. The fit has a slope of \(0.051\pm 0.006\) and a normalization of \(0.08\pm 0.02\), which is flatter than both Pulses 1 and 2, but with a normalization comparable to the Pulse 2 decay. However, the data show a possible flattening behavior below \(\sim 0.25\) ph/cm\({}^{2}\)/s/keV. The Pulse 4 rise has a slope of \(0.08\pm 0.01\) and a normalization of \(0.83\pm 0.02\), comparable to Pulse 2. In contrast, the Pulse 4 decay behavior is similar to the Pulse 3 decay (slope of \(0.051\pm 0.01\) and normalization of \(0.81\pm 0.01\)), with the data below roughly 0.25 ph/cm\({}^{2}\)/s/keV again apparently flattening to nearly constant. Thus the spectral evolution of the pulses changes throughout the prompt emission, though some phases of the later pulses show similar behavior.
The limited energy range of PICsIT's spectral-timing data makes direct comparisons difficult with instruments that extend to lower energies if there is spectral curvature, as is reported by _GRBAlpha_ (Ripa et al., 2023), _Konus-Wind_ (Frederiks et al., 2023), and _GECAM_ (An et al., 2023). However, _AGILE_/MCAL found the data from \(T_{0}+181.00-194.03\) s well fit by a power law of \(\Gamma=2.07\pm 0.04\) (Ursi et al., 2022). While PICsIT does not have data during the early portion of that time period, Figure 2 shows similar values.
Yang et al. (2023) performed a time-resolved spectral analysis of the prompt emission using _GECAM-C_ in the 6 keV\(-\)6 MeV energy range. They fit the spectra to a physical model assuming a synchrotron emission origin. The PICsIT data have a limited energy range that prevents us from performing the same analysis, but we can compare the time-evolution trends with our results. Nevertheless, the Yang et al. (2023) results show a strong "flux-tracking" behavior for the power-law index of the injection rate (\(q\)) during Pulse 2. (They do not present any results during Pulse 1.) The behavior during Pulse 3 shows an increase in \(q\) with flux, though the correlation is less clear than for Pulse 2. Unfortunately, much of the period lacking correlation occurs during the gap in the PICsIT data, so a direct comparison is not possible. Also, the Pulse 3 analysis in Yang et al. (2023) ends at \(\sim T_{0}+272\) s, and thus we are not able to compare with the later Pulse 3 decay behavior that suggests little to no flux correlation. During Pulse 4, no significant "flux-tracking" is present in Yang et al. (2023). While \(q\) is highest during the peak, the errors are large during the interval shown (\(\sim T_{0}+500-520\) s). Overall, it is worth mentioning that the PICsIT results in Figure 3 are consistent with those from Yang et al. (2023). Thus our data are consistent with their interpretation of a synchrotron-emission origin from an expanding emission region and a decaying global magnetic field, and therefore a jet which is Poynting-flux dominated.
Figure 1: Time evolution of the background-subtracted PICsIT light curve of GRB 221009A in the energy range \(200-1200\) keV. Each time bin is integrated over 500 ms to increase the statistics and avoid empty time bins. The dashed line corresponds to \(T_{0}=13\):17:00 UTC, when the precursor was clearly detected by IBIS-PICsIT. The inset shows the precursor with 200-ms time resolution.
### Afterglow emission
Detailed analysis of the afterglow observations with _INTEGRAL_ data after \(T_{0}+1000\) s is covered in Savchenko et al. (2023). However, An et al. (2023) report that afterglow emission begins after \(T_{0}+225\) s, while the prompt emission is still present. To search for the presence of afterglow emission prior to \(T_{0}+600\) s, we fit the PICsIT light curve to a power law (\(N(t)\propto t^{-\alpha}\)) from \(T_{0}+600-900\) s, following An et al. (2023), and found a best-fit index of \(\alpha=3.91\pm 0.01\), which is shown as the green dot-dash line in Figure 4. This is significantly different from the 0.88 of An et al. (2023) (dashed blue line in Figure 4). An inspection of the PICsIT light curve (Figure 4) finds that the An et al. (2023) slope begins to over-predict the observed flux at approximately \(T_{0}+900\) s. In contrast, the 3.91 slope under-predicts the observed flux above \(\sim T_{0}+800\) s, requiring a break to describe the decay at later times. An et al. (2023) report a break at \(T_{0}+1125\) s, well before what is required for either slope based on the PICsIT data. Additionally, the 3.91 slope significantly over-predicts the observed flux between \(\sim T_{0}+300-380\) s. Thus a start time of \(T_{0}+600\) s for when the afterglow emission begins to dominate is not consistent with the PICsIT data. The PICsIT light curve shows significantly different slopes at \(T_{0}+600\) s and \(T_{0}+700\) s, indicating that the afterglow emission begins to dominate between these times. While a precise start time for when the afterglow emission begins to dominate is difficult to determine using the PICsIT light curve, we tested several start times, requiring that the extrapolated flux not exceed the observed flux prior to \(T_{0}+225\) s. We found \(T_{0}+630\) s as an approximate start time, when the light curve decay has a slope of \(\alpha=1.6\pm 0.2\) (solid red line). This slope is similar to the \(T_{0}+1350-1860\) s fit of \(1.89\pm 0.07\) from An et al. (2023) and consistent with the soft X-ray slopes of \(\sim 1.5\) (O'Connor et al. 2023; Williams et al. 2023) and the soft gamma-ray slope (\(\sim 1.78\)) from Savchenko et al. (2023).
Figure 2: GRB 221009A \(200-1200\) keV PICsIT light curve with the corresponding hardness ratio HR = (\(200-312\) keV)/(\(570-1200\) keV), normalization at 300 keV (ph/cm\({}^{2}\)/s/keV), and photon index of the power-law fits. The black dashed line denotes HR = 0.5 for reference, and the red dashed line marks an index of 1.75 for reference.
Spectral analysis in the \(600-3000\) keV energy range by _Insight_-HXMT during \(T_{0}+630-930\) s found a power-law spectrum with an index of 1.62 using the HE/CsI detectors, and the _GECAM_ data above 200 keV had a spectral index of \(\Gamma=1.56\pm 0.16\) (An et al. 2023). We found the PICsIT data during the same period are unable to constrain a photon index.
Figure 3: Evolution of the photon index with flux during the rise (black) and decay (red) for the four pulses. A dashed line is plotted at a photon index of 2 for reference.
Figure 4: GRB 221009A \(200-1200\) keV PICsIT light curve. The power-law best-fit slope to the data from \(T_{0}+600-900\) s is 3.91 and is shown in green. The best fit to the data from \(T_{0}+630-900\) s gives an index of 1.6, which is shown as the red line. The blue dashed line is the best-fit slope from An et al. (2023) for \(T_{0}+600-900\) s in the \(600-3000\) keV energy range and has a slope of 0.88.
Interestingly, the photon indexes are also consistent with the values from PICsIT during \(T_{0}+400-450\) s and \(T_{0}+550-600\) s, surrounding Pulse 4, when the spectrum softens. These spectra are also significantly harder than the spectra seen during Pulses 1, 2, and 3, suggesting that the emission at these later, fainter periods is different from that of the brighter periods.
## 5 Conclusions
The study of the GRB 221009A spectral evolution in the \(200-2600\) keV energy range using the PICsIT spectral-timing data shows that the prompt emission has a "flux-tracking" behavior, with the PICsIT power-law indexes evolving in a correlated way with the GRB flux. Similar behavior was reported by Yang et al. (2023). They interpret the spectral evolution as synchrotron emission with an expanding emission region and a decaying global magnetic field, without the need for photospheric emission. An investigation of the photon index-flux correlation for each pulse and for the rise and decay phases (where the data are present) showed that the relationship varies across pulses and sometimes between the rise and decay phases of the same pulse. Additionally, the strength of the correlation appears to weaken as the prompt emission progresses, with the decay phases of Pulses 3 and 4 showing little to no correlation at low fluxes (\(<0.25\) ph/cm\({}^{2}\)/s/keV). Lastly, we searched for the presence of afterglow emission in the PICsIT data prior to \(T_{0}+600\) s, finding that the afterglow emission dominates after \(\sim T_{0}+630\) s and decays with a slope of \(1.6\pm 0.2\) until at least \(T_{0}+900\) s. This decay index is consistent with those seen at soft X-rays prior to \(T_{0}+79000\) s.
###### Acknowledgements.
We thank the anonymous referee for their comments and suggestions. The authors thank the Italian Space Agency for the financial support under the "INTEGRAL ASI-INAF" agreement no. 2019-35-HH. The research leading to these results has received funding from the European Union's Horizon 2020 Programme under the AHEAD2020 project (grant agreement n. 871158)
2305.14202
Fine-tuned LLMs Know More, Hallucinate Less with Few-Shot Sequence-to-Sequence Semantic Parsing over Wikidata
While large language models (LLMs) can answer many questions correctly, they can also hallucinate and give wrong answers. Wikidata, with its over 12 billion facts, can be used to ground LLMs to improve their factuality. This paper presents WikiWebQuestions, a high-quality question answering benchmark for Wikidata. Ported over from WebQuestions for Freebase, it consists of real-world data with SPARQL annotation. This paper presents a few-shot sequence-to-sequence semantic parser for Wikidata. We modify SPARQL to use the unique domain and property names instead of their IDs. We train the parser to use either the results from an entity linker or mentions in the query. We fine-tune LLaMA by adding the few-shot training data to that used to fine-tune Alpaca. Our experimental results demonstrate the effectiveness of this methodology, establishing a strong baseline of 76% and 65% answer accuracy in the dev and test sets of WikiWebQuestions, respectively. By pairing our semantic parser with GPT-3, we combine verifiable results with qualified GPT-3 guesses to provide useful answers to 96% of the questions in dev. We also show that our method outperforms the state-of-the-art for the QALD-7 Wikidata dataset by 3.6% in F1 score.
Silei Xu, Shicheng Liu, Theo Culhane, Elizaveta Pertseva, Meng-Hsi Wu, Sina J. Semnani, Monica S. Lam
2023-05-23T16:20:43Z
http://arxiv.org/abs/2305.14202v2
# Complementing GPT-3
###### Abstract
As the largest knowledge base, Wikidata is a massive source of knowledge, complementing large language models with well-structured data. In this paper, we present WikiWebQuestions, a high-quality knowledge base question answering benchmark for Wikidata. This new benchmark uses real-world human data with SPARQL annotation to facilitate a more accurate comparison with large language models utilizing the up-to-date answers from Wikidata. Additionally, a baseline for this benchmark is established with an effective training data synthesis methodology and WikiSP, a Seq2Seq semantic parser that handles large, noisy knowledge graphs. Experimental results illustrate the effectiveness of this methodology, achieving 69% and 59% answer accuracy on the dev set and test set, respectively. We show that we can pair semantic parsers with GPT-3 to provide a combination of verifiable results and qualified guesses that can provide useful answers to 97% of the questions in the dev set of our benchmark.
## 1 Introduction
The emergence of advanced large language models (LLMs) such as GPT-3 has revolutionized the field of natural language processing, demonstrating remarkable capabilities in various tasks. LLMs can perform open-domain question answering without access to external knowledge or any task-specific training examples. However, LLMs tend to hallucinate facts and make false statements. Furthermore, they use a confident tone regardless of the truthfulness of the answer. This may cause significant harm as people increasingly accept LLMs as a knowledge source. On the other hand, traditional knowledge base question answering (KBQA) aims to find answers based on the facts in a given knowledge base. Semantic parsing (SP) has been widely used to tackle this challenging task, where the questions are first parsed into a logical form and then executed to retrieve answers from the knowledge base. It not only achieves state-of-the-art performance over various benchmarks but also provides intermediate reasoning that improves the interpretability of the results, compared to GPT-3 and other information-retrieval-based approaches (Dong et al., 2015; Miller et al., 2016; Sun et al., 2018, 2019) where answers are predicted directly.
### A New Dataset
Most of the widely used high-quality benchmarks for KBQA are based on Freebase (Bollacker et al., 2008), which has been shut down since 2015. With outdated knowledge, it is hard to compare the results with modern LLMs such as GPT-3, since answers have changed over time for most of the questions. Wikidata (Pellissier Tanon et al., 2016), despite being the largest and most popular knowledge base nowadays, has not been widely used for KBQA research. Datasets on Wikidata are either extremely small (Usbeck et al., 2017) or synthesized (Saha et al., 2018). **Our first contribution is WikiWebQuestions, a high-quality semantic parsing dataset for Wikidata**. We migrated the popular WebQuestionsSP (Yih et al., 2016) benchmark from Freebase to Wikidata, with updated SPARQL and up-to-date answers from Wikidata.
Figure 1: WikiSP complements GPT-3 with verified answers from Wikidata, demonstrated with accuracy on the WikiWebQuestions dev set.
Compared to Freebase, Wikidata is much bigger, with 10K properties, 100M entities, and 10 billion triples (facts). The knowledge graph is very sparse and noisy in terms of how facts are represented.
### Few-Shot Seq2Seq Semantic Parsing
To handle the enormous search space of large knowledge bases, most of the previous SP-based approaches use a multi-stage pipeline that decomposes the problem into sub-tasks. They normally rely first on subgraph extraction based on entities detected in the questions. As a result, these approaches struggle with questions with a large search space and fail to understand the question when information is missing in the knowledge graph. On the other hand, despite being the standard approach to semantic parsing, Seq2Seq has mainly been used on schemas of relatively small relational databases (Yu et al., 2018; Xu et al., 2020a,b) and web APIs (Campagna et al., 2017; Su et al., 2017). Only in recent years has it been applied to KBQA for large knowledge bases (Yin et al., 2021; Gu et al., 2021; Banerjee et al., 2022). Suffering from limited training data compared to the massive knowledge base, it is outperformed by approaches using subgraph extraction. In this paper, we present a new methodology to synthesize high-quality training data for large, sparse, and noisy knowledge graphs like Wikidata. We introduce several high-level concepts to cope with the challenges. We collapse the 100K domains in Wikidata to a total of 180 domains in Schema.org1; we create a property hierarchy to increase the learnability of the large number of properties; and we improve named entity disambiguation with a representation that uses either unique IDs or string mentions in the logical form. Our WikiSP Seq2Seq semantic parser based on these ideas establishes a first, strong baseline of 69% and 59% answer accuracy for the dev set and test set of our new WikiWebQuestions benchmark, respectively. **Our second contribution is an effective training data synthesis methodology for Seq2Seq semantic parsers for large, noisy knowledge graphs**.
Footnote 1: [https://schema.org/](https://schema.org/)
### Complementing Large Language Models
Trained on Wikipedia and all of the internet, large language models are capable of answering many questions directly, especially those in WikiWebQuestions, which contains mainly head questions. Our evaluation of GPT-3 on WikiWebQuestions shows that GPT-3 can answer most questions with high accuracy, although it does not consistently give complete answers. Unfortunately, the user cannot tell whether the answers are correct. We propose a strategy of reporting the answer from the semantic parser whenever possible; if not, we give the user GPT-3's guess, explicitly labeled as such. The user can have full confidence in the answers from the former, while also benefiting from the latter. As shown in Figure 1, WikiSP can provide verifiable results 69% of the time and improves the guesses by GPT-3, resulting in errors only 3% of the time. **The paper's third contribution is thus improving GPT-3's trustworthiness with our semantic parser for Wikidata**.
### Outline
The rest of the paper is organized as follows. We first discuss related work in Section 2. We present our dataset WikiWebQuestions in Section 3, and we introduce our methodology and implementation details in Sections 4 through 6. Lastly, we present our experimental results and conclude.
## 2 Related Work
### KBQA
The KBQA task aims to make large knowledge bases accessible by natural language. There are two mainstream approaches in the existing work: (1) semantic parsing-based approaches, and (2) retrieval-based approaches.
Semantic parsing-based approaches first convert the natural language input into a formal logical form, and then retrieve the answers from the knowledge base by executing the logical form. Approaches in this branch normally use a multi-stage pipeline to decompose the problem into sub-tasks. For example, Bordes et al. (2014) and Luo et al. (2018) first generate candidate queries and rank the queries based on semantic similarity with the question. Similarly, Das et al. (2021) first find other queries that contain semantically similar subparts, and construct a new logical form by combining the similar subparts of the found queries. Thanks to the development of large language models, Seq2Seq models have in recent years been used to directly translate natural language into logical form (Yin et al., 2021; Gu et al., 2021; Banerjee et al., 2022). But they are only evaluated on synthetic or paraphrased data. The other popular branch of solutions to KBQA is based on retrieval (Dong et al., 2015; Miller et al., 2016; Sun et al., 2018, 2019; Mavromatis and Karypis, 2022; Sen et al., 2021; Vivona and Hassani, 2019; Verga et al., 2021). It predicts the answers directly within the subgraph extracted based on the topic entity in the question. Retrieval-based approaches allow training end-to-end without using a logical form, which keeps data acquisition costs low. However, they cannot answer certain types of questions, such as questions with no answer available and questions that find the largest/tallest entity, where no entities are named. They have poor interpretability and do not perform well on complex questions.
### KBQA Benchmarks
A variety of benchmarks for KBQA have been created. Most of the early benchmarks are based on Freebase (Berant et al., 2013; Yih et al., 2016; Talmor and Berant, 2018). In recent years, new benchmarks have been created for Wikidata (Cao et al., 2022; Saha et al., 2019). However, these benchmarks are created using rule-based synthesis or paraphrases, which are easier for semantic parsers. CSQA (Saha et al., 2019) collects human-written questions for single triples and constructs complex questions using fixed rules, with very limited natural language variety. KQA Pro (Cao et al., 2022) first synthesizes queries with canonical natural language and then crowdsources human paraphrases. Campagna et al. (2019) show that a model can achieve significantly higher accuracy on paraphrased data than on real-world data, even for untrained queries. Thus, we base our WikiWebQuestions dataset on WebQuestionsSP (Yih et al., 2016), where data are collected from real-world users using the Google Suggest API.
### Training Semantic Parsing with Synthesized Data
Semantic parsing requires a large set of training data annotated with the logical form, which is very expensive to acquire. Wang et al. (2015) propose to use crowd-sourced human paraphrases as training data. Since then, this approach has been used in various domains, including KBQA. Campagna et al. (2019) propose to use large synthetic data with just a small set of human paraphrases as few-shot training data. Xu et al. (2020a,b) introduce an English-grammar-based synthesis tool that significantly improves the variety of synthetic data and show that it reaches similar accuracy without any human input. However, it has only been applied to small closed-domain schemas with fewer than 20 fields.
### Entity Linking
Entity linking involves finding the named entities in a query, and linking them to the corresponding entities in the knowledge graph so that the query can be executed using the proper entities as reference points. The current state of the art for entity linking on the WebQuestionsSP dataset is ReFinED (Ayoola et al., 2022). They use a bidirectional transformer on the query to predict the most likely mentions of named entities within a query, and then combine that information with embeddings computed over every entity in the knowledge base to predict which entity the mention is most likely referring to. Prior to ReFinED, the state of the art was ELQ (Li et al., 2020). They similarly generate embeddings for each entity in the knowledge base, and then use the predicted mentions of entities combined with these predicted embeddings to generate likely entities.
## 3 WikiWebQuestions (WWQ) Benchmark
In this section, we first introduce how large knowledge bases work and then describe how we build our WikiWebQuestions benchmark.
### From Freebase to Wikidata
A knowledge base (KB) is a structured database consisting of a set of entities and relationships between them. KBs store facts in the form of subject-relation-object triples. In each triple, the subject is an entity while the object could be either an entity or a literal, such as a date or a number. Large knowledge bases, such as Freebase (Bollacker et al., 2008), DBPedia (Lehmann et al., 2015), and Wikidata (Pellissier Tanon et al., 2016), store an enormous amount of human knowledge in various domains. They provide structured web information for various downstream tasks including web search. Freebase was launched in 2007 by Metaweb and later acquired by Google in 2010. It served as the open core of the Google Knowledge Graph. Wikidata is a public knowledge base that started in 2012; it grew so quickly that Freebase was shut down and integrated into Wikidata by 2014. As of May 2023, Wikidata has over 10,000 properties, 100 million entities, and 12 billion facts, making it the biggest public knowledge base. Note that only 3,000 of the properties are needed for answering user questions; the rest are used to link data in Wikidata with external library catalogs and database IDs. Figure 2 shows an example Wikidata page for the entity "Joe Biden". In Wikidata, entities and properties are given unique identifiers, QIDs and PIDs, respectively, and a default natural language label. Each entry also contains a description and aliases providing alternative ways to describe the entity or property. Label, description, and aliases are all multilingual, meaning that they can be displayed to users or entered by users in all supported languages. Facts are stored as statements in Wikidata. For example, the fact that Joe Biden is the president of the United States can be represented as a statement triple (Q6279, P39, Q11696), where P39 is the PID for the property _position held_, and Q6279 and Q11696 are the QIDs for Joe Biden and the president of the United States, respectively. Each statement can be qualified with additional conditions. For example, "Joe Biden is the president of the US" is qualified by the predicate (P580, 20 Jan 2021), meaning that the _start time_ (P580) of Joe Biden's presidency is 20 Jan 2021. Unlike Freebase, Wikidata does not have a fixed set of domains with their available properties as a schema. Instead, any entity can be used as a subject or an object with any property to form a triple.
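Qualified statements like the one above are reachable in SPARQL through Wikidata's statement nodes (the standard `p:`/`ps:`/`pq:` prefixes). As a minimal sketch, the start-time qualifier of the presidency example can be fetched from the public query service like this (the endpoint and prefixes are standard; the script itself is only an illustration):

```python
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?start WHERE {
  wd:Q6279 p:P39 ?stmt .     # Joe Biden -> position held (statement node)
  ?stmt ps:P39 wd:Q11696 ;   # ... president of the United States
        pq:P580 ?start .     # start time qualifier
}
"""

resp = requests.get(ENDPOINT,
                    params={"query": QUERY, "format": "json"},
                    headers={"User-Agent": "wwq-example/0.1"})
for row in resp.json()["results"]["bindings"]:
    print(row["start"]["value"])  # e.g. 2021-01-20T00:00:00Z
```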
However, each entity has a property _instance of_ (P31) that lists the _domain entities_ the entity belongs to. For example, "Joe Biden" is an instance of "human", as shown in Figure 2. Each domain entity has a _subclass of_ (P279) property that lists its higher-level domain entities, forming a hierarchical type system. For example, the "human" domain is a subclass of "mammal", which is a subclass of "vertebrate", which is a subclass of "animal". All domains are a subclass of the "entity" domain.
### WebQuestionsSP
Although Wikidata has long been the most popular large knowledge base, existing benchmarks on it are unfortunately either small or of low quality. On the other hand, benchmarks over the deprecated Freebase still dominate KBQA research with better-quality data. For example, WebQuestions (Yih et al., 2015) was collected by using the Google Suggest API instead of human paraphrasing or synthesis. As a result, it is much more natural and truly reflects the real-world questions users may ask. This dataset was later annotated with SPARQL over Freebase and named WebQuestionsSP (Yih et al., 2016). Examples with no legitimate SPARQL to retrieve answers from Freebase were dropped. In total, WebQuestionsSP consists of 3098 examples in the training set and 1639 in the test set.
### Migrating WebQuestionsSP to Wikidata
We migrated WebQuestionsSP, the best collection of natural language questions over a general knowledge graph, from Freebase to Wikidata, with the help of an automatic tool we developed based on Google's entity mapping2 and Wikidata's relation mapping3. About 60% of the dataset was automatically converted. One of the authors of this paper, who did not participate in model tuning, manually converted the instances that failed to convert automatically.
Footnote 2: [https://developers.google.com/freebase](https://developers.google.com/freebase)
Footnote 3: [https://www.wikidata.org/wiki/Wikidata:WikiProject_Freebase/Mapping](https://www.wikidata.org/wiki/Wikidata:WikiProject_Freebase/Mapping)
While much bigger, Wikidata does not necessarily contain all the information available in Freebase. For example, it lacks countries' trade partners, hence we drop all such questions from the WebQuestionsSP dataset. Its information may also be incomplete; for example, it knows the religions of many countries, but not of some, such as Australia. We include such questions in the dataset since the logical queries are correct and useful for other entities.
Figure 2: An example Wikidata page: Joe Biden.
If multiple paths can lead to the correct answer, we choose the path that provides the most complete answers and has the best availability among entities in the same domain. For example, when asking for books written by an author X, we can either search for books whose _author_ is X or find the _notable works_ of X that are books. While the latter is more efficient, the property _notable works_ is not always available for all authors, and it often does not provide a complete list. Thus, we annotate such examples using the former representation. We also cleaned up the original dataset. The dataset contained questions like "who does Ronaldinho play for now in 2011?". We drop the appended year as it conflicts with "now" in the utterance, which refers to the live information in Wikidata.
### Dataset Statistics
In total, we dropped about 5% of the examples from WebQuestionsSP, keeping 2931 examples from the training set and 1560 from the test set.
We withhold the last 500 examples from the training set as the dev set. We evaluate using two metrics: (1) query accuracy, which measures whether the predicted logical form matches the gold logical form exactly, and (2) answer accuracy, which measures whether the answers retrieved by the predicted logical form match the gold answers. Among these examples, about 12% have no answer in Wikidata. A KBQA system should be able to _understand_ a question even if its answers are not currently available in the underlying knowledge base. However, it does not make sense to evaluate answer accuracy for these examples. Thus, we present two versions of the benchmark: (1) **WWQ**, containing only questions whose answers are in Wikidata, so that both query accuracy and answer accuracy can be evaluated, and (2) **WWQ-SP**, containing all examples, for which we only evaluate query accuracy. The size of each version of the dataset is shown in Table 1.
\begin{table} \begin{tabular}{l l l l} \hline \hline & Train & Dev & Test \\ \hline **WWQ** & 2431 & 438 & 1384 \\ **WWQ-SP** & 2431 & 500 & 1560 \\ \hline \hline \end{tabular} \end{table} Table 1: Size of the WWQ and WWQ-SP datasets.
The complete dataset will be made available upon the publication of the paper. Given that Wikidata has 100 million entities and 3,000 useful properties for answering questions, the training data set is woefully inadequate and can be considered a "fewshot" training data set at best.
## 4 Property-Based Knowledge Graphs
As it is infeasible to collect enough natural human questions to adequately cover possible questions on Wikidata, we augment the training data set with synthesized data. It is challenging to generate good synthetic data for Wikidata. Not only is Wikidata large; unlike relational databases and Freebase, it also has no predefined domains or types, only properties. Moreover, the properties are used inconsistently, and the same type of information may be represented with different properties. The same question may map to different target properties depending on not just the type of the entities of interest, but also unmentioned attributes of the entity, or even which of the properties have been populated with values. Essentially, a sequence-to-sequence semantic parser that directly translates a sentence to its logical form would need to memorize the knowledge base to predict the logical form properly, and the parser would need to be retrained to accommodate updates in the knowledge base. Our approach is to create a high-level domain and type hierarchy for property-based graphs so they can have the benefits of types: entities in the same domain often share the same properties, and hence are asked the same kind of questions. The same question on two different entities will map to a similar logical form. In the following, we describe how we handle domains and properties in Wikidata.
### High-Level Open Knowledge Domains
Even though Wikidata is property-based, all named entities have one or more _instance of_ properties pointing to some domain entity; domain entities are organized into a hierarchy with the _subclass of_ property. Wikidata has over 150 thousand domains, and most of them are too subtle to distinguish in natural language and too sparsely used to make meaningful queries. For example, Washington DC is an _instance of_ "big city", "capital city", "human settlement", and "planned community", among others. "Big city" and "capital city" can be better represented using the properties _population_ and _capital of_, while "human settlement" and "planned community" are just too rare to use in natural language and difficult to distinguish from "city". In contrast, Schema.org is a curated ontology for open domain knowledge to facilitate semantic
In contrast, Schema.org is a curated ontology for open domain knowledge to facilitate semantic \begin{table} \begin{tabular}{l l l l} \hline \hline & Train & Dev & Test \\ \hline **WWQ** & 2431 & 438 & 1384 \\ **WWQ-SP** & 2431 & 500 & 1560 \\ \hline \hline \end{tabular} \end{table} Table 1: Size of WWQ and WWQ-SP datasets. search of web data. It has a typed hierarchy of 802 domains. Most of the more important Wikidata domains have been mapped to the Schema.org classes via the _equivalent class_ property. We thus use the Schema.org topology to identify the first-class domains; all domains of Wikidata are either equivalent or a subdomain of these domains. We collect the properties of each first-class domain by sampling Wikidata. We first extract the popular entities for each domain based on the number of sitelinks4 (number of links for an entity to other Wikimedia pages). We include properties only if they are used by at least two of the top 100 most popular entities in each domain. In total, we included 180 top-level domains with an average of 28.6 properties per domain. We found the sampling strategy we used produced a sufficiently large schema to cover questions in WikiWebQuestions. Note that the sampling size and focus can be easily be adjusted. Footnote 4: [https://www.wikidata.org/wiki/Help:Sitelinks](https://www.wikidata.org/wiki/Help:Sitelinks) ### A Property Hierarchy As discussed above, the target property of a question can depend on the type of the entities of interest, unmentioned attributes of the entity, and which of the properties have been populated with values. For instance, consider the common question "where is \(x\) located". The most likely target property is _located in the administrative territorial entity_ (P131). However, if \(x\) is an organization, then the target property is _headquarters location_ (P159). If \(x\) is a person, then their location can be answered with one of the _residence_ (P551), _work location_ (P937), and _country of citizenship_ (P27) properties, sorted in order of preference. For the question "who plays role \(x\) in movie \(y\)", the target property is _voice actor_ if the character role is a CGI character, and _cast member_ otherwise. Such information may not even be available in Wikidata. The principle for our approach is that neural networks should not need to learn these nuances and idiosyncrasies for every entity in the enormous knowledge base. The neural network should simply predict the semantics of the question, and let the predicted logical form be resolved separately for the specific instances depending on what is available in the knowledge base. To achieve this goal, we introduce the concept of a property hierarchy to group together properties with similar semantics under a _super property_. There are two flavors of super properties: * The any super-property uses the first populated property from an ordered list of sub-properties. For example, _location_ is a super property for _located in the administrative territorial entity_, _headquarters location_, _residence_, _state_, _country_, _continent_, among many others. If a value is available for property _located in the administrative territorial entity_, then it will be returned; otherwise, it will check if the value for _headquarters location_ is available. It will keep searching until it reaches the end of the list. This can be computed with the coalesce operator in SPARQL. * The all super-property returns the values for all properties in its list. 
For example, _partner_ will return values for both _unmarried partner_ and _spouse_. This can be computed with the union operator in SPARQL.
Generic questions such as "where is \(x\) located" will be annotated with the super-property _location_. After the parse, the predicted super-property is expanded into the SPARQL code accordingly and executed. However, questions asking explicitly for a particular sub-property should parse directly into queries for that property. For example, "what country is \(x\) a citizen of" is mapped to the property _country of citizenship_ and not the _location_ super-property. It returns null if the _country of citizenship_ property is not available for \(x\). We manually curate these super-properties based on the dev set of WikiWebQuestions. We highly recommend that the Wikidata community adopt a property hierarchy. Today, only a set of approved Wikidata users are allowed to create properties, and they would be the logical group to curate the hierarchy for the 3,000 properties used to answer questions in natural language.
## 5 Named Entity Disambiguation (NED)
For a closed-domain database, entities are normally unambiguous and can be identified easily using a soft search. In large knowledge bases like Wikidata, many entities share similar or identical names. In SPARQL, all entities are represented using QIDs, which are unique identifiers for entities. With over 100 million entities in Wikidata, we use a dedicated entity linker to identify the entities in a query and feed the result to the semantic parser.
### Challenges of NED for Wikidata
Disambiguating named entities for WikiWebQuestions is particularly difficult. First, since the dataset is collected from real-world questions without prompting the users for more information, users tend to refer to their entities of interest without using their full names. Second, the questions are generally short, with very limited context, making it harder to disambiguate among entities with similar names. Last, many QIDs in Wikidata are used to represent terms not generally known as "named entities". For example, domain entities are often ignored by entity linkers: in "What is the biggest country in Europe by population?", both "country" (Q6256) and "Europe" (Q46) are required to construct the correct SPARQL, but entity linkers only provide "Europe" and ignore "country".
### Entity Handling
Consider the simple query "What movies has Selena Gomez starred in?" The SPARQL query is
SELECT DISTINCT ?x WHERE {
  ?x wdt:P31/wdt:P279* wd:Q11424 .
  ?x wdt:P161 wd:Q83287 .
}
This says that we are seeking \(x\), where \(x\) is, transitively, either an instance of (wdt:P31) or a subclass of (wdt:P279) film (wd:Q11424), and \(x\) has Selena Gomez (wd:Q83287) as a cast member (wdt:P161). Note that wdt is the prefix for Wikidata properties, and wd is the prefix for Wikidata entities. To handle ambiguous entities, we use the ReFinED entity linker (Ayoola et al., 2022) to identify the entities in the user statements and return a set of triples of the form \(\langle\)entity, domain, QID\(\rangle\), where _entity_ is the name (default label) Wikidata gives the entity and _domain_ is the value of the instance-of property in Wikidata for that entity. For the example above, ReFinED returns \(\{\langle\)Selena Gomez, human, Q83287\(\rangle\}\), but misses the entity \(\langle\)film, visual artwork, Q11424\(\rangle\). We want our semantic parser to be able to recover from mistakes by the entity linker.
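One concrete recovery path, detailed in Sections 5.2 and 6, is to fall back from a missing QID to the entity's mention and resolve it at inference time with the `wbsearchentities` API named in Section 6. A minimal sketch, taking the first-ranked hit as the "most popular" match (a simplification of whatever ranking is actually used):

```python
import requests

API = "https://www.wikidata.org/w/api.php"

def qid_from_mention(mention, lang="en"):
    """Resolve an entity mention to a QID with the `wbsearchentities`
    API; the first hit of Wikidata's own ranking is taken as the
    'most popular' match (an illustrative simplification)."""
    params = {"action": "wbsearchentities", "search": mention,
              "language": lang, "format": "json"}
    hits = requests.get(API, params=params).json().get("search", [])
    return hits[0]["id"] if hits else None

# e.g. qid_from_mention("Selena Gomez") -> "Q83287"
```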
That is, the semantic parser should use the entity linker when it is helpful, but still try to predict the right logical form when the linker fails. To help with this, we have made several changes. First, the semantic parser is trained to accept, along with the user query, an _optional_ set of _potentially_ useful QIDs from the entity linker. This allows the semantic parser to recover from missing or mistaken entries from the entity linker. Second, we have modified the format of the predicted logical form. Instead of PIDs, we use the property name, since property names are unique and more mnemonic. Similarly, we use the entity name for first-class domains. All other entities can be represented either by their QID or by their mention in the question, if the entity linker fails to supply the QID. At inference time, we use the mention to attempt to look up the QID in Wikidata. If multiple matches exist, the most popular entity is returned. This allows the model to potentially recover from entity-linking failures and get the correct answers.
## 6 Implementation
In this section, we discuss the implementation details of our synthesis methodology, entity linker, and the WikiSP semantic parser.
### Training Data Synthesis
Schema2QA (Xu et al., 2020) introduces a comprehensive set of grammar-based templates to synthesize a large set of natural training examples given a relational database schema. We have extended and adapted the template system to work with large knowledge bases. In Schema2QA, each property in the schema is annotated with natural language phrases in different parts of speech (POS). This helps to generate a variety of ways to describe the same property. In Wikidata, each property has "aliases" which provide natural language phrases describing the property. We extract the "aliases" and use a POS tagger to assign them to different parts of speech as annotations for the properties. The synthesis infrastructure is designed to operate on ThingTalk, a custom language that operates across database queries and APIs. To leverage this infrastructure, we built a conversion tool to convert SPARQL to ThingTalk and ThingTalk back to SPARQL. All data in WikiWebQuestions are converted into ThingTalk for training, and at inference time, once the ThingTalk is predicted, it is converted back into SPARQL to retrieve the answers from Wikidata. We also extended ThingTalk to support common multi-hop questions in KBQA with property paths similar to SPARQL.
### Entity Linking
We use ReFinED (Ayoola et al., 2022) for entity linking, which is the current state of the art for WebQuestionsSP. As discussed before, Wikidata treats many common terms such as "country" as named entities and assigns them QIDs. To fine-tune ReFinED to learn such terms, we add the question and entity pairs from the training set of WikiWebQuestions to the data used to train ReFinED's "questions_model" model. We run 10 epochs of fine-tuning using the default hyperparameters suggested by Ayoola et al. (2022). For each identified entity, we provide the mention in the original utterance, the QID, as well as its domain in plain text. This information is appended to the utterance before being fed into the neural semantic parsing model.
#### 6.2.1 The WikiSP Semantic Parser
We synthesized 272,474 examples, and we augmented the few-shot data by replacing entities and values. In total, we obtained 403,034 training examples.
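The exact serialization of the entity-linker output that gets appended to the utterance is not spelled out above; the following is a plausible sketch of that step, with separator tokens that are purely illustrative, not the paper's actual format:

```python
def build_parser_input(utterance, linked_entities):
    """Append the entity linker's output to the utterance before it is
    encoded by the parser. `linked_entities` is a list of
    (mention, domain, qid) triples as produced by ReFinED, e.g.
    [("Selena Gomez", "human", "Q83287")]. The separator tokens used
    here are illustrative only."""
    tagged = "; ".join(f"{mention} ({domain}) [{qid}]"
                       for mention, domain, qid in linked_entities)
    return f"{utterance} <e> {tagged}" if tagged else utterance
```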
We train the semantic parser with entities provided by the fine-tuned ReFinED, and in cases where ReFinED fails to produce the correct entities, we replace the missing QIDs in the logical form with the corresponding mention of the entity in the question. During evaluation, if a mention of an entity is predicted by the model, we look up the QID using the Wikidata "wbsearchentities" API. We fine-tuned the pre-trained BART model introduced in Campagna et al. (2022) for semantic parsing. Our model encodes a concatenation of the user utterance and the QIDs and types predicted by ReFinED, and is trained to generate the full ThingTalk statement directly. BART-large is used, which has only about 400M trainable parameters. We used the transformer learning rate schedule with a 0.01 multiplier, and the transformer warmup schedule with a warmup of 20. In total, we trained the model for 80K iterations.
### Executing Queries on Wikidata
After obtaining the SPARQL query, we proceed to retrieve answers from the Wikidata SPARQL endpoint5. All answers were retrieved as of May 2023. Since Wikidata is actively updated, the gold SPARQL can easily be re-executed to acquire up-to-date answers, allowing the benchmark to be compared against forthcoming iterations of large language models.
Footnote 5: [https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service](https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service)
## 7 Experiments
In this section, we evaluate WikiSP on WikiWebQuestions and demonstrate how WikiSP can be used to complement large language models such as GPT-3.
### Evaluation Metrics
For WWQ-SP, we only evaluate the exact-match query accuracy, and not answer accuracy, since it contains questions with no answers. For WWQ, where all questions have gold answers, we evaluate the query accuracy, answer accuracy, and the F1 score of the answers.
### Evaluation Results
The evaluation results are shown in Table 2. Our approach achieves a 55.8% query accuracy on the full WWQ-SP dataset. If trained with only synthetic data, the model achieves a 23.8% accuracy, while if trained with just augmented few-shot data it achieves a 51.3% accuracy. Since the dataset contains many similar questions in the few-shot and test sets, few-shot data alone achieves good accuracy, but synthetic data still complements the few-shot data by providing better coverage of the knowledge base schema. A similar result can be observed on the WWQ dataset, where training with both synthetic and few-shot data achieves the best accuracy. The answer accuracy is higher than the query accuracy, as the model sometimes provides an alternative correct or partially correct interpretation of the question. Thus, the answer might be correct even if the logical form does not match the gold. See Section 7.5 for examples.
\begin{table} \begin{tabular}{l|c|c c c} \hline \hline & **WWQ-SP** & \multicolumn{3}{c}{**WWQ**} \\ & Query & Query & \multicolumn{2}{c}{Answer} \\ & EM & EM & EM & F1 \\ \hline WikiSP (ours) & 55.8 & 56.9 & 59.0 & 63.1 \\ Synthetic only & 23.8 & 26.6 & 28.3 & 31.3 \\ Fewshot only & 51.3 & 52.6 & 54.8 & 58.9 \\ \hline \hline \end{tabular} \end{table} Table 2: Test results on the WWQ and WWQ-SP datasets.
### Entity Linking Ablation
Our logical form is designed to recover from entity linking errors by letting entities be specified with a
As ReFinED tends to miss entities, we tested with two training strategies: (1) using the fine-tuned ReFinED results in the training data as is (QID-only), and (2) augmenting them with missing QIDs predicted in the gold (QID-augmented). The results on the dev set are shown in Table 3. As dev set is used for validation during training, we obtained a higher accuracy on dev than test. The results indicate that our approach outperforms either baseline since none of them can recover from entity linking errors. ### Complementing GPT-3 LLMs like GPT-3 can answer many questions on general knowledge correctly; however, they may also hallucinate. WWQ is representative of popular questions, so we expect GPT-3 to perform well. On the dev set of WWQ, GPT-3 answers 66.4% of the questions correctly and provides incomplete answers to 26.5% of the questions. For example, when asked "What does Obama have a degree in?", GPT-3 correctly identifies President Obama's political science degree, but fails to mention his law degree. In total, GPT-3 gives wrong answers to 7.1% of the questions. While the accuracy on popular questions is reasonably high, the problem is that, unlike human beings, GPT-3 sounds definitive with any answer it gives. On the question "What is the biggest country in Europe by population?", GPT-3 answers "Germany", when the answer is "Russia". Or, on the question, "where does the name Melbourne come from?" GPT-3 answers "Melbourne comes from the Latin word'melburnum' meaning 'blackburn' or 'blackbird' ", but the real answer is "Melbourne is named after William Lamb, 2nd Viscount Melbourne". It is not possible to tell when GPT-3's answers are wrong, and every answer needs to be fact-checked every time. Semantic parsers can be used to complement LLMs as they are interpretable; all their results are grounded in Wikidata, which we presumed to be correct. It is possible that the response is not answering the question at hand, but the user can tell from the full-sentence response if that is the case. We propose getting the best of both worlds by answering the question with WikiSP if possible. Otherwise, we report GPT-3's guesses by prefacing it with: "We are not sure but GPT-3 guesses that the answer is:". The user can fact-check such answers if desired. For this dev set, we can give definitive answers to 69.4% of the questions with WikiSP (Table 3). For the rest of the questions (30.6%), accounting for the overlap between the GPT-3 and our semantic parser's results, the percentages of guessing correctly, incompletely, and incorrectly are at 18.7%, 8.4%, and 3.4%, respectively (Figure 1). In summary, the combination of GPT-3 and WikiSP makes it possible to give a definitive, correct and complete answer two thirds of the time for the dev set. Users can also benefit from GPT-3's guesses the rest of the time at a 3.4% error rate, which is less than half of the original error rate. We expect this combination to be even more useful for questions on less popular topics and especially on events that happened after the LLM is trained. ### Error Analysis We analyzed the 176 examples in the WWQ-SP dev set where the model failed to predict the gold logical forms, and identified the most common causes as follows. **Identical Answers (11.9%).** In 11.9% of the errors, the logical form results in the same answers, or both the predicted logical form and gold logical form are correct and lead to an empty answer. For example, "What is South America made up of?" 
has gold annotation using the property _has parts_ on the entity "South America", while the model predicts a logical form finding countries whose _continent_ is "South America". Both lead to the same set of answers. This contributes to the discrepancy between query and answer accuracy.
**Reasonable alternate answers (10.2%).** In these cases, the logical form is reasonable but the answers are different from the gold. For example, the gold for the question "what did Boudicca do?" uses the _position held_ property, while the model predicts the _occupation_ property. Both are considered valid answers to the question.
**Entity linking (30.7%).** The entity linker failed to provide the correct entities in 30.7% of the failed examples. While WikiSP can potentially recover from missing entities, it cannot recover from incorrect entities. This is especially common for character roles, as some character roles have different entities for books and movies, or even for different series of movies. Sometimes WikiSP located the correct mention in the question, but the lookup failed. For example, the model located the mention of the event "allied invasion of France" in the question "Where did the allied invasion of France take place", but failed to find the corresponding entity in Wikidata by that name. There are also two cases where the entity linker produced the correct entity, but the model dropped it from the predicted logical form.
**Wrong property (15.9%).** 15.9% of the errors are caused by predicting the wrong property. Some of the examples require background knowledge to parse. For example, the answer to the question "What did Mitch Hedberg OD on" can be found via _cause of death_, while the model thinks we are looking for organizations that Mitch Hedberg performs for.
**Subject vs object (4%).** The model misplaces the subject and object 4% of the time. For example, for the question "What inspired Van Gogh's work?", the model predicts what is _inspired by_ Van Gogh instead.
## 8 Conclusion
We proposed a new high-quality benchmark, WikiWebQuestions, for large knowledge base question answering. The dataset is based on the popular WebQuestionsSP dataset with natural questions, annotated with SPARQL for Wikidata. We also proposed an effective training data synthesis methodology and a Seq2Seq semantic parsing strategy for large and sparse knowledge graphs. Our WikiSP Seq2Seq semantic parser establishes a first, strong baseline of 59% answer accuracy and a 63% F1 score for WikiWebQuestions. We show that we can address the hallucination of large language models like GPT-3 by grounding them with a semantic parser for Wikidata. For the dev set of our benchmark, this combined approach provides useful information for 97% of the questions. More importantly, it generates verifiable answers for 69% of the questions.
## Acknowledgements
This work is supported in part by the National Science Foundation under Grant No. 1900638, the Alfred P. Sloan Foundation under Grant No.
G-2020-13938, the Verdant Foundation, Microsoft, KDDI, JPMorgan Chase, and the Stanford Human-Centered Artificial Intelligence (HAI) Institute.
2304.05998
Rigidly-rotating scalar fields: between real divergence and imaginary fractalization
The thermodynamics of rigidly rotating systems experience divergences when the system dimensions transverse to the rotation axis exceed the critical size imposed by the causality constraint. The rotation with imaginary angular frequency, suitable for numerical lattice simulations in Euclidean imaginary-time formalism, experiences fractalization of thermodynamics in the thermodynamic limit, when the system's pressure becomes a fractal function of the rotation frequency. Our work connects two phenomena by studying how thermodynamics fractalizes as the system size grows. We examine an analytically-accessible system of rotating massless scalar matter on a one-dimensional ring and the numerically treatable case of rotation in the cylindrical geometry and show how the ninionic deformation of statistics emerges in these systems. We discuss a no-go theorem on analytical continuation between real- and imaginary-rotating theories. Finally, we compute the moment of inertia and shape deformation coefficients caused by the rotation of the relativistic bosonic gas.
Victor E. Ambruş, Maxim N. Chernodub
2023-04-12T17:24:04Z
http://arxiv.org/abs/2304.05998v2
# Rigidly-rotating scalar fields: between real divergence and imaginary fractalization

###### Abstract

The thermodynamics of rigidly rotating systems experience divergences when the system dimensions transverse to the rotation axis exceed the critical size imposed by the causality constraint. The rotation with imaginary angular frequency, suitable for numerical lattice simulations in the Euclidean imaginary-time formalism, experiences fractalization of thermodynamics in the thermodynamic limit, when the system's pressure becomes a fractal function of the rotation frequency. Our work connects the two phenomena by studying how thermodynamics fractalizes as the system size grows. We examine an analytically-accessible system of rotating massless scalar matter on a one-dimensional ring and the numerically treatable case of rotation in the cylindrical geometry, and show how the ninionic deformation of statistics emerges in these systems. We discuss a no-go theorem on analytical continuation between real- and imaginary-rotating theories. Finally, we compute the moment of inertia and shape deformation coefficients caused by the rotation of the relativistic bosonic gas.

## I Introduction

Effects of rotation on the state of physical bodies have been a subject of passionate interest throughout the decades. In metals, the uniform rotation acts on electrons via a centrifugal force that produces a slight but experimentally perceptible gradient of electric potential measured at \(\sim 10^{2-3}\,\mathrm{Hz}\)[1]. At the level of electronic spins, one of the numerous examples of rotation-generated phenomena is the Barnett effect [2] which - with its celebrated reciprocal, the Einstein-de Haas effect [3] - relates the mechanical torque and magnetization in ferromagnets. The nuclear analog of the Barnett effect substantially affects the polarization of the protons (ions of hydrogen) in water rotating with the frequency \(\sim 10^{4}\,\mathrm{Hz}\)[4]. However, the fastest rotation of matter has been produced in noncentral collisions of relativistic heavy ions that create quark-gluon plasma in which the vorticity reaches the values \(\sim 10^{22}\,\mathrm{Hz}\)[5; 6; 7]. The fast rotation affects the local properties of quark-gluon plasma, leading to various spin polarization phenomena and allowing us to probe experimentally the interior of rapidly rotating plasma in terms of its local vortical structure [8; 9].

There are various theoretical pieces of evidence that fast rotation also affects the chiral [10; 11; 12; 13; 14; 15; 16] and (de)confining transitions [17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27] of the quark-gluon plasma. Theoretical methods, however, predominantly assume a rigid rotation that makes every physical point rotate about a fixed axis with the same angular velocity. While the rigid character of rotation substantially simplifies the analytical treatment of the problem [28; 29], a consensus on the thermodynamic properties of quark-gluon plasma, even in this simplest case, is still absent, thus opening a gap between numerical and various analytical calculations. Moreover, the latest first-principle simulation reveals the instability of the rigidly rotating gluon plasma below the "supervortical" critical temperature [27], indicating the complexity of rotation in strongly interacting systems.
First-principle information about the quark-gluon plasma comes from lattice simulations in the Euclidean imaginary-time formalism, where a real angular frequency \(\Omega\) brings in the sign problem [30] which makes the numerical simulations impossible. This inconvenience can traditionally be overcome by turning the angular frequency into the complex plane and considering the purely imaginary rotation \(\Omega_{I}=-i\Omega\), in full analogy with the baryon chemical potential [17; 19; 21; 23; 30; 31]. The imaginary rotation differs, however, from the imaginary baryonic chemical potential: the analytical continuation to real rotation used in numerical lattice simulations has some unusual features, including the emergence of (stable) ghost-like excitations [19] characterized by a "ninionic" deformation of statistics and the appearance of fractal features of thermodynamics under imaginary rotation. The fractalization imposes a no-go theorem on an analytical continuation for rotating systems in the thermodynamic limit [32].

In our work, we discuss the effect of rotation on the thermodynamics of the simplest possible system represented by the massless scalar fields. First, we briefly introduce the real and imaginary rotation in Sec. II. Then, in Sec. III, we analyze the interrelation between the fractal features, the analytical continuation, and the causality constraint for the model formulated on a one-dimensional ring, which can be treated analytically. Section IV approaches real and imaginary rotation within the scope of the relativistic kinetic theory applied to the three-dimensional rotating gas. It also addresses the mechanical features of the rotating gas, including its moment of inertia and shape-deformation coefficients. This analysis is followed by Sec. V, where we pursue, for simplicity, a "hybrid" quantization approach based on the cylindrical waves with continuous momentum in a spatially unbounded region. We show the advantages of both discussed approaches in Sec. VI, where the rotating gas in the cylindrically-bounded region, with the discrete quantization of the transverse modes, is treated numerically in great detail. Furthermore, we numerically reveal the fractalization of thermodynamics in the three-dimensional rotating gas and explicitly show strong parallels with the fractalization of thermodynamics on the one-dimensional ring, which is accessible analytically. Our last section is devoted to conclusions and a summary of our results.

Throughout the article, we work with the conventions \(\hbar=c=k_{B}=1\).

## II Imaginary rotation and statistics

### Real rotation and imaginary rotation

Let us consider a quantum-mechanical system of bosonic particles rotating uniformly (rigidly, as a solid body) with the constant angular frequency \(\Omega\) about the \(z\) axis. For simplicity, we can assume that the system of particles is rotating inside a cylinder possessing reflective boundary conditions. In the co-rotating frame, the free energy of the system takes the following form: \[F_{\beta}=\frac{V}{\beta}\sum_{\alpha,m}\ \ \sum_{c=\pm 1}\ln\left(1-e^{-\beta(\omega_{\alpha,m}+cm\Omega)}\right), \tag{1}\] where \(\beta=1/T\) is the inverse temperature, \(V\) is the volume of the system, \(\omega_{\alpha,m}\) is the energy spectrum of the particles in the laboratory reference frame, and \(\alpha\) is a collective notation for quantum numbers other than the projection of the angular momentum, \(m\equiv m_{z}\in\mathbb{Z}\).
We work with zero-charge systems so that the chemical potential does not enter the free energy of the system (1). We also ignore the zero-point contribution associated with the vacuum Casimir energy since it does not affect the thermodynamics of the system. In order to determine the thermodynamic characteristics (for example, energy, pressure, entropy, angular momentum, moment of inertia, etc.), it is sufficient to evaluate the statistical integral (1).

For bosonic particles, the contribution of each quantum level to the thermodynamic quantities is given by the Bose-Einstein distribution \[n_{\omega}^{\rm(bos)}=\frac{1}{e^{\beta\omega}-1}\,,\qquad[\text{bosonic statistics}], \tag{2}\] where \(\omega\) is the energy of the quantum level. In a rigidly rotating system, the statistical weight is determined by the energy in the co-rotating reference frame: \[\omega=\tilde{\omega}_{\alpha,m}\equiv\omega_{\alpha,m}-m\Omega\,, \tag{3}\] thus demonstrating explicitly how rotation with \(\Omega\neq 0\) affects the statistical particle distribution.

It is convenient to calculate the thermodynamic properties of rotating particles using the imaginary-time formalism, in which the time coordinate is turned into a complex variable via the Wick transformation, \(t\rightarrow\tau=it\). The imaginary time \(\tau\) is compactified to a circle of length \(\beta=1/T\) related to the thermal equilibrium temperature \(T\). The compactification imposes matching conditions on the fields: all scalar fields \(\phi\) are periodic functions along the thermal direction, \[\phi(\mathbf{x},\tau)=\phi\left(\mathbf{x},\tau+\beta\right)\,, \tag{4}\] while all fermionic fields (not considered in this article) obey anti-periodic boundary conditions. The Bose-Einstein statistical distribution (2) for bosonic fields can be recovered automatically from the periodic boundary conditions (4) [33].

The imaginary-time approach is intensively used in numerical lattice simulations of quantum field theories where the partition function is formulated in terms of a statistical integral in Euclidean spacetime [34]. The lattice simulations are especially useful for obtaining information about non-perturbative effects that cannot be treated with standard perturbative methods [34]. However, the imaginary-time techniques cannot be directly applied to rotating systems because the action of the Euclidean theory becomes a complex quantity at a nonzero angular frequency, \(\Omega\neq 0\), thus exhibiting the so-called "sign problem" [30]. The latter property does not allow us to treat the partition function of a rotating system as a statistical integral, bringing us to an inconvenient similarity with finite-density systems, where the (baryonic) chemical potential also makes the Euclidean action a complex quantity [34].

The only practical way to avoid the sign problem for rotation is to consider the angular frequency as a purely imaginary variable: \[\Omega=i\Omega_{I}\,. \tag{5}\] The shift of the angular frequency to the complex plane (5) restores the real-valuedness of the Euclidean action [17; 30]. Having calculated the desired quantities at a set of imaginary \(\Omega_{I}\), one can then apply an analytical continuation to map the results obtained with the imaginary rotation to the realistic case of real rotation [17; 21]. This prescription, applied to the angular frequency \(\Omega\), follows a standard set of practices invoked to avoid the sign problem in simulations of finite-density systems [35].
In the context of the imaginary-time formalism, there are two methods by which one can implement the imaginary rotation (5). The first approach, originally proposed in Ref. [30] and adopted in various numerical Monte Carlo simulations of (quark-) gluon plasmas [26; 27; 21; 30], consists in (i) considering the system in a non-inertial co-rotating reference frame in Minkowski spacetime; (ii) turning the system, via a Wick transformation, to the curved Euclidean spacetime with a complex metric tensor; (iii) implementing the substitution (5) which makes the metric tensor real-valued again; (iv) simulating the thermodynamics at a set of non-zero \(\Omega_{I}\) with the standard periodic boundary conditions (4); (v) fitting the obtained numerical results by a reasonable analytical function and, finally, (vi) making an analytical continuation of the lattice results to the real-valued frequency by setting \[\Omega_{I}^{2}\to-\Omega^{2}\,. \tag{6}\]

The second approach implements the imaginary rotation in the imaginary-time formalism in a more straightforward way, using the property that the imaginary frequency \(\Omega_{I}\) corresponds, after all, to a uniform rotation of a subspace of a timeslice of the Euclidean spacetime about a certain fixed axis [19; 23]. As the imaginary time variable \(\tau\) advances for a full period from \(\tau=0\) to \(\tau=\beta\), the system experiences a spatial rotation by the angle: \[\chi=\beta\Omega_{I}\equiv 2\pi\nu\,,\qquad\nu=\frac{\beta\Omega_{I}}{2\pi}\,. \tag{7}\] This turn of the space necessitates a modification of the standard bosonic boundary conditions (4), which should now incorporate a translation in imaginary time combined with the uniform rotation of the Euclidean spacetime. Under the imaginary rotation, the bosonic wavefunction appears to satisfy the rotwisted boundary condition: \[\phi(\mathbf{x},\tau)=\phi\left(\hat{R}_{\mathbf{\chi}}\mathbf{x},\tau+\beta\right)\,, \tag{8}\] where the \(3\times 3\) matrix \[\hat{R}_{\mathbf{\chi}}=\begin{pmatrix}\cos\chi&\sin\chi&0\\ -\sin\chi&\cos\chi&0\\ 0&0&1\end{pmatrix}, \tag{9}\] written in Cartesian coordinates, corresponds to the global rotation of the whole spatial Euclidean subspace, \(\mathbf{x}\to\mathbf{x}^{\prime}=\hat{R}_{\mathbf{\chi}}\mathbf{x}\), by the angle (7). In the absence of rotation, the transformation (9) becomes a unit matrix and the boundary condition (8) reduces to the standard periodic condition for bosons (4). The rotwisted boundary conditions, visualized in Fig. 1, have already been discussed in the context of the Euclidean lattice simulations of field theories [23; 26].

The boundary conditions (8) are obviously invariant under \(2\pi\) shifts of \(\chi\), or equivalently, shifts by one unit in \(\nu\): \[\chi\to\chi+2\pi\,,\qquad\nu\to\nu+1\,, \tag{10}\] and, in parity-unbroken systems, under the reversal of the rotation angle: \[\chi\to-\chi\,,\qquad\nu\to-\nu\,. \tag{11}\] The latter condition holds for the system of neutral particles that we consider. The symmetry under clockwise and counterclockwise rotations (11) can be broken, for example, for charged particles subjected to a background magnetic field, which leads, in particular, to a rotation diode effect in semiconductors [36].
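As an elementary cross-check of the algebra behind Eqs. (8)-(11), the following minimal Python/NumPy sketch (an editorial illustration, not part of the original derivation; all names are ours) builds the matrix (9) and verifies the composition rule and the \(2\pi\) periodicity (10):

```python
import numpy as np

def R_chi(chi: float) -> np.ndarray:
    """Rotation matrix of Eq. (9): global rotation of the spatial
    Euclidean slice about the z axis by the angle chi."""
    c, s = np.cos(chi), np.sin(chi)
    return np.array([[c, s, 0.0],
                     [-s, c, 0.0],
                     [0.0, 0.0, 1.0]])

chi1, chi2 = 0.7, 1.9
# Two successive rotations compose into a rotation by the summed angle.
assert np.allclose(R_chi(chi1) @ R_chi(chi2), R_chi(chi1 + chi2))
# 2*pi periodicity, Eq. (10): chi -> chi + 2*pi leaves the matrix unchanged.
assert np.allclose(R_chi(chi1 + 2 * np.pi), R_chi(chi1))
# chi = 0 reduces the rotwisted condition (8) to the periodic one (4).
assert np.allclose(R_chi(0.0), np.eye(3))
```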
### Imaginary rotation and ninionic statistics

The differences between the two implementations of the imaginary rotation are two-fold: one can either consider the curved Euclidean spacetime with the ordinary boundary conditions (4) implemented along the compactified time (the first approach) or work in a flat Euclidean spacetime with the rotwisted boundary conditions (8), following the second approach. In our article, we consider field theories subjected to imaginary rotation introduced via the rotwisted boundary condition.

The boundary conditions imposed on the fields in the imaginary time direction have a one-to-one correspondence with the statistical distribution of the particles. For example, the equal-time commutation relations for bosonic fields imply the periodic boundary conditions (4), which lead to the Bose-Einstein distribution for bosons (2). Analogously, anti-commuting fermionic variables possess anti-periodic conditions in the compactified time direction, which correspond to the Fermi-Dirac statistics [33]. Therefore, it is appropriate to ask: which statistical distribution corresponds to the rotwisted boundary conditions (8)?

It turns out that the imaginary rotation deforms the statistical distribution of fermions and bosons, leading to a "ninionic" deformation which matches neither the bosonic nor the fermionic statistical distributions [32]. For example, for bosons, the ninionic deformation takes the following form: \[n_{\omega}^{\rm(nin)}(\xi)=\frac{e^{\beta\omega}\cos\xi-1}{1-2e^{\beta\omega}\cos\xi+e^{2\beta\omega}}\,, \tag{12}\] where \(\omega\equiv\omega_{\alpha,m}\) is associated with the energy of the quantum state in the laboratory reference frame and \(\xi=m\chi=2\pi m\nu\) is the deformation parameter associated with the "statistical angle" \(\chi\), Eq. (7). The latter depends on the angular velocity \(\Omega_{I}\) of the imaginary rotation in Euclidean spacetime. Notice that at a zero (modulo \(2\pi\)) statistical angle, the ninionic deformation (12) of the bosonic distribution (2) disappears: \(n_{\omega}^{\rm(nin)}(\xi=2\pi k)=n_{\omega}^{\rm(bos)}\) with an integer \(k\in\mathbb{Z}\).

Figure 1: The rotwisted boundary conditions (8) characterized by the statistical angle (7) produced by the imaginary angular velocity \(\Omega_{I}\).

The ninionic deformation (12) can be understood as the real part of the bosonic occupation number (2), \[n^{\rm(nin)}_{\omega}(\xi)={\rm Re}\,n^{\rm(bos)}_{\omega+i\xi/\beta}\,, \tag{13}\] at an imaginary chemical potential \(\mu=\xi/\beta\). Given the unusual form of the ninionic deformation of the bosonic statistical distribution (12), it is appropriate to ask: how does this deformation modify the statistical properties of the thermal state? What consequences does the introduction of the new dimensionless parameter, the statistical angle (7), bring to the theory? The answer to this question, which depends on the volume of the rotating system, is one of the aims of our article.

With respect to causality, the rigid rotation with real-valued angular velocity is a well-defined notion only for transversely-bounded systems. On the contrary, the imaginary rotation does not impose any bounds on the size of the system due to the absence of the notion of the light cone in the Euclidean space (in other words, there is no causality constraint in the imaginary time formalism because it has no notion of real time). Therefore, the imaginary rotation does not lead to causality problems [30] and can be formulated in the thermodynamic limit in the whole Euclidean space [23]. The relation between imaginary and real rotation in terms of the analytical continuation is another aim of our paper.
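Before moving on, the deformed distribution (12) and its complex representation (13) are easy to verify numerically. The following Python sketch (ours; the sampled parameter ranges are arbitrary choices) checks Eq. (13) and the reduction to the Bose-Einstein form at \(\xi=2\pi k\):

```python
import numpy as np

def n_bose(x):
    """Bose-Einstein factor, Eq. (2); x = beta*omega (may be complex)."""
    return 1.0 / (np.exp(x) - 1.0)

def n_ninion(x, xi):
    """Ninionic deformation of the bosonic distribution, Eq. (12)."""
    return (np.exp(x) * np.cos(xi) - 1.0) / \
           (1.0 - 2.0 * np.exp(x) * np.cos(xi) + np.exp(2.0 * x))

rng = np.random.default_rng(0)
x  = rng.uniform(0.1, 5.0, 1000)      # beta*omega > 0
xi = rng.uniform(-10.0, 10.0, 1000)   # statistical parameter xi = m*chi

# Eq. (13): the ninionic distribution is Re of n^(bos) shifted by an
# imaginary chemical potential, omega -> omega + i*xi/beta.
assert np.allclose(n_ninion(x, xi), np.real(n_bose(x + 1j * xi)))
# At xi = 2*pi*k the deformation disappears and Eq. (2) is recovered.
assert np.allclose(n_ninion(x, 2 * np.pi * 3), n_bose(x))
```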
### Ninionic statistics and fractal thermodynamics

Sticking to an infinite-volume system, one can show, both in the scope of a classical interacting field theory [31] as well as in a free bosonic quantum field theory [32], that the imaginary rotation characterized by a nonvanishing value of the statistical angle \(\chi\) modifies the relation between the physical temperature \(T\) and the length of the compactified direction \(\beta\): \[T(\beta,\chi)=\frac{1}{\beta}f_{\mathsf{T}}\left(\frac{\chi}{2\pi}\right)\,, \tag{14}\] where \[f_{\mathsf{T}}(x)=\left\{\begin{array}{ll}\frac{1}{q}&\mbox{if $x=\frac{p}{q}\in\mathbb{Q}$, with $p,q\in\mathbb{N}$ coprimes,}\\ 0&\mbox{if $x\notin\mathbb{Q}$}\,,\end{array}\right. \tag{15}\] is the Thomae function. In other words, the function (15) gives zero for all irrational numbers and equals a nonzero number \(1/q\) determined by the denominator \(q\) of the rational argument \(x=p/q\in\mathbb{Q}\) with two natural coprime numbers \(p,q\in\mathbb{N}\).

The Thomae function (15), shown in Fig. 2, is also known under other names, such as the raindrop function, the modified Dirichlet function, the popcorn function, etc. This function has the amazing, counter-intuitive property of being discontinuous at every rational argument \(x\) and continuous at every irrational \(x\). The Thomae function possesses a nontrivial fractal structure [37; 38] which equips the thermodynamics of imaginary rotation with fractal properties. The fractalization (and "defractalization") of thermodynamics of imaginary rotating systems will also be addressed in this paper.

Notice that the behavior of the physical temperature (14) as a function of the statistical angle (7) is determined solely by the denominator \(q\) of the rational number \(\chi/(2\pi)\) and not by its numerator. Irrational (in units of \(2\pi/\beta\)) frequencies correspond to zero temperature (14). In the absence of the imaginary rotation, \(\chi=0\), one gets the standard relation between the temperature \(T\) and the length of the imaginary time direction \(\beta\), as expected: \[T(\beta,0)=\frac{1}{\beta}\,. \tag{16}\]

The ninionic deformation of bosonic statistics can be readily understood in the imaginary time formalism for free massless bosons in the thermodynamic limit (on an infinite spatial line), where the particles possess the linear energy dispersion, \(\omega_{k}=|k|\). In this conformal system, the thermal pressure of bosons \(P\) is equal to their energy density, \(E\equiv P\), taking a well-known expression in the absence of imaginary rotation: \[P_{0}=\int_{-\infty}^{\infty}\frac{dk}{2\pi}n^{\rm(bos)}_{\omega_{k}}\,\omega_{k}=\frac{\pi}{6\beta^{2}}\,,\qquad[\Omega_{I}=0]\,. \tag{17}\] The temperature of the system is given by the inverse of the length \(\beta\) of the compactified imaginary-time direction (16).
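Since the Thomae function is central to what follows, here is a tiny Python sketch (ours) of its rational restriction using exact fractions; it illustrates the key point of Eq. (14), namely that only the denominator of \(\nu=\chi/(2\pi)\) survives:

```python
from fractions import Fraction

def thomae(x: Fraction) -> Fraction:
    """Thomae function (15) restricted to rationals: f_T(p/q) = 1/q for
    coprime p, q (Fraction reduces automatically).  Irrational points,
    where f_T vanishes, are not representable in this exact sketch."""
    return Fraction(1, x.denominator)

assert thomae(Fraction(0)) == 1                    # chi = 0: T = 1/beta, Eq. (16)
assert thomae(Fraction(1, 10)) == thomae(Fraction(3, 10)) == Fraction(1, 10)
assert thomae(Fraction(2, 10)) == Fraction(1, 5)   # 2/10 reduces to 1/5
# Periodicity nu -> nu + 1, Eq. (10), is automatic on the denominator:
assert thomae(Fraction(7, 10) - 1) == thomae(Fraction(7, 10))
```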
Looking ahead a little, one can also discuss the thermodynamics of the same system compactified onto a ring of an infinitely large radius \(R\) which is subjected to the imaginary rotation with the angular velocity \(\Omega_{I}\). The pressure can be derived via the ninionic statistics (13): \[P=\lim_{R\to\infty}\frac{1}{2\pi R}\sum_{m\in\mathbb{Z}}n^{\rm(nin)}_{\omega_{m}}(\xi_{m})\,\omega_{m}=\frac{\pi}{6\beta^{2}}f_{\mathsf{T}}^{2}\left(\frac{\beta\Omega_{I}}{2\pi}\right)\,, \tag{18}\] where the ninionic parameter \(\xi_{m}=\chi m\equiv\beta\Omega_{I}m\) is expressed via the angular momentum \(m\) and the statistical angle (7).

Figure 2: Thomae function (15).

The energy spectrum in the statistical sum (18), \[\omega_{m}=\frac{1}{R}|m|\,, \tag{19}\] corresponds to the laboratory frame. In the thermodynamic limit, \(R\to\infty\), the energy gaps of the discrete spectrum (19) shrink, the variable \(m/R\) becomes the continuum momentum \(k\), and the sum in Eq. (18) reduces to an integral, thus closing the gap between the exotic (18) and the standard (17) statistical sums in the thermodynamic limit. However, the presence of the imaginary rotation \(\Omega_{I}\) makes the system nontrivial even in the thermodynamic limit. Indeed, the pressure of the imaginary rotating system (18) has the same expression as the pressure of the non-rotating one (17), with the only important difference that the temperature of the former (14) becomes a fractal function (15) of the imaginary angular frequency \(\Omega_{I}\). In the next section, we discuss the particularities of the fractalization of thermodynamics by imaginary rotation, working with an analytically-solvable example of a free massless particle confined to a one-dimensional ring.

## III Real and imaginary rotations on the ring

### Relativistic rotation, particle spectrum

In this section, we consider a free massless particle on a ring of a fixed radius \(R\) with the angle coordinate \(\varphi\), as shown in Fig. 3. For a static ring, the particle wavefunction is described by the Klein-Gordon equation: \[\left(\frac{\partial^{2}}{\partial t^{2}}-\frac{1}{R^{2}}\frac{\partial^{2}}{\partial\varphi^{2}}\right)\Phi(t,\varphi)=0\,, \tag{20}\] which is formulated in the inertial, laboratory frame. Let us consider the ring rotating with the constant angular velocity \(\Omega\). The coordinates associated with the co-rotating reference frame (denoted by a tilde) are related to the laboratory coordinates as follows: \[t=\tilde{t}\,,\qquad\tilde{\varphi}=\varphi-\Omega t\ \ \text{mod}\ 2\pi\,. \tag{21}\] In the co-rotating frame, the Klein-Gordon equation (20) transforms into the following equation: \[\left[\left(\frac{\partial}{\partial\tilde{t}}-\Omega\frac{\partial}{\partial\tilde{\varphi}}\right)^{2}-\frac{1}{R^{2}}\frac{\partial^{2}}{\partial\tilde{\varphi}^{2}}\right]\Phi(\tilde{t},\tilde{\varphi})=0\,, \tag{22}\] which possesses the energy spectrum in the rotating reference frame: \[\tilde{\omega}_{m}=\frac{1}{R}|m|-\Omega m\,,\qquad m\in\mathbb{Z}\,, \tag{23}\] corresponding to the following eigenfunctions: \[\Phi(\tilde{t},\tilde{\varphi})=\frac{1}{\sqrt{2\pi R}}e^{-i\tilde{\omega}_{m}\tilde{t}+im\tilde{\varphi}}\,. \tag{24}\] The energy spectrum (23) is bounded from below provided the causality condition is satisfied: \[R|\Omega|<1\,. \tag{25}\] The thermodynamics of the system is determined in the rotating reference frame, where all statistical distributions are set by the energy in the co-rotating frame \(\tilde{\omega}_{m}\) rather than by its laboratory-frame counterpart (19).

### Free energy of rotating scalar field

We consider the statistical mechanics of scalar particles in the rotating environment.
The corresponding statistical sum, \[\mathcal{Z}\equiv e^{-\beta\mathcal{F}} =\prod_{m\in\mathbb{Z}}\sum_{n_{m}=0}^{\infty}e^{-\beta(\omega_{m}-\Omega m)n_{m}}\] \[=\prod_{m\in\mathbb{Z}}\left[1-e^{-\beta(\omega_{m}-\Omega m)}\right]^{-1}\,, \tag{26}\] is formulated via the sum over states labeled by the angular momentum \(m\) with the occupation number \(n_{m}\) of the system's levels that possess the total energy \(\tilde{E}_{m,n_{m}}=\tilde{\omega}_{m}n_{m}\) in the rotating reference frame and the total angular momentum \(L_{m,n_{m}}=mn_{m}\). In Eq. (26), \(\mathcal{F}\) stands for the free energy in the co-rotating reference frame (for simplicity, we omit the tilde label hereafter). In the statistical sum, we do not take into account an \(m=0\) contribution, which corresponds to the zero-energy mode and contributes to the zero-point (Casimir) vacuum energy [39]. We concentrate on the thermal part of the free energy, which possesses interesting fractal properties in the thermodynamic limit.

Figure 3: Illustration of a particle on a ring of the radius \(R\) and the angular coordinate \(\varphi\). The ring rotates with the angular velocity \(\Omega\) counterclockwise.

The thermodynamic free energy (26), \[\mathcal{F}(\Omega)=\frac{1}{\beta}\sum_{m=1}^{\infty}\ln\Big{[}\Big{(}1-e^{-\beta(1/R-\Omega)m}\Big{)}\] \[\cdot\Big{(}1-e^{-\beta(1/R+\Omega)m}\Big{)}\Big{]}\,, \tag{27}\] can be evaluated explicitly: \[\mathcal{F}(\Omega)=\frac{1}{12R}+\frac{1}{\beta}\ln\bigg{\{}\eta\left[\frac{i\beta}{2\pi}\left(\frac{1}{R}-\Omega\right)\right]\] \[\cdot\eta\left[\frac{i\beta}{2\pi}\left(\frac{1}{R}+\Omega\right)\right]\bigg{\}}, \tag{28}\] via the Dedekind \(\eta\) function: \[\eta(z)=e^{\frac{i\pi z}{12}}\prod_{n=1}^{\infty}\left(1-e^{2\pi inz}\right)\,. \tag{29}\]

Notice that the Dedekind function (29) is defined only in the upper complex plane, \(\operatorname{Im}z>0\), which implies that the free energy (28) is well-defined if and only if the causality condition (25) is satisfied. The causality condition is absent for the case of the imaginary rotation (5), when the angular frequency \(\Omega\) becomes a purely imaginary quantity. Indeed, according to the analytical properties of the Dedekind function (29), the free energy (28) is a well-defined analytical function for any real value of the imaginary angular frequency \(\Omega_{I}\) at any radius of the ring \(R\). Consequently, the rigid imaginary rotation, contrary to rotation in Minkowski spacetime, can be formulated in the thermodynamic limit.

For the convenience of our subsequent analysis, we consider all physical quantities in units of the inverse length \(1/\beta\) of the imaginary time direction. We introduce the dimensionless length of the ring (the one-dimensional volume) \(L\) and the frequency of rotation \(\nu\), respectively: \[L=\frac{2\pi R}{\beta}\,,\qquad\nu=\frac{\beta\Omega_{I}}{2\pi}\equiv\frac{\chi}{2\pi}\,. \tag{30}\] The normalized frequency \(\nu\) corresponds to the normalized statistical angle (7). The free energy density \(\overline{\mathcal{F}}=\mathcal{F}/(2\pi R)\) is given by \[\overline{\mathcal{F}}=-P= \,\frac{\pi}{6\beta^{2}L^{2}} \tag{31}\] \[+\frac{1}{\beta^{2}L}\ln\left[\eta\left(\frac{i}{L}-\nu\right)\eta\left(\frac{i}{L}+\nu\right)\right].\] After taking the thermodynamic limit \(R\to\infty\) and in the absence of rotation, \(\overline{\mathcal{F}}\to\overline{\mathcal{F}}_{0}=-P_{0}=-\pi/(6\beta^{2})\), as established by Eq. (17).
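As a consistency check of Eqs. (27)-(29), the following Python sketch (ours; the truncation orders are arbitrary but ample here) compares the direct mode sum (27) with its Dedekind-eta representation (28) for a causal real rotation:

```python
import numpy as np

def log_eta(z: complex, nmax: int = 4000) -> complex:
    """log of the Dedekind eta function (29) via its truncated product;
    the product converges for Im z > 0."""
    n = np.arange(1, nmax + 1)
    return 1j * np.pi * z / 12 + np.sum(np.log(1.0 - np.exp(2j * np.pi * n * z)))

beta, R, Omega = 1.0, 2.0, 0.3                     # obeys causality |Omega|*R < 1, Eq. (25)
om = (1 / R - Omega, 1 / R + Omega)

# Direct mode sum, Eq. (27):
m = np.arange(1, 5000)
F_sum = sum(np.sum(np.log(1.0 - np.exp(-beta * w * m))) for w in om) / beta

# Closed form via the Dedekind eta function, Eq. (28):
F_eta = 1 / (12 * R) + sum(log_eta(1j * beta * w / (2 * np.pi)) for w in om).real / beta

assert np.isclose(F_sum, F_eta)
print(F_sum, F_eta)
```

The prefactor \(e^{i\pi z/12}\) of the product (29) is what generates the \(1/(12R)\) term of Eq. (28); the two representations agree to machine precision.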
The first term in Eq. (31) could be erroneously taken for the regularized zero-point (Casimir) energy contribution to the free energy. To show that this identification is not correct, let us consider the trivial case \(\nu=0\) which corresponds to a vanishing statistical angle, \(\chi=0\) (non-trivial angles will be considered shortly after). The low-temperature limit, \(\beta\to\infty\), for a ring with a fixed radius \(R\) corresponds to a vanishing parameter \(L\). Using the following relation, valid for vanishingly small positive \(L\), \[\ln\eta\left(\frac{i}{L}\right)=-\frac{\pi}{12L}+\ldots, \tag{32}\] (where the ellipsis denotes subleading terms in the limit \(L\to 0\)), one gets from Eq. (31) that the normalized free energy vanishes in the low-temperature limit: \[\lim_{\beta\to\infty}\overline{\mathcal{F}}=0. \tag{33}\] For the sake of convenience, we present here the expression for the normalized Casimir free energy and the Casimir pressure: \[\overline{\mathcal{F}}_{\text{Cas}}=-P_{\text{Cas}}=\frac{1}{24\pi R^{2}}. \tag{34}\] The thermodynamic contribution (28) does not contain the zero-point energy since the latter, in the normalization (31), should diverge in the \(L\to 0\) limit (34), which is not the case (33). Therefore, Eq. (31) represents a purely thermodynamic contribution which we address below.

### Fractalization of thermodynamics

Using the property \(\eta(-z^{*})=[\eta(z)]^{*}\), valid for any complex number \(z\) from the upper complex semi-plane, \(\operatorname{Im}z>0\), one gets for the thermodynamic part of the free energy density (28) the following expression: \[\overline{\mathcal{F}}=\frac{\pi}{6\beta^{2}L^{2}}+\frac{2}{\beta^{2}L}\ln\left|\eta\left(\nu+\frac{i}{L}\right)\right|\,, \tag{35}\] where we used the notations (30). The thermodynamic limit, \(L\to\infty\), of the free energy on the ring (35) can be deduced from the beautiful result of Ref. [38] which relates the Dedekind \(\eta\) function (29) with the Thomae \(f_{\mathsf{T}}\) function (15) as the following limit: \[\lim_{\epsilon\to+0}\epsilon\ln\left|\eta(x+i\epsilon)\right|=-\frac{\pi}{12}f_{\mathsf{T}}^{2}(x)\,. \tag{36}\] Applying (36) to the thermal part of the free energy density (35), we get that the (normalized) thermodynamic free energy density "fractalizes" in the thermodynamic limit: \[\lim_{L\to\infty}\overline{\mathcal{F}}=-\frac{\pi}{6\beta^{2}}f_{\mathsf{T}}^{2}(\nu). \tag{37}\] The non-analyticity of the Thomae function \(f_{\mathsf{T}}\) has a fractal nature [38], implying the fractalization of thermodynamics under imaginary rotation [32].

The result in Eq. (37) implies that close to the thermodynamic limit, \(L\gg 1\), the non-analytical fractal part of the free energy density dominates over the analytical term in the thermodynamic free energy (35). Interpreting the expression for \(\overline{\mathcal{F}}\) in light of Eq. (17), we are led to define a rotation-dependent temperature \(T_{\mathsf{T}}(\nu)\) via \[T_{\mathsf{T}}(\nu)=\beta^{-1}f_{\mathsf{T}}(\nu), \tag{38}\] which depends on the statistical parameter \(\nu\equiv\beta\Omega_{I}/2\pi=p/q\) via the discontinuous Thomae function \(f_{\mathsf{T}}\), as given in Eq. (14). Notice that in the thermodynamic limit, the thermodynamics of the system is determined by the ninionic statistics (12). This fact can be seen from the expression for the pressure (35): its infinite-volume limit (37), expressed via the effective temperature (38), coincides with Eq. (18).
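The limit (36) can be probed numerically. A small NumPy sketch (ours; the product truncation and the sampled values of \(\epsilon\) are implementation choices) evaluates \(\epsilon\ln|\eta(\nu+i\epsilon)|\) for a few rationals and compares it with \(-(\pi/12)f_{\mathsf{T}}^{2}(\nu)\); the approach to the limit is visibly slow, so the printed values should only be read as a trend:

```python
import numpy as np
from fractions import Fraction

def log_abs_eta(z: complex, nmax: int = 300_000) -> float:
    """ln|eta(z)| from the truncated product (29); valid for Im z > 0."""
    n = np.arange(1, nmax + 1)
    return (1j * np.pi * z / 12 + np.sum(np.log(1.0 - np.exp(2j * np.pi * n * z)))).real

for nu in (Fraction(1, 2), Fraction(1, 3), Fraction(2, 5)):
    limit = -np.pi / 12 / nu.denominator**2        # -(pi/12) f_T^2(nu), Eq. (36)
    for eps in (0.02, 0.01, 0.005):                # eps = 1/L -> +0
        lhs = eps * log_abs_eta(float(nu) + 1j * eps)
        print(f"nu={nu}, eps={eps}: {lhs:+.4f}  vs  limit {limit:+.4f}")
```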
The fractalization (38), characterized by the non-analytical behaviour of the free energy, is achieved only in the thermodynamic limit, when the radius \(R\) of the ring becomes infinitely large. At any finite \(R\), all thermodynamic characteristics of the system are analytical. Thus, it is instructive to see how thermodynamics acquires its fractal properties under imaginary rotation as the radius of the ring increases.

In Fig. 4 we show the (normalized) thermodynamic pressure of bosons (31) as a function of the normalized statistical angle \(\nu\), Eq. (30), at various (normalized) lengths \(L\) of the ring. The pressure, which is a smooth analytical function of \(\nu\) at a small radius of the ring, develops a series of minima and maxima as the length of the ring increases. For large sizes, \(L\sim 10^{3}\), the pressure of bosonic particles develops self-similar features. At \(L\sim 5\times 10^{3}\), the pressure becomes almost indistinguishable from its limiting form (\(L\to\infty\)) given by Eq. (18) and governed by the fractal properties of the Thomae function (15). In this limit, the thermodynamic pressure becomes a fractal dictated by the ninionic statistics (12).

### Analytical continuation: the disk of analyticity

Finally, let us discuss how the fractalization of thermodynamics in the thermodynamic limit leads to the absence of the analytical continuation from the imaginary angular frequencies to the real ones. In other words, we would like to see that the thermodynamic quantities obtained in the infinite-volume limit at imaginary rotation cannot be directly connected to the thermodynamics of real rotation.

Qualitatively, the validity of this statement can be deduced from both mathematical and physical arguments. Mathematically, it is clear that a non-analytical function cannot be analytically continued to an analytical domain because the result will depend on the prescription used for the continuation procedure. Moreover, at the imaginary-rotating side in the thermodynamic limit, the pressure \(P\) cannot be expressed as a function of the imaginary velocity squared, \(\Omega_{I}^{2}\), Eq. (18), which renders inapplicable the continuation prescription to the real rotation summarized in Eq. (6). Physically, the causality condition (25) is incompatible with the continuation prescription (6) in the thermodynamic limit, \(R\to\infty\), for any finite \(\Omega_{I}\).

However, outside of the thermodynamic limit, at any finite \(R\), the analytical continuation does exist. Let us briefly discuss this point for the example of the ring. Since the length of the ring is always a positive number, \(L>0\), the argument of the Dedekind \(\eta\) function in the free energy density (35) always belongs to the upper part of the complex plane, where the Dedekind function is an analytical and well-defined function. Therefore, the imaginary rotation is well-defined at any imaginary angular frequency \(\nu\), contrary to its real counterpart (28). It is convenient to write explicitly the normalized pressures \(\bar{P}\equiv\beta^{2}P=-\beta^{2}\overline{\mathcal{F}}\) for real (\(\bar{P}^{\,\mathrm{Re}}\)) and imaginary (\(\bar{P}^{\,\mathrm{Im}}\)) angular frequencies, respectively: \[\bar{P}^{\,\mathrm{Re}}(x_{R},y) =-y\,\ln\Big{[}\eta\left(ix_{R}+iy\right)\eta\left(-ix_{R}+iy\right)\Big{]}, \tag{39}\] \[\bar{P}^{\,\mathrm{Im}}(x_{I},y) =-y\,\ln\Big{[}\eta\left(x_{I}+iy\right)\eta\left(-x_{I}+iy\right)\Big{]}.
\tag{40}\] Here we defined the following real-valued quantities: \[x_{R}=\frac{\beta\Omega}{2\pi},\qquad x_{I}=\frac{\beta\Omega_{I}}{2\pi},\qquad y=\frac{\beta}{2\pi R}\,, \tag{41}\] which represent the normalized angular frequencies for real and imaginary rotation (\(x_{R}\) and \(x_{I}\), respectively), and the inverse size of the ring, \(y>0\).

Figure 4: Fractalization of thermodynamics of scalar particles on the ring under the imaginary rotation \(\Omega_{I}\): pressure \(P\), Eq. (31), shown in units of the pressure \(P_{0}\) of an infinite ring, \(L\to\infty\), in the absence of rotation, Eq. (17), as a function of the normalized statistical angle \(\nu=\chi/(2\pi)\equiv\beta\Omega_{I}/(2\pi)\) for various (normalized) lengths \(L\) of the ring (30). The plots at finite values of \(L\) are given for the analytical behaviour (31) in terms of the Dedekind \(\eta\) function (29). The behaviour in the thermodynamic limit, \(L\to\infty\), corresponds to the non-analytical fractal result (38) expressed via the Thomae function (15). The behaviour near the points \(\nu=0\) and \(\nu=1\) is not shown to preserve a convenient vertical scale.

The analytical continuation can be formulated in terms of the relation between the pressures (39) and (40). Despite the similarity of Eqs. (39) and (40), these quantities have different properties. The pressure for real rotation (39) is defined only in the strip \(-y<x_{R}<y\) because the Dedekind eta function is defined only in the upper part of the complex plane (excluding the real axis). Physically, the same condition coincides with the causality requirement (25). The pressure of the gas under imaginary rotation (40) is defined for any real-valued \(x_{I}\in\mathbb{R}\).

Equations (39) and (40) can be written in the following unified form: \[-\bar{P}(z,z_{0})=y\,\ln\Bigl{[}\eta\left(iz+z_{0}\right)\eta\left(-iz+z_{0}\right)\Bigr{]}\,, \tag{42}\] with \[z=x_{R}+ix_{I}\equiv\frac{\beta\left(\Omega+i\Omega_{I}\right)}{2\pi},\quad z_{0}=iy\equiv\frac{i}{L}. \tag{43}\] For any \(y>0\), the analyticity properties of the free energy density at real rotation (39) imply that the function (42) can be expanded in powers of \(z\), around the point \(z=0\), within the disk \(|z|<|z_{0}|\) of radius \(|z_{0}|=y>0\). In the original notations, the disk of analyticity can be defined as the generalization of the causality condition (25) to the plane of complex angular frequencies: \[\left(\Omega^{2}+\Omega_{I}^{2}\right)R^{2}<1\,. \tag{44}\]

The dimensionless pressure \(\bar{P}\) can be written as the following series: \[\bar{P}(z,z_{0})=\sum_{n=0}^{\infty}\bar{P}_{\beta}^{(2n)}(z_{0})z^{2n}\,,\quad|z|<|z_{0}|\,, \tag{45}\] with the first two coefficients in the explicit form: \[\bar{P}_{\beta}^{(0)}(iy) =-y\ln\bigl{(}\eta^{2}(iy)\bigr{)}\,, \tag{46}\] \[\bar{P}_{\beta}^{(2)}(iy) =y\,\frac{\eta^{\prime\prime}(iy)\,\eta(iy)-\bigl{(}\eta^{\prime}(iy)\bigr{)}^{2}}{\eta^{2}(iy)}\,. \tag{47}\] One can check explicitly that the coefficients of the series (45) diverge in the thermodynamic limit, which is consistent with the shrinking radius of convergence (44) as \(R\to\infty\). We show the first three non-zero coefficients \(\bar{P}_{\beta}^{(2n)}\) in Fig. 6.

Figure 6: The first three nonzero coefficients in the series (45) of the pressure of the ring as a function of the (inverse) normalized radius \(1/L\). The direction of the thermodynamic limit is shown by the arrow.
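The coefficient (47) and the divergence of the series coefficients as \(y\to 0\) can be checked with mpmath. In the sketch below (ours; the truncation of the eta product and the sampled values of \(y\) are implementation choices), the \(z^{2}\) Taylor coefficient of Eq. (45), obtained by numerical differentiation, is compared with the closed form (47):

```python
import mpmath as mp

def eta(z, nmax=400):
    """Dedekind eta function (29), truncated product; valid for Im z > 0."""
    q = mp.exp(2j * mp.pi * z)
    return mp.exp(1j * mp.pi * z / 12) * mp.fprod(1 - q**n for n in range(1, nmax))

for y in (1.0, 0.5, 0.25):                      # y = 1/L; thermodynamic limit is y -> 0
    z0 = 1j * mp.mpf(y)
    Pbar = lambda z, z0=z0, y=y: -y * mp.log(eta(1j * z + z0) * eta(-1j * z + z0))
    # z^2 coefficient of the series (45), from a numerical second derivative:
    P2_series = mp.diff(Pbar, 0, 2) / 2
    # Closed form, Eq. (47):
    e0, e1, e2 = eta(z0), mp.diff(eta, z0), mp.diff(eta, z0, 2)
    P2_formula = y * (e2 * e0 - e1**2) / e0**2
    print(y, mp.chop(P2_series), mp.chop(P2_formula))
    # The coefficient grows as y decreases, illustrating the divergence
    # of the series coefficients in the thermodynamic limit.
```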
Figure 5: The thermal contribution to the pressure, \(P\equiv P(L,\nu)\), as a function of the (normalized) length of the ring \(L=2\pi R/\beta\) for various (normalized) statistical angles \(\nu=\chi/(2\pi)\) with the rational values \(\nu=n/10\) at \(n=0,1,\ldots,5\). The pressure is normalized to its value \(P_{0}\), Eq. (17), for a nonrotating ring in the infinite-volume limit, \(L\to\infty\). For rational \(\nu\), the pressure (35), (38) in the infinite-volume limit takes fractal values (shown by the arrows) dictated by the Thomae function (15). The inset shows the zoom in on the small-radius region.

One can also rewrite the series (45) in terms of physical variables: \[\bar{P}_{\beta}(\boldsymbol{\Omega},R)=\sum_{n=0}^{\infty}P_{2n}(\beta,R)\boldsymbol{\Omega}^{2n},\qquad|\boldsymbol{\Omega}|R<1\,, \tag{48}\] where \(\boldsymbol{\Omega}=\Omega+i\Omega_{I}\) is the complex angular frequency and \[P_{2n}(\beta,R)\equiv\left(\frac{\beta}{2\pi}\right)^{2n}\bar{P}_{\beta}^{(2n)}\!\left(\frac{i\beta}{2\pi R}\right), \tag{49}\] and the radius of convergence in the complex \(\boldsymbol{\Omega}\) plane is determined by Eq. (44): \(\Omega_{c}=1/R\). The radius shrinks to zero, \(\Omega_{c}\to 0\) as \(R\to\infty\), thus implying the absence of a direct analytical continuation between real and imaginary angular frequencies in the thermodynamic limit. Thus, the thermodynamics of an infinite-volume system subjected to imaginary rotation is not directly connected to the thermodynamics of real rotation.

### How fractalization emerges as volume grows

Figure 4 shows that at any fixed statistical angle \(\chi=2\pi\nu\) (or, equivalently, at any imaginary frequency \(\Omega_{I}\)) and any finite size \(L\), the thermodynamic pressure is described by a smooth analytical function of \(\chi\). For a rational normalized angle \(\nu=p/q\) with coprime integer numbers \(p\) and \(q\) (\(0<p<q\)), the pressure depends both on the numerator \(p\) and the denominator \(q\) (we remind that in these units, \(\Omega_{I}=(2\pi/\beta)p/q\)). However, as the length \(L\) of the ring increases, the pressure turns into a fractal, implying that it loses sensitivity to the numerator \(p\) and keeps only the dependence on the denominator \(q\) of the ratio \(\nu=p/q\) that defines the imaginary frequency. This curious fractalization transition is shown in Fig. 5 for the particular set of imaginary frequencies \(\Omega_{I}\equiv 2\pi\nu/\beta=p\pi/(5\beta)\) with \(p=0,1,\ldots,9\). Given the periodicity (10), \(\nu\to\nu+1\), and the reflection symmetry (11), \(\nu\to-\nu\), of the pressure, this particular choice leaves us with six distinct values of the normalized statistical angle: \(\nu=0/10,1/10,\ldots,5/10\).

At small and moderate ring sizes, up to \(L\simeq 4\), the pressure \(P=P(L,\nu)\) depends on the normalized statistical angle \(\nu\) monotonically, with \(P(L,\nu_{a})<P(L,\nu_{b})\) for \(1/2\geq\nu_{a}>\nu_{b}\geq 0\) in the mentioned set of values. In other words, in the analytical region, the thermodynamics of the system behaves analytically, exhibiting a dependence on the actual value of the rational-valued normalized statistical angle and not on its numerator or denominator separately. As we have seen above, the transition to the fractal regime is associated with the loss of the analytical continuation from imaginary to real rotation.
For purely imaginary rotation, the convergence segment (44) for the variable \(\nu\) is defined by the condition: \[|\nu|L<1\,, \tag{50}\] implying that for the largest value, \(\nu=1/2\), the non-analytical regime should come into play at the ring length \(L=2\), while for the smallest nonzero value, \(\nu=1/10\), the critical length is larger, \(L=10\). This region of the lengths - shown in the inset of Fig. 5 - is characterized by the breakdown of the monotonic dependence of the pressure on the statistical angle, which is a precursor of the fractal features observed at larger lengths of the ring.

At larger \(L\), the dependence of the pressure on \(\nu\) becomes more peculiar. To see this in detail, it is convenient to start from the non-rotating case, \(\nu=0/10=0\), and associate it with the pair \((p,q)=(1,1)\), since \(\nu=0/10\equiv 0\) and \(\nu=1/1\equiv 1\) correspond to the same static case, related to each other by the translation symmetry, \(\nu\to\nu+1\). The \(\nu=0\) pressure, characterized by the denominator \(q=1\), is shared both by the real, \(\Omega=0\), and imaginary, \(\Omega_{I}=0\), static cases. In Fig. 4, it provides us with a benchmark value for the pressure in the large-\(L\) limit.

The values of the statistical angle \(\nu=1/10\) and \(\nu=3/10\) correspond to rotations with different imaginary angular frequencies, \(\Omega_{I}=\pi/(5\beta)\) and \(\Omega_{I}=3\pi/(5\beta)\), respectively, but they share the same denominator \(q=10\). According to the fractalization property (14), both these cases - which are characterized by the pairs of coprimes \((p,q)=(1,10)\) and \((p,q)=(3,10)\), respectively - should correspond, in the thermodynamic limit, to the pressure of a free bosonic gas at the same temperature \(T=1/(10\beta)\), which is ten times smaller than the temperature in the non-rotating \(\Omega_{I}=0\) (\(\nu=0\)) case. The pressure for \(\nu=1/10\) and \(\nu=3/10\) is, consequently, \(1/q^{2}\equiv 1/100\) of the gas pressure in the absence of imaginary rotation. The described features are clearly seen in Fig. 4: the \(\nu=1/10\) and \(\nu=3/10\) pressures, very different at low \(L\sim 1\), start to approach each other at \(L\sim 10\), converging into a single curve already at \(L\sim 50\). This asymptotic behaviour has fractal features, as the thermodynamics of the gas is sensitive only to the denominator of the rational (properly normalized) angular frequency.

The cases \(\nu=2/10\equiv 1/5\) and \(\nu=4/10\equiv 2/5\) correspond to the coprime pairs \((p,q)=(1,5)\) and \((p,q)=(2,5)\) that share the same denominator \(q=5\). The pressure for these imaginary angular frequencies collapses to a single line even earlier, at \(L\sim 7\), as can be seen from the inset of Fig. 5. In both cases, the pressure approaches the result for a free bosonic gas at the temperature \(T=1/(5\beta)\), i.e., a pressure \(q^{2}=25\) times smaller than that of the non-rotating gas. Finally, the normalized statistical angle \(\nu=5/10\equiv 1/2\) gives the denominator \(q=2\), the temperature \(T=1/(2\beta)\) and a gas pressure which is \(q^{2}=4\) times smaller than that of the non-rotating gas.
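The collapse of the \(\nu=p/10\) curves can be reproduced directly from Eq. (35). The following NumPy sketch (ours; the truncation length of the Dedekind product and the two sampled ring sizes are implementation choices) prints \(P/P_{0}\) at a small and a large ring, anticipating the two hierarchies displayed next:

```python
import numpy as np

def log_abs_eta(z: complex, nmax: int = 100_000) -> float:
    """ln|eta(z)| from the truncated product (29); valid for Im z > 0."""
    n = np.arange(1, nmax + 1)
    return (1j * np.pi * z / 12 + np.sum(np.log(1.0 - np.exp(2j * np.pi * n * z)))).real

def P_over_P0(L: float, nu: float) -> float:
    """Normalized thermal pressure of the ring, Eq. (35), with P0 = pi/(6 beta^2)."""
    return -1.0 / L**2 - 12.0 / (np.pi * L) * log_abs_eta(nu + 1j / L)

for L in (1.0, 200.0):
    print(f"L={L:>5}: " + "  ".join(f"P({p}/10)={P_over_P0(L, p / 10):+.4f}"
                                    for p in range(6)))
# Small L: monotonic ordering in nu.  Large L: the fractal hierarchy emerges,
# with P(1/10) and P(3/10) both near 1/100, and P(2/10), P(4/10) both near
# 1/25, i.e. the Thomae values f_T^2 = 1/q^2.
```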
The monotonic analytical behaviour of the pressure \(P(\nu)\equiv P(L,\nu)\), seen at small lengths of the ring \(L\), \[[\text{small }L\text{ (analytical)}]:\quad P(0)>P\Big{(}\frac{1}{10}\Big{)}>P\Big{(}\frac{2}{10}\Big{)}>P\Big{(}\frac{3}{10}\Big{)}>P\Big{(}\frac{4}{10}\Big{)}>P\Big{(}\frac{5}{10}\Big{)}, \tag{51}\] is completely lost for large \(L\), giving us the fractal, non-analytical hierarchy: \[[\text{large }L\text{ (fractal)}]:\quad P(0)>P\Big{(}\frac{5}{10}\Big{)}>P\Big{(}\frac{2}{10}\Big{)}=P\Big{(}\frac{4}{10}\Big{)}>P\Big{(}\frac{1}{10}\Big{)}=P\Big{(}\frac{3}{10}\Big{)}, \tag{52}\] as is clearly seen in Fig. 4.

### Negative thermodynamic pressure of ninions

Apart from the fractal features of the thermodynamic limit - already anticipated from the analytical approach discussed earlier - the pressure at finite volumes \(L\sim 1\dots 10\) appears to possess an unexpected feature. Namely, there are regions of the statistical angle \(\chi\) where the thermal contribution to the pressure is negative, as is clearly seen in Fig. 4. In this sense, the "ninions" - the auxiliary particles which are associated with the ninionic deformation of the standard statistical distribution (12) - provide us with a phenomenon similar to the Casimir effect, with one important difference: the negative "ninionic" pressure is produced by thermal, and not quantum, fluctuations. As the temperature rises, the negative pressure grows as well. The effect of the negative pressure appears in the analytical region (50), as seen in Fig. 5 and especially in the inset of this figure. This unusual behavior is an exotic property of ninions which is not associated with the fractal statistics.

## IV Rigidly-rotating Bose-Einstein distribution

We now move on and consider a (3+1)-D rigidly-rotating system composed of uncharged, massless boson particles. In this section, we consider such a system from the perspective of relativistic kinetic theory, which is introduced briefly in Subsec. IV.1. In Subsec. IV.2, we discuss the thermodynamic properties of a rigidly-rotating system with a real rotation parameter \(\Omega\), while Subsec. IV.3 considers the slow-rotation limit. Finally, Subsec. IV.4 is dedicated to the case of imaginary rotation.

### Relativistic kinetic theory

Although throughout this article we consider non-interacting bosonic systems, it is instructive to discuss, for a brief moment, an interacting model. This approach will allow us to elucidate thermal distributions in thermodynamic equilibrium and shed some light on the physical nature of imaginary rotating systems. In relativistic kinetic theory, the system dynamics are described using the relativistic Boltzmann equation: \[k^{\mu}\partial_{\mu}f_{\mathbf{k}}=C[f], \tag{53}\] where \(f_{\mathbf{k}}\equiv f_{\mathbf{k}}(x)\) is the one-particle distribution function and \(k^{\mu}=(k^{0},\mathbf{k})\) is the on-shell momentum satisfying \(k^{2}=0\). The macroscopic properties of the system can be described using the energy-momentum tensor, \[T^{\mu\nu}=\int dK\,k^{\mu}k^{\nu}f_{\mathbf{k}}, \tag{54}\] where \(dK=gd^{3}k/[(2\pi)^{3}k^{0}]\) is the Lorentz-invariant integration measure and the degeneracy factor of the single neutral scalar field considered in this paper is \(g=1\). The conservation law \(\partial_{\mu}T^{\mu\nu}=0\) demands that \(k^{\mu}k^{\nu}\) be a collision invariant, i.e. \[\int dK\,C[f]\,k^{\mu}k^{\nu}=0.
\tag{55}\] The prototypical collision term is that corresponding to 2-to-2 scattering processes, \[C_{2\to 2}[f]=\frac{1}{2}\int dK^{\prime}dPdP^{\prime}W_{\mathbf{k}\mathbf{k}^{\prime}\to\mathbf{p}\mathbf{p}^{\prime}}\\ \times(f_{\mathbf{p}}f_{\mathbf{p}^{\prime}}\tilde{f}_{\mathbf{k}}\tilde{f}_{\mathbf{k}^{\prime}}-f_{\mathbf{k}}f_{\mathbf{k}^{\prime}}\tilde{f}_{\mathbf{p}}\tilde{f}_{\mathbf{p}^{\prime}}), \tag{56}\] where \(\tilde{f}_{\mathbf{k}}=1+f_{\mathbf{k}}\) is the Bose enhancement factor and the Lorentz-invariant transition rate \(W_{\mathbf{k}\mathbf{k}^{\prime}\to\mathbf{p}\mathbf{p}^{\prime}}\) can be written in terms of the quantum-mechanical differential cross section \(d\sigma/d\Omega\) as \[W_{\mathbf{k}\mathbf{k}^{\prime}\to\mathbf{p}\mathbf{p}^{\prime}}=s\frac{d\sigma(s,\Theta_{s})}{d\Omega_{s}}(2\pi)^{6}\delta^{4}(k+k^{\prime}-p-p^{\prime}), \tag{57}\] with \(s=(k+k^{\prime})^{2}\) and \(\Theta_{s}\) being the squared center-of-mass energy and the emission angle, respectively, with \[\cos\Theta_{s}=\frac{(k-k^{\prime})\cdot(p-p^{\prime})}{(k-k^{\prime})^{2}}. \tag{58}\]

According to the H theorem, the collision term \(C[f]\) drives the system towards local thermal equilibrium, described for the case of a free bosonic gas by the Bose-Einstein distribution: \[f_{\mathbf{k}}^{\text{BE}}=\frac{1}{\exp[u_{\mu}(x)k^{\mu}/T(x)]-1}\,, \tag{59}\] where \(T(x)\) is the local temperature and \(u^{\mu}(x)\) is the local four-velocity. It is easy to check that \(C[f]=0\) when the gas is in thermal equilibrium, i.e. \(f_{*}=f_{*}^{\text{BE}}\) for \(*\in\{\mathbf{k},\mathbf{k}^{\prime},\mathbf{p},\mathbf{p}^{\prime}\}\).

In global thermal equilibrium, \(f=f_{\text{BE}}\) at each space-time point and Eq. (53) becomes: \[k^{\mu}k^{\nu}\partial_{\mu}\beta_{\nu}(x)=0, \tag{60}\] where \(\beta^{\mu}(x)=u^{\mu}(x)/T(x)\) is the temperature four-vector. Thus, in global equilibrium, \(\beta^{\mu}\) satisfies the Killing equation, \(\partial_{\mu}\beta_{\nu}+\partial_{\nu}\beta_{\mu}=0\). In this paper, we seek the solution that corresponds to rigid rotation: \[\beta^{\mu}(\Omega)\partial_{\mu}=\beta(\partial_{t}-y\Omega\partial_{x}+x\Omega\partial_{y})=\beta(\partial_{t}+\Omega\partial_{\varphi}), \tag{61}\] where \(\Omega\) is the angular velocity and \(\beta=1/T_{0}\) is a constant corresponding to the inverse temperature on the rotation axis (where \(x=y=0\)). The equilibrium distribution (59) thus reads: \[f_{\mathbf{k}}^{\rm BE}(\Omega)=\frac{1}{\exp[\beta(k^{0}+\Omega k_{\varphi})]-1}\,, \tag{62}\] where \[k_{\varphi}=-\rho^{2}k^{\varphi}\,,\qquad k^{\varphi}=\rho^{-2}(-yk^{x}+xk^{y})\,, \tag{63}\] with \(\rho=\sqrt{x^{2}+y^{2}}\) being the distance from the point at \((x,y,z)\) to the rotation axis.

### Thermodynamics of rigid rotation

We now discuss the properties of the rigidly-rotating global equilibrium state. From the relation \(\beta^{\mu}\beta_{\mu}=1/T^{2}(x)\), we can identify the local temperature as \[T(\rho)=\beta^{-1}\gamma(\rho),\quad\gamma(\rho)=(1-\rho^{2}\Omega^{2})^{-1/2}, \tag{64}\] where \(\gamma(\rho)\) is the Lorentz factor of a co-rotating particle at a distance \(\rho\) from the rotation axis. Relation (64) corresponds to the Tolman-Ehrenfest law [40; 41], which relates the local temperature to the metric in a static gravitational field, in the curvilinear background of the co-rotating reference frame. Similarly, the local four-velocity reads \[u^{\mu}\partial_{\mu}=\gamma(\partial_{t}+\Omega\partial_{\varphi}).
\tag{65}\] Both the Lorentz factor and the local temperature \(T(\rho)=\gamma(\rho)/\beta\) diverge on the light cylinder, as \(\rho\Omega\to 1\).

The energy-momentum tensor \(T^{\mu\nu}(x)\) can be obtained via \[T^{\mu\nu}=\int dK\,k^{\mu}k^{\nu}f=(E+P)u^{\mu}u^{\nu}-Pg^{\mu\nu}, \tag{66}\] where \(dK\) is the Lorentz-invariant integration measure introduced in Eq. (54), with the degeneracy factor \(g=1\) for the single neutral scalar field considered in this paper. In the case of massless particles, the energy density \(E=3P\) is expressed via the local pressure \(P\), which reads \[P(\rho)=\frac{1}{3}\int dK\,(k\cdot u)^{2}f_{\rm BE}=\frac{\pi^{2}\gamma^{4}(\rho)}{90\beta^{4}}. \tag{67}\]

We now consider the thermodynamic limit of our rigidly-rotating system. Identifying \(F(\rho)=-P\) as the local free-energy density, we consider the average free energy \(\overline{\mathcal{F}}=\mathcal{F}/V\) in a cylindrical volume \(V=\pi R^{2}L_{z}\) of height \(L_{z}\) and radius \(R\), centered on the rotation axis. The mean free energy density reads \[\overline{\mathcal{F}}(\Omega,R)=-\frac{2}{R^{2}}\int_{0}^{R}d\rho\,\rho\,P(\rho)=-\frac{\pi^{2}}{90\beta^{4}}\gamma^{2}(R). \tag{68}\] The same result can be obtained starting from the expression for the grand potential \(\mathcal{F}\) of relativistic bosons in rotation, \[\mathcal{F} =\int_{V}d^{3}x\int\frac{d^{3}k}{(2\pi)^{3}}\ln[1-e^{-\beta(k-\Omega J^{z})}] \tag{69}\] \[=\int_{V}d^{3}x\int\frac{d^{3}k}{2(2\pi)^{3}}\ln[1-2e^{-\beta k}\cosh(\beta\Omega k_{\varphi})+e^{-2\beta k}],\] where \(J^{z}=-k_{\varphi}\) is the \(z\) component of the particle's angular momentum.

Other thermodynamic quantities can be obtained starting from the relation \[d\mathcal{F}=\beta^{-2}\mathcal{S}d\beta-\mathcal{P}dV-\mathcal{M}d\Omega, \tag{70}\] where \(\mathcal{S}\) and \(\mathcal{M}\) are the total entropy and angular momentum, respectively. Given that \(V=\pi R^{2}L_{z}\), the radial and vertical directions are not equivalent. Therefore, we replace the term \(\mathcal{P}dV\) by \[\mathcal{P}dV\to 2\pi RL_{z}\mathcal{P}_{R}dR+\pi R^{2}\mathcal{P}_{z}dL_{z}, \tag{71}\] with the hydrostatic pressure obtained as the weighted average \(\mathcal{P}=(2\mathcal{P}_{R}+\mathcal{P}_{z})/3\). Thus, the thermodynamic pressures are given by \[\mathcal{P}_{R}=-\frac{1}{2R}\frac{\partial(\overline{\mathcal{F}}R^{2})}{\partial R},\qquad\mathcal{P}_{z}=-\overline{\mathcal{F}}. \tag{72}\] Similarly, the average entropy \(\overline{\mathcal{S}}\) and average angular momentum \(\overline{\mathcal{M}}\) are given by \[\overline{\mathcal{S}}=\beta^{2}\frac{\partial\overline{\mathcal{F}}}{\partial\beta},\qquad\overline{\mathcal{M}}=-\frac{\partial\overline{\mathcal{F}}}{\partial\Omega}. \tag{73}\] The average energy is then given by the Euler relation: \[\overline{\mathcal{E}}=\overline{\mathcal{F}}+\beta^{-1}\overline{\mathcal{S}}+\Omega\overline{\mathcal{M}}. \tag{74}\] Taking into account the thermodynamic relations in Eq. (73), the Euler relation can be written as \[\overline{\mathcal{E}}=\left(\frac{\partial(\beta\overline{\mathcal{F}})}{\partial\beta}\right)_{\beta\Omega}. \tag{75}\] We now evaluate the above quantities using the classical expression in Eq.
(68): \[\mathcal{P}_{R} =\frac{\pi^{2}}{90\beta^{4}}\gamma^{4}(R),\qquad\overline{\mathcal{E}}=\frac{\pi^{2}}{90\beta^{4}}[2\gamma^{4}(R)+\gamma^{2}(R)],\] \[\overline{\mathcal{S}} =\frac{2\pi^{2}}{45\beta^{3}}\gamma^{2}(R),\quad\overline{\mathcal{M}}=\frac{\pi^{2}R^{2}\Omega}{45\beta^{4}}\gamma^{4}(R). \tag{76}\] We remark that the average energy \(\overline{\mathcal{E}}\) can be obtained from the energy-momentum tensor \(T^{\mu\nu}\) via: \[\overline{\mathcal{E}}=\frac{2}{R^{2}}\int_{0}^{R}d\rho\,\rho\,T^{tt}, \tag{77}\] with \[T^{tt}=P(\rho)[4\gamma^{2}(\rho)-1]=\frac{\pi^{2}}{90\beta^{4}}\frac{3+\rho^{2}\Omega^{2}}{(1-\rho^{2}\Omega^{2})^{3}}\,, \tag{78}\] where we took into account the explicit expressions for the local hydrostatic pressure (67) and the Lorentz factor (64).

### Slow rotation: moment of inertia and shape

Considering now the free energy at small values of the rotation parameter \(\Omega\), we expand the free energy density in a series in the velocity of a corotating particle, \[v_{R}=\Omega R\,, \tag{79}\] at the system boundary \(\rho=R\), following Ref. [27]: \[\overline{\mathcal{F}}(\Omega)=\overline{\mathcal{F}}(0)\sum_{n=0}^{\infty}\frac{v_{R}^{2n}}{(2n)!}K_{2n}, \tag{80}\] where \(K_{2n}\) are \(\Omega\)-independent dimensionless coefficients and we took into account that \(\overline{\mathcal{F}}(\Omega)\) is an even function of \(\Omega\). By construction, one gets \(K_{0}=1\). Comparing Eqs. (80) and (68), it is clear that \[\overline{\mathcal{F}}(0)=-\frac{\pi^{2}}{90\beta^{4}},\quad K_{2n}=\left.\frac{1}{\overline{\mathcal{F}}(0)}\frac{\partial^{2n}\overline{\mathcal{F}}}{\partial v_{R}^{2n}}\right|_{\Omega=0}=(2n)!\,. \tag{81}\] The results for the coefficients \(K_{2n}\) are also valid in a multicomponent non-interacting gas since they reflect the rotational response normalized per degree of freedom.

The zero-rotation limit of the moment of inertia, \(I_{0}\), of a one-component bosonic gas evaluates to \[I_{0} \equiv\lim_{\Omega\to 0}I(\Omega)=-\lim_{\Omega\to 0}\frac{1}{\Omega}\frac{\partial\overline{\mathcal{F}}}{\partial\Omega}\] \[=-\overline{\mathcal{F}}(0)R^{2}K_{2}=\frac{\pi^{2}R^{2}}{45\beta^{4}}, \tag{82}\] which follows from the thermodynamic relation (73) for the average angular momentum \(\overline{\mathcal{M}}(\Omega)=I(\Omega)\Omega\) and the definition of the moment of inertia \(I=I(\Omega)\) as the proportionality coefficient in the above relation. It is interesting to notice that recent first-principle simulations [27] indicate that in the high-temperature limit, the rotating gluon gas possesses the dimensionless moment of inertia coefficient \(K_{2}=2.23(39)\), consistent with our estimate (81): \[K_{2}=2\,,\qquad\left[\text{per one bosonic d.o.f.}\right]. \tag{83}\] This result is not unexpected since at sufficiently high temperatures, the gluon plasma becomes a weakly-interacting gas of gluons.

The next non-zero coefficient in the series (81), \[K_{4}=24\,,\qquad\left[\text{per one bosonic d.o.f.}\right], \tag{84}\] corresponds to the correction to the free energy caused by the deformation of the rotating gas due to rotation. This correction also affects the moment of inertia, \(I(\Omega)=I_{0}+I_{2}v_{R}^{2}/2+\dots\), with the universal non-interacting coefficient \(I_{2}/I_{0}=4\). The positiveness of \(I_{2}>0\) implies that the rotating matter increases its angular momentum faster than linearly as the angular frequency grows.
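The closed form (68) and the slow-rotation coefficients (81) can be cross-checked symbolically. The following SymPy sketch (ours; the common factor \(\pi^{2}/(90\beta^{4})\) is scaled out, which is our normalization choice) reproduces \(K_{2n}=(2n)!\) and verifies the radial average of the local pressure (67):

```python
import sympy as sp

v = sp.symbols('v', positive=True)    # v = Omega*R, boundary velocity, Eq. (79)

# Fbar(Omega)/Fbar(0) = gamma^2(R) = 1/(1 - v^2) by Eq. (68); its Taylor
# coefficients give K_{2n}/(2n)! in the expansion (80), hence K_{2n} = (2n)!.
geom = sp.series(1 / (1 - v**2), v, 0, 10).removeO()
for k in range(5):
    print(2 * k, sp.factorial(2 * k) * geom.coeff(v, 2 * k))   # K_0=1, K_2=2, K_4=24, ...

# Radial average (68) of the local pressure (67), in units of pi^2/(90 beta^4):
rho, R, Om = sp.symbols('rho R Omega', positive=True)
gamma2 = 1 / (1 - rho**2 * Om**2)
Fbar = -(2 / R**2) * sp.integrate(rho * gamma2**2, (rho, 0, R))
print(sp.simplify(Fbar + 1 / (1 - R**2 * Om**2)))              # -> 0, i.e. Fbar = -gamma^2(R)
```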
The positive sign of \(I_{2}\) signals a change in the shape of the rotating system, leading to a spatial redistribution of energy as a result of the rotation, which can already be seen from Eq. (78): rotation tends to increase the contributions to the energy density (77) coming from the outer regions as compared to those coming from the inner ones. The physical situation is somewhat similar, neglecting viscosity effects, to water rotating in a glass: its moment of inertia increases with rotation because the distribution of mass within the glass changes, with the water moving away from the axis of rotation; the increased distance of each mass element from the axis makes the moment of inertia larger. Finite-size corrections, related to the finite transverse size of the system and, consequently, to the quantization of the transverse modes, will be discussed below in Subsects. V.8 and V.9. ### Imaginary rotation We now turn to the case of imaginary rotation. Setting \(\Omega=i\Omega_{I}\) with real \(\Omega_{I}\) is not possible directly in \(f_{\text{BE}}\), because that would lead to a complex-valued distribution function. Instead, we can consider the dynamics of the system described by the distribution \[f_{\mathbf{k}}^{\text{im}} =\frac{1}{2}\big{[}f_{\mathbf{k}}^{\text{BE}}(i\Omega_{I})+f_{\mathbf{k}} ^{\text{BE}}(-i\Omega_{I})\big{]}\] \[=\frac{e^{\beta k}\cos(\beta\Omega_{I}k_{\varphi})-1}{e^{2\beta k }-2e^{\beta k}\cos(\beta\Omega_{I}k_{\varphi})+1}, \tag{85}\] which is nothing but a form of the ninionic deformation of the Bose-Einstein statistics (12). Since \(\beta^{\mu}(i\Omega_{I})\partial_{\mu}=\beta(\partial_{t}+i\Omega_{I}\partial _{\varphi})\) still satisfies the Killing equation, the left-hand side of the Boltzmann equation (53) vanishes. Somewhat unsurprisingly, the collision term on the right-hand side of the same equation does not vanish. This can be seen by considering the small-\(\Omega\) expansion of \(f_{\mathbf{k}}^{\text{BE}}(\Omega)\) introduced in Eq. (62): \[f_{\mathbf{k}}^{\text{BE}}(\Omega)=f_{\mathbf{k}}^{0}\Big{[}1-\tilde{f}_{ \mathbf{k}}^{0}\Omega k_{\varphi}\\ +\frac{\Omega^{2}k_{\varphi}^{2}}{2!}\tilde{f}_{\mathbf{k}}^{0}(f_{\mathbf{ k}}^{0}+\tilde{f}_{\mathbf{k}}^{0})+O(\Omega^{3})\Big{]}, \tag{86}\] where \(f_{\mathbf{k}}^{0}\equiv f_{\mathbf{k}}^{\text{BE}}(\Omega=0)\) and \(\tilde{f}_{\mathbf{k}}^{0}=1+f_{\mathbf{k}}^{0}\). Considering now \(\Omega\to\pm i\Omega_{I}\) and taking the average as described in Eq. (85) gives \[f_{\mathbf{k}}^{\text{im}} =f_{\mathbf{k}}^{0}\left[1-\frac{\Omega_{I}^{2}k_{\varphi}^{2}}{2!} \tilde{f}_{\mathbf{k}}^{0}(f_{\mathbf{k}}^{0}+\tilde{f}_{\mathbf{k}}^{0})+O(\Omega_{I}^{4} )\right],\] \[\tilde{f}_{\mathbf{k}}^{\text{im}} =\tilde{f}_{\mathbf{k}}^{0}\left[1-\frac{\Omega_{I}^{2}k_{\varphi}^{ 2}}{2!}f_{\mathbf{k}}^{0}(f_{\mathbf{k}}^{0}+\tilde{f}_{\mathbf{k}}^{0})+O(\Omega_{I}^{4} )\right].
\tag{87}\] Taking this substitution back into the collision term (56) shows that \[\frac{f_{\mathbf{p}}^{\rm im}f_{\mathbf{p}^{\prime}}^{\rm im}\widetilde{f}_{\mathbf{k}}^{\rm im}\widetilde{f}_{\mathbf{k}^{\prime}}^{\rm im}-f_{\mathbf{k}}^{\rm im}f_{\mathbf{k}^{\prime}}^{\rm im}\widetilde{f}_{\mathbf{p}}^{\rm im}\widetilde{f}_{\mathbf{p}^{\prime}}^{\rm im}}{f_{\mathbf{p}}^{0}f_{\mathbf{p}^{\prime}}^{0}\widetilde{f}_{\mathbf{k}}^{0}\widetilde{f}_{\mathbf{k}^{\prime}}^{0}}=-\frac{\Omega_{I}^{2}}{2}\\ \times\left[p_{\varphi}^{2}(f_{\mathbf{p}}^{0}+\widetilde{f}_{\mathbf{p}}^{0})+p_{\varphi}^{\prime 2}(f_{\mathbf{p}^{\prime}}^{0}+\widetilde{f}_{\mathbf{p}^{\prime}}^{0})\right.\\ \left.-k_{\varphi}^{2}(f_{\mathbf{k}}^{0}+\widetilde{f}_{\mathbf{k}}^{0})-k_{\varphi}^{\prime 2}(f_{\mathbf{k}^{\prime}}^{0}+\widetilde{f}_{\mathbf{k}^{\prime}}^{0})\right]+O(\Omega_{I}^{4}). \tag{88}\] It can be seen that in general, \(C[f]\) does not vanish when \(f_{\mathbf{k}}=f_{\mathbf{k}}^{\rm im}\), hinting that thermal equilibration will generically reduce the magnitude of \(\Omega_{I}^{2}\). Keeping in mind that imaginary-rotation states are not in actual thermal equilibrium, in the sense that their deformed distribution (85) does not have the equilibrium Bose-Einstein form (2) and that the collision integral (88) does not vanish, we can still derive the macroscopic energy-momentum tensor, which becomes now diagonal: \[T_{\rm cl;im}^{\mu\nu}=\mathrm{diag}(E_{\rm cl}^{\rm im},P_{\rm cl;\rho}^{\rm im },\rho^{-2}P_{\rm cl;\varphi}^{\rm im},P_{\rm cl;z}^{\rm im}),\] (89a) with \[E_{\rm cl}^{\rm im}=\frac{\pi^{2}}{90\beta^{4}}\gamma_{I}^{4}(4 \gamma_{I}^{2}-1), \tag{89b}\] \[P_{\rm cl;\rho}^{\rm im}=P_{\rm cl;z}^{\rm im}=\frac{\pi^{2}}{9 0\beta^{4}}\gamma_{I}^{4},\] (89c) \[P_{\rm cl;\varphi}^{\rm im}=\frac{\pi^{2}}{90\beta^{4}}\gamma_{I}^{4}(4 \gamma_{I}^{2}-3), \tag{89d}\] where \(\gamma_{I}(\rho)\) is reminiscent of the Lorentz factor of corotating particles, \[\gamma_{I}=\frac{1}{\sqrt{1+\rho^{2}\Omega_{I}^{2}}}. \tag{90}\] Notice that the Euclidean version of the quantum Tolman-Ehrenfest effect gives a different Lorentz factor [26]: \[\gamma_{I}^{\rm TE}=\frac{1}{\sqrt{1+\rho^{2}\beta^{-2}[\Omega_{I}\beta]_{2 \pi}^{2}}}, \tag{91}\] where \([x]_{2\pi}=x+2\pi k\in(-\pi,\pi]\), with \(k\in\mathbb{Z}\), is invariant under the \(2\pi\) symmetry enforced by the natural periodicity of the imaginary rotation (10). The apparent non-compliance of the kinetic Euclidean Lorentz factor (90) with the periodicity requirement (10) can be traced back to the continuous nature of the angular component \(k_{\varphi}\) of the momentum (63). From a thermodynamic point of view, the structure of the energy-momentum tensor reveals an underlying equilibrium (perfect fluid) contribution, \(T_{\rm cl;im;pf}^{\mu\nu}=\mathrm{diag}(E_{\rm cl}^{\rm im},P_{\rm cl}^{\rm im},\rho^{-2}P_{\rm cl}^{\rm im},P_{\rm cl}^{\rm im})\), with hydrostatic pressure \(P_{\rm cl}^{\rm im}=E_{\rm cl}^{\rm im}/3\), and a shear-stress tensor \(\pi_{\rm cl;im}^{\mu\nu}=T_{\rm cl;im}^{\mu\nu}-T_{\rm cl;im;pf}^{\mu\nu}\) with components \[\pi_{\rm cl;im}^{\mu\nu}=\frac{2\pi^{2}\gamma_{I}^{4}}{135\beta^{4}}(1-\gamma_{I}^{2}) \times\mathrm{diag}(0,1,-2\rho^{-2},1). \tag{92}\] It is instructive to note that \(E_{\rm cl}^{\rm im}=E_{\rm cl}=\pi^{2}/(30\beta^{4})\) is independent of \(\Omega_{I}\) on the rotation axis, while at \(\rho=\sqrt{3}/\Omega_{I}\), the energy density reaches 0.
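The two properties just mentioned are easy to verify numerically from Eqs. (89b) and (90). A minimal sketch (assuming Python with numpy; \(\beta=1\) is a unit choice of this sketch):

```python
# Check of Eq. (89b): on the axis, E_cl^im equals pi^2/(30 beta^4)
# independently of Omega_I, and E_cl^im vanishes at rho = sqrt(3)/Omega_I.
import numpy as np

def E_im(rho, Omega_I, beta=1.0):
    gI2 = 1.0 / (1.0 + (rho * Omega_I)**2)        # gamma_I^2, Eq. (90)
    return np.pi**2 * gI2**2 * (4.0*gI2 - 1.0) / (90.0 * beta**4)

for OI in (0.5, 1.0, 2.0):
    print(E_im(0.0, OI), np.pi**2 / 30)           # axis value, Omega_I-independent
    print(E_im(np.sqrt(3.0) / OI, OI))            # zero crossing, ~0
```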
At larger distances, \(E_{\rm cl}^{\rm im}\) decreases to a minimum (negative) value \(-\pi^{2}/(9720\beta^{4})\) (reached at \(\rho=\sqrt{5}/\Omega_{I}\)) and afterwards increases asymptotically towards its limit 0. In this large-\(\rho\) limit, Eq. (89) shows that \(T_{\rm cl;im}^{\mu\nu}\) behaves as follows: \[T_{\rm cl;im}^{\mu\nu}\simeq\frac{\pi^{2}}{90\beta^{4}}\gamma_{I}^{4}\mathrm{ diag}(-1,1,-3\rho^{-2},1), \tag{93}\] with \(\gamma_{I}\sim(\rho\Omega_{I})^{-1}\). Thus, far away from the rotation axis, the azimuthal pressure becomes negative and three times larger in magnitude than the energy density, while the radial and vertical pressures remain positive, each being equal in magnitude to the energy density. We now consider the large-volume limit of our system. The average energy inside a cylinder of radius \(R\) is simply \[\overline{\mathcal{E}}_{\rm cl}^{\rm im}=\frac{\pi^{2}}{90\beta^{4}}[2\gamma_{I} ^{4}(R)+\gamma_{I}^{2}(R)], \tag{94}\] which agrees with the expression in Eq. (76) under the substitution \(\gamma(R)\to\gamma_{I}(R)\). Setting \(F_{\rm cl}^{\rm im}=-P_{\rm cl}^{\rm im}\) and integrating as in Eq. (68) will clearly not reproduce Eq. (68) under the substitution \(\gamma(R)\to\gamma_{I}(R)\). To achieve agreement up to this substitution, we must consider \(F_{\rm cl}^{\rm im}=-P_{\rm cl;\rho}^{\rm im}\). This choice is supported also by the more fundamental expression for the free energy obtained by setting \(\Omega\to i\Omega_{I}\) in the second line of Eq. (69), i.e. \[\mathcal{F}_{\rm cl}^{\rm im} =\int_{V}d^{3}x\int\frac{d^{3}k}{2(2\pi)^{3}}\ln[1-2e^{-\beta k} \cos(\beta\Omega_{I}k_{\varphi})+e^{-2\beta k}]\] \[=-\frac{\pi^{2}V}{90\beta^{4}}\gamma_{I}^{2}(R), \tag{95}\] which is consistent with the expression in Eq. (75). Applying the same thermodynamic relations as in the previous subsection for the real case gives expressions for quantities analogous to the system pressure, entropy and angular momentum: \[\mathcal{P}_{\rm cl;R}^{\rm im} =\frac{\pi^{2}}{90\beta^{4}}\gamma_{I}^{4}(R),\hskip 14.226378pt \mathcal{P}_{\rm cl;z}^{\rm im}=\mathcal{P}_{\rm cl;\rho}^{\rm im}=\frac{\pi^{2}}{9 0\beta^{4}}\gamma_{I}^{2}(R),\] \[\overline{\mathcal{S}}_{\rm cl}^{\rm im} =\frac{2\pi^{2}}{45\beta^{3}}\gamma_{I}^{2}(R),\hskip 14.226378pt \overline{\mathcal{M}}_{\rm cl}^{\rm im} =-\frac{\pi^{2}R^{2}\Omega_{I}}{45\beta^{4}}\gamma_{I}^{4}(R). \tag{96}\] The above quantities are compatible with an Euler-like relation, \[\overline{\mathcal{E}}_{\rm cl}^{\rm im}=\overline{\mathcal{F}}_{\rm cl}^{\rm im} +\beta^{-1}\overline{\mathcal{S}}_{\rm cl}^{\rm im}+\Omega_{I}\overline{ \mathcal{M}}_{\rm cl}^{\rm im}, \tag{97}\] formulated now for a system under imaginary rotation. ## V Imaginary rotation of the scalar field vs. real rotation ### Mode solutions We consider a real, massless scalar field \(\hat{\phi}\). The decomposition of the field operator \(\hat{\phi}(x)\) reads as follows: \[\hat{\phi}(x)=\sum_{j}[\hat{a}_{j}f_{j}(x)+\hat{a}_{j}^{\dagger}f_{j}^{*}(x)], \tag{98}\] where the \(f_{j}(x)\) form a complete basis of orthonormal mode solutions of the Klein-Gordon equation, \[(\Box+m^{2})f_{j}=0\,.
\tag{99}\] These modes are taken as eigenfunctions of the Hamiltonian \(H=i\partial_{t}\), momentum component \(P^{z}=-i\partial_{z}\), and angular momentum component \(J^{z}=-i\partial_{\varphi}\): \[f_{j}=\frac{1}{2\pi\sqrt{2\omega_{j}}}e^{-i\omega_{j}t+ik_{j}z+im_{j}\varphi} J_{m_{j}}(q_{j}\rho), \tag{100}\] with \(q_{j}=\sqrt{\omega_{j}^{2}-k_{j}^{2}}\). The one-particle operators \(\hat{a}_{j}\) satisfy the canonical commutation relations, \[[\hat{a}_{j},\hat{a}_{j^{\prime}}]=0,\quad[\hat{a}_{j},\hat{a}_{j^{\prime}}^{ \dagger}]=\delta(j,j^{\prime}), \tag{101}\] where \(\delta(j,j^{\prime})=\omega_{j}^{-1}\delta(\omega_{j}-\omega_{j^{\prime}}) \delta(k_{j}-k_{j^{\prime}})\delta_{m_{j},m_{j^{\prime}}}\). The sum over the quantum numbers is abbreviated as \[\sum_{j}\rightarrow\sum_{m_{j}=-\infty}^{\infty}\int_{0}^{\infty}d\omega_{j} \omega_{j}\,\int_{-\omega_{j}}^{\omega_{j}}dk_{j}. \tag{102}\] In this and the next subsection, we pursue, for simplicity, a "hybrid" quantization approach: we work in the basis of the cylindrical waves (100) with continuous transverse momentum \(q_{j}\), which is typical for the unbounded system, while restricting the integral (102) over the longitudinal momentum, \(|k_{j}|\leqslant\omega_{j}\), to preserve the hermiticity of the Hamiltonian. This set of modes is not suitable to describe rigid real rotation, since in that case, the system must be enclosed inside a boundary in order to preserve causality [42], as will be discussed in Sec. VI. In this section, we will focus primarily on the study of rigid rotation with imaginary angular velocity, for which no causality issues arise and the set of eigenmodes presented above is perfectly applicable. ### Thermal states The statistical operator for a thermal state at inverse temperature \(\beta\) which rotates with angular velocity \(\Omega\) is \[\hat{\rho}(\beta,\Omega)=e^{-\beta(:\widehat{H}:-\Omega:\hat{J}^{z}:)}, \tag{103}\] where we took the normal-ordered operators \[:\widehat{H}:=\sum_{j}\omega_{j}\hat{a}_{j}^{\dagger}\hat{a}_{j},\qquad:\hat{J }^{z}:=\sum_{j}m_{j}\hat{a}_{j}^{\dagger}\hat{a}_{j}. \tag{104}\] Using the commutation relations \[[\widehat{H},\hat{a}_{j}^{\dagger}]=\omega_{j}\hat{a}_{j}^{\dagger},\quad[ \hat{J}^{z},\hat{a}_{j}^{\dagger}]=m_{j}\hat{a}_{j}^{\dagger}, \tag{105}\] it is not difficult to establish that \[\hat{\rho}\hat{a}_{j}^{\dagger}\hat{\rho}^{-1}=e^{-\beta\tilde{\omega}_{j}} \hat{a}_{j}^{\dagger}, \tag{106}\] where \(\tilde{\omega}_{j}\equiv\tilde{\omega}_{j}(\Omega)=\omega_{j}-\Omega m_{j}\) represents the co-rotating energy (3). The thermal expectation value (t.e.v.) of an arbitrary operator \(\widehat{A}(x)\) is \[A(x)\equiv\langle\widehat{A}(x)\rangle=Z^{-1}\text{Tr}[\hat{\rho}\widehat{A}(x )], \tag{107}\] where \(Z=\text{Tr}(\hat{\rho})\) is the partition function. Using Eq. (106), the t.e.v. of the product of two one-particle operators can be seen to satisfy \[\langle\hat{a}_{j}^{\dagger}\hat{a}_{j^{\prime}}\rangle=e^{-\beta\tilde{ \omega}_{j}}\langle\hat{a}_{j^{\prime}}\hat{a}_{j}^{\dagger}\rangle. \tag{108}\] Combining this with the commutation relations in Eq. (101), we establish \[\langle\hat{a}_{j}^{\dagger}\hat{a}_{j^{\prime}}\rangle=\frac{\delta(j,j^{ \prime})}{e^{\beta\tilde{\omega}_{j}}-1}.
\tag{109}\] Introducing the functions \[G_{abc}=\sum_{j}\frac{2\text{Re}(f_{j}^{*}f_{j})}{e^{\beta\tilde{ \omega}_{j}}-1}\omega_{j}^{a}q_{j}^{b}m_{j}^{c}\] \[=\sum_{m=-\infty}^{\infty}\int_{0}^{\infty}\frac{d\omega}{e^{ \beta\tilde{\omega}}-1}\int_{0}^{\omega}\frac{dk}{2\pi^{2}}\omega^{a}q^{b}m^{ c}J_{m}^{2}(q\rho), \tag{110}\] the scalar condensate1 becomes Footnote 1: For brevity, we use the term “condensate” although the expectation value of the ordered operator \(\hat{\phi}^{2}\) does not imply the presence of a nonvanishing coherent condensate \(\langle\phi\rangle\). \[\phi^{2}\equiv\langle:\hat{\phi}^{2}:\rangle=G_{000}. \tag{111}\] Considering now the conformal energy-momentum tensor, defined classically as [43] \[T_{\mu\nu}=\frac{2}{3}\nabla_{\mu}\phi\nabla_{\nu}\phi-\frac{1}{3}\phi\nabla_ {\mu}\nabla_{\nu}\phi-\frac{1}{6}g_{\mu\nu}(\nabla\phi)^{2}, \tag{112}\] its expectation value \(T^{\mu\nu}=\langle:\widehat{T}^{\mu\nu}:\rangle\) can be expressed as \[T^{tt}=G_{200}+\frac{1}{12\rho^{2}}G_{000}^{(2)}, \tag{113a}\] \[T^{\rho\rho} =G_{020}-\frac{1}{\rho^{2}}G_{002}+\frac{1}{4\rho^{2}}G_{000}^{(2)}+ \frac{1}{6\rho^{2}}G_{000}^{(1)}, \tag{113b}\] \[T^{\varphi\varphi} =\frac{1}{\rho^{4}}G_{002}-\frac{1}{12\rho^{4}}G_{000}^{(2)}- \frac{1}{6\rho^{4}}G_{000}^{(1)},\] (113c) \[T^{zz} =G_{200}-G_{020}-\frac{1}{12\rho^{2}}G_{000}^{(2)},\] (113d) \[T^{t\varphi} =\frac{1}{\rho^{2}}G_{101}. \tag{113e}\] where we introduced the notations: \[G_{000}^{(1)}=\rho\frac{dG_{000}}{d\rho},\qquad G_{000}^{(2)}=\rho\frac{d}{d \rho}\rho\frac{dG_{000}}{d\rho}, \tag{114}\] while all other components vanish. Turning back to the definition of the functions \(G_{abc}\) in Eq. (110), we immediately notice divergences associated with the Bose-Einstein factor \([e^{\beta\tilde{\omega}}-1]^{-1}\). For each value of \(m\) such that \(\Omega m>0\), there will be a value of \(\omega\) where this factor diverges. The only notable exception is the rotation axis, where \(J_{m}^{2}(q\rho)=\delta_{m0}\) and \([e^{\beta\omega}-1]^{-1}\) has the usual Bose-Einstein infrared divergence when \(\omega=0\). Thus, we are led to conclude that thermal rigidly-rotating states of the scalar field are ill-defined at each point outside the rotation axis due to long wavelength (super-horizon) modes, for which \(\omega\leq\Omega m\)[44]. We will come back to this issue in Sec. VI when we will discuss the Klein-Gordon field enclosed inside a cylindrical boundary. ### Evaluation for imaginary rotation We now seek to construct states which undergo imaginary rotation, \(\Omega=i\Omega_{I}\), where \(\Omega_{I}\in\mathbb{R}\). As also noted in Sec. IV.4, a state under imaginary rotation leads to complex values for the expectation values of physical observables. This problem can be alleviated by considering the hermiticized version of \(\hat{\rho}\), namely \[\hat{\rho}(\beta,\Omega)\rightarrow\frac{1}{2}[\hat{\rho}(\beta,\Omega)+\hat{ \rho}^{\dagger}(\beta,\Omega)], \tag{115}\] which is equivalent to averaging over the results obtained for positive and negative values of \(\Omega_{I}\). Under the above hermitization, the t.e.v. in Eq. (106) becomes \[\langle\hat{a}_{j}^{\dagger}\hat{a}_{j^{\prime}}\rangle_{\beta}=\frac{e^{ \beta\omega}\cos(\beta\Omega_{I}m)-1}{e^{2\beta\omega}-2e^{\beta\omega}\cos( \beta\Omega_{I}m)+1}\delta(j,j^{\prime}), \tag{116}\] which is similar to the relativistic kinetic theory distribution \(f_{\mathbf{k}}^{\rm im}\) in Eq. (85) under the substitution \(k_{\varphi}\rightarrow-m\). 
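The algebra leading from the hermitization (115) to the distribution in Eq. (116) reduces to a rational identity in \(u=e^{\beta\omega}\) and \(z=e^{i\beta\Omega_{I}m}\), which can be confirmed symbolically. A minimal sketch, assuming Python with sympy:

```python
# Averaging 1/(e^{beta(omega - i Omega_I m)} - 1) over the two signs of
# Omega_I, with u = e^{beta omega} and z = e^{i beta Omega_I m}, must give
# the "ninionic" form of Eq. (116): (u cos - 1)/(u^2 - 2u cos + 1).
import sympy as sp

u, z = sp.symbols('u z')
cos = (z + 1/z) / 2                     # cos(beta Omega_I m)
avg = sp.Rational(1, 2) * (1/(u/z - 1) + 1/(u*z - 1))
target = (u*cos - 1) / (u**2 - 2*u*cos + 1)
print(sp.simplify(avg - target))        # 0
```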
The thermodynamic state corresponding to the t.e.v. (116) is characterized by ninionic statistics (12). In what follows, we perform the calculations considering averages using the statistical operator \(\hat{\rho}(\beta,i\Omega_{I})\), keeping in mind that the final result is obtained by taking the real part. In order to analyse the functions \(G_{abc}\), the Bose-Einstein factor can be expanded in a power series, as follows: \[\frac{1}{e^{\beta\tilde{\omega}}-1}=\sum_{j=1}^{\infty}e^{-j\beta\tilde{ \omega}}, \tag{117}\] where \(\tilde{\omega}=\omega-i\Omega_{I}m\) has positive real part, \(\omega>0\). Writing \[G_{abc}=\sum_{j=1}^{\infty}G_{abc}^{j}, \tag{118}\] it can be seen that the power of \(m\) can be accounted for by taking derivatives with respect to the rotation parameter: \[G_{abc}^{j}=\left(-\frac{i}{j\beta}\right)^{c}\frac{d^{c}G_{ab0}^{j}}{d\Omega _{I}^{c}}. \tag{119}\] On the other hand, the sum over \(m\) can be performed in \(G_{ab0}^{j}\) via \[\sum_{m=-\infty}^{\infty}e^{-imx}J_{m}^{2}(z)=J_{0}\left(2z\sin\frac{x}{2} \right), \tag{120}\] leading to \[G_{ab0}^{j}=\int_{0}^{\infty}d\omega\,e^{-j\beta\omega}\omega^{a}\int_{0}^{\omega}\frac{dk}{2\pi^{2}}q^{b}J_{0}\left(2q\rho\sin\frac{j \beta\Omega_{I}}{2}\right)\] \[=\int_{0}^{\infty}\frac{dxe^{-x}x^{a+b+1}}{2\pi^{2}(j\beta)^{a+b+ 2}}\int_{0}^{\pi/2}d\theta(\cos\theta)^{b+1}J_{0}(\alpha_{j}x\cos\theta), \tag{121}\] where \(x=j\beta\omega\) and \(\theta\) is defined by \((k,q)=\omega(\sin\theta,\cos\theta)\), while \(\alpha_{j}\) is given by \[\alpha_{j}=\frac{2\rho}{j\beta}\sin\frac{j\beta\Omega_{I}}{2}=\frac{l}{\pi j} \sin(\pi j\nu), \tag{122}\] with \[l =\frac{2\pi\rho}{\beta}, \qquad L=\frac{2\pi R}{\beta}, \tag{123a}\] \[\nu \equiv\nu_{I}=\frac{\beta\Omega_{I}}{2\pi}, \qquad\nu_{R}=\frac{\beta\Omega}{2\pi}, \tag{123b}\] where we, for convenience, reproduced Eq. (30) and introduced other notations to be used later (notice that \(0\leqslant l\leqslant L\)). In order to perform the integral with respect to \(\theta\) in Eq. (121), we replace the Bessel function \(J_{0}(x)\) by its series expansion, \[J_{0}(x)=\sum_{k=0}^{\infty}\frac{(-1)^{k}x^{2k}}{4^{k}(k!)^{2}}\,. \tag{124}\] The integral with respect to \(\theta\) can now be performed term by term using the relation (valid for \(\mathrm{Re}\,\gamma>-1\)) \[\int_{0}^{\pi/2}d\theta\,\cos^{\gamma}\theta=\frac{\sqrt{\pi}\Gamma\left(\frac{1+\gamma}{ 2}\right)}{2\Gamma\left(1+\frac{\gamma}{2}\right)}\,. \tag{125}\] Using the following identities for the gamma functions, \[\Gamma(n+1)\bigg{|}_{n\in\mathbb{N}} =n!\,, \tag{126}\] \[\Gamma\left(\frac{1}{2}+n\right)\bigg{|}_{n\in\mathbb{N}} =\sqrt{\pi}\frac{(2n)!}{4^{n}n!}\,, \tag{127}\] we arrive at \[\int_{0}^{\pi/2}d\theta\,\cos\theta J_{0}(\alpha_{j}x\cos\theta) =\sum_{k=0}^{\infty}\frac{(-\alpha_{j}^{2}x^{2})^{k}}{(2k+1)!}\,, \tag{128a}\] \[\int_{0}^{\pi/2}d\theta\,\cos^{3}\theta J_{0}(\alpha_{j}x\cos \theta) =\sum_{k=1}^{\infty}\frac{(2k)^{2}(-\alpha_{j}^{2}x^{2})^{k-1}}{(2k+1)!}\,, \tag{128b}\] corresponding to the cases \(b=0\) and \(2\) in Eq. (121).
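As a quick sanity check, the relation (125) can be confirmed by direct quadrature. A minimal sketch, assuming Python with scipy:

```python
# Numerical spot-check of Eq. (125) for a few values of gamma > -1.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as G

for g in (0.5, 1.0, 3.0, 6.2):
    lhs, _ = quad(lambda th: np.cos(th)**g, 0.0, np.pi/2)
    rhs = np.sqrt(np.pi) * G((1 + g)/2) / (2 * G(1 + g/2))
    print(g, lhs, rhs)   # lhs and rhs agree to quadrature accuracy
```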
The summation can be trivially performed, \[\int_{0}^{\pi/2}d\theta\,\cos\theta J_{0}(\alpha_{j}x\cos\theta) =\frac{\sin(\alpha_{j}x)}{\alpha_{j}x}, \tag{129a}\] \[\int_{0}^{\pi/2}d\theta\,\cos^{3}\theta J_{0}(\alpha_{j}x\cos \theta) =\frac{\cos(\alpha_{j}x)}{\alpha_{j}^{2}x^{2}}+(\alpha_{j}^{2}x^{2}-1)\frac{\sin(\alpha_{j}x)}{\alpha_{j}^ {3}x^{3}}, \tag{129b}\] finally arriving at \[G_{n00}^{j} =\frac{1}{2\pi^{2}\alpha_{j}(j\beta)^{n+2}}\int_{0}^{\infty}dxe^ {-x}x^{n}\sin(x\alpha_{j}),\] \[G_{n20}^{j} =\frac{1}{2\pi^{2}\alpha_{j}^{3}(j\beta)^{n+4}}\int_{0}^{\infty} dxe^{-x}x^{n}[x\alpha_{j}\cos(x\alpha_{j})+(x^{2}\alpha_{j}^{2}-1)\sin(x\alpha_{j})]. \tag{130}\] Employing the identity \[\int_{0}^{\infty}dx\,e^{-x+i\alpha_{j}x}x^{n}=\frac{n!}{(1-i\alpha_{j})^{n+1}}\,, \tag{131}\] or equivalently, \[\int_{0}^{\infty}dx\,e^{-x}x^{n}\sin(\alpha_{j}x) =n!\operatorname{Im}\left(\frac{1+i\alpha_{j}}{1+\alpha_{j}^{2}} \right)^{n+1}, \tag{132}\] \[\int_{0}^{\infty}dx\,e^{-x}x^{n}\cos(\alpha_{j}x) =n!\operatorname{Re}\left(\frac{1+i\alpha_{j}}{1+\alpha_{j}^{2}} \right)^{n+1}, \tag{133}\] with \(\operatorname{Re}(z)=(z+z^{*})/2\) and \(\operatorname{Im}(z)=(z-z^{*})/2i\) being the real and imaginary parts of a complex number \(z\), we obtain \[G_{n00}^{j} =\frac{n!}{2\pi^{2}\alpha_{j}(j\beta)^{n+2}}\operatorname{Im} \left(\frac{1+i\alpha_{j}}{1+\alpha_{j}^{2}}\right)^{n+1}, \tag{134a}\] \[G_{n20}^{j} =\frac{1}{2\pi^{2}\alpha_{j}^{3}(j\beta)^{n+4}}\Bigg[\alpha_{j}^{2}(n+2)!\operatorname{Im}\left(\frac{1+i\alpha_{j}}{1+\alpha_{j}^{2}}\right)^{n+3}+\alpha_{j}(n+1)!\operatorname{Re}\left(\frac{1+i\alpha_{j}}{1+\alpha_{j}^{2}}\right)^{n+2}-n!\operatorname{Im}\left(\frac{1+i\alpha_{j}}{1+\alpha_{j}^{2}}\right)^{n+1}\Bigg]. \tag{134b}\] Substituting these functions into Eqs. (111) and (113), the scalar condensate and the non-vanishing components of the energy-momentum tensor become \[\phi^{2}=\sum_{j=1}^{\infty}\frac{1}{2\pi^{2}(j\beta)^{2}(1+\alpha_{j}^{2})}, \tag{135a}\] \[T^{tt}=\sum_{j=1}^{\infty}\frac{9-2\sin^{2}(\pi j\nu)-[3-2\sin^{2}(\pi j\nu)]\alpha_{j}^{2}}{3\pi^{2}(j\beta)^{4}(1+\alpha_{j}^{2})^{3}}, \tag{135b}\] \[T^{\rho\rho}=\sum_{j=1}^{\infty}\frac{3-2\sin^{2}(\pi j\nu)}{3\pi^{2}(j\beta)^{4}(1+\alpha_{j}^{2})^{2}}, \tag{135c}\] \[T^{\varphi\varphi}=\frac{1}{\rho^{2}}\sum_{j=1}^{\infty}\frac{[3-2\sin^{2}(\pi j\nu)](1-3\alpha_{j}^{2})}{3\pi^{2}(j\beta)^{4}(1+\alpha_{j}^{2})^{3}}, \tag{135d}\] \[T^{zz}=\sum_{j=1}^{\infty}\frac{3+2\sin^{2}(\pi j\nu)+[3-2\sin^{2}(\pi j\nu)]\alpha_{j}^{2}}{3\pi^{2}(j\beta)^{4}(1+\alpha_{j}^{2})^{3}}, \tag{135e}\] \[T^{t\varphi}=\sum_{j=1}^{\infty}\frac{4i\sin(2\pi j\nu)}{\pi^{2}(j\beta)^{5}(1+ \alpha_{j}^{2})^{3}}. \tag{135f}\] The above results show that the diagonal components of \(T^{\mu\nu}\) are real-valued and even with respect to \(\nu=\beta\Omega_{I}/2\pi\), while \(T^{t\varphi}\) is imaginary and odd with respect to \(\nu\to-\nu\). Under the hermitization (115), it is clear that \(T^{t\varphi}\) vanishes and \(T^{\mu\nu}\) remains diagonal. As in the case of the classical relativistic kinetic theory (RKT) analysis, the resulting state is not isotropic.
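The series (135) converge rapidly (like \(1/j^{2}\) for \(\phi^{2}\) and \(1/j^{4}\) for \(T^{\mu\nu}\)), so they are easy to evaluate by direct truncation. A numerical sketch assuming numpy; at \(\nu=0\) and \(l=0\) it must reproduce \(\phi_{0}^{2}=1/12\beta^{2}\) and \(T_{0}^{tt}=\pi^{2}/30\beta^{4}\):

```python
# Truncated evaluation of the mode sums (135a)-(135b).
import numpy as np

def phi2_Ttt(l, nu, beta=1.0, jmax=100000):
    j = np.arange(1, jmax + 1, dtype=float)
    s2 = np.sin(np.pi * j * nu)**2
    a2 = (l / (np.pi * j))**2 * s2                 # alpha_j^2, Eq. (122)
    phi2 = np.sum(1.0 / (2*np.pi**2 * (j*beta)**2 * (1 + a2)))
    Ttt = np.sum((9 - 2*s2 - (3 - 2*s2)*a2)
                 / (3*np.pi**2 * (j*beta)**4 * (1 + a2)**3))
    return phi2, Ttt

print(phi2_Ttt(0.0, 0.0))     # (0.08333..., 0.32899...) = (1/12, pi^2/30)
print(phi2_Ttt(10.0, 0.5))    # finite distance from the axis, nu = 1/2
```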
Identifying as in the classical case \(E=T^{tt}\) and the perfect-fluid contribution \(T^{\mu\nu}_{\rm pf}=\text{diag}(E,P,\rho^{-2}P,P)\) with \(P=E/3\), the quantum shear-stress tensor \(\pi^{\mu\nu}=T^{\mu\nu}-T^{\mu\nu}_{\rm pf}=\text{diag}(0,\pi^{\rho\rho},\pi^{ \varphi\varphi},\pi^{zz})\) can be obtained as \[\pi^{\rho\rho} =\frac{4}{9}\sum_{j=1}^{\infty}\frac{3\alpha_{j}^{2}-(2\alpha_{j} ^{2}+1)\sin^{2}(\pi j\nu)}{\pi^{2}\beta^{4}j^{4}(1+\alpha_{j}^{2})^{3}},\] \[\pi^{\varphi\varphi} =-\frac{4}{9}\sum_{j=1}^{\infty}\frac{6\alpha_{j}^{2}-(4\alpha_{ j}^{2}-1)\sin^{2}(\pi j\nu)}{\pi^{2}\rho^{2}\beta^{4}j^{4}(1+\alpha_{j}^{2})^{3}},\] \[\pi^{zz} =\frac{4}{9}\sum_{j=1}^{\infty}\frac{3\alpha_{j}^{2}-2(\alpha_{j} ^{2}-1)\sin^{2}(\pi j\nu)}{\pi^{2}\beta^{4}j^{4}(1+\alpha_{j}^{2})^{3}}. \tag{136}\] At large distances from the rotation axis, \(\alpha_{j}\to\infty\), implying that \[T^{\mu\nu}=\text{diag}(-1,1,-3\rho^{-2},1)\\ \times\sum_{j=1}^{\infty}\frac{3-2\sin^{2}(\pi j\nu)}{3\pi^{2} \beta^{4}j^{4}(1+\alpha_{j}^{2})^{2}}. \tag{137}\] The structure of the above result is similar to that obtained in Eq. (93), with the important difference that the quantum field-theoretical (QFT) \(T^{\mu\nu}\) depends on \(\nu\) through the harmonic function \(\sin(\pi j\nu)\). This property implies that \(T^{\mu\nu}\) obtained in QFT is periodic with respect to \(\nu\), with period \(\Delta\nu=1\), in agreement with the symmetry (10) expected on general grounds. This periodic behaviour is fundamentally different from that observed in RKT (Subsect. IV.4), where no periodicity in \(\nu\) can be seen. ### Values on the rotation axis: no analytical connection between real and imaginary rotations On the rotation axis, we have \(\alpha_{j}=0\). Using the relations \(\sum_{j=1}^{\infty}1/j^{2}=\pi^{2}/6\) and \(\sum_{j=1}^{\infty}1/j^{4}=\pi^{4}/90\), we find that the expectation value of \(\phi^{2}\) is not affected by the imaginary rotation while the energy-momentum tensor acquires a nontrivial dependence on the imaginary angular frequency: \[\left.\phi^{2}\right|_{\rho=0} =\phi_{0}^{2}, \tag{138a}\] \[\left.T^{tt}\right|_{\rho=0} =T_{0}^{tt}\left(1-\frac{10\{\nu\}^{2}}{3}+\frac{20\{\nu\}^{3}}{3 }-\frac{10\{\nu\}^{4}}{3}\right),\] (138b) \[\left.T^{\rho\rho}\right|_{\rho=0} =T_{0}^{tt}\left(\frac{1}{3}-\frac{10\{\nu\}^{2}}{3}+\frac{20\{ \nu\}^{3}}{3}-\frac{10\{\nu\}^{4}}{3}\right),\] (138c) \[\left.T^{zz}\right|_{\rho=0} =T_{0}^{tt}\left(\frac{1}{3}+\frac{10\{\nu\}^{2}}{3}-\frac{20\{ \nu\}^{3}}{3}+\frac{10\{\nu\}^{4}}{3}\right),\] (138d) \[\left.T^{t\varphi}\right|_{\rho=0} =\pm\frac{4i\pi^{3}\{\nu\}}{45\beta^{5}}\big{(}1-10\{\nu\}^{2}+15 \{\nu\}^{3}-6\{\nu\}^{4}\big{)}, \tag{138e}\] and \(\left.\rho^{2}T^{\varphi\varphi}\right|_{\rho=0}=\left.T^{\rho\rho}\right|_{ \rho=0}\). In the above, \[\phi_{0}^{2}=\frac{1}{12\beta^{2}},\qquad T_{0}^{tt}=\frac{\pi^{2}}{30\beta^{ 4}}, \tag{139}\] represent the expectation values when \(\Omega_{I}=0\). The notation \(\{\nu\}=\nu-\lfloor\nu\rfloor\) represents the fractional part of \(\nu\) (\(0\leq\{\nu\}<1\)), while \(\lfloor\nu\rfloor\) represents its integer part. The \(\pm\) sign in the expression for \(\left.T^{t\varphi}\right|_{\rho=0}\) corresponds to the sign of \(\nu\). Figure 7(a) confirms that \(T^{\mu\nu}\) is periodic with respect to the imaginary rotation parameter \(\nu\), as implied by the presence of \(\{\nu\}\). 
Figure 7: The condensate \(\phi^{2}\) and the components of the energy-momentum tensor \(T^{\mu\nu}\) on the axis of rotation of the cylinder, \(\rho=0\), under (a) imaginary rotation with \(\nu\equiv\nu_{I}=\beta\Omega_{I}/2\pi\) and (b) real rotation with \(\nu_{R}=\beta\Omega/2\pi\), normalized with respect to their values in the absence of rotation. All quantities under imaginary rotation are periodic (\(\nu\to\nu+1\)) in agreement with Eq. (10). The dashed lines extending in the region \(|\nu_{R}|>1\) indicate the expected behaviour if the components of \(T_{\mu\nu}\) were periodic with respect to \(\nu_{R}\).

The energy density \(T^{tt}\), the radial pressure \(T^{\rho\rho}\), and the azimuthal pressure \(\rho^{2}T^{\varphi\varphi}\) are decreased by the imaginary rotation, while the vertical pressure \(T^{zz}\) is increased. An alternative way of characterizing \(T^{\mu\nu}\) on the rotation axis is using the Bernoulli polynomial \[B_{n}(x) =\sum_{k=0}^{n}\binom{n}{k}B_{n-k}x^{k}\] \[=-\frac{n!}{(2\pi i)^{n}}\left[\mathrm{Li}_{n}(e^{2\pi ix})+(-1)^ {n}\mathrm{Li}_{n}(e^{-2\pi ix})\right], \tag{140}\] with \(\mathrm{Li}_{s}(x)=\sum_{k=1}^{\infty}x^{k}/k^{s}\) being the polylogarithm and \(B_{n}\equiv B_{n}(0)\) being the Bernoulli numbers: \[B_{n}=\sum_{k=0}^{n}\sum_{v=0}^{k}(-1)^{v}\binom{k}{v}\frac{v^{n}}{k+1}\,. \tag{141}\] In terms of the Bernoulli polynomials, the components of the energy-momentum tensor read \[T^{tt}\big{|}_{\rho=0} =T_{0}^{tt}\left[\frac{8}{9}-\frac{10}{3}B_{4}(\{\nu\})\right], \tag{142a}\] \[T^{\rho\rho}\big{|}_{\rho=0} =T_{0}^{tt}\left[\frac{2}{9}-\frac{10}{3}B_{4}(\{\nu\})\right],\] (142b) \[T^{zz}\big{|}_{\rho=0} =T_{0}^{tt}\left[\frac{4}{9}+\frac{10}{3}B_{4}(\{\nu\})\right],\] (142c) \[T^{t\varphi}\big{|}_{\rho=0} =-\frac{8i\pi^{3}}{15\beta^{5}}B_{5}(\{\nu\}), \tag{142d}\] as well as \(\rho^{2}T^{\varphi\varphi}\big{|}_{\rho=0}=T^{\rho\rho}\big{|}_{\rho=0}\). These are the results on the axis of rotation for an infinite-volume system subjected to imaginary rotation. We now compare the results in Eq. (138) to those derived on the basis of real rotation in Refs. [45; 46; 47; 48] using a perturbative approach for slow rotation, reproduced below for definiteness: \[\phi^{2}\big{|}_{\rho=0} =\phi_{0}^{2}, \tag{143a}\] \[T^{tt}\big{|}_{\rho=0} =T_{0}^{tt}\left(1+\frac{10\nu_{R}^{2}}{3}-\frac{10\nu_{R}^{4}}{3 }\right),\] (143b) \[T^{\rho\rho}\big{|}_{\rho=0} =T_{0}^{tt}\left(\frac{1}{3}+\frac{10\nu_{R}^{2}}{3}-\frac{10\nu_ {R}^{4}}{3}\right),\] (143c) \[T^{zz}\big{|}_{\rho=0} =T_{0}^{tt}\left(\frac{1}{3}-\frac{10\nu_{R}^{2}}{3}+\frac{10\nu_ {R}^{4}}{3}\right),\] (143d) \[T^{t\varphi}\big{|}_{\rho=0} =\frac{4\pi^{3}\nu_{R}}{45\beta^{5}}\left(1+10\nu_{R}^{2}-6\nu_{ R}^{4}\right), \tag{143e}\] and \(\rho^{2}T^{\varphi\varphi}\big{|}_{\rho=0}=\left.T^{\rho\rho}\right|_{\rho=0}\), with \(\nu_{R}\) defined in Eq. (123). The above results can be derived from Eq. (138) using the following replacements: \[\nu^{2}\to-\nu_{R}^{2},\qquad|\nu|^{3}\to 0,\qquad\nu^{4}\to\nu_{R}^{4}. \tag{144}\] It is remarkable to observe that the diagonal components of \(T^{\mu\nu}\) satisfy \[T^{\mu\nu}(\nu_{R}=\pm 1)=T^{\mu\nu}(\nu_{R}=0)\qquad\text{for $\mu=\nu$}\,, \tag{145}\] however, contrary to the same quantities evaluated under imaginary rotation, they do not exhibit periodicity with respect to \(\nu_{R}\). Figure 7(b) shows that when \(|\nu_{R}|>1\), \(T^{zz}\) increases dramatically, while \(T^{tt}\) and \(T^{\rho\rho}=\rho^{2}T^{\varphi\varphi}\) eventually become negative.
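The equivalence between the polynomial forms (138) and the Bernoulli-polynomial representation (142) can be verified symbolically. A minimal sketch assuming sympy (the symbol \(x\) stands for \(\{\nu\}\)):

```python
# Eq. (138b) versus Eq. (142a), and Eq. (138e) versus Eq. (142d),
# both normalized so that the common prefactors drop out.
import sympy as sp

x = sp.symbols('x')
B4, B5 = sp.bernoulli(4, x), sp.bernoulli(5, x)

Ttt_138 = 1 - sp.Rational(10,3)*x**2 + sp.Rational(20,3)*x**3 - sp.Rational(10,3)*x**4
Ttt_142 = sp.Rational(8,9) - sp.Rational(10,3)*B4
print(sp.expand(Ttt_138 - Ttt_142))      # 0

Ttphi_138 = sp.Rational(4,45)*x*(1 - 10*x**2 + 15*x**3 - 6*x**4)
Ttphi_142 = -sp.Rational(8,15)*B5        # coefficient of i*pi^3/beta^5
print(sp.expand(Ttphi_138 - Ttphi_142))  # 0
```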
The dashed lines extending in the region \(|\nu_{R}|>1\) indicate the expected behaviour if \(T^{\mu\nu}\) were periodic with respect to \(\nu_{R}\). Before ending this subsection, we remark that the presence of odd powers of \(\nu\) in the expressions for \(T^{\mu\nu}\) is unexpected and seemingly unsupported by the formulas in Eq. (135). For example, in the case of \(T^{tt}\) given by Eq. (135b), a Taylor expansion of the summand around \(\nu=0\) fails to capture the \(\nu^{3}\) term revealed in Eq. (138b). Moreover, since \(\nu\) always appears multiplied by the summation variable \(j\), such an approach can reliably produce only the first two terms, proportional to \(j^{-4}\nu^{0}\) and \(j^{-2}\nu^{2}\). The third term proportional to \(j^{0}\nu^{4}\) cannot be computed due to the divergence of the sum over \(j\). We are thus led to believe that the \(\nu^{3}\) term appearing in Eq. (138) is related to an inherent non-analytic behavior of \(T^{\mu\nu}\) with respect to the rotation parameter \(\nu\). We remark that the results quoted in Eqs. (143) for the case of real rotation were obtained also using a Taylor series approach and may therefore omit similar non-analytical \(\nu^{3}\)-like terms. ### High temperature expansion Let us now consider the high-temperature expansion, when \(\beta\to 0\). Since \(\beta\) comes multiplied by \(j\) under the summation sign in Eqs. (135), higher-order terms with respect to \(\beta\) come with higher powers of \(j\). Since the summation over \(j\) and the power series with respect to \(j\beta\) in general do not commute, this procedure allows only the coefficients of the \(\beta^{-4}\) and \(\beta^{-2}\) terms to be extracted. The results are \[\phi^{2}=\frac{\gamma_{I}^{2}}{12\beta^{2}},\quad T^{tt}=\frac{ \pi^{2}\gamma_{I}^{4}}{90\beta^{4}}(4\gamma_{I}^{2}-1)-\frac{\Omega_{I}^{2} \gamma_{I}^{6}}{36\beta^{2}}(6\gamma_{I}^{2}-5),\] \[T^{\rho\rho}=\frac{\pi^{2}\gamma_{I}^{4}}{90\beta^{4}}-\frac{ \Omega_{I}^{2}\gamma_{I}^{6}}{36\beta^{2}},\qquad T^{zz}=\frac{\pi^{2}\gamma_{ I}^{4}}{90\beta^{4}}+\frac{\Omega_{I}^{2}\gamma_{I}^{6}}{36\beta^{2}},\] \[\rho^{2}T^{\varphi\varphi}=\frac{\pi^{2}\gamma_{I}^{4}}{90\beta^{ 4}}(4\gamma_{I}^{2}-3)-\frac{\Omega_{I}^{2}\gamma_{I}^{6}}{36\beta^{2}}(6 \gamma_{I}^{2}-5),\] \[T^{t\varphi}=i\Omega_{I}\left[\frac{2\pi^{2}\gamma_{I}^{6}}{45 \beta^{4}}-\frac{\Omega_{I}^{2}\gamma_{I}^{6}}{18\beta^{2}}(3\gamma_{I}^{2}-1) \right]. \tag{146}\] As discussed in the previous subsection, the \(\nu^{3}\) terms revealed in Eq. (138) are not captured by the perturbative series expansion approach. Nevertheless, the results reported in Eq. (146) are fully consistent with previously derived results, see Eq. (4.2.51) of Ref. [45]; Eqs. (A.21,A.22) of Ref. [47]; and Eqs. (7.19,7.22,7.23) of Ref. [48]. ### Emergence of fractal structure We now consider the case when \(\nu=p/q\) is a rational number, where \(p/q\) is an irreducible fraction. Writing \(j=Qq+j^{\prime}\), with \(0\leq Q<\infty\) and \(1\leq j^{\prime}\leq q\), the trigonometric functions, whose argument is \(\pi j\nu=\pi pQ+\pi j^{\prime}p/q\), depend only on \(j^{\prime}\).
Figure 8: Fractalization of thermodynamics with increasing volume: The thermal expectation values of (top) \(\phi^{2}\) and (bottom) \(T^{tt}\) under imaginary rotation normalized with respect to their values in the absence of rotation (\(\phi_{0}^{2}=1/12\beta^{2}\) and \(T_{0}^{tt}=\pi^{2}/30\beta^{4}\)) as functions of the dimensionless distance \(l=2\pi\rho/\beta\) from the rotation axis of the cylinder. The lines and points show results for rational values of \(\nu=\beta\Omega_{I}/2\pi\) of the form \(r/10\), with \(0\leq r\leq 5\), corresponding to irreducible fractions \(p/q\) with \(q=1\), \(2\), \(5\) and \(10\), which are identical to the imaginary frequencies used for the rotating ring in Fig. 5. The horizontal dashed black lines represent the expected large-distance plateau given by (top) \(1/q^{2}\) and (bottom) \(1/q^{4}\), which signal the fractal behaviour of thermodynamics. The gray dotted lines represent the relativistic kinetic theory prediction in Eq. (89b). A small segment of the result for \(T^{tt}\) when \(\nu=1/10\) corresponds to negative values and is represented with dashed lines. The values are obtained using Eq. (154).

As an illustrative example, \(\phi^{2}\) becomes \[\phi^{2} =\frac{1}{2\pi^{2}\beta^{2}q^{2}}\sum_{j=1}^{q}\sum_{Q=0}^{\infty} \frac{1}{(Q+\frac{j}{q})^{2}+x_{j}^{2}}, \tag{147a}\] \[T^{tt} =\frac{1}{3\pi^{2}\beta^{4}q^{4}}\sum_{j=1}^{q}\sum_{Q=0}^{\infty} \frac{(9-2s_{j}^{2})(Q+\frac{j}{q})^{2}-(3-2s_{j}^{2})x_{j}^{2}}{[(Q+\frac{j}{ q})^{2}+x_{j}^{2}]^{3}},\] (147b) \[T^{\rho\rho} =\frac{1}{3\pi^{2}\beta^{4}q^{4}}\sum_{j=1}^{q}\sum_{Q=0}^{\infty} \frac{3-2s_{j}^{2}}{[(Q+\frac{j}{q})^{2}+x_{j}^{2}]^{2}},\] (147c) \[T^{\varphi\varphi} =\frac{1}{3\pi^{2}\rho^{2}\beta^{4}q^{4}}\sum_{j=1}^{q}\sum_{Q=0} ^{\infty}\frac{(3-2s_{j}^{2})[(Q+\frac{j}{q})^{2}-3x_{j}^{2}]}{[(Q+\frac{j}{ q})^{2}+x_{j}^{2}]^{3}},\] (147d) \[T^{zz} =\frac{1}{3\pi^{2}\beta^{4}q^{4}}\sum_{j=1}^{q}\sum_{Q=0}^{\infty }\frac{(3+2s_{j}^{2})(Q+\frac{j}{q})^{2}+(3-2s_{j}^{2})x_{j}^{2}}{[(Q+\frac{j} {q})^{2}+x_{j}^{2}]^{3}},\] (147e) \[T^{\varphi t} =\frac{4i}{\pi^{2}\beta^{5}q^{5}}\sum_{j=1}^{q}\sum_{Q=0}^{\infty }\frac{2s_{j}c_{j}(Q+\frac{j}{q})}{[(Q+\frac{j}{q})^{2}+x_{j}^{2}]^{3}}, \tag{147f}\] where we introduced the notation \[x_{j}=\frac{ls_{j}}{\pi q},\quad s_{j}=\sin\left(\frac{\pi jp}{q}\right),\quad c _{j}=\cos\left(\frac{\pi jp}{q}\right), \tag{148}\] while \(j^{\prime}\) was relabeled as \(j\) for convenience. The sum over \(Q\) introduced by the procedure shown in Eq. (147) can be performed using \[\sum_{Q=0}^{\infty}\frac{1}{(Q+\frac{j}{q})^{2}+x_{j}^{2}} =\frac{1}{x_{j}}\text{Im}\,\psi_{j}, \tag{149a}\] \[\sum_{Q=0}^{\infty}\frac{1}{[(Q+\frac{j}{q})^{2}+x_{j}^{2}]^{2}} =\frac{\text{Im}\,\psi_{j}}{2x_{j}^{3}}-\frac{\text{Re}\psi_{j}^ {\prime}}{2x_{j}^{2}},\] (149b) \[\sum_{Q=0}^{\infty}\frac{1}{[(Q+\frac{j}{q})^{2}+x_{j}^{2}]^{3}} =\frac{3\text{Im}\,\psi_{j}}{8x_{j}^{5}}-\frac{3\text{Re}\,\psi_ {j}^{\prime}}{8x_{j}^{4}}-\frac{\text{Im}\,\psi_{j}^{\prime\prime}}{8x_{j}^{ 3}},\] (149c) \[\sum_{Q=0}^{\infty}\frac{Q+\frac{j}{q}}{[(Q+\frac{j}{q})^{2}+x_{j}^{2}]^{3}} =-\frac{\text{Im}\,\psi_{j}^{\prime}}{16x_{j}^{3}}+\frac{\text{Re }\,\psi_{j}^{\prime\prime}}{16x_{j}^{2}}, \tag{149d}\] where \[\psi_{j}\equiv\psi\left(\frac{j}{q}+ix_{j}\right), \tag{150}\] \(\psi(x)=\Gamma^{\prime}(x)/\Gamma(x)\) is the digamma function and the primes denote differentiation with respect to the argument, e.g. \(\psi^{\prime\prime}(x)=d^{2}\psi(x)/dx^{2}\).
Also, Im and Re denote the imaginary and real parts of their arguments, respectively: \(\text{Im}\,\psi_{j}=\frac{1}{2i}(\psi_{j}-\psi_{j}^{*})\) and \(\text{Re}\,\psi_{j}^{\prime}=\frac{1}{2}(\psi_{j}^{\prime}+\psi_{j}^{\prime}{}^ {*})\), with \(\psi_{j}^{*}=\psi(\frac{j}{q}-ix_{j})\). When considering the summation over \(j\) appearing in Eq. (147), a special case corresponds to \(j=q\), when the sine term \(s_{j}=\sin(\pi jp/q)\) vanishes. In this case, we employ \[\sum_{Q=0}^{\infty}\frac{1}{(Q+1)^{2}}=\frac{\pi^{2}}{6},\qquad\sum_{Q=0}^{ \infty}\frac{1}{(Q+1)^{4}}=\frac{\pi^{4}}{90}. \tag{151}\] Since \(s_{j}=0\) implies also \(x_{j}=0\), Eqs. (147) show that the \(j=q\) contribution becomes \(l\)-independent. This allows all expectation values to be split as \[\phi^{2}=\phi_{0}^{2}(q\beta)+\frac{\delta\phi^{2}}{2\pi^{2}q^{2}\beta^{2}}, \quad T^{\mu\nu}=T_{0}^{\mu\nu}(q\beta)+\frac{\delta T^{\mu\nu}}{2\pi^{2}q^{4} \beta^{4}}, \tag{152}\] where the first terms correspond to a bosonic gas at rest with inverse temperature \(q\beta\): \[\phi_{0}^{2}(q\beta) =\frac{1}{12q^{2}\beta^{2}},\] \[T_{0}^{\mu\nu}(q\beta) =\frac{\pi^{2}}{30q^{4}\beta^{4}}\text{diag}\left(1,\frac{1}{3}, \frac{1}{3}\rho^{-2},\frac{1}{3}\right). \tag{153}\] The factor \(q\) represents the denominator of the irreducible fraction \(\nu=p/q\). More importantly, because these terms are independent of the transverse distance to the rotation axis given by \(l\), they become dominant at large distances from the rotation axis, giving rise to fractal structures in the thermodynamic (infinite volume) limit. It is noteworthy that the fractal terms are completely absent in the relativistic kinetic theory analysis in Sec. IV.4 and thus represent a purely quantum effect. The second terms in Eq. (152) "defractalize" the result close to the rotation axis and are computed via: \[\delta\phi^{2} =\text{Im}\,\sum_{j=1}^{q-1}\frac{\psi_{j}}{x_{j}}, \tag{154a}\] \[\delta T^{tt} =\text{Im}\,\sum_{j=1}^{q-1}\left[\frac{s_{j}^{2}}{3}\left(\frac{ \psi_{j}}{x_{j}^{3}}-\frac{i\psi_{j}^{\prime}}{x_{j}^{2}}-\frac{\psi_{j}^{ \prime\prime}}{x_{j}}\right)+\frac{\psi_{j}^{\prime\prime}}{x_{j}}\right],\] (154b) \[\delta T^{\rho\rho} =\text{Im}\,\sum_{j=1}^{q-1}\left(1-\frac{2s_{j}^{2}}{3}\right) \left(\frac{\psi_{j}}{x_{j}^{3}}-\frac{i\psi_{j}^{\prime}}{x_{j}^{2}}\right),\] (154c) \[\rho^{2}\delta T^{\varphi\varphi} =\text{Im}\,\sum_{j=1}^{q-1}\left(1-\frac{2s_{j}^{2}}{3}\right) \left(-\frac{2\psi_{j}}{x_{j}^{3}}+\frac{2i\psi_{j}^{\prime}}{x_{j}^{2}}+\frac{ \psi_{j}^{\prime\prime}}{x_{j}}\right),\] (154d) \[\delta T^{zz} =\text{Im}\,\sum_{j=1}^{q-1}\left[\left(1-\frac{s_{j}^{2}}{3} \right)\left(\frac{\psi_{j}}{x_{j}^{3}}-\frac{i\psi_{j}^{\prime}}{x_{j}^{2}} \right)+\frac{s_{j}^{2}\psi_{j}^{\prime\prime}}{3x_{j}}\right],\] (154e) \[\delta T^{t\varphi} =-\frac{i}{2q\beta}\text{Im}\sum_{j=1}^{q-1}\sin\frac{2\pi jp}{q} \left(\frac{\psi_{j}^{\prime}}{x_{j}^{3}}-\frac{i\psi_{j}^{\prime\prime}}{x_{j}^{2}} \right), \tag{154f}\] where \(x_{j}\) and \(s_{j}\) were introduced in Eq. (148) and \(\psi_{j}\) in Eq. (150).
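The defractalized expressions are straightforward to evaluate with a library implementation of the digamma function of complex argument. A numerical sketch of Eqs. (152)-(154a), assuming Python with mpmath; at large \(l\) the ratio \(\phi^{2}/\phi_{0}^{2}\) approaches the fractal plateau \(1/q^{2}\):

```python
# phi^2/phi_0^2 for nu = p/q from Eqs. (152)-(154a); note that s_j != 0
# for 1 <= j <= q-1 when p/q is irreducible, so x_j never vanishes here.
import mpmath as mp

def phi2_ratio(l, p, q, beta=1.0):
    res = 1 / (12 * (q*beta)**2)                  # phi_0^2(q beta), Eq. (153)
    for j in range(1, q):
        s = mp.sin(mp.pi * j * p / q)
        x = l * s / (mp.pi * q)                   # x_j, Eq. (148)
        psi = mp.digamma(mp.mpf(j)/q + 1j*x)      # psi_j, Eq. (150)
        res += mp.im(psi) / x / (2 * mp.pi**2 * q**2 * beta**2)
    return res * 12 * beta**2                     # normalize by phi_0^2

for l in (1, 10, 100, 1000):
    print(l, phi2_ratio(mp.mpf(l), 1, 2))         # tends to 1/q^2 = 1/4
```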
For \(\nu=1/2\), we have: \[\frac{\phi^{2}}{\phi_{0}^{2}}=\frac{1}{4}+\frac{3}{2l}\tanh\frac{l}{2}\,,\] \[\frac{T^{tt}}{T_{0}^{tt}} =\frac{1}{16}+\frac{5}{4l^{3}}\left(\tanh\frac{l}{2}-\frac{l/2}{ \cosh^{2}\frac{l}{2}}+\frac{l^{2}\tanh\frac{l}{2}}{\cosh^{2}\frac{l}{2}}\right)\,,\] \[\frac{T^{\rho\rho}}{T_{0}^{tt}} =\frac{1}{48}+\frac{5}{4l^{3}}\left(\tanh\frac{l}{2}-\frac{l/2}{ \cosh^{2}\frac{l}{2}}\right)\,,\] \[\frac{\rho^{2}T^{\varphi\varphi}}{T_{0}^{tt}} =\frac{1}{48}-\frac{5}{2l^{3}}\left(\tanh\frac{l}{2}-\frac{l/2}{ \cosh^{2}\frac{l}{2}}-\frac{l^{2}\tanh\frac{l}{2}}{4\cosh^{2}\frac{l}{2}} \right)\,,\] \[\frac{T^{zz}}{T_{0}^{tt}} =\frac{1}{48}+\frac{5}{2l^{3}}\left(\tanh\frac{l}{2}-\frac{l/2}{ \cosh^{2}\frac{l}{2}}+\frac{l^{2}\tanh\frac{l}{2}}{4\cosh^{2}\frac{l}{2}} \right)\,,\] \[T^{t\varphi} =0. \tag{155}\]

Figure 9: Thermal expectation values of (left) \(\phi^{2}\) normalized by \(\phi_{0}^{2}=1/12\beta^{2}\); and (right) \(|T^{tt}|\) normalized by \(T_{0}^{tt}=\pi^{2}/30\beta^{4}\), shown with respect to the rotation parameter \(\nu=\beta\Omega_{I}/2\pi\), for various values of the distance parameter \(l=2\pi\rho/\beta\) from the axis of rotation of the cylinder. The purple circles correspond to the case when \(\nu=p/q\) is a rational number (we considered all irreducible fractions with \(1\leq q\leq 20\)). The green squares correspond to the irrational values \(\nu_{j}\) shown in Eq. (157). The empty symbols indicate the case when \(T^{tt}<0\). The dashed lines shown in the bottom panels (for \(l=10^{4}\)) indicate the expected lower bounds (left) \(1/q^{2}\) and (right) \(1/q^{4}\) with \(q=20\). This figure should be compared with Fig. 4 for particles in the ring: As the size of the cylinder grows, the thermodynamic expectation values in the cylinder get fractal features similar to the fractalization of thermodynamics of scalar particles in the ring.

The behaviour of the scalar condensate \(\phi^{2}\) and energy density \(T^{tt}\) as functions of \(l\) is illustrated in Fig. 8(a) and (b), respectively, where we consider the cases \(\nu=p^{\prime}/10\) with \(0\leq p^{\prime}\leq 5\), corresponding to irreducible fractions \(p/q\) with \(q=1\), \(2\), \(5\) and \(10\). For \(l>1\), visible differences between the curves corresponding to various values of \(\nu\) can be seen. Contrary to the classical case shown in Eq. (93), the far-field behavior of \(\phi^{2}\) and \(T^{\mu\nu}\) is dominated by quantum effects. An estimate of how these observables approach their asymptotic values \(\phi_{0}^{2}\) and \(T_{0}^{\mu\nu}\) can be obtained by considering the decay of the "classical" part from Eq. (93) to values of the same order of magnitude as \(\phi_{0}^{2}\) and \(T_{0}^{\mu\nu}\), which occurs at values \(l\gtrsim l_{q}\), where \[l_{q}\sim\frac{q}{\nu}=\frac{q^{2}}{p}. \tag{156}\] The \(q^{2}\) dependence of \(l_{q}\) is confirmed for both \(\phi^{2}\) and \(T^{tt}\); however, the \(p\) dependence appears to be negligible. The emerging fractal behaviour exhibits a stark contrast to the classical result in Eq. (89b) derived within relativistic kinetic theory, which is also shown in Fig. 8(b) using dashed gray lines. Sizeable deviations can be seen for the curves with smaller values of \(q\), which reach the fractalized plateau at smaller values of \(L\). In the \(p/q=1/10\) case, the RKT curve closely follows the QFT one, providing a good approximation also in the region where \(T^{tt}\) becomes negative.
Noting that the RKT result for \(p/q=3/10\) falls off too rapidly compared to the QFT curve leads us to conclude that the classical RKT description becomes valid only in the limit \(\nu\to 0\). As discussed above, the fractal behaviour manifests itself at large distances from the rotation axis, i.e., as \(l\rightarrow\infty\). Figure 9 illustrates the expectation values of \(\phi^{2}/\phi_{0}^{2}\) (left) and \(T^{tt}/T_{0}^{tt}\) (right) with respect to \(\nu\) for various values of the distance parameter \(l\). We considered \(\nu=p^{\prime}/q^{\prime}\) with \(1\leq q^{\prime}\leq 20\) and \(0\leq p^{\prime}\leq\lfloor q^{\prime}/2\rfloor\), covering all irreducible fractions \(p/q\) with \(1\leq q\leq 20\). These results are represented with purple symbols. We also considered a set \(\nu_{j}\) (\(0\leq j\leq n=20\)) of "irrational" values of \(\nu\), obtained as: \[\nu_{j}=\frac{j}{n}+\delta\nu_{j}, \tag{157}\] where \(-0.01<\delta\nu_{j}<0.01\) is a random number.2 In order to employ a logarithmic scale on the vertical axis, we represented the absolute values of our observables, with the convention that filled and empty symbols are used when the observables are positive and negative, respectively. Since \(\phi^{2}>0\) for all values of \(\nu\) and \(l\), this discussion applies only to \(T^{tt}\) (see, e.g., the \((p,q)=(1,10)\) curve in Fig. 8). Footnote 2: For \(j=0\) and \(j=20\), we employed \(\nu_{0}=|\delta\nu_{0}|\) and \(\nu_{20}=1-|\delta\nu_{20}|\), respectively, in order to ensure that \(0\leq\nu_{j}\leq 1\). For \(l\lesssim 1\), both \(\phi^{2}\) and \(T^{tt}\) exhibit a smooth dependence on \(\nu\). As \(l\) is increased, the expectation values for the case when \(\nu=p/q\) is an irreducible fraction become frozen at their corresponding asymptotic values (\(1/q^{2}\) for \(\phi^{2}/\phi_{0}^{2}\) and \(1/q^{4}\) for \(T^{tt}/T_{0}^{tt}\)), earlier for smaller values of \(q\) than for larger values of \(q\). In contrast, the expectation values corresponding to the irrational values of \(\nu\) continue their decreasing trend towards \(0\). Strikingly, the thermodynamics of the scalar field in the cylinder, obtained numerically and shown in Fig. 9, closely resembles that of the scalar particles on the ring, obtained analytically and represented in Fig. 4, with the fractalization features becoming more pronounced as the size of the cylinder \(L\) approaches the thermodynamic limit. ### Thermodynamic limit For the purpose of analyzing the large-volume limit of our system, we consider a fictitious cylinder of radius \(R\equiv\beta L/2\pi\) and of large vertical extent \(L_{z}\), centered on the rotation axis. The volume-averaged scalar condensate and energy density can be computed by integrating Eqs. (135a) and (135b) over this cylinder and dividing by the total volume \(V=\pi R^{2}L_{z}\): \[\overline{\Phi^{2}} =\phi_{0}^{2}(q\beta)+\frac{1}{\beta^{2}L^{2}}\sum_{j=1}^{q-1} \frac{1}{s_{j}^{2}}\ln\frac{\Gamma(j/q)}{|\Gamma_{j}|}, \tag{158a}\] \[\overline{\mathcal{E}} =T_{0}^{tt}(q\beta)+\frac{1}{L^{2}q^{2}\beta^{4}}\sum_{j=1}^{q-1} \Bigg{[}\left(\frac{1}{3}-\frac{1}{s_{j}^{2}}\right)\text{Re}\psi_{j}^{\prime}-\frac{\text{Im}\psi_{j}}{3X_{j}}+\frac{1}{s_{j}^{2}}\psi^{\prime }\left(\frac{j}{q}\right)\Bigg{]}, \tag{158b}\] where \(s_{j}\) was defined in Eq. (148), while \(X_{j}\) corresponds to \(x_{j}\) evaluated at the volume boundary: \[X_{j}=\frac{Ls_{j}}{\pi q}. \tag{159}\] Furthermore, we keep the notation \(\Gamma_{j}\) and \(\psi_{j}\) introduced in Eq.
(150), but now we understand that these functions take the argument \(\frac{j}{q}+iX_{j}\). Furthermore, \(\phi_{0}^{2}\) and \(T_{0}^{tt}\) were introduced in Eq. (153). We now compute the average free energy \(\overline{\mathcal{F}}\) from \(\overline{\mathcal{E}}\) starting from Eq. (75): \[\overline{\mathcal{F}} =F_{0}(q\beta)-\frac{1}{\beta^{4}L^{2}q^{2}}\sum_{j=1}^{q-1}\Biggl\{ \left(\frac{1}{3}-\frac{1}{s_{j}^{2}}\right)\frac{\text{Im}\psi_{j}}{X_{j}}\] \[+\frac{1}{s_{j}^{2}}\psi^{\prime}\left(\frac{j}{q}\right)-\frac{1 }{3X_{j}}\int_{0}^{X_{j}}\frac{dx}{x}\text{Im}\left[\psi\left(\frac{j}{q}+ix \right)\right]\Biggr\}, \tag{160}\] where the integral in the last term is evaluated after taking the imaginary part of the integrand in order to avoid a logarithmic singularity at \(X_{j}=0\). Also, \(F_{0}(\beta)=-\pi^{2}/90\beta^{4}\) is the free energy of a bosonic gas in the absence of rotation. The entropy and angular momentum given by Eq. (73) require taking derivatives of \(\overline{\mathcal{F}}\) with respect to \(\beta\) and \(\Omega_{I}\) at constant \(\Omega_{I}\) and \(\beta\), respectively. This is not possible at the level of the fractalized form in Eq. (160). Thus, we seek to obtain the free energy after rewriting \(\overline{\mathcal{E}}\) for general (not necessarily rational) values of \(\nu\): \[\overline{\mathcal{E}}=\sum_{j=1}^{\infty}\frac{3+\alpha_{j}^{2}(R)-\frac{2}{ 3}\sin^{2}(\pi j\nu)}{\pi^{2}\beta^{4}j^{4}[1+\alpha_{j}^{2}(R)]^{2}}. \tag{161}\] It can be checked that writing \(\nu=p/q\) and \(j=qQ+j^{\prime}\) gives Eq. (158b). Applying now Eq. (75) leads to \[\overline{\mathcal{F}}=-\sum_{j=1}^{\infty}\frac{1}{\pi^{2}\beta ^{4}j^{4}}\left\{\frac{\alpha_{j}^{2}+\frac{1}{3}\sin^{2}(\pi j\nu)}{\alpha_ {j}^{2}(\alpha_{j}^{2}+1)}\right.\] \[\left.-\frac{\sin^{2}\pi j\nu}{3\alpha_{j}^{3}}\left[\frac{\pi}{2} -\arctan\left(\frac{1}{\alpha_{j}}\right)\right]\right\}, \tag{162}\] where the term \(\pi/2\) appearing on the second line represents an integration constant such that \(\lim_{\Omega_{I}\to 0}\overline{\mathcal{F}}=-\sum_{j=1}^{\infty}1/(\pi^{2}\beta^{4}j ^{4})=F_{0}\). Using Eq. (73), the average entropy \(\overline{\mathcal{S}}\) and angular momentum \(\overline{\mathcal{M}}\) are \[\overline{\mathcal{S}} =\sum_{j=1}^{\infty}\frac{1}{\pi^{2}\beta^{3}j^{4}}\Biggl\{ \frac{2(\alpha_{j}^{2}+2)}{(\alpha_{j}^{2}+1)^{2}}+\frac{\sin^{2}(\pi j\nu)(1- \alpha_{j}^{2})}{3\alpha_{j}^{2}(\alpha_{j}^{2}+1)^{2}}\] \[+\frac{\pi j\nu}{\tan(\pi j\nu)}\left[\frac{2\alpha_{j}^{2}}{( \alpha_{j}^{2}+1)^{2}}+\frac{\sin^{2}(\pi j\nu)(3\alpha_{j}^{2}+1)}{3\alpha_{ j}^{2}(\alpha_{j}^{2}+1)^{2}}\right]\] \[-\frac{\sin^{2}(\pi j\nu)}{3\alpha_{j}^{3}}\left(1+\frac{\pi j\nu }{\tan(\pi j\nu)}\right)\left(\frac{\pi}{2}-\arctan\frac{1}{\alpha_{j}} \right)\Bigg\},\] \[\overline{\mathcal{M}} =-\sum_{j=1}^{\infty}\frac{\sin(2\pi j\nu)}{4\pi^{2}\beta^{3}j^{3} }\left[\frac{2L^{2}/\pi^{2}j^{2}}{(1+\alpha_{j}^{2})^{2}}+\frac{1-\alpha_{j}^ {2}}{3(1+\alpha_{j}^{2})^{2}}\right.\] \[\left.-\frac{1}{3\alpha_{j}^{3}}\left(\frac{\pi}{2}-\alpha_{j}- \arctan\frac{1}{\alpha_{j}}\right)\right].
\tag{163}\] Considering as before that \(\nu=p/q\) and writing \(j=qQ+j^{\prime}\), with \(0\leq Q<\infty\) and \(1\leq j^{\prime}\leq q\), we get \[\overline{\mathcal{F}}=F_{0}(q\beta)-\frac{1}{\beta^{4}L^{2}q^{2} }\sum_{j=1}^{q-1}\Biggl\{\left(\frac{1}{3}-\frac{1}{s_{j}^{2}}\right)\frac {\text{Im}\,\psi_{j}}{X_{j}}\] \[\quad+\frac{1}{s_{j}^{2}}\psi^{\prime}\left(\frac{j}{q}\right)- \frac{1}{3X_{j}}S_{Q}\Biggr\}, \tag{164a}\] \[\overline{\mathcal{S}} =\frac{S_{0}(q\beta)}{q}+\frac{1}{\pi^{2}q^{4}\beta^{3}}\sum_{j=1}^{q -1}\Biggl\{\frac{2}{X_{j}^{2}}\psi^{\prime}\left(\frac{j}{q}\right)-\frac{ \text{Im}\psi_{j}}{X_{j}^{3}}\] \[+\left(\frac{s_{j}^{2}}{3}-1\right)\left(\text{Re}\,\frac{\psi_{ j}^{\prime}}{X_{j}^{2}}-\frac{c_{j}p\pi}{s_{j}X_{j}}\text{Im}\psi_{j}^{\prime}\right)\] \[+\frac{c_{j}p\pi}{s_{j}X_{j}^{2}}\left(\frac{s_{j}^{2}}{3}-2 \right)\left[\psi\left(\frac{j}{q}\right)-\text{Re}\,\psi_{j}\right]\] \[-\frac{1}{3X_{j}}\left(\frac{\pi q}{L}\right)^{2}\left(S_{Q}+ \frac{\pi pc_{j}}{s_{j}}S_{Q}^{\prime}\right)\Biggr\},\] (164b) \[\overline{\mathcal{M}} =-\sum_{j=1}^{q-1}\frac{\sin(2\pi j\nu)}{3\pi^{2}q^{2}\beta^{3}} \Biggl\{\frac{s_{j}^{2}-6}{s_{j}^{2}X_{j}^{2}}\left[\psi\left(\frac{j}{q} \right)-\text{Re}\,\psi_{j}\right]\] \[-\frac{s_{j}^{2}-3}{s_{j}^{2}X_{j}}\text{Im}\,\psi_{j}^{\prime}- \frac{1}{X_{j}^{3}}S_{Q}^{\prime}\Biggr\}, \tag{164c}\] where \(s_{j}\) and \(c_{j}\) were introduced in Eq. (148), \(\psi_{j}\) in Eq. (150) with \(x_{j}\) replaced by \(X_{j}\), while \(X_{j}\) is defined in Eq. (159). Furthermore, the following notation was introduced: \[S_{Q} =\sum_{Q=0}^{\infty}\frac{1}{Q+\frac{j}{q}}\left[\frac{\pi}{2}- \arctan\left(\frac{Q+\frac{j}{q}}{X_{j}}\right)\right],\] \[S_{Q}^{\prime} =\sum_{Q=0}^{\infty}\left[\frac{\pi}{2}-\arctan\left(\frac{Q+\frac {j}{q}}{X_{j}}\right)-\frac{X_{j}}{Q+\frac{j}{q}}\right]. \tag{165}\] Comparing Eqs. (164a) and (160), it can be seen that \[S_{Q}=\int_{0}^{X_{j}}\frac{dx}{x}\text{Im}\left[\psi\left(\frac{j}{q}+ix\right) \right]. \tag{166}\] The above identity is easily established by noting that \[\frac{\partial S_{Q}}{\partial X_{j}}=\sum_{Q=0}^{\infty}\frac{1}{(Q+\frac{j}{q})^{ 2}+X_{j}^{2}}=\frac{\text{Im}\,\psi_{j}}{X_{j}}. \tag{167}\] Integrating the above expression with respect to \(X_{j}\) and demanding \(S_{Q}(X_{j}=0)=0\) gives Eq. (166). A similar expression can be obtained for \(S_{Q}^{\prime}\). Taking the derivative with respect to \(X_{j}\) eliminates the arctangent, such that \[\frac{\partial S_{Q}^{\prime}}{\partial X_{j}} =-\sum_{Q=0}^{\infty}\frac{X_{j}^{2}}{(Q+\frac{j}{q})[(Q+\frac{j}{q})^{ 2}+X_{j}^{2}]}\] \[=\psi\left(\frac{j}{q}\right)-\text{Re}\left[\psi\left(\frac{j}{q}+ iX_{j}\right)\right]. \tag{168}\] Integrating the above relation with respect to \(X_{j}\) gives \[S_{Q}^{\prime}=X_{j}\psi\left(\frac{j}{q}\right)-\int_{0}^{X_{j}}dx\,\text{Re} \left[\psi\left(\frac{j}{q}+ix\right)\right]. \tag{169}\] For the case when \(\nu=p/q=1/2\), we find \[\frac{\overline{\Phi^{2}}}{\phi_{0}^{2}}=\frac{1}{4}+\frac{6}{L^{2}} \ln\left(\cosh\frac{L}{2}\right),\\ \frac{\overline{\mathcal{E}}}{T_{0}^{tt}}=\frac{1}{16}+\frac{5}{4 L^{2}}\left(1+2\tanh^{2}\frac{L}{2}-\frac{2}{L}\tanh\frac{L}{2}\right), \tag{170}\] while \(\overline{\mathcal{M}}=0\). ### Slow rotation: moment of inertia and shape In the case of slow rotation, the investigation of the coefficients \(K_{2n}\) introduced in Eq. (81) is difficult because of the representation (162) of the free energy when \(\nu\) is an arbitrary number.
A series expansion with respect to \(\Omega=2\pi\nu/\beta\) is equivalent to an expansion of the sine function \(\sin(\pi j\nu)\), such that higher orders in \(\nu\) bring positive powers of \(j\). The leading term \(\overline{\mathcal{F}}(0)\) and the first coefficient \(K_{2}\), which corresponds to the dimensionless moment of inertia, can be computed, \[\overline{\mathcal{F}}(0)=-\frac{\pi^{2}}{90\beta^{4}},\qquad K_{2}=2\left[1 +\frac{10}{3L^{2}}+O\big{(}L^{-4}\big{)}\right], \tag{171}\] however, the rotational shape coefficient \(K_{4}\) involves a summation over \(j\) of \(j^{0}\), which diverges. In the above, we took into account that \(\Omega_{I}^{2}=-\Omega^{2}\). Comparing Eqs. (171) and (81), we see that the classical result \(K_{2n}=(2n)!\) receives an \(L\)-dependent correction that vanishes as the transverse size of the cylinder becomes infinite, \(L\to\infty\). In Subsect. VI.3, we discuss the dimensionless moment of inertia \(K_{2}\) and the rotational shape-change coefficient \(K_{4}\) in a cylinder of a finite radius, taking into account the proper quantization of the radial modes. We will see that the behaviour of \(K_{2}\) qualitatively agrees with the convergence (171), while the coefficient \(K_{4}\) becomes finite and also converges to the result (81) in the infinite-volume limit. ## VI Bounded Klein-Gordon field ### Eigenspectrum of the system and observables We now enclose the system inside a cylindrical surface at a distance \(R\) from the symmetry axis. Imposing Dirichlet boundary conditions on the eigenmodes \(f_{j}\) leads to the quantization of the transverse momentum, \[J_{m_{j}}(q_{j}R)=0. \tag{172}\] The resulting normalized modes are [42] \[f_{kmn}=\frac{e^{-i\omega_{mn}t+ikz+im\varphi}}{2\pi R|J_{m+1}(q_{mn}R)|\sqrt {\omega_{mn}}}J_{m}(q_{mn}\rho). \tag{173}\] The t.e.v.s of \(\widehat{\Phi}^{2}\) and \(\widehat{T}^{tt}\) can be expressed in the form shown in Eqs. (111) and (113a), \[\phi^{2}=\overline{G}_{000},\qquad\quad T^{tt}=\overline{G}_{200}+\frac{1}{1 2\rho^{2}}\overline{G}_{000}^{(2)}, \tag{174}\] where \(\overline{G}_{abc}^{(2)}=\rho\frac{d}{d\rho}\rho\frac{d}{d\rho}\overline{G}_{abc}\) and the functions \(\overline{G}_{abc}\) generalize the functions \(G_{abc}\) in Eq. (110) to the bounded case considered here: \[\overline{G}_{abc}=\frac{1}{\pi^{2}R^{2}}\sum_{m=-\infty}^{\infty }\sum_{n=1}^{\infty}\int_{0}^{\infty}\frac{dk/\omega_{mn}}{e^{\beta\tilde{\omega}_{mn }}-1}\\ \times\frac{J_{m}^{2}(q_{mn}\rho)}{J_{m+1}^{2}(q_{mn}R)}\omega_{mn }^{a}q_{mn}^{b}m^{c}. \tag{175}\] ### Scalar condensate, energy-momentum expectation values and fractalization Figure 10 shows the main features of \(\phi^{2}\) (top panel) and \(T^{tt}\) (lower panel) as functions of \(l=2\pi\rho/\beta\) for several different radii \(R\) chosen such that the quantity \[L=\frac{2\pi R}{\beta} \tag{176}\] takes the values \(L=10\), \(100\), and \(1000\). Only the case of rational rotation parameter is considered, with \(\nu=\beta\Omega_{I}/2\pi=p^{\prime}/10\) and \(0\leq p^{\prime}\leq 5\), giving rise to all irreducible fractions \(p/q\) with \(q=1\), \(2\), \(5\), and \(10\). The dashed black lines represent the results obtained in the unbounded case, computed based on Eqs. (135a) and (135b). As expected, the Dirichlet boundary conditions considered in this section affect the behaviour of the observables close to the boundary.
Figure 10: Same as Fig. 8 for the case when the system is enclosed within cylindrical boundaries of three different sizes, shown using vertical dotted lines, located such that \(L=2\pi R/\beta=10\), \(100\) and \(1000\). The black dashed lines represent the results obtained in the unbounded case, shown in Fig. 8.

Specifically, \(\phi^{2}=0\) when \(\rho=R\), since \(\overline{G}_{000}(\rho=R)\) vanishes identically by virtue of the quantization condition \(J_{m}(q_{mn}R)=0\), while \(T^{tt}\) is decreased by about a factor of \(10\) compared to its bulk value. In both panels, the bounded and unbounded results stay in good agreement throughout most of the cylinder if \(L\) is sufficiently large. In particular, \(\phi^{2}\) exhibits a notably smaller value on the rotation axis when \(L=10\) compared to the unbounded case, while for the \(L=100\) and \(1000\) cases, good agreement can be seen. As mentioned in Sec. V.2, the boundary permits the study of a system undergoing rigid rotation, as long as \(\Omega R=\nu L\leq 1\) and the light cylinder is excluded from the system. It is thus interesting to compare expectation values computed for real and imaginary rotation, \(\nu_{R}\) and \(\nu_{I}\). To keep the comparison meaningful, both \(\nu_{R}\) and \(\nu_{I}\) are restricted to be lower than or equal to \(1/L\). Fig. 11 shows the radial profile \(T^{tt}(\rho)\) for three cylinders, with \(L=1\) (a), \(10\) (b) and \(100\) (c), in the case of slow (\(\nu L=0.1\), blue), medium (\(\nu L=0.5\), red) and fast (\(\nu L=1\), green) rotation. At small \(L=2\pi R/\beta\), the boundary effects dominate over thermal ones and \(T^{tt}\) decreases monotonically from the rotation axis towards the boundary. Furthermore, \(T^{tt}(\rho=0)\) is strongly suppressed (by four orders of magnitude at \(L=1\)) compared to its value for a boson gas at rest, \(T_{0}^{tt}=\pi^{2}/30\beta^{4}\). The effect of imaginary rotation is negligible, while in the case of real rotation, \(T^{tt}(\rho)\) increases slightly at \(\nu L=1\). At \(L\gtrsim 10\), the bulk of the system is dominated by thermal effects. Panels (b) and (c) also show the RKT result for \(T^{tt}\): \[T_{\rm cl}^{tt}=\frac{\pi^{2}\gamma^{4}}{90\beta^{4}}(4\gamma^{2}-1),\quad T_ {\rm cl;im}^{tt}=\frac{\pi^{2}\gamma_{I}^{4}}{90\beta^{4}}(4\gamma_{I}^{2}-1), \tag{177}\] where \(\gamma^{2}=1/(1-\nu^{2}l^{2})\) and \(\gamma_{I}^{2}=1/(1+\nu^{2}l^{2})\). In the case when \(L=100\), the QFT results deviate from the RKT ones only in a small vicinity of the boundary. Such good agreement is also a consequence of the fact that at large \(L\), \(\nu\) is constrained to be small. As discussed in Sec. V.6, RKT is expected to agree with QFT for small values of \(\nu\) and sufficiently far from the boundary (see also Fig. 8). Next, the value of \(T^{tt}(\rho=0;\nu)\) on the rotation axis for both real and imaginary rotation is shown in Fig. 12 for (a) \(L=1\), (b) \(L=10\) and (c) \(L=100\). As before, in the case \(L=1\), the value of \(T^{tt}\) is suppressed by over three orders of magnitude. Here, the rotation parameter covers an entire period of the system undergoing imaginary rotation. In the case of real rotation, no such periodicity arises, contrary to the expectation based on the result in Eq. (143b) for the unbounded case. The inset in panel (a) shows the effect of rotation on \(T^{tt}(\rho=0;\nu)\) for smaller values of \(\nu\). It can be seen that for \(\nu\lesssim 0.1\), the quantity \([T^{tt}(\nu)-T^{tt}(0)]\) has the same behavior for both real and imaginary rotation.
At \(L=10\) and \(100\), \(T^{tt}(\rho=0;\nu=0)\) approaches \(T^{tt}_{0}=\pi^{2}/30\beta^{4}\). The maximum value of \(\nu\) is greatly reduced. It can be seen that for such a small interval of \(\nu\), the result corresponding to imaginary rotation is approximately equal to that obtained for real rotation, mirrored with respect to the value \(T^{tt}(\rho=0;\nu=0)\) obtained in the absence of rotation (shown with black dotted lines). Furthermore, the red dotted lines indicate the analytical predictions in Eqs. (138b) and (143b), scaled by the value \(T^{tt}(\rho=0;\nu=0)/T^{tt}_{0}\) on the rotation axis in the absence of rotation, corresponding to the given value of \(L\). The agreement with the analytical predictions is better at larger values of \(L\), which may also be due to the smaller range allowed for \(\nu\).

We now consider the thermodynamic system as a whole and discuss the volume-averaged free energy \(\overline{\mathcal{F}}=F/V\), computed via the equivalent of Eq. (69):
\[\overline{\mathcal{F}}(\Omega,R)=\frac{1}{\pi^{2}R^{2}\beta}\sum_{m=-\infty}^{\infty}\sum_{n=1}^{\infty}\int_{0}^{\infty}dk\,\ln(1-e^{-\beta\widetilde{\omega}_{mn}})=-\frac{4}{L^{2}\beta^{2}}\sum_{m=-\infty}^{\infty}\sum_{n=1}^{\infty}\int_{0}^{\infty}\frac{dk\,k^{2}/\omega_{mn}}{e^{\beta\widetilde{\omega}_{mn}}-1}. \tag{178a}\]

Figure 11: Profiles of \(T^{tt}(\rho)/T^{tt}_{0}\), represented with respect to the dimensionless radial coordinate \(\rho/R\) for a system enclosed within a cylindrical boundary located at \(R=\beta L/2\pi\), with \(L=1\) (a), \(10\) (b), and \(100\) (c). The rotation parameter satisfies \(\nu L=0.1\) (blue squares), \(0.5\) (red circles), and \(1\) (green triangles). Solid and dashed lines and symbols denote profiles corresponding to real and imaginary rotation, respectively. The black dotted lines shown in panels (b) and (c) correspond to the RKT results (see text).

Figure 12: Value of \(T^{tt}(\rho=0;\nu)\) on the rotation axis with respect to \(T^{tt}_{0}=\pi^{2}/30\beta^{4}\) for cylindrical systems with the dimensionless radius \(L=2\pi R/\beta=1\) (a), \(10\) (b) and \(100\) (c). The rotation parameter \(\nu=\beta|\Omega|/2\pi\) (shown on the \(x\) axis) spans \(0\leq\nu\leq 1/L\). The blue solid and dashed lines with squares denote results for the case of real and imaginary rotation, respectively. The black dashed lines in panels (b) and (c) are obtained by reflecting the results corresponding to real and imaginary rotation with respect to the value \(T^{tt}(\rho=0;\nu=0)\) obtained in the absence of rotation. The red dotted line represents the analytical expressions from the unbounded case, given in Eqs. (138b) and (143b), scaled by \(T^{tt}(\rho=0;\nu=0)\).
Applying Eqs. (72) and (73) leads to the following expressions for the radial pressure \(\mathcal{P}_{R}\), average entropy \(\overline{\mathcal{S}}\) and average angular momentum \(\overline{\mathcal{M}}\):
\[\mathcal{P}_{R} =\frac{2}{L^{2}\beta^{2}}\sum_{m=-\infty}^{\infty}\sum_{n=1}^{\infty}\int_{0}^{\infty}\frac{dk}{e^{\beta\widetilde{\omega}_{mn}}-1}\left(\omega_{mn}-\frac{k^{2}}{\omega_{mn}}\right), \tag{178b}\]
\[\overline{\mathcal{S}} =\frac{4}{L^{2}\beta}\sum_{m=-\infty}^{\infty}\sum_{n=1}^{\infty}\int_{0}^{\infty}\frac{dk}{e^{\beta\widetilde{\omega}_{mn}}-1}\left(\widetilde{\omega}_{mn}+\frac{k^{2}}{\omega_{mn}}\right), \tag{178c}\]
\[\overline{\mathcal{M}} =\frac{4}{L^{2}\beta^{2}}\sum_{m=-\infty}^{\infty}\sum_{n=1}^{\infty}\int_{0}^{\infty}\frac{dk\,m}{e^{\beta\widetilde{\omega}_{mn}}-1}, \tag{178d}\]
while \(\mathcal{P}_{z}=-\overline{\mathcal{F}}\). The average energy \(\overline{\mathcal{E}}\) and scalar condensate \(\overline{\Phi^{2}}\) can be obtained by taking the volume average of the expressions in Eq. (174):
\[\overline{\mathcal{E}} =\frac{1}{\pi^{2}R^{2}}\sum_{m=-\infty}^{\infty}\sum_{n=1}^{\infty}\int_{0}^{\infty}\frac{dk\,\omega_{mn}}{e^{\beta\widetilde{\omega}_{mn}}-1}, \tag{178e}\]
\[\overline{\Phi^{2}} =\frac{1}{\pi^{2}R^{2}}\sum_{m=-\infty}^{\infty}\sum_{n=1}^{\infty}\int_{0}^{\infty}\frac{dk/\omega_{mn}}{e^{\beta\widetilde{\omega}_{mn}}-1}. \tag{178f}\]

Figure 13: Same as Fig. 8 for the case when the system is enclosed within a cylindrical boundary located at \(R=\beta L/2\pi\), with \(L=10\), \(100\) and \(1000\). The black dotted lines represent the results obtained in the unbounded case, shown in Fig. 8.

In deriving the above expressions, we employed the integration formula
\[\int_{0}^{R}d\rho\,\rho J_{m}^{2}(q\rho)= \frac{R^{2}}{2}[J_{m}^{2}(qR)+J_{m+1}^{2}(qR)]\\
\quad-\frac{mR}{q}J_{m}(qR)J_{m+1}(qR), \tag{179}\]
together with the Dirichlet boundary conditions \(J_{m}(q_{mn}R)=0\). Comparing Eqs. (178e), (178a) and (178b), it is easy to see that \(\mathcal{P}=\overline{\mathcal{E}}/3\), with \(\mathcal{P}=\frac{2}{3}\mathcal{P}_{R}+\frac{1}{3}\mathcal{P}_{z}\) being the isotropic pressure. Moreover, the relations in Eq. (178) are compatible with the Euler relation (74).

The relations in Eq. (178) are valid for both real and imaginary rotation. In the latter case, \(\overline{\mathcal{M}}\) becomes
\[\overline{\mathcal{M}}=-\frac{8}{L^{2}\beta^{2}}\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\int_{0}^{\infty}\frac{dk\,m\,e^{\beta\omega}\sin(\beta\Omega_{I}m)}{e^{2\beta\omega}-2e^{\beta\omega}\cos(\beta\Omega_{I}m)+1}. \tag{180}\]
Thus, \(\overline{\mathcal{M}}\) vanishes in the imaginary rotation case when \(\nu=1/2\), as was the case also in the unbounded system [see Eq. (164c)].
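Incidentally, the integration formula (179) is easy to verify by direct quadrature. The following minimal sketch (illustrative values of \(m\), \(R\) and \(q\), assuming SciPy) checks it both for a generic \(q\) and for a Dirichlet root, where the last term vanishes:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, jn_zeros

m, R = 3, 1.0
for q in (2.7, jn_zeros(m, 1)[0] / R):          # generic q and a Dirichlet root
    lhs, _ = quad(lambda r: r * jv(m, q * r)**2, 0.0, R)
    rhs = (R**2 / 2) * (jv(m, q * R)**2 + jv(m + 1, q * R)**2) \
          - (m * R / q) * jv(m, q * R) * jv(m + 1, q * R)
    print(q, lhs, rhs)   # both sides agree to quadrature accuracy
```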
The results for \(\overline{\Phi^{2}}\) and \(\overline{\mathcal{E}}\) are shown in the top and bottom panels of Fig. 13. As before, we set \(\nu=p^{\prime}/10\) with \(0\leq p^{\prime}\leq 5\), leading to irreducible fractions \(p/q\) with \(q\in\{1,2,5,10\}\). The horizontal axis shows \(L=2\pi R/\beta\), where \(R\) represents the radius of the bounding cylinder. The dotted black lines represent the same quantities computed for the unbounded system using Eqs. (158a) and (158b) for the same value of \(L\). For \(L\lesssim 10\), the boundary effects lead to a strong quenching of both \(\phi^{2}\) and \(T^{tt}\), such that both \(\overline{\Phi^{2}}\) and \(\overline{\mathcal{E}}\) tend to \(0\) as \(L\to 0\) in the bounded case. This is contrary to the unbounded case, where the \(L\to 0\) limit is finite for both quantities. As already seen in Fig. 10, with increasing \(L\), the boundary effects become localized in a small vicinity of the boundary and the bounded and unbounded results approach each other. While for \(\overline{\Phi^{2}}\) visible discrepancies remain even for \(L\gtrsim 10^{3}\), in the case of \(\overline{\mathcal{E}}\) the results obtained in the bounded case start following the ones corresponding to the unbounded case already when \(L\gtrsim 10\). As in the previous sections, the fractal structure reveals itself at large values of \(L\).

### Slow rotation: moment of inertia and shape

Finally, we discuss the expansion in Eq. (80) of the free energy density \(\overline{\mathcal{F}}\) in the case of slow rotation. In particular, we focus on the free energy in the absence of rotation, \(\overline{\mathcal{F}}(0)\), as well as the first two coefficients, the dimensionless moment of inertia, \(K_{2}\), and the dimensionless shape coefficient, \(K_{4}\), which we evaluate using:
\[\overline{\mathcal{F}}(0) =-\frac{4}{L^{2}\beta^{2}}\sum_{m=-\infty}^{\infty}\sum_{n=1}^{\infty}\int_{0}^{\infty}\frac{dk\,k^{2}/\omega_{mn}}{e^{\beta\omega_{mn}}-1}, \tag{181a}\]
\[\overline{K}_{2} =-\frac{16\pi^{2}}{\beta^{3}L^{4}}\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\int_{0}^{\infty}\frac{m^{2}\,dk}{\sinh^{2}(\beta\omega_{mn}/2)}, \tag{181b}\]
\[\overline{K}_{4} =-\frac{16\pi^{4}}{L^{6}\beta^{3}}\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\int_{0}^{\infty}\frac{dk\,m^{4}[2+\cosh(\beta\omega_{mn})]}{\sinh^{4}(\beta\omega_{mn}/2)}, \tag{181c}\]
where the notation \(\overline{K}_{2n}=K_{2n}\overline{\mathcal{F}}(0)\) was employed for brevity. The coefficients \(\overline{\mathcal{F}}(0)\), \(K_{2}\) and \(K_{4}\) are studied as functions of the transverse size of the system \(L\) and compared with their unbounded counterparts. The ratios \(\overline{\mathcal{F}}(0)/F_{0}\), \(K_{2}/2!\) and \(K_{4}/4!\) are represented in Fig. 15, where the denominators of these expressions are the classical expectations given in Eq. (81). As we already noted in Subsect. IV.3, a strong quenching due to the boundary can be seen at small values of \(L\), which is however less pronounced for the shape coefficient \(K_{4}\) compared to the dimensionless moment of inertia \(K_{2}\) and the free energy \(\overline{\mathcal{F}}(0)\) itself. At \(L=100\), these coefficients already reach their expected asymptotic values \(K_{2n}=(2n)!\), Eq. (81).

In a cylinder of a finite radius \(R\), the large-size limit \(L\to\infty\) corresponds also to the high-temperature limit, \(T\to\infty\), since \(L=2\pi R/\beta\equiv 2\pi RT\). Figure 15 shows that the dimensionless moment of inertia \(K_{2}\) approaches the asymptotic value \(K_{2}=2\) from below, indicating that the moment of inertia should decrease as temperature decreases. This effect is related to the presence of an effective energy gap between the states with zero, \(m=0\), and non-zero, \(m\neq 0\), orbital momenta due to the finite size of the system. Therefore, at lower temperatures, the system mostly resides in the \(m=0\) state and the rotational modes, which contribute to the moment of inertia (181b), are not excited. Since the latter modes do not participate in rotation at low \(T\), the moment of inertia of the system decreases as the system gets colder. This effect should evidently also occur for \(K_{4}\) and higher coefficients.
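The coefficients in Eqs. (181a) and (181b) can be evaluated numerically along the same lines as Eq. (175). A minimal sketch (truncated sums with illustrative cutoffs, lengths measured in units of \(\beta\), assuming SciPy) is:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jn_zeros

def K2(L, beta=1.0, m_max=15, n_max=15):
    """Dimensionless moment of inertia K_2 = Kbar_2/F(0), Eqs. (181a)-(181b)."""
    R = beta * L / (2.0 * np.pi)
    F0, K2bar = 0.0, 0.0
    for m in range(m_max + 1):
        deg = 1 if m == 0 else 2                 # +-m contribute equally to F(0)
        for q in jn_zeros(m, n_max) / R:
            F0 -= (4 * deg / (L**2 * beta**2)) * quad(
                lambda k: (k**2 / np.hypot(k, q))
                / np.expm1(beta * np.hypot(k, q)), 0.0, np.inf)[0]
            if m >= 1:
                # sinh overflow at large argument harmlessly drives terms to 0
                K2bar -= (16 * np.pi**2 / (beta**3 * L**4)) * quad(
                    lambda k: m**2 / np.sinh(beta * np.hypot(k, q) / 2)**2,
                    0.0, np.inf)[0]
    return K2bar / F0   # cf. Fig. 15; larger cutoffs are needed at large L
```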
Interestingly, the same qualitative behaviour of the moment of inertia \(K_{2}\) is also observed in first-principle simulations of the gluon plasma in the high-temperature phase of Yang-Mills theory: as the temperature increases, the moment of inertia approaches the high-temperature value \(K_{2}=2\), Eq. (81), from below [27].

## VII Conclusions

In the present work, we studied the thermodynamic properties of massless scalar fields subjected to rigid rotation and the inter-relation between real rotation and its imaginary analogue. The latter concept - a rigid rotation with an imaginary frequency [19; 23] - has a practical interest, since rotating systems cannot be implemented in the Euclidean path-integral formalism suitable, for example, for numerical first-principle calculations on the lattice [17; 21; 26; 27; 30]. In this sense, rotation shares the deficiency suffered by finite-density systems, namely the sign problem [30], and needs to be implemented in Euclidean spacetime via its imaginary version, supplemented with a subsequent analytical continuation to real angular frequencies [17; 21; 27].

Using the \(1+1\)-dimensional toy model of a scalar field under rigid rotation on a ring, we explicitly demonstrated that the analytical no-go theorem [32], which describes the impossibility of continuing the imaginary-angular-frequency thermodynamics to real angular frequencies, is related to the development of the fractality of thermodynamics for the former. The result applies in the thermodynamic limit. Within this model, thermodynamic functions such as pressure and energy density can be expressed analytically via the Dedekind \(\eta\) function (29). The latter function tends to the fractal, non-analytical Thomae function (15) as its argument approaches the real axis [38], which corresponds to the infinite-volume (thermodynamic) limit.

In the case of the \(3+1\)-dimensional Minkowski space, we first considered a classical description of scalar particles under rotation using relativistic kinetic theory. In the absence of boundaries, rigid rotation with a real rotation parameter leads to a violation of causality and a subsequent divergence of all observables on the light cylinder. Imaginary rotation can be described by a real distribution function only as an average of clockwise and counterclockwise rotations, leading to a seemingly non-equilibrium state. As expected, observables such as the energy-momentum tensor \(T^{\mu\nu}\) decrease as the distance to the rotation axis is increased, at a faster rate for faster rotation.

Under the quantum-field-theoretical treatment, rigid rotation with a real rotation parameter leads to the divergence of \(T^{\mu\nu}\) at each point of the space-time, also inside the light cylinder. Under imaginary rotation, \(T^{\mu\nu}\) evaluated on the rotation axis can be expressed via Bernoulli polynomials (135). Away from the rotation axis, we were able to demonstrate the fractalization of both the field fluctuations \(\phi^{2}\) and of \(T^{\mu\nu}\) in the case of imaginary rotation, in complete analogy to the \(1+1\)-dimensional toy model discussed above. In all cases, our results exhibit a periodicity with respect to the imaginary rotation parameter which is not present in the classical, kinetic analysis.

Figure 15: Ratios \(\overline{\mathcal{F}}(0)/F_{0}\), \(K_{2}/2!\) and \(K_{4}/4!\) computed at various values of the normalized transverse size of the system, \(L=2\pi R/\beta\), for the bounded system discussed in Sec. VI.
For this reason, we found agreement with kinetic theory only in the limit of slow rotation and only in the vicinity of the rotation axis (before fractalization sets in). For comparison with the case of real rotation, we took results obtained using a perturbative calculation for slow rotation (with respect to a stationary background) and found on the rotation axis an analytical result (143) which can be related to the one obtained for imaginary rotation in an essentially non-analytical way (144). We conclude that even on the axis of rotation - which appears to be static in the rotating system - the non-analyticity is strong and unavoidable.

We demonstrated the same analytic-to-non-analytic transition using numerical calculations for the thermal scalar field system enclosed in a cylinder, undergoing rotation with imaginary angular frequencies. As the radius of the cylinder grows, the pressure becomes a non-analytical function of temperature expressed, again, via the Thomae function (demonstrated in Figs. 8-14). In this limit, the boundary effects become less important and the results obtained in the unbounded case provide a good approximation for our observables inside the cylindrical boundary. For values of the rotation parameter respecting the causality constraints (i.e., when the light cylinder is outside the boundary), the fractalization features do not appear in the case of imaginary rotation. For sufficiently slow rotation and high temperature, our numerically obtained results are compatible with the flip \(\Omega_{I}^{2}\to-\Omega^{2}\) from imaginary to real rotation, signaling the restoration of analytical continuation in this limit.

The exotic fractalization properties discussed above are related to a ninionic deformation (12) of statistical distributions at imaginary angular frequencies [32]. For this reason, we conclude that the results obtained in the infinite-volume system subjected to imaginary rotation cannot be analytically related to the properties of the physically rotating system (with a real angular frequency). However, we explicitly demonstrated, both for the analytically treatable case on the ring and for the numerically accessible case of a rotating cylinder, that imaginary rotation in a spatially bounded system in Euclidean space can be continued to the real-frequency domain in Minkowski spacetime, provided that causality is respected in the latter spacetime.

We have also shown that for real-frequency rotation, the dimensionless moment of inertia \(K_{2}\), normalized per one degree of freedom, is equal to two, \(K_{2}=2\), in the thermodynamic limit of a large radius \(R\) of the cylinder. The quantity \(K_{2}\) determines the correction to the free energy (80), or, equivalently, to the pressure of a non-interacting gas of scalar bosons,
\[P(\Omega)=P(0)\bigg{(}1+\frac{1}{2}K_{2}R^{2}\Omega^{2}+\dots\bigg{)}\,, \tag{182}\]
due to a small nonzero angular frequency, \(\Omega\to 0\). This result matches well the first-principle result of Ref. [27] on the behavior of gluons in the high-temperature limit of Yang-Mills theory.

Below we summarize our main findings:

1. We demonstrated the no-go theorem [32] regarding the impossibility of continuation of the imaginary-angular-frequency thermodynamics to real angular frequencies using an analytical \(1+1\)-dimensional toy model, revealing the development of the fractality of thermodynamics for the former.

2.
Since fractalization does not show up in the classical, kinetic-theory treatment of imaginary rotation, we conclude that this is a purely quantum effect.

3. We found similar fractalization in the unbounded Minkowski space for imaginary rotation, as well as in the thermodynamic (infinite-volume) limit of the bounded system.

4. For the case of a causal boundary that excludes the light cylinder, we were able to restore the analytical continuation from imaginary to real rotation in the limit of slow rotation and high temperatures.

5. We attributed the exotic fractalization properties to a ninionic deformation (12) of statistical distributions at imaginary angular frequencies [32].

The results obtained in this paper shed light on the implications of the effects of imaginary rotation obtained in the context of first-principle lattice simulations and constitute the basis for the analysis of more complicated systems, e.g., free fermions or the chiral phase transition in effective QCD models such as the Nambu-Jona-Lasinio model or nonlinear sigma models.

###### Acknowledgements.

V.E.A. gratefully acknowledges the support through a grant of the Ministry of Research, Innovation and Digitization, CNCS - UEFISCDI, project number PN-III-P1-1.1-TE-2021-1707, within PNCDI III.
2306.13983
Towards Greener Data Centers via Programmable Data Plane
The energy demands of data centers are increasing and are expected to grow exponentially. Reducing the energy consumption of data centers decreases operational expenses, as well as their carbon footprint. We design techniques to reduce data center power consumption by leveraging Software-Defined Networking (SDN) and programmable data plane concepts. Relying solely on in-data plane registers, our proposed system P4Green consolidates traffic in the least number of network switches and shifts workloads to the servers with the available renewable energy. Unlike existing SDN-based solutions, P4Green's operation does not depend on a centralized controller, making the system scalable and failure-resistant. Our proof-of-concept simulations show that traffic consolidation can reduce data centers' aggregation switch usage by 36% compared to standard data center load balancing techniques, while workload control can boost renewable energy consumption for 46% of the daily traffic.
Garegin Grigoryan, Minseok Kwon
2023-06-24T14:28:47Z
http://arxiv.org/abs/2306.13983v1
# Towards Greener Data Centers via Programmable Data Plane

###### Abstract

The energy demands of data centers are increasing and are expected to grow exponentially. Reducing the energy consumption of data centers decreases operational expenses, as well as their carbon footprint. We design techniques to reduce data center power consumption by leveraging Software-Defined Networking (SDN) and programmable data plane concepts. Relying solely on in-data plane registers, our proposed system P4Green consolidates traffic in the least number of network switches and shifts workloads to the servers with the available renewable energy. Unlike existing SDN-based solutions, P4Green's operation does not depend on a centralized controller, making the system scalable and failure-resistant. Our proof-of-concept simulations show that traffic consolidation can reduce data centers' aggregation switch usage by 36% compared to standard data center load balancing techniques, while workload control can boost renewable energy consumption for 46% of the daily traffic.

## I Introduction

With the rapid growth of cloud-based data aggregation and computation, the data centers hosting clouds have become major consumers of electrical power and produce a significant carbon footprint [9, 26]. A Data Center Network (DCN) that includes servers and network infrastructure consumes about 40-50% of the total energy consumed by a data center overall, and that share is likely to grow [5, 25]. As the efficiency of data centers' non-IT components improves [12, 22], optimizing the energy consumption of computing and networking components becomes critical. By 2030 alone, data center user behavior is expected to increase the energy consumption of data centers by 20% if the current technological development trends do not change. Moreover, the likely end of Moore's law may cause data centers' energy consumption to surge by up to 134% [19].

The energy consumption of network components is not proportional to traffic [10]; hence, even switches that transmit little traffic consume considerable energy resources. To reduce the energy consumption of network infrastructure, state-of-the-art approaches propose carefully choosing the architecture for a data center (e.g., a three-tier vs. a fat-tree data center network) to maximize the utilization of the available resources in a data center. To reduce the energy consumption of servers, workload scheduling systems [13], virtualization techniques [16], and server hardware optimizations [18] were proposed. Using renewable energy can reduce the carbon footprint of data center servers. One challenge is the volatility of renewable energy and the high costs of its storage [23]. To overcome this, workload migration techniques were proposed [26, 30]. Often these approaches require expensive hardware modifications; meanwhile, re-organizing the data center and consolidating resources may lead to a lack of redundancy and, therefore, decreased quality of service during the peak hours of data center operation.

In this work, we present _P4Green_, a dynamically adaptive system aimed at reducing the energy consumption of the server and network components of a data center. Instead of relying on dedicated hardware, we leverage the technologies of switch programmability. Offloading green traffic engineering to programmable switches has several benefits.
First, such switches can be programmed to arbitrarily process each data packet at the line rate; second, they can collect and store information reflecting the state of the data center network and its servers, including their renewable energy capacities. In addition, programmable switches are faster and more power-efficient than switches with fixed-function ASICs [3, 27]. Specifically, we leverage Software-Defined Networking (SDN) via the P4 programming language for the data plane and the P4Runtime API for the control plane [6, 28].

The goal of P4Green is to consolidate traffic in the least number of switches based on the traffic volume and to schedule workloads on servers with available renewable energy. P4Green switches evaluate the data center traffic over pre-defined time epochs and, based on the measured volume, adjust the packet forwarding algorithms to include or exclude additional network switches. In addition, they collect information about the availability of renewable energy at the data center servers, make workload allocation decisions, and forward the traffic to the most appropriate server. Unlike existing SDN-based traffic engineering solutions, P4Green does not rely on a controller and uses only its in-data plane registers for making forwarding decisions. P4Green operates in the data plane at the line rate, improving the scalability of the system and making it robust to control plane failures. To summarize our contributions, we:

* Designed P4Green, a system for reducing the power consumption of network devices in a data center and for load balancing workloads towards servers with available green resources;
* Made P4Green's traffic engineering and forwarding operate at the line rate, fully in the data plane; the control plane is used only at the initialization stage;
* Implemented and tested the prototype of P4Green using Mininet and the bmv2 switch emulator;
* Showed a 36% reduction of data centers' aggregation switch usage compared to traditional ECMP load balancing, as well as a boost of renewable energy consumption for 46% of the daily traffic in a distributed data center scenario.

## II Design

P4Green leverages the properties of the P4 switch architecture that allow limited programmability in the forwarding engine and full programmability at the control plane. The goals of P4Green are:

* **Traffic consolidation during light load.** P4Green consolidates traffic by dynamically changing the number of active aggregation switches based on traffic volume, with no control plane involvement.
* **Green energy-driven workload control.** Server workload is determined by the resources available at the server, such as renewable energy. Switches obtain resource information from each server, choose appropriate servers for target workloads, and forward workload traffic to the chosen servers.

Overall, P4Green helps reduce the energy consumption of a data center network by minimizing the number of active switches; in addition, P4Green boosts renewable energy usage by sending workloads toward servers that report the availability of such energy. In this section, we describe how we implement the functionality of P4Green while overcoming challenges such as the data plane computational limitations, the dependency on the control plane, and the TCP session affinity requirements.

### _Overview_

The architecture of P4Green is shown in Figure 1 with its workflow. P4Green is implemented with a single P4 program for the data plane and a Python program for the SDN controller that uses the P4Runtime API to set up the switches.
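As an illustration, the controller-side initialization reduces to writing a handful of register values. The sketch below is hypothetical: `sw.write_register` stands in for the corresponding P4Runtime register write, and the concrete values (taken from the evaluation in Section III) are examples only.

```python
# Hypothetical controller-side setup; sw.write_register() is a stand-in for
# the corresponding P4Runtime write, and all concrete values are illustrative.
SWITCH_TYPE = {"core": 0, "aggregation": 1, "access": 2}

def initialize_switch(sw, kind, epoch_length_us, traffic_thresholds):
    sw.write_register("switch_type", 0, SWITCH_TYPE[kind])
    sw.write_register("epoch_length", 0, epoch_length_us)
    for i, threshold in enumerate(traffic_thresholds):   # bytes per epoch
        sw.write_register("traffic_thresholds", i, threshold)

# e.g., an access switch with 1-second epochs and thresholds of 10KB and 20KB:
# initialize_switch(sw, "access", 1_000_000, [10_000, 20_000])
```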
When a packet arrives at the switch, it is first parsed to extract information needed for packet forwarding and future analysis. In addition to the standard fields such as the Ethernet and IP headers, the parser extracts the TCP header as well as the TCP timestamp option. The TCP header is used for load balancing across different aggregation switches via 5-tuple hashing. The TCP timestamp option enables TCP session affinity when distributing traffic to different servers. After parsing, the packet is passed to the ingress pipeline, in which an output port is assigned based on the following factors:

* Switch type: _core_, _aggregation_, or _access_;
* Packet type: (a) _Aggregation_in_, packets that move from non-aggregation switches towards aggregation switches; (b) _Server_in_, packets that move into servers; (c) _Server_out_, packets that move out of servers.

As shown in Figure 1, the ingress pipeline consists of the _Traffic Consolidation_ and _Workload Control_ modules together with the _Longest Prefix Matching_ (LPM) and _Host_info_ match-action tables. Next, we describe these components in detail.

### _Traffic Consolidation_

A typical data center network has three layers of switches: _access switches_ that connect servers to the network, _core switches_ that forward heavy inter-traffic, and _aggregation switches_ that forward inter-traffic between access and core switches as well as intra-traffic between access switches (see Figure 2). The connectivity between the access and core layers exists as long as there is at least one active aggregation switch. The _Traffic Consolidation_ module helps determine the number of aggregation switches to be activated dynamically, depending on traffic volume. Specifically, it estimates the traffic arriving at core and access switches per epoch time and enables (or disables) aggregation switches for forwarding traffic if the traffic volume grows (or drops) significantly. This module runs only for ingress packets at core switches (_Aggregation_in_ packets), and for both ingress and server packets at access switches (_Aggregation_in_ and _Server_out_ packets). The workflow of the module is defined in more detail in Figure 3 and in the following subsections.

#### II-B1 Initialization

First, three registers are initialized:

* _aggr_switches_: the number of enabled switches (initially set to 1).
* _epoch_start_: the starting timestamp for measuring traffic within a time interval.
* _traffic_: a counter to estimate traffic volume.

These registers are set to the same values for all switches. This can be done solely at the switches themselves, with no controller involved. There are also other registers that may hold different values based on the network topology and the preferences of network operators; these are initialized by the controller:

* _switch_type_: Core or access switches execute switch-type-specific commands, while aggregation switches simply forward packets to the destination IP address (see Figure 1).
* _epoch_length_: The length of an epoch, used for estimating traffic volume.
* _traffic_thresholds_: One or more thresholds used to enable or disable aggregation switches by recalculating the _aggr_switches_ register.

Note that these registers can be modified during the operation of the switch using either the control or the data plane. As we show further in this section, some of these registers are modified automatically by the data plane of the switch.

#### II-B2 Operation

Core and access switches use ECMP for forwarding packets to aggregation switches based on 5-tuple hashing. These switches use _aggr_switches_ as the width of the ECMP hashing, i.e., the number of possible distinct outputs, which controls the number of aggregation switches that forward traffic. The value of _aggr_switches_ changes based on the traffic volume that core and access switches process. The counter _traffic_ is updated on each packet arrival and reset to zero at the end of each epoch, defined by _epoch_length_. All updates to _traffic_ are done directly in the switch. Specifically, for each incoming packet, the P4Green program reads the ingress timestamp in the metadata. If the timestamp exceeds _epoch_start_ by _epoch_length_, then _traffic_ contains the traffic volume the switch has processed during _epoch_length_. The traffic volume is then used to recalculate _aggr_switches_ by comparing _traffic_ to the _traffic_thresholds_ provided by the control plane. Depending on the measured traffic volume, _aggr_switches_ is either increased or decreased, and then used as the width of the ECMP hashing for assigning output ports. Finally, _traffic_ is reset to zero, _epoch_start_ is reset to the current timestamp, and the traffic load evaluation restarts for a new epoch.

In P4Green, the initialization is the only step that needs input from the control plane. The control plane program uses the P4Runtime API to initialize the switch state variables. The traffic load evaluation and ECMP modification are performed solely in the data plane. In the meantime, P4Green requires a power control module to power off switches that do not forward any traffic. Such a module can control the switches' state by checking the value in the _aggr_switches_ register of the switch. We leave the design of the power control module out of the scope of this paper.
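The per-packet logic just described can be summarized by the following Python sketch, which simulates the behaviour of the P4 data plane program (it is not P4 code; the register names follow the paper, everything else is illustrative):

```python
class TrafficConsolidation:
    """Simulation of the per-packet epoch logic run by core/access switches."""

    def __init__(self, epoch_length, traffic_thresholds):
        self.aggr_switches = 1                # enabled aggregation switches
        self.epoch_start = 0                  # start timestamp of current epoch
        self.traffic = 0                      # bytes seen in current epoch
        self.epoch_length = epoch_length
        self.traffic_thresholds = traffic_thresholds  # ascending byte counts

    def on_packet(self, ingress_timestamp, packet_length, flow_hash):
        if ingress_timestamp - self.epoch_start >= self.epoch_length:
            # end of epoch: recompute the ECMP hashing width from the volume
            self.aggr_switches = 1 + sum(
                self.traffic > t for t in self.traffic_thresholds)
            self.traffic = 0
            self.epoch_start = ingress_timestamp
        self.traffic += packet_length
        return flow_hash % self.aggr_switches    # aggregation switch index
```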
### _Workload Control_

Distributed data centers naturally have heterogeneous servers in terms of geographical locations, renewable energy, and volatile resources. In iLoad [15], servers determine their workload, e.g., machine learning tasks or web traffic, based on locations and resources. A programmable switch receives those workload requests directly from the servers and makes adjustments to packet forwarding, but with little input from the controller. The controller in iLoad regularly updates the forwarding table at the switch, using the servers' workload request information stored in the data plane. P4Green adopts a similar approach, but enhances it by completely eliminating the control plane from switch operation. We also use the TCP timestamp to provide TCP session affinity, leveraging the approach presented in [4]. The algorithm is discussed below, with its workflow depicted in Figure 4.

Fig. 1: P4Green architecture and workflow

Fig. 2: A simplified data center network

Fig. 3: Traffic Consolidation workflow

#### II-C1 Initialization

At startup, registers and tables are initialized:

* The _servers_data_ register block for storing the per-server resource availability index, initially set to 0 for every server. _servers_data_ is used to select the most pertinent server ID for the target workload.
* The _virtual_ip_ address, set by the control plane for processing client packets.

In addition, the switch has the _Host_info_ table with actions to be performed when forwarding packets based on the server's ID (i.e., rewriting the MAC headers and the destination IP address).
#### II-C2 Operation

First, P4Green classifies the incoming packets into the _Server_out_ (Case A) and _Server_in_ (Case B) categories. For each category, further classification is performed by P4Green.

_Info-packets._ A server sends the switch an info-packet that contains a resource availability index (e.g., renewable energy, CPU, or memory availability). An info-packet is an IP packet with the first three octets of the destination IP address taken from the subnet to which the switch belongs and the last octet used to encode the payload. The protocol field has a specific value (0x8F in our example) that allows the switch to identify info-packets. The handling of info-packets is illustrated in Figure 4, Case A1. The P4 program parses info-packets to obtain the sender ID (ingress port plus source IP address) and the resource availability index. The index is stored in _servers_data_ and used to choose the most relevant server for client workload requests.

_Client requests._ Each access switch is assigned a Virtual IP address (VIP) at initialization time, and clients send _Server_in_ packets using VIPs. Assuming the data center uses the TCP protocol, upon receiving a _Server_in_ packet, the switch extracts the TCP SYN flag to identify whether it is a new request (_Case B1_ in Figure 4, when SYN=1). The switch then selects the most relevant server to handle this client request. If no server has reported a resource availability index greater than zero, the switch simply uses round-robin to select the server. If there are such servers, the switch selects the next server with available resources. The packet is then augmented with the server ID and passed to the _Host_info_ match-action table, which in turn replaces the VIP with the destination IP address of the selected server.

If SYN=0 (_Case B2_ in Figure 4), the server that previously served the corresponding TCP session needs to be selected to satisfy the TCP session affinity requirement. Finding such a server is, however, challenging, since the original destination IP address is the VIP, not the server IP address. In the meantime, storing a per-session table of connections in the switches is not scalable. To get around this problem, we leverage the TCP option fields to encode the server ID for each flow, as proposed in [4]. This uses the TCP property that hosts echo the TCP timestamp (in _ms_) to each other while in session. A TCP _Server_out_ packet encodes the server ID into the last three bits of the timestamp (see Case A2 for _Server_out_ packets in Figure 4). Now, the server ID can be obtained from the last three bits of the TCP timestamp echo field for _Server_in_ packets of existing client requests (when SYN=0). The packet is then matched against the _Host_info_ table to rewrite its headers. Note that we use only the three least significant bits of the TCP timestamp here, while the rest is not affected. Our solution does not require modifications to the server or client TCP/IP stack, apart from enabled TCP timestamp support. An alternative to leveraging the TCP timestamp is using other reliable protocols such as QUIC [20], with the _Connection ID_ field in its header.
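The timestamp trick amounts to masking the three least significant bits of the TCP timestamp value; a minimal sketch of both directions is shown below (function names are illustrative, and a 3-bit ID limits this particular encoding to eight servers, matching our testbed):

```python
def tag_server_out(ts_val, server_id):
    """Server_out: embed a 3-bit server ID into the TCP timestamp value."""
    return (ts_val & ~0b111) | (server_id & 0b111)

def server_for_session(ts_ecr):
    """Server_in with SYN=0: the echoed timestamp carries the server ID."""
    return ts_ecr & 0b111
```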
## III Evaluation

We implement P4Green in the Mininet emulator on top of the bmv2 [2] software switch and test its traffic consolidation and workload control aspects as a proof of concept. For testing, we use the network topology illustrated in Figure 2, which consists of eight servers with unique IP addresses, four access switches with virtual IP addresses, three aggregation switches, and one core switch connected to the external network. First, traffic is sent from outside to the access switches, and we measure how traffic volume affects packet forwarding. Moreover, we test how traffic consolidation helps minimize network resource usage. We use iperf3 [1] for traffic generation. After that, Hosts 1 and 2 send info-packets to access switch 1 with their renewable energy availability index. An external host then sends workload requests to VIP 1, and we observe which servers respond to those requests. Note that the Mininet emulator used in our experiments is not designed for processing large volumes of traffic. Hence, the traffic rates in our evaluation emulate the patterns of daily traffic and do not represent realistic data center traffic volumes.

Fig. 4: Workload Control workflow

Fig. 5: Traffic consolidation in P4Green

### _Traffic Consolidation_

At access and core switches, we set the thresholds for enabling two and three aggregation switches to 10KB and 20KB, respectively. Traffic is measured every _epoch_length_, which is set to 1 second. Traffic is generated with iperf3 to simulate a typical daily load with the peak in the afternoon. The results are displayed in Figure 5. With low traffic (approximately less than 20KB on Aggregation Switch 1, or \(\approx\)37% of the peak load), only one aggregation switch receives traffic. As traffic grows, the second aggregation switch is activated in ECMP and helps decrease the load on the first switch. Once the traffic exceeds a threshold of 32KB (\(\approx\)60% of the peak load), the traffic flows through all aggregation switches. Overall, the operation hours of the aggregation switches are reduced by more than 36%. In a data center environment with up to 32 aggregation switches, each of them consuming about 400 W·h (following the estimations made in [24]), that can save up to 4.6 kW·h, not considering the reductions in the energy needs of cooling facilities due to reduced heat dissipation. The exact amount of energy savings depends on the topology of the data center network and its traffic behaviour.

### _Workload Control_

To test the workload control capabilities, we measure how the traffic that servers receive changes as each server informs the closest access switch of its available renewable energy. In the experiment, we simulate the case where Server 1 has more renewable energy in the first half of the day, while Server 2 has more in the second half. We assume that Server 1 is located in a time zone six hours ahead of the time zone of Server 2. For the results, Figure 6 shows the traffic processed on Servers 1 and 2 during the day in the time zone of Server 1. The reported energy values follow the regular solar energy pattern on a sunny day (see Figure 7). As we simulate a distributed data center in different time zones, we assume the same overall traffic load throughout the day, with all the hours presented in Server 1's time zone.

Figure 6 shows that when both servers report zero renewable energy between 00h:00m and 05h:00m, the standard round-robin distributes the client workload evenly between the two servers. When Server 1 reports energy and Server 2 does not, the modified round-robin is used to send more workload to Server 1. The figure shows that Server 1 processes a majority of the workload between 05h:00m and 13h:00m (\(\approx\)69% of the total load). After the energy peak hours for Server 1, before 22h:00m, \(\approx\)68% of the load is forwarded to Server 2, which has more renewable energy during that time interval.
Finally, at the end of the day (after 22h:00m), the load is distributed evenly, since both Server 1 and Server 2 are past their peak energy hours. In summary, P4Green distributes 46% of the total traffic towards the servers that report more renewable energy. These results demonstrate that P4Green is successful in load balancing and traffic engineering with no help from the control plane during the operational stage. Our implementation is publicly available1.

Footnote 1: Link to the repository: [https://github.com/gareging/p4green](https://github.com/gareging/p4green)

Fig. 6: Green load balancing in P4Green (in Server 1’s time zone)

Fig. 7: Reported renewable energy availability index by Server 1 and Server 2 (in Server 1’s time zone)

## IV Related work

ElasticTree [17] aggregates flows in the least number of networking devices, thus reducing energy consumption. The ElasticTree controller periodically polls information from the data plane and updates the forwarding tables of the data plane. Li et al. in [21] propose a different approach, allocating networking resources to flows exclusively, rather than letting them share the same links and switches. Such a design requires the control plane to analyze the packets of unseen flows, which creates a risk of a control plane bottleneck. GRASP [14] is designed with an SDN controller that subscribes to energy reports from the distributed data center servers and then reactively installs forwarding rules for new flows into the data plane. As with [21], such an approach may not be scalable for a data center with large volumes of traffic. Chuang et al. in [7] take into consideration the task execution length and use an OpenFlow-based controller to monitor and reschedule the data center jobs. iLoad [15] is an in-network green load balancing solution; however, it still requires the control plane to analyze and update the data plane registers during switch operations. Xu et al. in [29] design a green scheduler for data center flows with deadlines. Works such as [11, 8] aim at predicting green energy generation at data center servers and proactively scheduling workloads to the most active data centers. The authors in [13] propose a green energy-aware algorithm for scheduling tasks in a data center.

## V Conclusion

We leverage the emerging technologies of programmable switches to design a data-plane system called P4Green that can allocate and de-allocate IT resources based on traffic load and the renewable energy available at servers. P4Green eliminates the control plane from its operation and is shielded from control plane delays and failures. Our proof-of-concept implementation shows that it significantly reduces network switch usage and effectively allocates workloads to servers that report the availability of volatile renewable energy resources.
2305.14564
PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents
Strategies such as chain-of-thought prompting improve the performance of large language models (LLMs) on complex reasoning tasks by decomposing input examples into intermediate steps. However, it remains unclear how to apply such methods to reason over long input documents, in which both the decomposition and the output of each intermediate step are non-trivial to obtain. In this work, we propose PEARL, a prompting framework to improve reasoning over long documents, which consists of three stages: action mining, plan formulation, and plan execution. More specifically, given a question about a long document, PEARL decomposes the question into a sequence of actions (e.g., SUMMARIZE, FIND_EVENT, FIND_RELATION) and then executes them over the document to obtain the answer. Each stage of PEARL is implemented via zero-shot or few-shot prompting of LLMs (in our work, GPT-4) with minimal human input. We evaluate PEARL on a challenging subset of the QuALITY dataset, which contains questions that require complex reasoning over long narrative texts. PEARL outperforms zero-shot and chain-of-thought prompting on this dataset, and ablation experiments show that each stage of PEARL is critical to its performance. Overall, PEARL is a first step towards leveraging LLMs to reason over long documents.
Simeng Sun, Yang Liu, Shuohang Wang, Chenguang Zhu, Mohit Iyyer
2023-05-23T23:06:04Z
http://arxiv.org/abs/2305.14564v1
# Pearl: Prompting Large Language Models to Plan and Execute Actions over Long Documents

###### Abstract

Strategies such as chain-of-thought prompting improve the performance of large language models (LLMs) on complex reasoning tasks by decomposing input examples into intermediate steps. However, it remains unclear how to apply such methods to reason over _long input documents_, in which both the decomposition and the output of each intermediate step are non-trivial to obtain. In this work, we propose Pearl, a prompting framework to improve reasoning over long documents, which consists of three stages: action mining, plan formulation, and plan execution. More specifically, given a question about a long document, Pearl decomposes the question into a sequence of actions (e.g., SUMMARIZE, FIND_EVENT, FIND_RELATION) and then executes them over the document to obtain the answer. Each stage of Pearl is implemented via zero-shot or few-shot prompting of LLMs (in our work, GPT-4) with minimal human input. We evaluate Pearl on a challenging subset of the QuALITY dataset, which contains questions that require complex reasoning over long narrative texts. Pearl outperforms zero-shot and chain-of-thought prompting on this dataset, and ablation experiments show that each stage of Pearl is critical to its performance. Overall, Pearl is a first step towards leveraging LLMs to reason over long documents.1

Footnote 1: [https://github.com/SimengSun/pearl](https://github.com/SimengSun/pearl)

## 1 Introduction

Performing complex reasoning over long input documents often requires forming high-level abstractions of the text (e.g., plots and themes in a narrative) and then conducting a variety of inferences on top of those abstractions (Graesser et al., 1994). Consider the following question about the story “Breakaway” from the QuALITY dataset (Pang et al., 2022):

To answer this question, we need to gather, evaluate, and synthesize information from across the story, which motivates decomposing the question into a _plan of actions_, as in:

1. Identify all participants in initial conversation.
2. Summarize the initial conversation.
3. Summarize events and themes of final scene.
4. Summarize roles of conversation participants in final scene.
5. Identify and rank connections between conversation and final scene.

Each action in the above plan varies in complexity, from simple lookup-style actions (Step 1) to more challenging query-focused summarization (Steps 2-4) and conceptual linking (Step 5) actions that require deep narrative understanding. Given the rapidly advancing capabilities of large language models (LLMs), how can we use them to answer questions like these?

Figure 1: High-level overview of our framework Pearl. Each stage in Pearl is achieved via zero-shot or few-shot prompting of an LLM (in our work, GPT-4). We also provide example outputs from each stage.

While we could directly prompt LLMs to generate the answer, prior work on simpler reasoning-based tasks shows that this method is inferior to Chain-of-Thought prompting (Wei et al., 2022, CoT), which encourages the LLM to provide step-by-step explanations and intermediate outputs before producing the answer. Unfortunately, CoT is not well-suited for tasks involving complex reasoning over long input documents, as both the decomposition of the original question and the intermediate outputs of each step are non-trivial to obtain, as in the above example.
Given the difficulty of obtaining plans and intermediate explanations for long documents, one potential solution is to delegate this task to smaller _executable_ modules instead of forcing the LLM to come up with all of them at once. In this work, we introduce Pearl, a framework that combines **P**lanning and **E**xecutable **A**ctions for **R**easoning over **L**ong documents. Each stage of Pearl -- action mining, plan decomposition, and plan execution -- is implemented by applying zero-shot or few-shot prompting to an LLM. The stages (Figure 1) can concisely be described as follows:

1. **Action mining:** An LLM is prompted to come up with simple actions that can help solve questions from an input training dataset. Unlike the predefined “toolboxes” in methods such as Toolformer (Schick et al., 2023) or ReAct (Yao et al., 2023), the action set in Pearl is also generated by an LLM.
2. **Plan generation:** Given an input test question, an LLM generates an executable plan consisting of a series of actions selected from the action set produced in the previous stage. The plan is formatted as a simple program in which the execution result of one action can serve as an argument to future actions, which enables complex composition.
3. **Plan execution:** The LLM executes the plan action-by-action via a prompt template that includes an action and the long-form input document. Note that this is the only stage that includes the document, as the other stages operate over just questions.

We demonstrate Pearl's effectiveness on a challenging subset of QuALITY (Pang et al., 2022), a reading comprehension dataset that contains questions about long-form articles. While QuALITY is originally a multiple-choice dataset, we reformulate it into a generation task: given a question and an article, an LLM is asked to generate a free-form answer. As a proxy for measuring answer correctness, we adopt a similar approach to Wang et al. (2020) by asking the LLM to map its generated answer to one of the multiple-choice options, which allows us to compute its accuracy. Prompting LLMs with Pearl yields more accurate and comprehensive answers than those generated by directly prompting the LLM to answer the question, particularly for questions that require reasoning over the full long document. This result is particularly impressive given the potential for error propagation in the Pearl framework: as each stage is implemented via an LLM, errors in plan formulation or execution can significantly affect the output answer. To further verify the integrity of the plans, we perform a human evaluation by asking annotators to provide feedback and ratings; annotators generally find the plans to be reasonable, although a small percentage contain unnecessary actions or omit critical actions. Overall, we hope Pearl further opens the door towards using LLMs for complex reasoning over long documents.

## 2 Related work

Our work builds on recent LLM prompting research and also connects to work on reasoning over long documents. Before describing Pearl, we first survey related papers to contextualize our work within this fast-moving field.

Prompting methods: Recently, the capabilities of large language models (Brown et al., 2020; Zhang et al., 2022; Touvron et al., 2023) have significantly increased as a result of learning from instructions or feedback (Stiennon et al., 2022; Ouyang et al., 2022; Chung et al., 2022) to better align their outputs to human preferences.
When provided with well-crafted prompts, such as chain-of-thought (Wei et al., 2022) explanations, these state-of-the-art models exhibit impressive reasoning abilities. A plethora of new prompting techniques (Table 1) has been introduced lately to unlock more capabilities of LLMs via leveraging external tools (Chen et al., 2022; Schick et al., 2023; Lu et al., 2023), problem decomposition (Press et al., 2022; Dua et al., 2022; Khot et al., 2023; Yao et al., 2023), self-reflection and self-refinement (Huang et al., 2022; Shinn et al., 2023; Madaan et al., 2023), planning (Yao et al., 2023; Wang et al., 2023; Long, 2023), and other techniques (Yoran et al., 2023; Wang et al., 2023; Zhou et al., 2023).

Reasoning over long documents: Large language models have showcased remarkable reasoning capabilities (Huang and Chang, 2022), including mathematical reasoning (Cobbe et al., 2021), commonsense reasoning (Talmor et al., 2019), and symbolic reasoning (Nye et al., 2021). Most of these tasks do not involve long context inputs, and thus they are able to benefit from few-shot in-context CoT prompting. In this paper, we primarily focus on tasks that contain long input contexts (Kocisky et al., 2018; Dasigi et al., 2021; Shaham et al., 2022; Sun et al., 2022), specifically generative question answering based on long input articles. To address the absence of reliable evaluation for long-form QA (Krishna et al., 2021), Stelmakh et al. (2022) propose automatic metrics for evaluating the correctness of the answer, whereas in this work, we use LLM-based evaluation by taking advantage of the multiple-choice setup of the existing QA dataset. Prior to the shift to prompting-based methods, approaches including contrastive learning-based sequence-level objectives (Caciularu et al., 2022), iterative hierarchical attention (Sun et al., 2021), and joint modeling of machine reading and answer generation (Su et al., 2022) have been employed to enhance long-context question answering.

## 3 Pearl: Planning and Executing Actions for Reasoning over Long Documents

We are interested in using LLMs to solve tasks that require complex reasoning over long documents.2 In this paper, we focus on the task of answering questions about long-form narratives. Most prompting strategies that aim to improve the reasoning abilities of LLMs (e.g., CoT) are not applicable to this task due to the length and complexity of the input document. In this section, we specify our Pearl framework, which consists of three LLM-implemented stages that mine actions from a training corpus, formulate plans to answer held-out questions, and then execute the resulting plans to obtain answers.

Footnote 2: As there is no consensus on what is “long”, we consider it to mean documents of several thousands of tokens in length.

### Action mining

In many prior prompting techniques such as ReAct and Toolformer, the LLM is able to query external APIs (e.g., Wikipedia search or a calculator) to solve a given task. Unlike these works, which assume a predefined action space, Pearl mines actions directly from data of a similar distribution (in our case, training set questions of QuALITY).

| Prompting Methods | Explicit plan | Iterative prompting | Does not rely on external tools | Long documents |
| --- | --- | --- | --- | --- |
| Chain-of-Thought (Wei et al., 2022) | ✗ | ✗ | ✓ | ✗ |
| Program-of-Thought (Chen et al., 2022) | ✗ | ✗ | ✗ | ✗ |
| Self-Ask (Press et al., 2022) | ✗ | ✓ | ✗ | ✗ |
| Toolformer (Schick et al., 2023) | ✗ | ✗ | ✗ | ✗ |
| ReAct (Yao et al., 2023b) | ✗ | ✓ | ✗ | ✗ |
| Plan-and-Solve (Wang et al., 2023a) | ✓ | ✗ | ✓ | ✗ |
| Pearl (_this work_) | ✓ | ✓ | ✓ | ✓ |

Table 1: Comparison of Pearl to other recently-proposed prompting techniques. Pearl is the only one designed for and evaluated on tasks that require complex reasoning over long documents.

Figure 2: Prompt sketch for action mining. It comprises a human-written seed action set and instructions, as well as the question from which the LLM will extract action(s). Finally, we also present an example mined action. More details can be found in Appendix D.

As shown by prior research (Graesser et al., 1994), answering complex queries over long documents requires specific reasoning techniques; as further evidence, Xu et al. (2022) demonstrate the presence of various discourse structures in good answers to long-form questions on Reddit. Learning dataset-specific actions enables Pearl to scale to different domains and tasks, as user queries may differ considerably in terms of complexity. Moreover, mining actions from the training set can reduce the human effort involved in designing new actions.

In this work, we define an “action” as a basic unit for long document reasoning. To obtain these actions, we first manually create a small set of _seed_ actions to use as demonstrations.3 Then, as shown in Figure 2, given an example question, we feed it along with the seed actions and instructions to the LLM to generate more task-specific actions. Each ACTION is formatted as a programmatic function with input arguments and is followed by a _model-generated function definition in natural language_. Below is an example action generated by the LLM:

Footnote 3: See the prompt for QuALITY action mining in Appendix D and the few-shot examples guiding the plan generation.

After a full pass over the example questions in the training data, we obtain a final set of actions and their corresponding definitions, which are then incorporated into the prompt of the next stage.

### Plan generation

A plan serves as the guiding framework or outline for answering complex questions that may involve multi-step reasoning and/or global understanding of long documents. Given a question, as shown in Figure 3, we prompt an LLM to generate a plan based on the previously-mined action set. Each step of the plan is formatted as output = ACTION(arg1, arg2, ...), where the output variable stores the result of the current ACTION, and the arguments can be (1) the input document, (2) a string, or (3) an output variable from previous steps of the plan. When generating the plan, we do not show the LLM the entire document as input, which provides ample space for incorporating few-shot in-context examples. Similar to the seed actions in the previous stage, we provide a small seed set of plans and allow the model to generate more demonstrations automatically. We provide more details in Section 4 about controlling the quality of model-generated in-context demonstrations.
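To make the plan format concrete, the following is a hypothetical sketch of how such plans could be parsed into executable steps; a parser of this kind also underlies the self-correction stage described below (quoted string arguments containing commas are not handled by this naive split):

```python
import re

STEP_RE = re.compile(r"^(\w+)\s*=\s*([A-Z_]+)\((.*)\)\s*$")

def parse_plan(plan_text):
    """Parse 'output = ACTION(arg1, arg2, ...)' lines into
    (output, action, args) triples, raising an informative error on
    malformed steps so that it can be fed back to the LLM."""
    steps = []
    for line in filter(None, (l.strip() for l in plan_text.splitlines())):
        match = STEP_RE.match(line)
        if match is None:
            raise ValueError(f"malformed plan step: {line!r}")
        output, action, raw_args = match.groups()
        steps.append((output, action,
                      [a.strip() for a in raw_args.split(",") if a.strip()]))
    return steps
```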
This prompt contains multiple _[placeholders]_ that will be filled with output from previous stages. Figure 3: Prompt sketch for plan generation. In the prompt, we include the list of actions mined from previous stage in-context, natural language detailing the task, and few-shot examples guiding the plan generation. and definition, (2) current action with specific input argument (e.g., aspirin_event), (3) assignment of argument name with output from previous stage (e.g., aspirin_event = "in the beginning of the story,..."), and (4) a one-sentence instruction for the current step, all of which are generated by LLM. As the long input article is involved during this stage, the prompt is executed in a zero-shot manner. ### Self-correction and self-refinement Since the plans are generated by an LLM, they may be incorrectly formatted or of otherwise low quality. To address this issue, similar to Shinn et al. (2023), we include a self-correction step prior to plan execution and a self-refinement step before incorporating model-generated plans as in-context few-shot examples. We implement a plan parser that returns relevant error messages when the plan does not conform to the defined format. The invalid plan as well as the error message are then passed into the LLM for correcting the plan's grammar. To ensure the quality of model-generated in-context examples, we validate them by executing the plan and evaluating the generated answer with a task-specific scoring function (more details in Section 4.1). If the answer is rejected by the evaluation in the end, we pass the plan to LLM for further self-refinement before being included in the context as few-shot examples. ## 4 Experiments We compare Pearl to baseline methods (zero-shot answering and zero-shot CoT) on a challenging subset of the QuALITY Question-Answering dataset that requires reasoning over long articles of several thousands tokens. In this section, we describe our dataset selection, experimental setup, and model configurations. Dataset selection:We focus on the QuALITY QA dataset Pang et al. (2022), which is a multiple-choice QA task in the SCROLLS benchmark Shaham et al. (2022). However, to better simulate LLMs usage in real-world scenarios, we turn this dataset into a _generative_ task4 in which an LLM does not have access to the choices and must instead generate a long-form answer. Then, we automatically map the generated answer back to one of the choices with an LLM to evaluate the accuracy as shown in Figure 5.5 The accuracy of mapped answers serves as a proxy for assessing the correctness of the provided answer. Footnote 4: We provide the performance of GPT-4 with standard multi-choice setup on the full QuALITY dev set in Appendix A. QuALITY contains a diverse variety of questions, each of which is annotated with the amount of context from the document needed to answer the question. In contrast to questions that can be correctly answered with local context once a piece of information is located, as in Who found Retief and Magnan in the trees? we are more interested in questions that require reasoning over long context, as in: How would you describe the changes in tone throughout the passage? These questions constitute an interesting and difficult subset that, unlike more straightforward information seeking questions, require global understanding and reasoning over the document to provide accurate answers. Therefore, we select a subset of questions rated as requiring long contexts to answer. 
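To make the mining stage concrete, the following is a minimal sketch of how training-set questions can be turned into new actions and how the action set can be compacted. The prompt wording, the action-line format, and the `complete()` LLM wrapper are illustrative assumptions rather than Pearl's exact implementation (the real prompts appear in Appendix D).

```python
# Minimal sketch of the action-mining stage (Section 3.1). Prompts, the
# action-line format, and complete() are assumptions for illustration only.
import re

def complete(prompt: str) -> str:
    """Stub for an LLM call (e.g., GPT-4); replace with a real client."""
    raise NotImplementedError

# Hypothetical seed actions; the paper uses seven human-written ones.
SEED_ACTIONS = [
    'FIND_CHARACTER(CTX, name): Locate passages describing a character.',
    'SUMMARIZE(CTX): Summarize the given context.',
]

ACTION_LINE = re.compile(r"^([A-Z_]+)\(([^)]*)\)\s*:\s*(.+)$")

def mine_actions(train_questions: list[str]) -> dict[str, str]:
    """One pass over training questions, accumulating new actions."""
    actions: dict[str, str] = {}
    for question in train_questions:
        prompt = (
            "You design reusable actions for reasoning over long documents.\n"
            "Known actions:\n" + "\n".join(SEED_ACTIONS + sorted(actions.values()))
            + f"\n\nQuestion: {question}\n"
            "List any new ACTION(args): definition lines needed to answer it."
        )
        for line in complete(prompt).splitlines():
            match = ACTION_LINE.match(line.strip())
            if match and match.group(1) not in actions:
                actions[match.group(1)] = line.strip()
    return actions

def compact(actions: dict[str, str]) -> dict[str, str]:
    """One round of LLM-driven merging of duplicate/over-specific actions."""
    prompt = ("Simplify and abstract over these actions, merging "
              "near-duplicates:\n" + "\n".join(actions.values()))
    merged: dict[str, str] = {}
    for line in complete(prompt).splitlines():
        match = ACTION_LINE.match(line.strip())
        if match:
            merged[match.group(1)] = line.strip()
    return merged
```

In Pearl, the compaction step is repeated until the action set fits comfortably in context (81 actions in the final configuration; see Section 4.1).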
### Plan generation

A plan serves as the guiding framework or outline for answering complex questions that may involve multi-step reasoning and/or global understanding of long documents. Given a question, as shown in Figure 3, we prompt an LLM to generate a plan based on the previously-mined action set. Each step of the plan is formatted as output = ACTION(arg1, arg2,...), where the output variable stores the result of the current ACTION, and the arguments can be (1) the input document, (2) a string, or (3) an output variable from previous steps of the plan. When generating the plan, we do not show the LLM the entire document as input, which provides ample space for incorporating few-shot in-context examples. Similar to the seed actions in the previous stage, we provide a small seed set of plans and allow the model to generate more demonstrations automatically. We provide more details in Section 4 about controlling the quality of model-generated in-context demonstrations.

Figure 3: Prompt sketch for plan generation. In the prompt, we include the list of actions mined from the previous stage in-context, natural language detailing the task, and few-shot examples guiding the plan generation.

### Plan execution

In the previous stage, the LLM generates a plan that serves as a blueprint for producing a response. To execute each step in the plan, we prompt the LLM with a template filled with output from previous stages. Concretely, as shown in Figure 4, to execute the action FIND_BEHAVIOR_REASON, the model fills in the prompt template with (1) the planned action and definition, (2) the current action with a specific input argument (e.g., aspirin_event), (3) the assignment of the argument name with output from the previous stage (e.g., aspirin_event = "in the beginning of the story,..."), and (4) a one-sentence instruction for the current step, all of which are generated by the LLM. As the long input article is involved during this stage, the prompt is executed in a zero-shot manner.

Figure 4: Prompt sketch for plan execution. This prompt contains multiple _[placeholders]_ that will be filled with output from previous stages.

### Self-correction and self-refinement

Since the plans are generated by an LLM, they may be incorrectly formatted or of otherwise low quality. To address this issue, similar to Shinn et al. (2023), we include a self-correction step prior to plan execution and a self-refinement step before incorporating model-generated plans as in-context few-shot examples. We implement a plan parser that returns relevant error messages when the plan does not conform to the defined format. The invalid plan as well as the error message are then passed into the LLM to correct the plan's grammar. To ensure the quality of model-generated in-context examples, we validate them by executing the plan and evaluating the generated answer with a task-specific scoring function (more details in Section 4.1). If the answer is rejected by the evaluation in the end, we pass the plan to the LLM for further self-refinement before it is included in the context as a few-shot example.
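The following is a minimal sketch of the plan format, a parser that returns error messages, and the self-correction loop of Sections 3.2-3.4. The exact error messages, the `complete()` stub, and the variable name `CTX` for the input document are assumptions; Pearl's actual behavior on exhausted retries is to fall back to zero-shot answering (Section 4.1).

```python
# Sketch of parsing `output = ACTION(arg1, ...)` plan steps and re-prompting
# the LLM with parser errors (self-correction). Illustrative, not Pearl's code.
import re
from dataclasses import dataclass

def complete(prompt: str) -> str:
    """Stub for an LLM call; replace with a real client."""
    raise NotImplementedError

STEP_RE = re.compile(r"^(\w+)\s*=\s*([A-Z_]+)\((.*)\)$")

@dataclass
class Step:
    output: str      # variable storing this step's result
    action: str      # e.g. FIND_CHARACTER
    args: list[str]  # document, string literals, or earlier output variables

def parse_plan(plan: str, known_actions: set[str]) -> list[Step]:
    """Parse plan lines; raise with a relevant message on format errors."""
    steps, defined = [], {"CTX"}  # CTX: the input document (assumed name)
    for i, line in enumerate(filter(None, map(str.strip, plan.splitlines())), 1):
        m = STEP_RE.match(line)
        if not m:
            raise ValueError(f"step {i}: not of the form output = ACTION(...)")
        out, action, raw = m.groups()
        if action not in known_actions:
            raise ValueError(f"step {i}: unknown action {action}")
        # Naive comma split: assumes no commas inside string literals.
        args = [a.strip() for a in raw.split(",") if a.strip()]
        for a in args:  # non-literals must be CTX or an earlier output variable
            if not a.startswith(('"', "'")) and a not in defined:
                raise ValueError(f"step {i}: undefined variable {a}")
        defined.add(out)
        steps.append(Step(out, action, args))
    return steps

def self_correct(plan: str, known_actions: set[str], retries: int = 3) -> list[Step]:
    """Re-prompt the LLM with the parser's error message until the plan parses."""
    for _ in range(retries):
        try:
            return parse_plan(plan, known_actions)
        except ValueError as err:
            plan = complete(f"Fix this plan.\nError: {err}\nPlan:\n{plan}")
    # Pearl instead reverts to the zero-shot baseline here (Section 4.1).
    raise RuntimeError("plan failed to parse within the retry limit")
```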
## 4 Experiments

We compare Pearl to baseline methods (zero-shot answering and zero-shot CoT) on a challenging subset of the QuALITY question-answering dataset that requires reasoning over long articles of several thousand tokens. In this section, we describe our dataset selection, experimental setup, and model configurations.

**Dataset selection:** We focus on the QuALITY QA dataset (Pang et al., 2022), which is a multiple-choice QA task in the SCROLLS benchmark (Shaham et al., 2022). However, to better simulate LLM usage in real-world scenarios, we turn this dataset into a _generative_ task4 in which an LLM does not have access to the choices and must instead generate a long-form answer. Then, we automatically map the generated answer back to one of the choices with an LLM to evaluate the accuracy, as shown in Figure 5. The accuracy of mapped answers serves as a proxy for assessing the correctness of the provided answer.

Footnote 4: We provide the performance of GPT-4 with the standard multiple-choice setup on the full QuALITY dev set in Appendix A.

Figure 5: Generic illustration of our evaluation setup. Given the article and question, we prompt an LLM with Pearl to generate a long-form answer, which is later mapped to one of QuALITY's multiple-choice options by the LLM itself.

QuALITY contains a diverse variety of questions, each of which is annotated with the amount of context from the document needed to answer the question. In contrast to questions that can be correctly answered with local context once a piece of information is located, as in "Who found Retief and Magnan in the trees?", we are more interested in questions that require reasoning over long context, as in: "How would you describe the changes in tone throughout the passage?" These questions constitute an interesting and difficult subset that, unlike more straightforward information-seeking questions, requires global understanding of and reasoning over the document to provide accurate answers. Therefore, we select a subset of questions rated as requiring long contexts to answer. In total, we create a dataset of 1K examples divided into two splits:6 (1) **Long**: 330 examples from the dev set and 368 examples from the training set, and (2) **Short**: 302 examples from the dev set that do not require long contexts to answer; the latter forms a control dataset to make sure our methods do not overly worsen performance on simpler questions.

Footnote 6: Human annotation scores on the required context range from 1 to 4. Questions in the long split are those with average human annotation score \(\geq 3\); questions in the short split have scores \(<3\).

### Experimental setup

As each of the stages in Pearl has critical hyperparameters and implementation details, we describe our specific configurations here.

**Action mining:** We provide an LLM with seven seed actions and two in-context examples to demonstrate the required format for generating new actions. We collect new actions by passing all training set questions into the model, excluding those questions in our evaluation set. Ultimately, we obtain 407 actions and corresponding definitions, several of which are duplicates or overly specific, and which in total exceed GPT-4's maximum context window of 8K tokens. As such, we instruct the LLM to simplify and abstract over existing actions in order to reduce the total number of actions. After repeating this process twice,8 we reduce the number of actions to 81, which forms the final action set for Pearl.

Footnote 8: After one round, the actions were reduced to \(\sim\)140, and after four rounds to \(\sim\)20. We provide ablations on the number of actions in Section 5.

**Self-correction retry limit:** Despite utilizing self-correction to validate the generated plan's syntax, it is still possible that the model fails to generate a plan in the correct format. In such cases, we force the model to revert to the zero-shot baseline approach. Out of 1K examples across the various Pearl variants, only 4 examples failed to parse within the retry limit, which is an acceptably small number of failures.

### Baselines

As existing sophisticated prompting methods require few-shot examples in-context, which is not feasible when long documents are involved, we compare Pearl with simple zero-shot baselines (GPT-4 (OpenAI, 2023) and GPT-3.5 (Ouyang et al., 2022)), where we directly prompt the model to provide a detailed free-form answer. Additionally, we also evaluate zero-shot chain-of-thought prompting for GPT-4 by adding "Let's think step-by-step," to the prompt.
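Before turning to results, here is a minimal sketch of the generative evaluation loop of Figure 5 in code; the mapping prompt and the `complete()` LLM wrapper are assumptions for illustration.

```python
# Sketch of mapping a free-form answer back to a multiple-choice option with
# an LLM, then scoring accuracy. Prompt wording and complete() are assumed.
from string import ascii_uppercase

def complete(prompt: str) -> str:
    """Stub for an LLM call; replace with a real client."""
    raise NotImplementedError

def map_answer_to_option(question: str, answer: str, options: list[str]) -> int:
    """Return the index of the option that best matches the generated answer."""
    letters = ascii_uppercase[: len(options)]
    listed = "\n".join(f"{l}. {o}" for l, o in zip(letters, options))
    reply = complete(
        f"Question: {question}\n"
        f"Generated answer: {answer}\n"
        f"Options:\n{listed}\n"
        "Which option does the generated answer correspond to? Reply with one letter."
    )
    choice = next((c for c in reply.strip().upper() if c in letters), letters[0])
    return letters.index(choice)

def accuracy(examples: list[dict], answer_fn) -> float:
    """examples: dicts with 'article', 'question', 'options', 'gold' (index);
    answer_fn produces a long-form answer from (article, question)."""
    correct = 0
    for ex in examples:
        ans = answer_fn(ex["article"], ex["question"])
        correct += map_answer_to_option(ex["question"], ans, ex["options"]) == ex["gold"]
    return correct / len(examples)
```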
## 5 Main results

We discover that Pearl significantly outperforms competing prompting methods on questions that require reasoning over long contexts, which demonstrates the utility of the planning module. We also observe a small drop in accuracy on questions that require only short contexts, possibly because the plans end up over-complicating what is a simple reasoning process. In this section, we dig deeper into the main results of our experiments, which are presented in Table 2.

| | **QuALITY Long** | **QuALITY Short** | **All** | \(p\)-val |
| --- | --- | --- | --- | --- |
| **Prompting Methods** | | | | |
| GPT-4 zero-shot | 64.3 | **79.1** | 68.8 | - |
| GPT-3.5 zero-shot (text-davinci-003) | 45.5 | 56.3 | 48.8 | 0.000 |
| GPT-4 zero-shot chain-of-thought | 65.9 | 77.2 | 69.3 | 0.766 |
| GPT-4 Pearl | **70.9** | 77.8 | **73.0** | 0.005 |
| **Ablations of GPT-4 Pearl** | | | | |
| w/o plan execution | 67.3 | 77.2 | 70.3 | 0.295 |
| w/o self-refinement of plan demonstrations | 67.0 | 78.8 | 70.6 | 0.245 |

Table 2: We present baseline, Pearl, and ablation results on our generative subset of QuALITY questions. **Long** denotes the split where the questions require reasoning over long contexts to answer accurately. As we only evaluate on a subset, we also provide \(p\)-values to verify statistical significance against the zero-shot GPT-4 baseline.

**Pearl improves accuracy on long-document QA:** Overall, Pearl's accuracy is higher than that of all competing methods, particularly for the QuALITY split annotated by humans as requiring long contexts to answer (**Long**). Furthermore, we observe in Figure 6 that for questions marked by QuALITY workers as requiring the longest possible context, Pearl improves substantially compared to the zero-shot GPT-4 baseline (72.4% vs. 61.9%). Our method's slightly diminished performance on the **Short** split is likely due both to "over-thinking" these simpler questions and to error propagation from plan execution steps, as highlighted in Section 6. Finally, we point out that all methods achieve higher accuracies on the **Short** split compared to the **Long** split, indicating the challenging nature of this set of questions.

Figure 6: Accuracy by the amount of required context to answer,9 as annotated by humans in QuALITY.

Footnote 9: The short, long, and longer splits correspond to average annotation scores on the amount of required context [1, 3], [3, 3.5], and [3.5, 4], respectively.

**Number of actions impacts performance:** In Figure 7, we show that the size of the action set is an important factor in Pearl's performance. With just a single action (i.e., executing a free-form natural language instruction), Pearl's accuracy on the **Long** subset drops to 64%. With too many actions (140 in the plot), its accuracy also degrades, likely because the action space is too fine-grained for the model to properly execute all actions. We note that the optimal number of actions likely differs from task to task, so it is an important hyperparameter to consider when tuning Pearl.

Figure 7: Pearl accuracy given in-context action sets of various sizes. Having too few or too many actions impairs performance.

**Action execution is necessary:** Do we actually need to _execute_ the generated plans to answer these questions? Feeding just the generated plan to the model along with the question (minus any execution results) may still encourage the LLM to follow the plan's reasoning steps and generate a better answer. However, we observe that removing the execution results from the model's input reduces absolute accuracy by around 3 points, which suggests that it is important to perform multiple passes over the document to execute each action before answering the original question. With that said, we do observe a modest improvement over the GPT-4 zero-shot and CoT baselines (\(\sim 2\) absolute points), which suggests that the plan itself is also valuable.

**Self-refinement improves performance:** To reduce human input, the majority of the plan generation demonstrations for Pearl are generated by the LLM with self-refinement. We observe that self-refinement is critical to performance: without it, the overall accuracy drops nearly 3 absolute points (see ablations in Table 2), which highlights the importance of high-quality few-shot examples for plan generation.
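Table 2 reports \(p\)-values against the zero-shot GPT-4 baseline, but the test used is not named in this excerpt. The sketch below therefore assumes a paired bootstrap over per-question correctness, one standard choice when comparing accuracies of two systems on the same evaluation set.

```python
# Paired bootstrap p-value for an accuracy difference (assumed test, not
# necessarily the one used for Table 2).
import random

def paired_bootstrap_pvalue(sys_correct: list[bool], base_correct: list[bool],
                            n_resamples: int = 10_000, seed: int = 0) -> float:
    """Fraction of bootstrap resamples where system minus baseline is <= 0."""
    assert len(sys_correct) == len(base_correct)
    rng = random.Random(seed)
    n, worse = len(sys_correct), 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        diff = sum(sys_correct[i] - base_correct[i] for i in idx)
        worse += diff <= 0
    return worse / n_resamples
```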
## 6 Analysis

In this section, we analyze the behavior of Pearl by diving into the composition of its generated plans, its most preferred actions, and what types of questions it improves most on. We also offer a qualitative error analysis as well as a human evaluation of the correctness of the generated plans.

**Plan statistics:** Plans are roughly 4 actions long on average, with around 3.4 unique actions per plan. The most commonly used actions are shown in Figure 8. Apart from the string concatenation action CONCAT, the most frequently used action is FIND_CHARACTER, which can be convenient for understanding long literary text. Other less often used actions cover both those that can transfer across domains, e.g., COMPARE, and those specific to narrative understanding, e.g., FIND_EMOTION.

Figure 8: Top-10 most frequently used actions by Pearl.

**Accuracy by reasoning types:** Since QuALITY questions require different reasoning strategies to solve, what types of reasoning does Pearl help improve the most? To this end, we further evaluate questions based on the type of reasoning required to answer them.11 Table 4 shows that Pearl significantly improves three reasoning types: _why_ questions (reasoning about a cause), _person_ questions (reasoning about the person(s) involved in an event), and _not/except_ questions (e.g., "which of the following is not a reason for...").

Footnote 11: We prompt GPT-4 with the definition of each reasoning type presented in QuALITY's Appendix and ask it to label each question with up to two reasoning types.

| | **Count** | **GPT-4 Pearl** | **GPT-4 zero-shot** |
| --- | --- | --- | --- |
| Description | 320 | 0.73 | 0.73 |
| Why/reason | 316 | 0.79* | 0.71* |
| Symbolism/interpretation | 262 | 0.73 | 0.70 |
| Person | 216 | 0.75* | 0.66* |
| Event | 199 | 0.69 | 0.68 |
| Not/except | 118 | 0.70* | 0.53* |
| How/method | 100 | 0.74 | 0.73 |
| Relation | 89 | 0.71 | 0.65 |
| Entity | 74 | 0.64 | 0.68 |
| Numeric | 49 | 0.67 | 0.78 |
| Location | 32 | 0.59 | 0.59 |
| What if | 21 | 0.71 | 0.76 |
| Object | 14 | 0.64 | 0.64 |
| Duration | 18 | 0.78 | 0.89 |
| Finish the sentence | 10 | 0.9 | 0.8 |

Table 4: Accuracy by reasoning types. * denotes statistically significant improvement with \(p\)-val < 0.005.

**Pearl is significantly slower than zero-shot prompting:** The improved performance of Pearl comes at the cost of longer running time and higher cost. Averaged over 30 examples, Pearl needs to handle 4.4 times more tokens in the prompt and generates 1.3 times more tokens owing to the intermediate steps.

**Specific examples where Pearl helps:** To better understand Pearl, we qualitatively analyze 40 examples for which zero-shot GPT-4 generates incorrect answers while Pearl answers correctly. This analysis reveals two key advantages of Pearl. First, while zero-shot prompting is reasonably good at finding salient information in the input document, its generated answers tend to be based only on _local_ context around this information. For instance, when asked about the number of wives the character "Dan Merrol" has, the baseline successfully identifies six names that appear to be Dan's wives. However, Pearl takes into account the fact that these names "_were actually memories from the brain donors whose parts were used to reconstruct his brain_" and thus correctly reasons that Dan only has one wife. In this case, Pearl provides an answer that demonstrates a more comprehensive understanding of the entire article. Second, Pearl generates more detailed and thorough answers. For instance, given the question "_Why is Kumaon a good region for potential forest preservation?_", the zero-shot answer considers only one aspect of the reason, whereas Pearl elaborates on multiple aspects. This allows Pearl's answer to be mapped to the correct option ("All other choices"), while the zero-shot answer maps to the option corresponding to the single aspect.

**Where does Pearl go wrong?** We additionally examine 40 examples for which Pearl answers incorrectly, and we categorize the errors into three categories (detailed examples and explanations in Table 11):

* **True negatives:** Questions for which Pearl's generative answer is mapped to the wrong option. This category can be further divided into two subcategories: (1) cases where the plan has critical issues, and (2) cases where the plan is satisfactory but the intermediate execution produces incorrect output. Out of the 40 examples, 29 are true negatives, with 7 plan errors and 22 execution errors.
* **False negatives:** Questions for which Pearl's generative answers are correct but incorrectly mapped to the wrong option. This kind of error is unavoidable as we use an LLM for automatic answer mapping. Out of the 40 examples, 5 are false negatives.
* **Other:** Some QuALITY questions are heavily dependent on the options; that is, the correct answer can only be determined after examining all the options. For instance, Table 11 presents a question asking who would enjoy the story the most of the given options. Although Pearl offers an answer based on the story's genre, which is not incorrect, it is not as accurate as the gold label. Furthermore, there are instances where the model's free-form answers lack sufficient details and can thus be mapped to more than one option or no options at all. We classify these responses as a separate category. Out of 40 examples, 6 fall into this **Other** category.

| **Err. Category** | **Question** | **Model Generated Plan or Answer** | **Explanation** |
| --- | --- | --- | --- |
| True Negative, error in plan (17.5%) | Does the tone of the passage shift at all, and if it does, how does it shift? | (Plan) ... 3. tone_shift = COMPARE(CTX, tone_initial, tone_final, "tone"): Compare the initial and final tones of the passage to determine if there is a shift. | |
| True Negative, error in execution (55%) | How many adult characters have speaking roles? | (Answer) In the input article, there are 3 adult characters with speaking roles... | The correct answer involves two characters, whereas Pearl's response mistakenly includes an additional name. The plan for this question is reasonable, but the problem stems from the execution of individual steps. |
| False Negative (12.5%) | Does the story have a good ending? (Answer: Unclear, the story ends as Evely enters a dangerous situation) | (Answer) ... However, the ending of the story is somewhat ambiguous and leaves several questions unanswered. For instance, it is unclear whether Evely will be able to successfully complete her mission... | In this example, the model output is correct, but is mapped to an incorrect dataset option, which contains direct contradictions with the model output. |
| Other (15%) | A question asking who would enjoy the story the most. | (Answer based on the story's genre.) | In this example, the model output is not necessarily wrong in the absence of options. However, when provided with options, the option about space exploration and alien emotions is clearly better. |

Table 11: Example errors in each category, with the question, the model-generated plan or answer, and an explanation.

**Human evaluation of model-generated plans:** The quality of plans generated by Pearl is critical, as they serve as the basis for the plan execution stage. To gain further insight into the quality of these plans, we perform a human evaluation by hiring annotators on Upwork12 to provide feedback on the generated plans.13 Concretely, we ask annotators to (1) assess the correctness of the plans (binary choice), assuming error-free execution at each step, and (2) provide free-form feedback on any flaws or potential improvements. On average, annotators regard over 97% of all plans as correct, with over 94% confidence, although these numbers are inflated because the annotators do not have access to the long story when making these judgments. More interestingly, Table 5 displays their feedback aggregated over common themes, which shows that the primary issue with existing plans is the presence of unnecessary steps (10% of the total annotated plans). Annotators also notice that GPT-4 can be inattentive to subtle details while generating plans. For example, given the question "_Do you think it would be fun to live in the universe in which this story takes place?_", the model decides to "_evaluate the pros and cons of living in the universe based on the features found in the input article_". However, a human annotator argues that "_just because something is positive doesn't necessarily mean it is "fun". Any pros on the list might outweigh the dangers noted, resulting in an incorrect answer of 'yes'..._".

Footnote 12: We pay the annotators at the rate of $25/h.

Footnote 13: We provide a few examples in Appendix E.

| **Human annot. category** | **# of plans** |
| --- | --- |
| Unnecessary steps | 15 |
| Steps can be merged | 2 |
| Plan misses information | 3 |
| Plan may lead to incorrect answer | 4 |
| Plan needs slight edit | 7 |

Table 5: Aggregated free-form feedback from human annotators.

## 7 Conclusion

In this work, we introduce Pearl, a framework for tackling complex reasoning over long documents. To answer a question, Pearl first proposes a plan based on a set of actions mined from a training set, and then it executes the plan step by step via prompting itself with a template filled with output from previous stages. We demonstrate the effectiveness of Pearl on a challenging subset of QuALITY. Experiments and analysis show that prompting GPT-4 with Pearl yields more accurate and comprehensive answers than zero-shot and chain-of-thought prompting, and human annotators judge the generated plans to be reasonable.

### Limitations

While Pearl shows promising results for long document reasoning, there are several limitations to our approach. Like other prompting methods, Pearl is susceptible to generating misinformation or hallucinations. It is also more time-consuming and computationally costly than the baseline approach of directly prompting an LLM to answer the question. Moreover, Pearl may over-complicate simple questions that only need superficial reasoning over long-form narratives. Finally, Pearl is still bounded by the maximum context window size of the LLMs. Overall, our work leaves many interesting directions in this space (e.g., new datasets, modules, stage refinements) open for exploration.
2306.12445
Free group of Hamel functions
We construct a free group of continuum many generators among those autobijections of $\mathbb{R}$ which are also Hamel bases of $\mathbb{R}^2$, with identity function included. We also observe two new cases when a real function is a composition of two real functions which are Hamel bases of $\mathbb{R}^2$.
Mateusz Lichman, Michał Pawlikowski, Szymon Smolarek, Jarosław Swaczyna
2023-06-20T14:15:37Z
http://arxiv.org/abs/2306.12445v1
# Free group of Hamel functions

###### Abstract.

We construct a free group of continuum many generators among those autobijections of \(\mathbb{R}\) which are also Hamel bases of \(\mathbb{R}^{2}\), with the identity function included. We also observe two new cases when a real function is a composition of two real functions which are Hamel bases of \(\mathbb{R}^{2}\).

Key words and phrases: Hamel bases, Hamel functions, free groups, symmetry group of \(\mathbb{R}\)

2010 Mathematics Subject Classification: Primary 20B99, 26A99; Secondary 54C40, 26A21

The last-named author acknowledges with thanks support received from Lodz University of Technology via "FU\({}^{2}\)N - a fund for the improvement of the skills of young scientists".

For sets \(X,Y\), \(Z\subset X\) and \(f\colon Z\to Y\) we say that \(f\) is a partial function from \(X\) to \(Y\) and write \(f\colon X\rightharpoonup Y\). In such a case we denote the domain of \(f\) by \(\operatorname{dom}(f)\) and its range by \(\operatorname{rng}(f)\). We say that a function \(f\colon X\rightharpoonup\mathbb{R}\), where \(X\subset\mathbb{R}\), is a _partially linearly independent function_ (\(f\in\operatorname{PLIF}\) for short) if it is linearly independent over \(\mathbb{Q}\) as a subset of \(\mathbb{R}^{2}\). If moreover \(X=\mathbb{R}\), then we simply say \(f\) is a _linearly independent function_ (\(f\in\operatorname{LIF}\) for short). For \(A\subset\mathbb{R}^{2}\), by \(\operatorname{LIN}_{\mathbb{Q}}(A)\) we will denote the linear subspace of \(\mathbb{R}^{2}\) over \(\mathbb{Q}\) generated by \(A\). The following proposition is very useful for working with Hamel functions.

**Proposition 1**.: _[3, Fact 2.2] Let \(f\colon\mathbb{R}\to\mathbb{R}\) be a function, \(f\in\operatorname{LIF}\). Then \(f\in\operatorname{HF}\) if and only if \(\langle 0,x\rangle\in\operatorname{LIN}_{\mathbb{Q}}(f)\) for every \(x\in\mathbb{R}\)._

## 3. Main result

Our goal is to prove the following.

**Theorem 1**.: _There exists a family \(\{f_{\beta}:\beta<\mathfrak{c}\}\) of Hamel bijections such that they are free generators of a group with respect to composition and all elements of this group but the identity are also Hamel bijections._

Proof.: The general idea of the proof is to construct the functions \(f_{\beta}\), for \(\beta<\mathfrak{c}\), inductively. Those functions will be free generators of the desired group. By \({}_{\kappa}f_{\beta}\) we will denote the state of the function \(f_{\beta}\) at stage \(\kappa\) of our construction. We define \[A\coloneqq\bigcup_{m\geq 1}(\mathbb{Z}\setminus\{0\})^{m}\times\mathfrak{c}^{m},\] \[B\coloneqq\{(n_{0},\ldots,n_{m-1},\gamma_{0},\ldots,\gamma_{m-1})\in A:\exists i<m-1\ \ \gamma_{i}=\gamma_{i+1},\ \ m\geq 1\}\] and enumerate \[A\setminus B=\{(n_{0}^{\alpha},\ldots,n_{m_{\alpha}-1}^{\alpha},\gamma_{0}^{\alpha},\ldots,\gamma_{m_{\alpha}-1}^{\alpha}):\alpha<\mathfrak{c}\}.\] The set \(A\setminus B\) is identified with all possible words written in a reduced form. Let \(\mathbb{R}\times\mathfrak{c}=\{(x_{\kappa},\alpha_{\kappa}):\kappa<\mathfrak{c}\}\) be a well-ordering of \(\mathbb{R}\times\mathfrak{c}\).
For \(\alpha,\kappa<\mathfrak{c}\), by \({}_{\kappa}h_{\alpha}\) we denote the word \[{}_{\kappa}f_{\gamma_{m_{\alpha}-1}^{\alpha}}^{n_{m_{\alpha}-1}^{\alpha}}\circ\ldots\circ{}_{\kappa}f_{\gamma_{0}^{\alpha}}^{n_{0}^{\alpha}}.\] Similarly, by \(h_{\alpha}\) we denote the word \[h_{\alpha}=f_{\gamma_{m_{\alpha}-1}^{\alpha}}^{n_{m_{\alpha}-1}^{\alpha}}\circ\ldots\circ f_{\gamma_{0}^{\alpha}}^{n_{0}^{\alpha}}=\bigcup_{\kappa<\mathfrak{c}}{}_{\kappa}f_{\gamma_{m_{\alpha}-1}^{\alpha}}^{n_{m_{\alpha}-1}^{\alpha}}\circ\ldots\circ{}_{\kappa}f_{\gamma_{0}^{\alpha}}^{n_{0}^{\alpha}}=\bigcup_{\kappa<\mathfrak{c}}{}_{\kappa}h_{\alpha}.\] More precisely, for \(\beta<\mathfrak{c}\) we construct a sequence of partial functions \({}_{\kappa}f_{\beta}\), \(\kappa<\mathfrak{c}\), such that the following conditions hold.

(I) \({}_{\kappa}f_{\beta_{n}}^{m_{n}}\circ\ldots\circ{}_{\kappa}f_{\beta_{1}}^{m_{1}}\in\operatorname{PLIF}\) for all \(n\in\mathbb{N}\), \(m_{1},\ldots,m_{n}\in\mathbb{Z}\setminus\{0\}\), \(\beta_{1}\neq\beta_{2}\neq\ldots\neq\beta_{n}\in\mathfrak{c}\). Note that the \(\beta\)'s need not be pairwise different.
(II) \({}_{\kappa}f_{\beta}\) is one-to-one.
(III) \(|\bigcup_{\beta<\mathfrak{c}}{}_{\kappa}f_{\beta}|\leq\omega+|\kappa|\).
(IV) \({}_{\kappa}f_{\beta}\subset{}_{\gamma}f_{\beta}\) for \(\kappa<\gamma\).
(V) \(\langle 0,x_{\kappa}\rangle\in\operatorname{LIN}_{\mathbb{Q}}({}_{\kappa+1}h_{\alpha_{\kappa}})\).
(VI) \(x_{\kappa}\in\operatorname{dom}({}_{\kappa+1}f_{\alpha_{\kappa}})\).
(VII) \(x_{\kappa}\in\operatorname{rng}({}_{\kappa+1}f_{\alpha_{\kappa}})\).

Then for \(\beta<\mathfrak{c}\) let \(f_{\beta}\coloneqq\bigcup_{\kappa<\mathfrak{c}}{}_{\kappa}f_{\beta}\). In condition (I) we consider the composition just at those points at which it is already defined. Note that once we are done, conditions (II), (VI) and (VII) guarantee that for every \(\beta<\mathfrak{c}\), \(f_{\beta}\) is an autobijection of \(\mathbb{R}\). Therefore every word constructed from the set \(\{f_{\beta}:\beta<\mathfrak{c}\}\) is an autobijection, and composition \(\circ\) is a group operation on the set of all words. Condition (I) assures that every word is a LIF. Using Proposition 1 and condition (V) we get that every word is a Hamel function.

Observe now that condition (I) also assures that the set \(\{f_{\beta}:\beta<\mathfrak{c}\}\) generates a free group. Indeed, let \(n,m\in\mathbb{N}\), \(k_{1},\ldots,k_{n},l_{1},\ldots,l_{m}\in\mathbb{Z}\), \(\beta_{1}\neq\ldots\neq\beta_{n},\gamma_{1}\neq\ldots\neq\gamma_{m}\in\mathfrak{c}\) and suppose that \[g\coloneqq f_{\beta_{n}}^{k_{n}}\circ\ldots\circ f_{\beta_{1}}^{k_{1}}=f_{\gamma_{m}}^{l_{m}}\circ\ldots\circ f_{\gamma_{1}}^{l_{1}}\eqqcolon h\] but \[(k_{1},\ldots,k_{n},\beta_{1},\ldots,\beta_{n})\neq(l_{1},\ldots,l_{m},\gamma_{1},\ldots,\gamma_{m}), \tag{1}\] i.e., the representation of the word \(g\) is not unique. Since \(g=h\) we get that \(g\circ h^{-1}=\mathrm{id}_{\mathbb{R}}\). However, (1) implies that the word \(g\circ h^{-1}\) written in reduced form remains nontrivial. Thus there exist \(p\leq m+n\), \(r_{1},\ldots,r_{p}\in\mathbb{Z}\setminus\{0\}\), \(\delta_{1}\neq\ldots\neq\delta_{p}\in\mathfrak{c}\) such that \(\mathrm{id}_{\mathbb{R}}=f_{\delta_{p}}^{r_{p}}\circ\ldots\circ f_{\delta_{1}}^{r_{1}}\), which contradicts \(f_{\delta_{p}}^{r_{p}}\circ\ldots\circ f_{\delta_{1}}^{r_{1}}\) being a LIF, as \(\mathrm{id}_{\mathbb{R}}\notin\mathrm{LIF}\): for instance, \(2\langle 1,1\rangle-\langle 2,2\rangle=\langle 0,0\rangle\) is a nontrivial rational dependence between points of the identity function.
As a consequence we get that for \(\alpha,\beta<\mathfrak{c}\), \(\alpha\neq\beta\), \(f_{\alpha}\neq f_{\beta}\), and thus the cardinality of \(\{f_{\beta}:\beta<\mathfrak{c}\}\) is \(\mathfrak{c}\). To sum up, we showed that \(H=\{h_{\alpha}:\alpha<\mathfrak{c}\}\cup\{\mathrm{id}\}\) equipped with the operation of function composition is the desired free group of \(\mathfrak{c}\) many generators.

It remains to construct functions \({}_{\kappa}f_{\alpha}\) satisfying conditions (I)-(VII), so take \(\gamma<\mathfrak{c}\). Assume that for all \(\beta<\mathfrak{c}\) the partial functions \({}_{\alpha}f_{\beta}\) are constructed and conditions (I)-(VII) hold for \(\alpha<\gamma\). If \(\gamma=\emptyset\), then we let \({}_{\gamma}f_{\beta}\coloneqq\emptyset\) for \(\beta<\mathfrak{c}\). If \(\gamma>0\) is a limit ordinal, then for \(\beta<\mathfrak{c}\) set \({}_{\gamma}f_{\beta}\coloneqq\bigcup_{\alpha<\gamma}\,{}_{\alpha}f_{\beta}\). Otherwise there is \(\kappa\) with \(\kappa+1=\gamma\).

**STEP I** In this step we assure that condition (V) is true and conditions (I)-(IV) hold. If \(\langle 0,x_{\kappa}\rangle\in\operatorname{LIN}_{\mathbb{Q}}({}_{\kappa}h_{\alpha_{\kappa}})\) then set \[{}_{\kappa}f^{\prime}_{\beta}\coloneqq{}_{\kappa}f_{\beta}\] for all \(\beta<\mathfrak{c}\). Otherwise the general idea will be to find \(x,y\in\mathbb{R}\) and extend the existing functions in such a way that \({}_{\kappa}h_{\alpha}(x)=y\) and \({}_{\kappa}h_{\alpha}(-x)=x_{\kappa}-y\) (in order to simplify the notation we set \(\alpha\coloneqq\alpha_{\kappa}\)). Once this is done, condition (V) will be satisfied. However, we must make sure that conditions (I)-(IV) still hold, so let us set \[C\coloneqq\{x_{\kappa}\}\cup\bigcup_{\alpha<\mathfrak{c}}(\operatorname{dom}({}_{\kappa}h_{\alpha})\cup\operatorname{rng}({}_{\kappa}h_{\alpha}))=\{x_{\kappa}\}\cup\bigcup_{\beta<\mathfrak{c}}(\operatorname{dom}({}_{\kappa}f_{\beta})\cup\operatorname{rng}({}_{\kappa}f_{\beta})).\] Note that at step \(\kappa\) at most \(|\kappa|+\omega\) of the words \({}_{\kappa}h_{\alpha}\) are nonempty functions and the cardinality of each \({}_{\kappa}h_{\alpha}\) is at most \(|\kappa|+\omega\), thus the set \(C\) has cardinality less than continuum. Recall that \(h_{\alpha}=\bigcup_{\kappa<\mathfrak{c}}{}_{\kappa}f_{\gamma_{m_{\alpha}-1}^{\alpha}}^{n_{m_{\alpha}-1}^{\alpha}}\circ\ldots\circ{}_{\kappa}f_{\gamma_{0}^{\alpha}}^{n_{0}^{\alpha}}\); let \(s\coloneqq\sum_{i=0}^{m_{\alpha}-1}|n_{i}^{\alpha}|\) be the number of letters used in the word \(h_{\alpha}\), and choose \(x\in\mathbb{R}\setminus\operatorname{LIN}_{\mathbb{Q}}(C)\) (one can do it since \(|C|<\mathfrak{c}\), so \(C\) does not span \(\mathbb{R}\)). Let \(z_{0}\coloneqq x\) and \(z_{l+1}\in\mathbb{R}\setminus\operatorname{LIN}_{\mathbb{Q}}(C\cup\{z_{0},\ldots,z_{l}\})\) for \(l<s-1\). Set \[D\coloneqq C\cup\{z_{0},\ldots,z_{s-1}\},\] \(r_{0}\coloneqq-x\), and pick \(r_{l+1}\in\mathbb{R}\setminus\operatorname{LIN}_{\mathbb{Q}}(D\cup\{r_{0},\ldots,r_{l}\})\) for \(l<s-1\). Finally choose \(y\in\mathbb{R}\setminus\operatorname{LIN}_{\mathbb{Q}}(D\cup\{r_{0},\ldots,r_{s-1}\})\) and let \(y^{\prime}\coloneqq x_{\kappa}-y\). Then also \(y^{\prime}\in\mathbb{R}\setminus\operatorname{LIN}_{\mathbb{Q}}(D\cup\{r_{0},\ldots,r_{s-1}\})\). Let \(z_{s}\coloneqq y\) and \(r_{s}\coloneqq y^{\prime}\).

Our goal now is to extend the functions occurring in \({}_{\kappa}h_{\alpha}\) to partial functions \({}_{\kappa}f^{\prime}_{\gamma_{j}^{\alpha}}\), \(j<m_{\alpha}\), such that for \[{}_{\kappa}h^{\prime}_{\alpha}\coloneqq{}_{\kappa}f^{\prime\,n_{m_{\alpha}-1}^{\alpha}}_{\gamma_{m_{\alpha}-1}^{\alpha}}\circ\ldots\circ{}_{\kappa}f^{\prime\,n_{0}^{\alpha}}_{\gamma_{0}^{\alpha}}\] we have \({}_{\kappa}h^{\prime}_{\alpha}(x)=y\) and \({}_{\kappa}h^{\prime}_{\alpha}(-x)=y^{\prime}\). Therefore we define

* \(p_{0}\coloneqq 0\),
* \(p_{j}\coloneqq\sum_{i=0}^{j-1}|n_{i}^{\alpha}|\) for \(0<j<m_{\alpha}\),
* \(p_{m_{\alpha}}\coloneqq s\),

and then for \(j<m_{\alpha}\):

* if \(n_{j}^{\alpha}<0\), then we set \({}_{\kappa}f^{\prime}_{\gamma_{j}^{\alpha}}\coloneqq{}_{\kappa}f_{\gamma_{j}^{\alpha}}\cup\{\langle z_{p_{j}+1},z_{p_{j}}\rangle,\ldots,\langle z_{p_{j+1}},z_{p_{j+1}-1}\rangle,\langle r_{p_{j}+1},r_{p_{j}}\rangle,\ldots,\langle r_{p_{j+1}},r_{p_{j+1}-1}\rangle\}\),
* if \(n_{j}^{\alpha}>0\), then we set \({}_{\kappa}f^{\prime}_{\gamma_{j}^{\alpha}}\coloneqq{}_{\kappa}f_{\gamma_{j}^{\alpha}}\cup\{\langle z_{p_{j}},z_{p_{j}+1}\rangle,\ldots,\langle z_{p_{j+1}-1},z_{p_{j+1}}\rangle,\langle r_{p_{j}},r_{p_{j}+1}\rangle,\ldots,\langle r_{p_{j+1}-1},r_{p_{j+1}}\rangle\}\).

The idea behind the above enlarging is to ensure that \({}_{\kappa}f^{\prime\,n_{j}^{\alpha}}_{\gamma_{j}^{\alpha}}(z_{p_{j}})=z_{p_{j+1}}\) and \({}_{\kappa}f^{\prime\,n_{j}^{\alpha}}_{\gamma_{j}^{\alpha}}(r_{p_{j}})=r_{p_{j+1}}\), so, roughly speaking, to go from \(x\) to \(y\) via the \(z\)'s and from \(-x\) to \(y^{\prime}\) through the \(r\)'s. In fact, we are abusing the notation a bit, just to avoid introducing more indices: one generator might occur in the word \({}_{\kappa}h_{\alpha}\) several times. If \(f\) is such a partial function and we have already added some points to it, then we keep enlarging the enlarged function \(f\) instead of coming back to its previous form. Then our goal is reached and therefore \(\langle 0,x_{\kappa}\rangle\in\operatorname{LIN}_{\mathbb{Q}}({}_{\kappa}h^{\prime}_{\alpha})\), since \(\langle x,y\rangle,\langle-x,y^{\prime}\rangle\in{}_{\kappa}h^{\prime}_{\alpha}\) and \(x_{\kappa}=y+y^{\prime}\). For those functions \({}_{\kappa}f_{\beta}\) that we have not changed during Step I we let \({}_{\kappa}f^{\prime}_{\beta}\coloneqq{}_{\kappa}f_{\beta}\).

Now we will check that conditions (I)-(IV) still hold. Conditions (II)-(IV) are trivially seen to remain true. Note that all but those functions that occur in the word \({}_{\kappa}h_{\alpha}\) remain unchanged. This and the definition of \(C\) imply that the only words that may have changed during Step I are those composed of the partial functions \({}_{\kappa}f^{\prime}_{\gamma_{j}^{\alpha}}\), \(j<m_{\alpha}\). Let \(h\) be such a word. Let \(E\) be the set of points by which we enlarged \(h\) in Step I. If \(E=\emptyset\), we have nothing to check. Assume that \(E\neq\emptyset\). Let \(k,n\in\mathbb{N}\), \(q_{1},\ldots,q_{k+n}\in\mathbb{Q}\) and \[\langle u_{1},y_{1}\rangle,\ldots,\langle u_{k},y_{k}\rangle\in h\setminus E\] \[\langle u_{k+1},y_{k+1}\rangle,\ldots,\langle u_{k+n},y_{k+n}\rangle\in E\] where \(u_{i}\neq u_{j}\) for \(i\neq j\). Suppose that \[\sum_{i=1}^{k+n}q_{i}\langle u_{i},y_{i}\rangle=\langle 0,0\rangle.\] Then \[\sum_{i=1}^{k+n}q_{i}u_{i}=0.\] We consider two cases.

1. If \(\{x,-x\}\not\subset\{u_{k+1},\ldots,u_{k+n}\}\), then for each \(i\leq n\) we have \[u_{k+i}\not\in\operatorname{LIN}_{\mathbb{Q}}(\{u_{1},\ldots,u_{k+n}\}\setminus\{u_{k+i}\})\] (see the definition of \(z_{0},\ldots,z_{s},r_{0},\ldots,r_{s}\)), thus \(q_{k+i}=0\). We get that \[\sum_{i=1}^{k}q_{i}\langle u_{i},y_{i}\rangle=\langle 0,0\rangle\] and since \(h\setminus E\in\operatorname{PLIF}\), we get that \(q_{i}=0\) for \(i\leq k\).
2. If \(\{x,-x\}\subset\{u_{k+1},\ldots,u_{k+n}\}\), then \(x=u_{j}\), \(-x=u_{l}\) for some \[j,l\in\{k+1,\ldots,k+n\}.\] Then for each \(i\leq n\) with \(l-k\neq i\neq j-k\) we have \[u_{k+i}\not\in\operatorname{LIN}_{\mathbb{Q}}(\{u_{k+1},\ldots,u_{k+n}\}\setminus\{u_{k+i}\})\] and like in case (1) we get \(q_{k+i}=0\) for those \(i\). Hence we get \[\sum_{i=1}^{k}q_{i}\langle u_{i},y_{i}\rangle+q_{j}\langle x,y_{j}\rangle+q_{l}\langle-x,y_{l}\rangle=\langle 0,0\rangle\] and \[\sum_{i=1}^{k}q_{i}y_{i}+q_{j}y_{j}+q_{l}y_{l}=0.\] Since \(y_{l}\not\in\operatorname{LIN}_{\mathbb{Q}}(\{y_{1},\ldots,y_{k},y_{j}\})\) we have \(q_{l}=0\). Similarly we show that \(q_{j}=0\) and thus we get \[\sum_{i=1}^{k}q_{i}\langle u_{i},y_{i}\rangle=\langle 0,0\rangle\] and since \(h\setminus E\in\operatorname{PLIF}\) we get that \(q_{i}=0\) for \(i\leq k\).

Finally we get that \(h\in\operatorname{PLIF}\).

**STEP II** In this step we assure that condition (VI) is true and conditions (I)-(V) hold. If \(x_{\kappa}\in\operatorname{dom}({}_{\kappa}f^{\prime}_{\alpha_{\kappa}})\) then set \({}_{\kappa}f^{\prime\prime}_{\beta}\coloneqq{}_{\kappa}f^{\prime}_{\beta}\) for all \(\beta<\mathfrak{c}\). Otherwise let \[F\coloneqq\bigcup_{\alpha<\mathfrak{c}}(\operatorname{dom}({}_{\kappa}h^{\prime}_{\alpha})\cup\operatorname{rng}({}_{\kappa}h^{\prime}_{\alpha}))=\bigcup_{\beta<\mathfrak{c}}(\operatorname{dom}({}_{\kappa}f^{\prime}_{\beta})\cup\operatorname{rng}({}_{\kappa}f^{\prime}_{\beta})).\] Note that at step \(\kappa\) at most \(|\kappa|+\omega\) of the words \({}_{\kappa}h^{\prime}_{\alpha}\) are nonempty functions and the cardinality of each \({}_{\kappa}h^{\prime}_{\alpha}\) is at most \(|\kappa|+\omega\), thus the set \(F\) has cardinality less than continuum. Therefore one can choose \(y\in\mathbb{R}\setminus\operatorname{LIN}_{\mathbb{Q}}(F)\). Then define \({}_{\kappa}f^{\prime\prime}_{\alpha_{\kappa}}\coloneqq{}_{\kappa}f^{\prime}_{\alpha_{\kappa}}\cup\{\langle x_{\kappa},y\rangle\}\). For \(\beta\neq\alpha_{\kappa}\) let \({}_{\kappa}f^{\prime\prime}_{\beta}\coloneqq{}_{\kappa}f^{\prime}_{\beta}\). In any case, by \({}_{\kappa}h^{\prime\prime}_{\alpha}\) we denote the suitable composition of the \({}_{\kappa}f^{\prime\prime}_{\beta}\)'s. Conditions (II)-(VI) clearly hold. We will check condition (I). Note that we only need to consider words that were enlarged by the point \(\langle x_{\kappa},y\rangle\). Let \(h\) be such a word. Let \(k\in\mathbb{N}\), \(q_{1},\ldots,q_{k},p\in\mathbb{Q}\) and \[\langle u_{1},y_{1}\rangle,\ldots,\langle u_{k},y_{k}\rangle,\langle x,y\rangle\in h\] where \(x\neq u_{i}\neq u_{j}\) for \(i\neq j\). Suppose that \[\sum_{i=1}^{k}q_{i}\langle u_{i},y_{i}\rangle+p\langle x,y\rangle=\langle 0,0\rangle.\] Then \[\sum_{i=1}^{k}q_{i}y_{i}+py=0.\] Since \(y\not\in\operatorname{LIN}_{\mathbb{Q}}(\{y_{1},\ldots,y_{k}\})\) we get that \(p=0\) and \[\sum_{i=1}^{k}q_{i}\langle u_{i},y_{i}\rangle=\langle 0,0\rangle.\] Since \(h\setminus\{\langle x,y\rangle\}\in\operatorname{PLIF}\) we get that \(q_{1}=\ldots=q_{k}=0\). Finally \(h\in\operatorname{PLIF}\).
**STEP III** In this step we assure that condition (VII) is true and conditions (I)-(VI) hold. If \(x_{\kappa}\in\operatorname{rng}({}_{\kappa}f^{\prime\prime}_{\alpha_{\kappa}})\) then set \({}_{\kappa+1}f_{\beta}\coloneqq{}_{\kappa}f^{\prime\prime}_{\beta}\) for \(\beta<\mathfrak{c}\). Otherwise let \[G\coloneqq\bigcup_{\alpha<\mathfrak{c}}(\operatorname{dom}({}_{\kappa}h^{\prime\prime}_{\alpha})\cup\operatorname{rng}({}_{\kappa}h^{\prime\prime}_{\alpha}))=\bigcup_{\beta<\mathfrak{c}}(\operatorname{dom}({}_{\kappa}f^{\prime\prime}_{\beta})\cup\operatorname{rng}({}_{\kappa}f^{\prime\prime}_{\beta})).\] Note that at step \(\kappa\) at most \(|\kappa|+\omega\) of the words \({}_{\kappa}h^{\prime\prime}_{\alpha}\) are nonempty functions and the cardinality of each \({}_{\kappa}h^{\prime\prime}_{\alpha}\) is at most \(|\kappa|+\omega\), thus the set \(G\) has cardinality less than continuum. Therefore one can choose \(x\in\mathbb{R}\setminus\operatorname{LIN}_{\mathbb{Q}}(G)\). Then define \({}_{\kappa+1}f_{\alpha_{\kappa}}\coloneqq{}_{\kappa}f^{\prime\prime}_{\alpha_{\kappa}}\cup\{\langle x,x_{\kappa}\rangle\}\). For \(\beta\neq\alpha_{\kappa}\) let \({}_{\kappa+1}f_{\beta}\coloneqq{}_{\kappa}f^{\prime\prime}_{\beta}\). Conditions (II)-(VII) clearly hold. We will check condition (I). Note that we only need to consider words that were enlarged by the point \(\langle x,y\rangle\) for some \(y\in\mathbb{R}\). Let \(h\) be such a word. Let \(k\in\mathbb{N}\), \(q_{1},\ldots,q_{k},p\in\mathbb{Q}\) and \[\langle u_{1},y_{1}\rangle,\ldots,\langle u_{k},y_{k}\rangle,\langle x,y\rangle\in h\] where \(x\neq u_{i}\neq u_{j}\) for \(i\neq j\). Suppose that \[\sum_{i=1}^{k}q_{i}\langle u_{i},y_{i}\rangle+p\langle x,y\rangle=\langle 0,0\rangle.\] Then \[\sum_{i=1}^{k}q_{i}u_{i}+px=0.\] Since \(x\not\in\operatorname{LIN}_{\mathbb{Q}}(\{u_{1},\ldots,u_{k}\})\) we get that \(p=0\) and \[\sum_{i=1}^{k}q_{i}\langle u_{i},y_{i}\rangle=\langle 0,0\rangle.\] Since \(h\setminus\{\langle x,y\rangle\}\in\operatorname{PLIF}\) we get that \(q_{1}=\ldots=q_{k}=0\). Finally \(h\in\operatorname{PLIF}\).

The above Theorem suggests the following question:

**Question 1**.: _Characterize the groups whose isomorphic copies may be found within the family of Hamel bijections with the identity function included. In particular, is it possible to find there a free group of \(2^{\mathfrak{c}}\) many generators?_

## 4. Remarks

We conclude our note with some observations concerning [3, Problem 2.1]. As stated in [3, Corollary 2.3], each real function is a composition of three Hamel functions, and it is an open problem whether there is a real function for which two Hamel functions won't be enough. By [3, Theorem 2.4] every LIF function is a composition of two Hamel functions. We will observe that a similar result holds under different assumptions.

**Corollary 1**.: _If \(g\) is a constant function, then \(g\) is a composition of two Hamel functions._

Proof.: Let \(g\) be a constant function, \(g\equiv c\). By [3, Theorem 2.3], there is a Hamel function \(f\) whose range is linearly independent over \(\mathbb{Q}\). Then let us set \(h_{0}\colon\mathrm{rng}(f)\to\mathbb{R}\) to be constantly equal to \(c\), and then let \(h\) be a Hamel function extending \(h_{0}\), obtained by [3, Lemma 2.1]. Note that \(g=h\circ f\).

**Corollary 2**.: _If \(g\colon\mathbb{R}\to\mathbb{R}\) and there exists a Hamel basis \(H\) such that \(g|_{H\cup\{0\}}\) is constant, then \(g\) is a composition of two Hamel functions._

Proof.: Let \(g\colon\mathbb{R}\to\mathbb{R}\) and let \(H\) be a Hamel basis. Assume that \(g|_{H\cup\{0\}}\) is constant. Let \(H^{\prime}\) be any Hamel basis. Let \(f|_{\mathbb{R}\setminus H}\) be a bijection onto \(H^{\prime}\) and \(f|_{H}\equiv f(0)\). By the proof of [3, Theorem 2.3], \(f\) is a Hamel function. Clearly, \(\mathrm{rng}(f)=H^{\prime}\). Now, let \(h_{0}\colon H^{\prime}\to\mathbb{R}\) be defined by \(h_{0}=g\circ f^{-1}\). There is one point in \(H^{\prime}\), namely \(f(0)\), on which \(f^{-1}\) is multi-valued. Nevertheless, the composition still makes sense as \(g\) is constant on the respective set. By [3, Lemma 2.1], \(h_{0}\) can be extended to a Hamel function \(h\colon\mathbb{R}\to\mathbb{R}\). Clearly, \(g=h\circ f\).

### Acknowledgements

We are indebted to Szymon Glab for drawing our attention to groups consisting of Hamel autobijections of \(\mathbb{R}\).
2305.04777
Reducing system dimensionality with long-range collective dipole-dipole interactions
Dimensionality plays a crucial role in long-range dipole-dipole interactions (DDIs). We demonstrate that a resonant nanophotonic structure modifies the apparent dimensionality in an interacting ensemble of emitters, as revealed by population decay dynamics. Our measurements on a dense ensemble of interacting quantum emitters in a resonant nanophotonic structure with long-range DDIs reveal an effective dimensionality reduction to $\bar{d} = 2.20 (12)$, despite the emitters being distributed in 3D. This contrasts the homogeneous environment, where the apparent dimension is $\bar{d} = 3.00$. Our work presents a promising avenue to manipulate dimensionality in an ensemble of interacting emitters.
Ashwin K. Boddeti, Yi Wang, Xitlali G. Juarez, Alexandra Boltasseva, Teri W. Odom, Vladimir Shalaev, Hadiseh Alaeian, Zubin Jacob
2023-05-08T15:26:26Z
http://arxiv.org/abs/2305.04777v4
# Reducing system dimensionality with long-range collective dipole-dipole interactions

###### Abstract

Dimensionality plays a crucial role in long-range dipole-dipole interactions (DDIs). We demonstrate that a resonant nanophotonic structure modifies the apparent dimensionality in an interacting ensemble of emitters, as revealed by population decay dynamics. Our measurements on a dense ensemble of interacting quantum emitters in a resonant nanophotonic structure with long-range DDIs reveal an effective dimensionality reduction to \(\bar{d}=2.20(12)\), despite the emitters being distributed in 3D. This contrasts the homogeneous environment, where the apparent dimension is \(\bar{d}=3.00\). Our work presents a promising avenue to manipulate dimensionality in an ensemble of interacting emitters.

_Introduction._ In a dense ensemble of interacting emitters, each emitter perceives the other neighboring emitters via position-dependent dipole-dipole interactions (DDIs). The role of geometry in such position-dependent collective interactions between an ensemble of emitters has been of fundamental interest [1; 2; 3; 4; 5; 6; 7]. Controlling the dimensionality is appealing as a lower-dimensional emitter geometry shows strong quantum fluctuations [8]. This can potentially provide a host of benefits in realizing platforms to probe long-range interactions [1; 2], quantum phases such as quantum spin-liquids [3; 4], transient supersolid behavior [5], and quantum phase transitions in transverse Ising models [9], provide an advantage in quantum sensing applications and in mitigating decoherence [1; 5], and enable long-range energy transport of delocalized excitons [7]. More recently, interesting physical effects on Dicke superradiance in 1D, 2D, and 3D arrays of atoms have been theoretically predicted [6]. Thus, realizing a lower-dimensional system supporting long-range DDIs is of significant importance. While 1D and 2D interacting ensembles of emitters have been realized in cold-atom systems, such control remains largely unexplored in solid-state platforms. Only recent efforts demonstrating a thin layer of emitters (NV-P1 centers) have paved the way for realizing lower-dimensional systems in the solid state [1; 2]. The P1 system's many-body noise is characterized by the decoherence of NV center probe spins and shows stretched exponential decay dynamics [1]. As DDIs are mediated by the underlying electromagnetic fields, tailoring them provides an alternative route to manipulate the apparent dimensionality. Recently, interfacing quantum emitters with light within nanophotonic structures has provided the means to control and study collective DDIs [10]. This led to the demonstration of long-range resonance energy transfer in incoherent systems [11; 12], and sub- and super-radiant emission dynamics in coherent systems [13; 14; 15; 16]. Here we modify the apparent dimensionality using a nanophotonic structure that supports dispersive delocalized resonant modes that mediate the interactions. These modes lead to a modification of the spatial distribution of the perceived neighboring emitters. We experimentally probe the apparent dimensionality of the interacting ensemble of donor and acceptor emitters, encoded in the interacting emitters' temporal decay dynamics.
While individual emitters decay exponentially, the lifetime decay dynamics of an interacting ensemble of emitters follow a stretched exponential decay, revealing a non-integer power \(\beta\) in time, \[I(t)/I_{0}=\exp(-\gamma_{D}t)\exp(-\alpha t^{\beta}) \tag{1}\] where \(\gamma_{D}\) is the spontaneous decay rate and \(\alpha\) is the effective interaction volume [17; 18; 19]. The non-integer power, \(\beta\), originates due to DDIs between the emitters and captures the apparent dimensionality sensed by the mutually interacting emitters, \[\beta=\bar{d}/S \tag{2}\] where \(\bar{d}\) is the apparent or fractal dimension, and \(S=6\) for electric DDIs [18]. Such relaxation decay dynamics arising due to DDIs are common in other systems such as the kinetic Ising model below the critical temperature, an interacting ensemble of spins [1], and ultra-cold atoms and ions [20; 21; 22; 23; 24; 25]. The underlying physics that governs DDIs is universal; here, we focus on DDIs at room temperature, where it is difficult to discern coherent effects. The two underlying characteristics that relate to this intriguing non-integer power \(\beta\) in the decay dynamics (and thus the apparent dimensionality) are (i) the distance scaling law associated with DDIs in the vicinity of a nanophotonic environment and (ii) the competition between the characteristic DDI length scale, \(R_{0}\), and the system size, \(L_{sys}\). The interplay between these two characteristic lengths determines the spatial extent of the emitters sensed by each donor quantum emitter. Figure 1 conceptually shows the origin of the reduced apparent dimensionality. In homogeneous environments, the DDI potential, \(V_{dd}\), scales as \(\sim 1/R^{3}\). The non-integer power is \(\beta=1/2\) (\(1/3\)) for a three-dimensional (two-dimensional) spatial distribution of emitters, for time-scales beyond the coherence times of the interacting system, i.e., when the emitters do not possess memory of previous interaction events (see supplementary information) [17; 19; 26].

Figure 1: The illustration depicts the concept of apparent dimensionality of an interacting ensemble of emitters. The apparent dimensionality is related to the non-integer exponent of time in the fluorescence decay dynamics \(I(t)/I_{0}=\exp(-\gamma_{D}t)\exp(-\alpha t^{\beta})\). In a homogeneous environment, \(\beta=0.5(0.33)\) for the 3D (2D) spatial distribution of emitters. A resonant nanophotonic environment modifies the spatial distribution of the neighboring emitters sensed by each interacting emitter, which results in the modification of the temporal decay dynamics. This reduces the apparent dimensionality experienced by the interacting emitters, which is reflected in the non-integer exponent, \(\beta<0.5\), even though the emitters are distributed in a 3D volume.

In contrast to homogeneous environments, a resonant nanophotonic structure modifies the strength, range, and characteristic interaction length scale of DDIs [11; 12; 14; 27; 28]. Due to this modification of the underlying electromagnetic fields, an ensemble of interacting quantum emitters coupled to such resonant nanophotonic structures perceives a modified spatial distribution of emitters. Thus, the spatial extent, strength, and confinement of the electromagnetic fields modify the hierarchy of distances (and thus the DDI strengths) obtained by averaging over all possible sites of the interacting emitters. This leads to a modification in the temporal decay dynamics which is reflected in the non-integer exponent, \(\beta\), and hence the apparent dimensionality of the interacting system.

_System._ In this study, we consider the interaction of an ensemble of donor (\(Alq_{3}\)) and acceptor (R6G) emitters in both resonant and off-resonant nanophotonic structures. The dipole-dipole interactions (DDIs) between the emitters lead to resonance energy transfer. The DDI potential is related to the dyadic Green's function, \(V_{dd}(\mathbf{r_{A}},\mathbf{r_{D}};\omega_{D})=-(\omega_{D}^{2}/\epsilon_{0}c^{2})\,\mathbf{n_{A}}\cdot\overline{\mathbf{G}}(\mathbf{r_{A}},\mathbf{r_{D}};\omega_{D})\cdot\mathbf{n_{D}}\), where \(\mathbf{r_{A}}\) and \(\mathbf{r_{D}}\) are the positions of the acceptor and donor emitters, respectively, \(\mathbf{n_{A}}\) and \(\mathbf{n_{D}}\) are unit orientation vectors of the acceptor and donor emitters, respectively, \(\omega_{D}\) is the radial frequency of the donor emitter, \(\epsilon_{0}\) is vacuum permittivity, and \(c\) is the speed of light [27; 12]. The interaction strength is proportional to the rate of energy transfer, \(\Gamma_{ET}(\mathbf{r_{A}},\mathbf{r_{D}};\omega_{D})=(2\pi/\hbar^{2})|V_{dd}(\mathbf{r_{A}},\mathbf{r_{D}};\omega_{D})|^{2}f_{D}(\omega_{D})\sigma_{A}(\omega_{D})\), where \(f_{D}(\omega_{D})\) and \(\sigma_{A}(\omega_{D})\) are the emission spectrum of the donor emitter and the absorption cross-section of the acceptor emitter, respectively.

Figure 2(a) shows the spectral overlap between the donor emission spectrum (Alq3), the acceptor absorption spectrum (R6G), and the extinction spectrum of both a resonant and an off-resonant plasmonic lattice. The resonant plasmonic lattice modes mediate the DDIs between the donor and acceptor emitters. The resonant plasmonic lattice modifies the scaling, strength, and range of the DDI potential \(|V_{dd}|\) as shown in Fig. 2(b). The scaling of the DDI potential, \(|V_{dd}|\), is significantly modified with distance \(R=|\mathbf{r_{D}}-\mathbf{r_{A}}|\) in a resonant structure, whereas the DDI potential decays rapidly with distance in an off-resonant plasmonic lattice. The resonances of the plasmonic lattice modes can be tuned by altering the lattice constant.

Figure 2: (a) The plot shows the acceptor emitter's absorption spectrum (blue curve), the donor emitter's emission spectrum (orange curve), and the extinction spectrum of a resonant plasmonic lattice with lattice constant \(\sim 300\) nm (purple dash curve) and an off-resonant plasmonic lattice with lattice constant \(\sim 350\) nm (green dash-dot curve). The extinction spectrum of the resonant plasmonic lattice spectrally overlaps with the emission-absorption spectrum of the donor and acceptor emitters (yellow highlighted region). (b) The calculated dipole-dipole interaction potential \(|V_{dd}|\) for the resonant and off-resonant plasmonic lattice is shown. The resonant plasmonic lattice shows a strikingly modified scaling law. (c) Monte-Carlo simulations depicting the temporal decay dynamics of donor emitters for \(|V_{dd}|^{2}=R_{0}^{2}/R^{6}\) scaling and \(R_{0}\ll L_{sys}\) with \(\beta\sim 0.52\). The inset shows the values of \(\beta\) for randomized spatial distributions of emitters. (d) Monte-Carlo simulations showing the temporal decay dynamics of donor emitters for \(R_{0}\sim L_{sys}\). The reduced dimensionality is evident from the estimated values of \(\beta\sim 0.4\). The inset shows the values of \(\beta\) for randomized spatial distributions of emitters.

The relaxation dynamics of the interacting ensemble of donor-acceptor emitters are governed by non-linear coupled rate equations (see supporting information). Here the Monte-Carlo simulation method is employed to estimate the temporal decay dynamics of the donor emitters (see supporting information). Figure 2(c) shows the estimated temporal decay dynamics for homogeneous environments, i.e., \(R_{0}\ll L_{sys}\), where the non-integer exponent is \(\beta\sim 0.52\). This is commensurate with a three-dimensional interacting system and matches well with the predicted theoretical value (see derivation in supporting information). The inset shows the estimated values of \(\beta\) for various runs of the Monte-Carlo simulations with different random spatial distributions of the emitters. On the other hand, when \(R_{0}\sim L_{sys}\) as shown in Fig. 2(d), the value of the non-integer exponent is \(\beta\sim 0.42(12)\). This is commensurate with an effective dimension of \(\bar{d}\sim 2.50(72)\), lower than that of a three-dimensional system. The inset shows the broad distribution in the values of \(\beta\) with a standard deviation of \(\sim 0.12\) for 1024 different iterations of the Monte-Carlo simulation.
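The following is a minimal sketch of such a Monte-Carlo estimate, assuming static donors whose total decay rate is the spontaneous rate plus Förster-type transfer rates \(\Gamma\propto(R_{0}/R)^{6}\) summed over randomly placed acceptors; all parameter values are illustrative choices, not those of the paper.

```python
# Monte-Carlo sketch: ensemble-averaged donor decay with DDI transfer to
# random acceptors, then a fit of Eq. (1) to extract beta. Illustrative only.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
L_sys, R0, gamma_D = 1.0, 0.08, 1.0           # box size, DDI length scale, decay rate
donors = rng.uniform(0, L_sys, (1000, 3))      # 3D positions of donors
acceptors = rng.uniform(0, L_sys, (1000, 3))   # 3D positions of acceptors

# Total decay rate of each donor: spontaneous rate plus transfer to all acceptors.
r = np.linalg.norm(donors[:, None, :] - acceptors[None, :, :], axis=-1)
rates = gamma_D * (1.0 + ((R0 / r) ** 6).sum(axis=1))

t = np.linspace(0.01, 5.0, 200)
decay = np.exp(-np.outer(t, rates)).mean(axis=1)   # ensemble-averaged I(t)/I_0

def stretched(t, alpha, beta):
    # Eq. (1): spontaneous exponential times the stretched-exponential DDI term.
    return np.exp(-gamma_D * t) * np.exp(-alpha * t ** beta)

(alpha, beta), _ = curve_fit(stretched, t, decay, p0=[1.0, 0.5],
                             bounds=([0.0, 0.05], [np.inf, 1.0]))
print(f"beta = {beta:.2f}, apparent dimension d = {6 * beta:.2f}")  # Eq. (2)
```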
In practice, a resonant plasmonic lattice aids in realizing an apparent lower-dimensional system. The modified scaling of the DDI potential \(|V_{dd}|\), coupled with the increased interaction strength, leads to an increase in the characteristic interaction length scale \(R_{0}\). Under certain conditions, when the system size, i.e., the spatial extent of the emitters, \(L_{sys}\), becomes comparable to \(R_{0}\), then, in addition to the scaling law, the interacting system of emitters (in resonant nanophotonic structures) perceives an apparent lower dimension. We explore this effect here to engineer the dimensionality of collective (many-dipole) DDIs. _Experiment-_ To elucidate this, in the experiment we measure the fluorescence lifetime decay trace of the interacting emitters in both resonant and off-resonant nanophotonic structures. The dye molecules \(Alq_{3}\) (0.83 mM) and \(R6G\) (0.25 mM) are embedded in PMMA polymer thin films on the aforementioned samples. We use the time-correlated single-photon counting technique with a narrow-band filter (520(5) nm) centered at the peak emission of the donor emitter to measure the fluorescence lifetime decay traces (see supporting information for details). Figure 3 shows the measured lifetime decay when the interacting emitters are embedded in different nanophotonic structures: (i) a glass substrate (i.e., a homogeneous environment), Fig. 3(a); (ii) a \(TiO_{2}\) dielectric lattice, Fig. 3(b); (iii) an off-resonant plasmonic lattice, Fig. 3(c); and (iv) a resonant plasmonic lattice, Fig. 3(d). We observe a striking deviation of the non-integer exponent in time from the typical \(\beta=0.5\) of 3D homogeneous environments to \(\beta\sim 0.37\) (an effective lower dimension \(\bar{d}\sim 2.20(12)\)) in a dispersive resonant nanophotonic structure, a plasmonic lattice. We note that this value is close to that of a 2D system.

Figure 2: (a) The plot shows the acceptor emitter’s absorption spectrum (blue curve), the donor emitter’s emission spectrum (orange curve), and the extinction spectrum of a resonant plasmonic lattice with lattice constant \(\sim 300\) nm (purple dashed curve) and an off-resonant plasmonic lattice with lattice constant \(\sim 350\) nm (green dash-dotted curve). The extinction spectrum of the resonant plasmonic lattice spectrally overlaps with the emission-absorption spectrum of the donor and acceptor emitters (yellow highlighted region). (b) The calculated dipole-dipole interaction potential \(|V_{dd}|\) for the resonant and off-resonant plasmonic lattice is shown. The resonant plasmonic lattice shows a strikingly modified scaling law. (c) Monte-Carlo simulations depicting the temporal decay dynamics of donor emitters for \(|V_{dd}|^{2}=R_{0}^{2}/R^{6}\) scaling and \(R_{0}\ll L_{sys}\), with \(\beta\sim 0.52\). The inset shows the values of \(\beta\) for randomized spatial distributions of emitters. (d) Monte-Carlo simulations showing the temporal decay dynamics of donor emitters for \(R_{0}\sim L_{sys}\). The reduced dimensionality is evident from the estimated values of \(\beta\sim 0.4\). The inset shows the values of \(\beta\) for randomized spatial distributions of emitters.

Figure 3: The measured fluorescence lifetime decay when the interacting emitters are in different electromagnetic environments: (a) a glass substrate (i.e., a homogeneous environment), (b) a TiO2 dielectric lattice (i.e., an off-resonant inhomogeneous electromagnetic environment), and (c) a plasmonic lattice (i.e., a resonant inhomogeneous electromagnetic environment). The value of \(\beta\sim 0.5\) in both the homogeneous and off-resonant inhomogeneous environments is commensurate with a 3D system. In contrast, the faster-than-exponential decay dynamics on a resonant silver (Ag) plasmonic lattice reveals an exponent value of \(\sim 0.37\). This is commensurate with an effective lower dimension \(\bar{d}\sim 2.20(12)\). The emitters were embedded in \(\sim\) 1 \(\mu m\) thick polymer thin films.

This elucidates that the underlying resonant modes supported by the plasmonic lattice indeed modify the apparent dimension perceived by the interacting ensemble of emitters. The \(TiO_{2}\) dielectric lattice has the same geometric features as the resonant plasmonic lattice but supports no resonances. The measurements on the \(TiO_{2}\) lattice thus help rule out effects due to the underlying geometry of the lattice. On the other hand, the measurements on the off-resonant plasmonic lattice elucidate that the origin of the apparent lower dimension is purely the lattice resonance and not the localized-surface-plasmon resonance of the constituent metal nanoparticles. The non-integer exponent in time is estimated by fitting the temporal fluorescence decay trace with a Laplace transform of an underlying probability density function [26], \[\frac{I(t)}{I_{0}}=\int_{0}^{\infty}G_{\delta}(\gamma)e^{-\gamma t}d\gamma\int _{0}^{\infty}H_{\beta}(\Gamma_{ET})e^{-\Gamma_{ET}t}d\Gamma_{ET} \tag{3}\] In Eq. 3, the first term is associated with the spontaneous decay of the donor emitters, whilst the second term is associated with resonance energy transfer (DDIs). \(G_{\delta}(\gamma)\) is the probability density function (PDF) associated with the distribution of spontaneous emission decay rates, and \(H_{\beta}(\Gamma_{ET})\) is the PDF of the resonant energy transfer rates. For a homogeneous environment, with no significant enhancement in the local density of optical states (LDOS), \(G_{\delta}(\gamma)=\delta(\gamma-\gamma_{D})\), where \(\delta(\gamma-\gamma_{D})\) is the delta function and \(\gamma_{D}\) is the decay rate of the individual donor emitter. In contrast, in an inhomogeneous environment, each donor experiences a different LDOS and, thus, a different spontaneous emission decay rate [29]. As the DDIs in this particular scenario are a weak perturbation, the spontaneous decay rate of the donors is estimated from the fluorescence decay trace of the donors in the absence of the acceptor emitters (see supporting information). The PDF of resonance energy transfer rates, \(H_{\beta}(\Gamma_{ET})\), is estimated by fitting the fluorescence lifetime decay trace with the spontaneous decay rate PDF, \(G_{\delta}(\gamma)\), as a fixed parameter. The underlying probability distributions have a characteristic long-tail behavior and are related to Lévy stable distributions [30]. Figure 4 shows the extracted PDF of the resonant energy transfer rate (\(\Gamma_{ET}\)) distribution. The PDFs obtained in the resonant inhomogeneous environment are observed to differ from those in the homogeneous and off-resonant inhomogeneous environments. This directly indicates that the sensed spatial distribution of emitters is modified.
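The structure of Eq. 3, a decay trace written as the Laplace transform of a rate PDF, can be checked numerically: a long-tailed, Lévy-type rate distribution yields exactly a stretched exponential. A minimal sketch (the scale parameter is an assumed toy value; the actual \(H_{\beta}\) in this work is extracted from the measured traces):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import levy

# One-sided Levy PDF as a stand-in for H_beta(Gamma): its Laplace transform
# is exp(-sqrt(2*c*t)), i.e. a stretched exponential with beta = 1/2
c = 1.0                                    # assumed scale parameter
Gamma = np.linspace(1e-4, 2e3, 2_000_000)  # rate grid (the truncated tail is negligible here)
H = levy.pdf(Gamma, scale=c)

t_values = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
numeric = np.array([trapezoid(H * np.exp(-Gamma * t), Gamma) for t in t_values])
analytic = np.exp(-np.sqrt(2.0 * c * t_values))
print(np.round(numeric, 4))   # agrees with the analytic stretched exponential
print(np.round(analytic, 4))
```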
As the plasmonic lattice supports dispersive delocalized resonant modes that can mediate interactions between the donor and acceptor emitters over larger distances, the underlying PDFs show a broader distribution of rates. Furthermore, the number of interaction events in the tail of the distribution is reduced, which indicates a reduction in the total number of large-magnitude DDI interaction strengths, \(\Gamma_{ET}\); see the inset of Fig. 4. _Conclusion-_ In summary, we experimentally demonstrated that the apparent dimensionality of an interacting ensemble of emitters can be modified using a resonant nanophotonic structure. The temporal fluorescence decay dynamics show a non-integer exponent, \(\beta\), that relates to the apparent dimensionality of the interacting system. The apparent dimensionality measured on a resonant plasmonic lattice, \(\bar{d}\sim 2.20(12)\), stands in stark contrast to the \(\bar{d}\sim 3.0\) obtained on glass, an off-resonant \(TiO_{2}\) dielectric lattice, and an off-resonant plasmonic lattice. Further, we extracted the underlying distribution of energy transfer rates for the interacting ensemble of emitters, indicating that the apparent dimensionality perceived by the interacting emitters is modified through a modification of the underlying distribution of energy transfer rates. This work paves the way for engineering interacting systems with apparent lower dimensionality. Though the presented results are semi-classical and discernible coherent effects cannot be observed at room temperature, they can readily be applied to regimes where quantum effects are more prominent, such as in ultra-cold atoms [14], solid-state emitter systems [1; 2], rare-earth ions [31; 32], Rydberg excitons in solids [33], and quantum-dot systems [13]. Such nanophotonic structures can potentially provide an alternative route to realize two-dimensional systems that host new quantum many-body phases, help mitigate decoherence for quantum sensing, memories, and quantum network applications, realize novel, more efficient light-harvesting systems, and potentially improve the imaging of biological samples.

Figure 4: The extracted probability density function (PDF) of the resonance energy transfer rate in various electromagnetic environments: (1) glass, a homogeneous environment (dash-dotted red curve); (2) an inhomogeneous environment, a \(TiO_{2}\) nanoparticle lattice having the same lattice constant and dimensions as the resonant plasmonic lattice (dotted yellow curve); (3) an off-resonant plasmonic lattice (purple curve); and (4) a resonant plasmonic lattice (blue curve). The PDF of the energy transfer rates on the resonant plasmonic lattice is not only shifted but also broader. The inset shows the reduced number of events with strong interaction strengths (in the tail of the distribution).

This work was supported by the U.S. Department of Energy (DOE), Office of Basic Energy Sciences under DE-SC0017717 (A.K.B, A.B, V.S, and Z.J), the Purdue University start-up grant (H.A), the Office of Naval Research (ONR) under ONR N00014-21-1-2289 (Y.W., and T.W.O.) and the National Science Foundation under DMR-2207215 and DMR-1904385 (X.G.J. and T.W.O.). This work made use of the NUFAB and EPIC facilities of Northwestern University’s NUANCE Center, which has received support from the SHyNE Resource (NSF ECCS-2025633), the IIN, and Northwestern’s MRSEC program (NSF DMR-1720139).
2310.11872
MUSE observations of the giant low surface brightness galaxy Malin 1: Numerous HII regions, star formation rate, metallicity, and dust attenuation
Giant low-surface brightness (GLSB) galaxies are an extreme class of objects with very faint and extended gas-rich disks. Malin 1 is the largest GLSB galaxy known to date, but its formation is still poorly understood. We use VLT/MUSE IFU spectroscopic observations of Malin 1 to reveal, for the first time, the presence of H$\alpha$ emission distributed across numerous regions along its disk, up to radial distances of $\sim$100 kpc. We made an estimate of the dust attenuation using the Balmer decrement and found that Malin 1 has a mean H$\alpha$ attenuation of 0.36 mag. We observe a steep decline in the star formation rate surface density ($\Sigma_{\rm SFR}$) within the inner 20 kpc, followed by a shallow decline in the extended disk. Similarly, the gas phase metallicity we estimated shows a steep gradient in the inner 20 kpc, followed by a flattening of the metallicity in the extended disk with a relatively high value of $\sim$0.6 $Z_{\odot}$. We found that the normalized abundance gradient of the inner disk is similar to values found in normal galaxies but with an extreme value in the extended disk. A comparison of the star formation rate surface density and gas surface density shows that, unlike normal disk galaxies or other LSBs, Malin 1 exhibits a very low star formation efficiency. Owing to the detection of emission lines over a large part of the disk of Malin 1, this work sheds light on the star formation processes in this unique galaxy, highlighting its extended star-forming disk, dust attenuation, almost flat metallicity distribution in the outer disk, and exceptionally low star-formation efficiency. Our findings contribute to a more detailed understanding of the formation of the giant disk of Malin 1 and also constrain possible proposed scenarios on the nature of GLSB galaxies in general.
Junais, P. M. Weilbacher, B. Epinat, S. Boissier, G. Galaz, E. J. Johnston, T. H. Puzia, P. Amram, K. Małek
2023-10-18T10:42:09Z
http://arxiv.org/abs/2310.11872v1
MUSE observations of the giant low surface brightness galaxy Malin 1: Numerous HII regions, star formation rate, metallicity, and dust attenuation ###### Abstract Context:Giant low-surface brightness (GLSB) galaxies are an extreme class of objects with very faint and extended gas-rich disks. Malin 1 is the largest GLSB galaxy known to date, and one of the largest individual spiral galaxies observed so far, but the properties and formation mechanisms of its giant disk are still poorly understood. Aims:We use VLT/MUSE IFU spectroscopic observations of Malin 1 to measure the star formation rate, dust attenuation, and gas metallicity within this intriguing galaxy. Methods:We performed pPXF modeling to extract emission line fluxes such as H\(\alpha\), H\(\beta\), [N ii]\({}_{6583}\) and [O iii]\({}_{5007}\) along the central region as well as the extended disk of Malin 1. Results:Our observations reveal, for the first time, the presence of strong H\(\alpha\) emission distributed across numerous regions along the extended disk of Malin 1, reaching up to radial distances of \(\sim\)100 kpc, indicating recent star formation activity. We made an estimate of the dust attenuation in the disk of Malin 1 using the Balmer decrement and found that Malin 1 has a mean H\(\alpha\) attenuation of 0.36 mag. We observe a steep decline in the radial distribution of the star formation rate surface density (\(\Sigma_{\rm SFR}\)) within the inner 20 kpc, followed by a shallow decline in the extended disk. We estimated the gas phase metallicity in Malin 1, and also found, for the first time, that the metallicity shows a steep gradient in the inner 20 kpc of the galaxy from solar metallicity to sub-solar values, followed by a flattening of the metallicity in the extended disk with a relatively high value of \(\sim\)0.6 \(Z_{\odot}\). We found that the normalized abundance gradient of the inner disk of Malin 1 is similar to the values found in normal galaxies. However, the normalized gradient observed in the outer disk can be considered extreme when compared to other disk galaxies. A comparison of the star formation rate surface density and gas surface density shows that, unlike normal disk galaxies or other LSBs, the outer disk of Malin 1 exhibits a relatively low star formation efficiency based on atomic gas mass estimates, which may be mildly exacerbated by the vanishing upper molecular gas mass limits found by recent CO studies. Conclusions:Owing to the detection of emission lines over a large part of the Malin 1 extended disk, this work sheds light on the star formation processes in this unique galaxy, highlighting its extended star-forming disk, dust attenuation, almost flat metallicity distribution in the outer disk, and exceptionally low star-formation efficiency. Together with previous results, our findings contribute to a more detailed understanding of the formation of the giant disk of Malin 1 and also help constrain proposed scenarios on the nature of GLSB galaxies in general. ## 1 Introduction Low surface brightness galaxies (LSBs) form a diverse class of galaxies that exhibit significantly lower brightness per unit area than "normal" high surface brightness galaxies, and certainly lower surface brightness than the dark night sky. In the past decade, LSBs have attracted a lot of attention due to their extreme characteristics and potential implications for our understanding of galaxy formation scenarios.
LSBs are commonly defined as galaxies with an average \(r\)-band surface brightness (\(\bar{\mu}_{r}\)) fainter than the typical level of the night sky (\(\bar{\mu}_{r}>23\) mag arcsec\({}^{-2}\); Martin et al. 2019; Junais et al. 2023). Among LSBs, there is a distinct sub-population known as giant LSB galaxies (GLSBs), having a massive, faint, extended disk that is rich in gas content (Bothun et al. 1987; Sprayberry et al. 1995; Matthews et al. 2001). Malin 1 is the archetype of GLSB galaxies and has captivated the attention of astronomers since its accidental discovery nearly four decades ago (Bothun et al. 1987). Malin 1 has a radial extent of at least \(\sim\)120 kpc (Moore & Parker 2006) and an extrapolated disk central surface brightness of \(\mu_{\rm 0,V}\approx 25.5\) mag arcsec\({}^{-2}\) (Impey & Bothun 1997). Despite its faint surface brightness disk, Malin 1 has a total absolute magnitude of \(M_{V}\approx-22.9\) mag (Pickering et al. 1997) and an HI mass of approximately \(5\times 10^{10}M_{\odot}\) (Pickering et al. 1997; Matthews et al. 2001). It is currently considered the largest spiral galaxy known to date. Malin 1 is situated in a relatively low-density environment in the large-scale structure, in close proximity to a filament, which may account for the stability and gas richness of its huge disk (Junais, 2021). A Hubble Space Telescope (HST) \(I\)-band image analysis by Barth (2007) identified that Malin 1 has a normal barred inner spiral disk embedded within an extensive, diffuse LSB disk. This bears similarities to the extended ultraviolet (XUV) discs found in approximately 30% of nearby galaxies (Thilker et al., 2007). The extension and scope of the enormous spiral arms of Malin 1 were shown by Galaz et al. (2015). Later, Boissier et al. (2016) performed an analysis of the radial stellar profiles and suggested that the extended disk of Malin 1 has an angular momentum about 20 times larger than that of the Milky Way. Therefore, Malin 1 represents an extreme case among the class of GLSBs. The nature and origin of such GLSBs are still poorly understood, although more GLSBs, albeit less extreme ones, were discovered over the years (Hagen et al., 2016; Saburova et al., 2021). In a recent work, Saburova et al. (2023) suggested that GLSBs are not as rare as previously thought. Based on their predicted volume density of GLSBs, \(\sim\)13000 GLSBs could exist within the local universe at \(z<0.1\). It is expected that soon, with the deep Legacy Survey of Space and Time (LSST; Ivezic et al., 2019), we will be able to uncover even more of these giant, faint galaxies. Despite its significance, Malin 1 has been the subject of only limited spectroscopic studies. Previous efforts have primarily relied on velocity maps with low spatial resolution obtained from HI data (Lelli et al., 2010), along with optical spectra of the central region from SDSS observations (Subramanian et al., 2016). Later, Junais et al. (2020) performed a spectroscopic analysis of Malin 1 using long-slit data obtained with the IMACS/Magellan spectrograph, focusing mostly on the central region (\(<\)10 kpc) of Malin 1, along with only one small region detected in the extended disk (at \(\sim\)25 kpc radius). The lack of in-depth spectroscopic analysis of Malin 1, especially of its large extended disk, hinders a comprehensive understanding of its properties and formation.
In this work, we present a spectroscopic study of Malin 1 utilizing MUSE Integral Field Unit (IFU) data, aiming to shed light on the nature and formation of this extraordinary GLSB galaxy. Based on these data, we perform a detailed analysis of Malin 1's star formation rate, dust attenuation, and the gas-phase metallicity of its extended disk. Throughout this work, we adopt a flat \(\Lambda\)CDM cosmology with \(H_{0}=70\) km s\({}^{-1}\)Mpc\({}^{-1}\), \(\Omega_{M}=0.27\) and \(\Omega_{\Lambda}=0.73\), which corresponds to a projected angular scale of 1.56 kpc arcsec\({}^{-1}\) and a luminosity distance of 377 Mpc. The paper is structured as follows. Sect. 2 provides an overview of the data and observations, including the MUSE IFU data acquisition and reduction process. Section 3 focuses on the analysis and results. Sections 4 and 5 present the discussion and the conclusions, respectively. ## 2 Observation and Data Analysis Malin 1 was observed with the VLT/MUSE integral field spectrograph (Bacon et al., 2010) on 18 April 2021 under program ID 105.20GH.001 (PI Gaspar Galaz). The galaxy covers more than \(2^{\prime}\times 2^{\prime}\) on the sky, but the field of view of MUSE is \(1^{\prime}\times 1^{\prime}\), with a sampling scale of \(0.2^{\prime\prime}\) pixel\({}^{-1}\). Out of the four planned MUSE pointings, only the northern quadrant of Malin 1 was observed (see Fig. 1). However, the center of the galaxy was well covered. Observations were conducted at an airmass of \(\sim\)1.3 and an external seeing of \(\sim\)1.2\({}^{\prime\prime}\). The observations were carried out with four exposures, resulting in a total exposure time of 4640 seconds. The spectral resolving power ranges from \(R\simeq 1770\) at 4800 Å to \(R\simeq 3590\) at 9300 Å. The observations used the ground-layer adaptive optics (AO) system (Strobele et al., 2012), so that the full width at half maximum (FWHM) in the MUSE data at 7000 Å was estimated to be \(\sim\)0.55\({}^{\prime\prime}\) from the telemetry of the AO system (Fusco et al., 2020). ### Data reduction We reduced the data with the MUSE pipeline (v2.8, Weilbacher et al., 2020) called from the ESO Recipe Execution Tool (EsoRex). We largely used standard processing, including the creation of master bias, flat field, and trace tables, as well as deriving wavelength solutions separately for each CCD, with an overall twilight-sky correction for the whole field, all with default parameters. We also used standard bad pixel and geometry tables, as well as a line-spread function automatically associated with the data by the ESO archive. All master calibrations were then applied to the raw on-sky data (standard stars, science exposures, and offset sky fields). While the standard star (HD 49798) observed at the beginning of the night was usable for the flux calibration, we used two other standard star exposures to derive the telluric correction (LTT 3218 from 14 April 2021 and EG 274 taken on 14 April 2021). These provide a good match in the integrated water vapor levels (recorded in the IW keywords in the raw data headers) and hence correct the telluric A- and B-bands better. The science data were corrected for the sky background using internal sky instead of the offset sky fields, since the spiral arms of Malin 1 fill only a portion of the MUSE field. The data were further corrected for distortions using the provided astrometric calibration and shifted to a barycentric velocity frame. All four science exposures were aligned using a star from the Gaia EDR3 catalog (Lindegren et al., 2021) and combined to form a single data cube with a wavelength coverage of 4595-9350 Å. The final spatial FWHM measured in the reconstructed \(R\)-band is 0.63\({}^{\prime\prime}\).
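As a quick cross-check, the projected angular scale and luminosity distance quoted in Sect. 1 follow directly from the adopted cosmology. A short astropy sketch (the redshift below is our assumption, chosen within the redshift range fitted in Sect. 2.2.3, since the systemic redshift is not stated explicitly here):

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.27)
z = 0.0826  # assumed systemic redshift of Malin 1 (our choice, not from the text)

print(cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec))  # ~1.56 kpc / arcsec
print(cosmo.luminosity_distance(z))                         # ~377 Mpc
```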
### Emission line fitting To disentangle the ionized gas emission lines from the stellar continuum and measure their fluxes corrected for hydrogen absorption, throughout this work we used the Penalized PiXel-Fitting (pPXF) tool (v8.2.4, Cappellari, 2017). To maximize the detection along different regions of Malin 1, we employed several approaches for the emission line fitting. These are described in the following sub-sections. #### 2.2.1 Central region In the central region of Malin 1 (\(21^{\prime\prime}\times 21^{\prime\prime}\)), where the stellar continuum is strong, we performed a spectral binning using the Voronoi algorithm of Cappellari & Copin (2003) to target a final signal-to-noise ratio (SNR) \(\approx 20\). We include only the spatial pixels (spaxels) where the median continuum SNR in the wavelength range 5600-6500 Å was at least 1. Then, on the binned spectra of the central region, pPXF was set up to simultaneously fit the continuum, using the GALEV SSPs (Kotulla et al., 2009) constructed with the Munari stellar library (Munari et al., 2005) as templates, and the known strong emission lines between H\(\gamma\) and Pa14 - corresponding to the restframe \(\lambda\)-range at the redshift of Malin 1 - modeled as Gaussian peaks. To reduce the effects of flux calibration inaccuracies, we employ a multiplicative polynomial of order 7. This is the same setup already employed and discussed in more detail by Weilbacher et al. (2018) and Micheva et al. (2022). Lines falling into masked spectral regions1 at the redshift of Malin 1 were excluded. As initial guesses for the kinematics, we used \(z=0.08\) for the velocity and \(\sigma=75\) km s\({}^{-1}\), for both stars and gas. We checked this setup against the default setup of pPXF using the E-MILES SSP templates (Vazdekis et al. 2016). Since we do not find significant differences, we prefer to use the results from the GALEV+Munari setup in the following parts of this work, as it produces fits that have fewer residuals and artifacts. All emission line flux measurements are then projected back into the 2D plane to form a map (see Fig. 2). Footnote 1: We mask significant telluric residuals that might affect the fit, the NaD-gap created to suppress the AO laser emission, and the atmospheric Raman features created by the lasers. #### 2.2.2 Extended disk regions To extract emission line fluxes over the whole MUSE field, including the extended disk of Malin 1, we first defined HII regions using the dendrogram algorithm (using the Python package astrodendro2), which tracks isophotal contours around peaks in images. It creates a tree-like structure that allows us to relate _leaves_ (contours within which only a single peak is located) to _branches_ (contours containing several leaves). We ignore the hierarchical nature of the data structure and just use the leaves, which represent the largest contour around a peak that has not merged with neighboring peaks, as the HII regions. As input, we use an H\(\alpha\) image created by fitting a Gaussian function to the expected redshifted emission line in each spaxel of the cube. We filter this image with a 2D Gaussian function of 0.6\({}^{\prime\prime}\) FWHM to enhance real features. Peaks are detected above a limit of \(1.5\times 10^{-19}\) erg s\({}^{-1}\) cm\({}^{-2}\), with a minimum number of 9 pixels above the limit. We do not impose a limit on the height of the peaks. We detect 62 peaks in the cube and extract spectra and data variances by averaging them over the spaxels of the corresponding leaves (HII regions) defined by astrodendro. They are saved as row-stacked spectra and subsequently input to pPXF, as discussed in Sect. 2.2.1, to extract emission line fluxes. Only the line fluxes with an SNR \(>2.5\) were used in this analysis.
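The detection step described above can be sketched compactly with the astrodendro package. The fragment below is illustrative rather than the authors' actual script; `halpha_map` stands for the smoothed H\(\alpha\) image (the file name is hypothetical), and the threshold values mirror those quoted in the text:

```python
import numpy as np
from astrodendro import Dendrogram

# `halpha_map` is the smoothed H-alpha image described above; the file name
# is hypothetical and stands in for however the array is produced
halpha_map = np.load("halpha_smoothed.npy")

d = Dendrogram.compute(
    halpha_map,
    min_value=1.5e-19,  # detection threshold quoted in the text (erg/s/cm^2)
    min_npix=9,         # minimum number of pixels above the threshold
)

# Each leaf is the largest contour around a single peak; treat it as one
# HII region and average the spaxels inside its footprint
for region_id, leaf in enumerate(d.leaves, start=1):
    mask = leaf.get_mask()                  # boolean footprint of the leaf
    print(region_id, mask.sum(), halpha_map[mask].mean())
```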
We found that such an approach of fitting the integrated spectrum of a region, compared to a pixel-level fitting, significantly increases the SNR and hence increases the number of regions (by a factor of about 8) with faint emission line measurements (H\(\beta\), [N ii]\({}_{6583}\) and [O iii]\({}_{5007}\)). Figure 3 shows the emission line maps of all the identified regions. We can clearly see several H\(\alpha\)-detected regions throughout the disk of Malin 1, with most of them extending up to \(\sim 80\) kpc from the center of the galaxy and one region detected at a radius of about 100 kpc (ID 62 in Fig. 3). Footnote 2: [http://www.dendrograms.org/](http://www.dendrograms.org/) #### 2.2.3 H\(\alpha\) radial average Apart from the fitting of the central region and the selected HII regions in the extended disk, we also performed an integrated spectral fitting along several radial bins of Malin 1. This was done to estimate the radially averaged H\(\alpha\) fluxes within the galaxy. For this purpose, we placed 14 concentric rings on the MUSE cube, starting from the center of Malin 1 out to a radial distance of \(\sim\)100 kpc, each ring with a width of 5\(\arcsec\) (7.8 kpc). We then followed a similar approach as discussed in Sect. 2.2.2, stacking all the spaxels within a ring to obtain its corresponding integrated spectrum. However, as each radial bin spans several kpc, resulting in azimuthal variations of the line-of-sight velocity, we need to correct for such velocity variations along the spaxels of a ring to generate high-SNR integrated spectra3. Footnote 3: We did not perform any velocity corrections during the HII region spectral stacking discussed in Sect. 2.2.2, as the average size of a region (\(\sim\)2 kpc) in that case was too small to show any significant velocity variations. To correct for the velocity variations, we extracted moment maps and masks over the whole MUSE field-of-view using the Python software Camel4 (see Epinat et al. 2012) on the group of emission lines H\(\alpha\), [S ii]\(\lambda\lambda\)6716,6731, and [N ii]\(\lambda\lambda\)6548,6583. In order to avoid local velocity variations and to increase the sensitivity at the edge of emitting regions, and thus the spatial extent, the MUSE datacube was first smoothed using a 2-pixel FWHM Gaussian kernel. Only the spaxels within the wavelength range corresponding to the redshift range \(0.0714<z<0.0934\) are considered around each line within the MUSE cube, in order to have some continuum but with a reasonable weight in the fit. Camel then fits all lines and the continuum simultaneously for each spaxel of the MUSE cube, taking advantage of the variance cube produced during data reduction. Lines are modeled as Gaussian functions using a common velocity but with a width that can vary from one species to another, since both lines in a doublet are expected to have the same origin and are close enough in wavelength to avoid line-spread function variations.
The continuum is adjusted with a third-degree polynomial function. Flux maps are generated for all fitted emission lines, together with associated error maps, as well as SNR maps, velocity dispersion fields, and the velocity field. We further compute a spatial mask in order to exclude regions with no signal coherent with the large-scale velocity pattern. We remove all pixels having an SNR in the H\(\alpha\) line below 2.5 and with velocities incompatible with Malin 1, and further remove spurious isolated groups of pixels smaller than 1\({}^{\prime\prime}\) in diameter, which is smaller than the seeing FWHM of the observations. Once those maps are generated, for each spaxel of the original unsmoothed datacube, the inferred velocity is used to compute the wavelengths at which the spectrum is actually sampled in the restframe corresponding to the Malin 1 systemic velocity. The spectrum at each spaxel is then re-sampled onto a single spectral grid for all spaxels by performing a linear interpolation to apply the velocity correction. Finally, for each ring, we sum all spaxels at Malin 1 rest lying within both the ring and the mask to produce the corresponding integrated spectrum. We performed a pPXF fit on these spectra, as discussed in Sect. 2.2.1, to obtain the radially averaged H\(\alpha\) fluxes along each ring (the average H\(\alpha\) flux along each ring was obtained by dividing the total flux by the unmasked ring area). We only include the H\(\alpha\) fluxes with an SNR \(>\) 2.5 in this work (11 among the 14 rings). From here on, we correct all the observed emission line fluxes from Sect. 2.2 for foreground Galactic extinction using the Schlegel et al. (1998) dust maps and the Cardelli et al. (1989) Milky Way dust extinction law.

Figure 1: Colour composite image of Malin 1 from the CFHT-Megacam NGVS (Ferrarese et al. 2012) \(u\), \(g\), and \(i\)-band images. The image spans a width of \(\sim 2.6\arcmin\times 2.6\arcmin\). The red box shows the MUSE field of observation (\(1\arcmin\times 1\arcmin\)).

Figure 2: Emission line flux maps of the central region of Malin 1 (\(21^{\prime\prime}\times 21^{\prime\prime}\)) obtained from the pPXF fitting. The H\(\alpha\) and the H\(\beta\) lines are in the top panels, whereas the [N ii]\({}_{6583}\) and the [O iii]\({}_{5007}\) lines are along the bottom panels. The color bar indicates the flux corresponding to each line.

## 3 Results ### Balmer decrement and dust attenuation The Balmer ratio (H\(\alpha\)/H\(\beta\) flux ratio) is commonly used as a diagnostic tool for dust attenuation in galaxies (e.g., Dominguez et al., 2013; Boselli et al., 2015). The intrinsic Balmer ratio, (H\(\alpha\)/H\(\beta\))\({}_{\rm int}\), remains roughly constant for typical gas conditions in galaxies. For Case B recombination5, (H\(\alpha\)/H\(\beta\))\({}_{\rm int}=2.86\) (Osterbrock, 1989). Footnote 5: This assumes 1) optically thin gas, 2) ionized by a harder radiation field than produced by the recombination process itself, and 3) a negligible influence of the ionizing radiation on the gas temperature. See also Nebrin (2023). Therefore, comparing the observed Balmer ratio with the theoretical value, we can obtain the attenuation at a wavelength, \(A_{\lambda}\), following Eq. 6 of Yuan et al. (2018):
\[A_{\lambda}=-2.5\frac{k(\lambda)}{k({\rm H}\alpha)-k({\rm H}\beta)}\log\left[ \frac{({\rm H}\alpha/{\rm H}\beta)_{\rm obs}}{2.86}\right], \tag{1}\] where \(k(\lambda)\) is the value of the attenuation curve at a wavelength \(\lambda\) and \(({\rm H}\alpha/{\rm H}\beta)_{\rm obs}\) is the observed Balmer ratio. Assuming a Calzetti et al. (2000) dust attenuation law with \(R_{V}=4.05\), we obtain \(k({\rm H}\alpha)=3.33\); \(k({\rm H}\beta)=4.60\); \(k([NII]_{6583})=3.31\) and \(k([OIII]_{5007})=4.46\). We can then obtain the attenuation for all the emission lines presented in this work using Eq. 1. Fig. 4 shows our observed Balmer ratio and H\(\alpha\) attenuation. We can see that the central region of Malin 1 (\(<10\) kpc) shows a large range of Balmer ratios of up to 5, with a mean value of about 3.26. This corresponds to an H\(\alpha\) attenuation (\(A_{H\alpha}\)) of up to 1 mag, with a mean attenuation of \(\sim\)0.3 mag. Similar to the central region, in the extended disk we observe a mean Balmer ratio of 3.25 and an \(A_{H\alpha}\) of 0.43 mag. We thus conclude that Malin 1 has non-negligible dust attenuation in the central and extended disk regions, with a mean \(A_{H\alpha}=0.36\) mag.

Figure 3: Emission line flux maps of Malin 1 obtained from the pPXF fitting of the HII regions. The H\(\alpha\) and the H\(\beta\) lines are in the top panels, whereas the [N ii]\({}_{6583}\) and the [O iii]\({}_{5007}\) lines are along the bottom panels. The ID of each region, as discussed in Sect. 2.2.2, is labeled in black in the top left panel. The regions with ID 27 and 49 are excluded from the maps as they do not have an SNR \(>2.5\) in any of the emission lines. The color bar indicates the flux corresponding to each emission line. Note that the flux value of each region is given as the average flux per pixel within that region, obtained from the fitting of its row-stacked spectrum.

### Star formation rate surface density We use our H\(\alpha\) emission line flux measurements to estimate the radial variation of the star formation rate surface density (\(\Sigma_{\rm SFR}\)) in Malin 1. The measured H\(\alpha\) fluxes were converted to a star formation rate (SFR) following Boissier (2013) and a Kroupa (2001) initial mass function (IMF) using \[{\rm SFR}_{\rm H\alpha}\,({\rm M}_{\odot}\,{\rm yr}^{-1})=5.1\times 10^{-42}\,{ \rm L}_{\rm H\alpha}\,({\rm erg\,s}^{-1}), \tag{2}\] where \(L_{\rm H\alpha}\) is the H\(\alpha\) luminosity corresponding to the observed H\(\alpha\) flux. The H\(\alpha\) fluxes, before the estimation of the SFR, were corrected for dust attenuation using the attenuation measurements discussed in Sect. 3.1. The SFR values were converted to \(\Sigma_{\rm SFR}\) by dividing by the area in which the H\(\alpha\) flux was measured. Figure 5 shows the \(\Sigma_{\rm SFR}\) as a function of galactocentric radius. In this plot, we also show the average \(\Sigma_{\rm SFR}\) estimated from the H\(\alpha\) radial average measurements discussed in Sect. 2.2.3. While the local measurements are indicative of individual star-forming regions, radial averages are meaningful for understanding the galaxy's evolution over orbital time-scales, for comparing to one-dimensional models depending on the galactocentric radius, or for comparing to the gas distribution over large scales. We find a steep gradient in \(\Sigma_{\rm SFR}\) within the central 10 kpc of the galaxy. This is consistent with the long-slit observations of Malin 1 from Junais et al. (2020).
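Equations 1 and 2 together form a short numerical pipeline from observed line fluxes to star formation rates. A minimal sketch (the input fluxes at the bottom are assumed toy values, not measurements from this work):

```python
import numpy as np

MPC_CM = 3.086e24           # centimeters per megaparsec
D_L = 377.0 * MPC_CM        # luminosity distance adopted in Sect. 1 (cm)

def a_halpha(f_ha, f_hb, k_ha=3.33, k_hb=4.60):
    """H-alpha attenuation (mag) from the Balmer decrement, Eq. (1)."""
    A = -2.5 * k_ha / (k_ha - k_hb) * np.log10((f_ha / f_hb) / 2.86)
    return np.clip(A, 0.0, None)  # ratios below 2.86 are assigned zero attenuation

def sfr_halpha(f_ha_obs, f_hb_obs):
    """SFR (Msun/yr) from an observed H-alpha flux (erg/s/cm^2), Eq. (2)."""
    f_corr = f_ha_obs * 10.0 ** (0.4 * a_halpha(f_ha_obs, f_hb_obs))
    l_ha = 4.0 * np.pi * D_L**2 * f_corr   # attenuation-corrected luminosity, erg/s
    return 5.1e-42 * l_ha

# Assumed toy fluxes: a region with a Balmer ratio of 3.26
f_hb = 1.0e-17
f_ha = 3.26 * f_hb
print(f"A_Halpha = {a_halpha(f_ha, f_hb):.2f} mag, SFR = {sfr_halpha(f_ha, f_hb):.2e} Msun/yr")
```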
Beyond the central regions, the \(\Sigma_{\rm SFR}\) is mostly flat at about \(\Sigma_{\rm SFR}\sim 10^{-4}\) M\({}_{\odot}\) yr\({}^{-1}\) kpc\({}^{-2}\) for the extended disk regions, but based on the radial average measurements there is a shallow decline to about \(\sim 10^{-6}\) M\({}_{\odot}\) yr\({}^{-1}\) kpc\({}^{-2}\) out to 100 kpc. However, we see a clear spike in \(\Sigma_{\rm SFR}\) around a radius of 50 to 60 kpc, in both the H\(\alpha\)-selected regions and the radial average estimates. This corresponds to the extended bright star-forming regions found at this radius, as clearly seen in the H\(\alpha\) map of Fig. 3. These individual star-forming regions have, in general, larger \(\Sigma_{\rm SFR}\) than the radial average values. For comparison, we estimated a similar radial average of \(\Sigma_{\rm SFR}\) using the UVIT FUV image of Malin 1 from Saha et al. (2021), in the same field as our H\(\alpha\) observations, and we obtain a similar peak6. The attenuation at the FUV wavelength was obtained by adopting a Calzetti et al. (2000) attenuation law and a gas-to-stellar reddening factor of 0.44, as discussed in Sect. 4. From Fig. 6 we can clearly see that most of the bright H\(\alpha\) regions coincide well with the UV blobs. These UV blobs were not resolved in the previous GALEX images of Malin 1 from Boissier et al. (2016). With the improved angular resolution of the UVIT images (\(\sim\)1.6\({}^{\prime\prime}\); Saha et al. 2021), which is three times better than that of GALEX, many of the H\(\alpha\)-bright regions we observe can be identified as resolved individual regions in the UVIT image. Similarly, the \(\Sigma_{\rm SFR}\) values based on the UV data are well consistent with the estimates from the H\(\alpha\) measurements within their uncertainties (see Fig. 5), although many UV radial points only provide an upper limit on \(\Sigma_{\rm SFR}\), while the H\(\alpha\) determination is well constrained thanks to our spectral stacking technique discussed in Sect. 2.2.3.

Figure 4: Radial variation of the Balmer ratio (left panel) and the H\(\alpha\) attenuation (right panel). The blue circles and the brown hexagons are the central regions and the extended disk regions, respectively, obtained from the pPXF fit discussed in Sect. 2.2. The red horizontal dashed line marks the intrinsic Balmer ratio of 2.86 for Case B recombination. To all regions with a Balmer ratio below this value, we assign zero attenuation. The histograms beside each panel give the overall distribution of each quantity (blue solid line and brown dashed line for the central region and extended disk, respectively), with their mean values indicated at the top of each panel.

Figure 5: Star formation rate surface density of Malin 1 as a function of galactocentric radius. The blue circles and the brown hexagons are the central regions and the extended disk regions, respectively, obtained from the pPXF fit discussed in Sect. 2.2. The red diamonds are the H\(\alpha\) radial averages measured along concentric rings of 5\({}^{\prime\prime}\) width from the center. The orange stars are the radial averages measured on the same field in the UVIT FUV image of Malin 1 from Saha et al. (2021), as shown in Fig. 6. For illustration purposes, the UV data points are horizontally shifted by 2 kpc. The green squares are the data points from Junais et al. (2020), based on the IMACS-Magellan H\(\alpha\) long-slit spectra of Malin 1.
We also performed a UV radial average measurement of \(\Sigma_{\rm SFR}\) for the full galaxy (instead of only the one-quarter where we have MUSE observations) and found that the values are very similar to our initial estimates, with a difference of less than 0.1 dex. ### Metallicity We estimated the radial variation of the metallicity in Malin 1 using the observed emission lines. We use the N2 and the O3N2 metallicity calibrators from Marino et al. (2013), given as: \[12+\log(O/H)=8.743+0.462\,N2 \tag{3}\] \[12+\log(O/H)=8.533-0.214\,O3N2, \tag{4}\] where the flux ratios are encoded as \(N2=\log([NII]_{\rm 6583}/H\alpha)\) and \(O3N2=\log([OIII]_{\rm 5007}/H\beta)-\log([NII]_{\rm 6583}/H\alpha)\). Figure 7 shows the radial metallicity distribution of Malin 1. The metallicity estimate based on the N2 calibrator indicates that the central region of the galaxy (within a few kpc) has nearly solar metallicity, whereas in the inner disk, out to 20 kpc, we see a steep gradient in metallicity that reaches sub-solar values (\(\sim\)0.65 \(Z_{\odot}\)). However, for the outer disk beyond 20 kpc, we see a flattening of the metallicity (\(\sim\)0.6 \(Z_{\odot}\)), consistent with no or only a shallow slope, compared to the central regions. Such a behavior is also found in XUV disk galaxies like M83 (Bresolin et al. 2009; Bresolin 2017) and NGC 1512 (Lopez-Sanchez et al. 2015). Both the N2 and the O3N2 calibrators show a similar trend, although in general the metallicity estimated from O3N2 is lower than the N2 estimate, by about 0.07 dex on average. Such an offset among different strong-line calibrations is often found in the literature, as a result of differences in the excitation parameter and ionization states of the various lines used (e.g., Kewley & Ellison 2008; Micheva et al. 2022). For instance, Kewley & Ellison (2008) show that offsets among different calibrations can go up to 0.6 dex in metallicity, with a large scatter. This is consistent with the offset of 0.07 dex we obtained between our N2 and O3N2 estimates. Other commonly used metallicity calibrators in the literature, such as R\({}_{23}\) or N2O2, cannot be estimated using our data, as the emission lines required for those calibrators are not in the MUSE spectral coverage. Table 1 provides the measured quantities of all the H\(\alpha\)-selected regions in the extended disk of Malin 1 discussed in this section. Based on the H\(\alpha\) fluxes from Table 1, it is interesting to note that we observe H\(\alpha\) luminosities (\(L_{H\alpha}\)) in the range of 10\({}^{38}\) to 10\({}^{40}\) erg s\({}^{-1}\) (with a median \(L_{H\alpha}\) of 10\({}^{38.7}\) erg s\({}^{-1}\)), similar to the range found for HII regions of LSB galaxies by Schombert et al. (2013). However, due to our limited resolution, with which we cannot resolve individual HII regions at the distance of Malin 1, it is hard to make a direct comparison.
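Both calibrations of Eqs. 3 and 4 are direct to evaluate. The sketch below also converts the oxygen abundance to solar units, assuming a solar reference of 12 + log(O/H) = 8.69 (the Asplund et al. 2009 value; our assumption, as the text does not state which reference it uses); the input line ratio is likewise an assumed toy value:

```python
import numpy as np

def oh_n2(f_nii, f_ha):
    """12 + log(O/H) from the N2 index, Eq. (3)."""
    return 8.743 + 0.462 * np.log10(f_nii / f_ha)

def oh_o3n2(f_oiii, f_hb, f_nii, f_ha):
    """12 + log(O/H) from the O3N2 index, Eq. (4)."""
    o3n2 = np.log10(f_oiii / f_hb) - np.log10(f_nii / f_ha)
    return 8.533 - 0.214 * o3n2

def z_solar(oh, oh_sun=8.69):
    """Abundance in solar units; 8.69 is an assumed solar reference value."""
    return 10.0 ** (oh - oh_sun)

# Assumed toy ratio: [NII]6583 / H-alpha = 0.26 corresponds to ~0.6 Zsun
print(f"Z/Zsun = {z_solar(oh_n2(0.26, 1.0)):.2f}")
```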
## 4 Discussion ### Dust attenuation in Malin 1 Low surface brightness galaxies are generally considered to be dust poor (Hinz et al. 2007; Rahman et al. 2007; Liang et al. 2010). However, these results are based on either very small samples or shallow data. Recently, Junais et al. (2023) performed a large statistical analysis of the dust content of 1003 LSBs using deep data and found that, although a fraction of LSBs is dust poor, a small fraction of them (\(\sim\)4%) shows high dust attenuation. However, they observed that these LSBs with high attenuation also show similarities with the giant LSBs in terms of their average stellar mass surface density and surface brightness. This may indicate a higher dust attenuation in GLSBs. Our dust attenuation measurements of Malin 1 show that this is indeed the case. We observe a non-negligible amount of dust attenuation in the central region as well as in the extended disk of Malin 1. Junais et al. (2023) calibrated the variation of the attenuation as a function of the stellar mass surface density. Their equation 3 predicts that a Malin 1-like galaxy, with an average stellar mass surface density of 10\({}^{7.9}\) M\({}_{\odot}\) kpc\({}^{-2}\), should have an attenuation \(A_{V}\) of about 0.33 mag. The mean H\(\alpha\) attenuation we obtained for Malin 1 corresponds to nearly 0.36 mag. This is consistent with the scaling relation predictions from Junais et al. (2023). The extended disk of Malin 1 is undetected in _Spitzer_ (Hinz et al. 2007), _Herschel_ (Boissier et al. 2016), and WISE7 imaging. For instance, Hinz et al. (2007) found that Malin 1 has a 1\(\sigma\) flux upper limit of 10 mJy in the _Spitzer_ MIPS 160 \(\mu\)m observations. Our measurements of the attenuation in the disk of Malin 1 challenge these non-detections. However, by construction, our relatively high attenuation8 is found in star-forming regions that cover only a small fraction of the extended disk (see Fig. 3). The relationship between the gas and the stellar attenuation has been largely discussed in the literature. Calzetti (1997) proposed a factor of 0.44 between the gas and the stellar reddening, as \(E(B-V)_{\rm star}=0.44E(B-V)_{\rm gas}\). Lin & Kong (2020) investigated this relation for a large sample of galaxies with a wide range of physical properties and found that such a relation varies with several galaxy properties. The low covering fraction of star-forming regions in Malin 1 could be related to the low attenuation suggested by optical broadband studies. This calls for deeper observations of Malin 1 at MIR and FIR wavelengths. Moreover, exploring the geometric distribution of the dust and stars could also provide insights into the measured high attenuation and the current non-detections at IR wavelengths (e.g., Hamed et al., 2023). Footnote 8: Assuming Case B recombination, i.e. optically thin gas. An alternative explanation of this discrepancy might be that the conditions in the star-forming regions of Malin 1 are characterized by a partial self-absorption of Balmer photons, synonymous with mildly optically thick conditions. Such conditions would increase the expected Balmer ratio and consequently decrease the inferred dust attenuation, making it more consistent with previous IR measurements.

Figure 6: FUV image of Malin 1 from Saha et al. (2021) of the same field as our MUSE observations (shown as the red box). The green contours mark the H\(\alpha\)-detected regions, as discussed in Sect. 2.2.2.

### Correlation of radial gas metallicity and stellar profile Figure 8 (top panel) shows the radial variation of the gas-phase metallicity in Malin 1 using the N2 calibrator. For the first time, we observe a relatively steep decline from solar metallicity in the inner region of Malin 1, followed by a flattening of the metallicity around 0.6 \(Z_{\odot}\) beyond a 20 kpc radius. The radius at which this flattening occurs also coincides with the \(I\)-band optical break radius of Malin 1 (19.6 kpc) found by Junais et al. (2020).
This latter break radius corresponds to the transition from the inner disk to the outer disk, based on a broken exponential disk profile (Erwin et al., 2008). This indicates that the inner and the outer disks of Malin 1 have different metallicity gradients. A flattening of the metallicity beyond the break radius9 is also observed in galaxies like M83 (Bresolin et al., 2009). Interestingly, M83 is an XUV galaxy with a very extended UV disk. XUVs are thought to have similarities with GLSBs (Thilker et al., 2007; Bigiel et al., 2010; Hagen et al., 2016). Our observed similarities in the metallicity gradient support this hypothesis. The flattening of the metallicity beyond the break radius could be due to several reasons. Bresolin et al. (2009) propose that such a metallicity gradient could be the result of the flow of metals from the inner to the outer disk, the accretion of pre-enriched gas, or a past interaction with a satellite galaxy. We compared our observations of Malin 1 with the metallicity gradient from a model similar to the one shown in Boissier et al. (2016). To this aim, we updated the Boissier et al. (2016) model by performing a similar fit on the photometric profiles, but with the same grid of models as in Junais et al. (2022) (excluding ram pressure). We find the best parameters to be very close to the original ones: \(V_{C}=380^{+200}_{-60}\) km s\({}^{-1}\) and \(\lambda=0.58^{+0.18}_{-0.08}\), where \(V_{C}\) and \(\lambda\) are the circular velocity and the halo spin parameter, respectively, from the models of Boissier et al. (2016). We do, however, find a difference, as the metallicity is much higher than in the model published in Boissier et al. (2016). For instance, at a radius of 55 kpc in Malin 1, Boissier et al. (2016) predicts a metallicity of 0.1 \(Z_{\odot}\), whereas the best model we obtained now has a metallicity of 0.45 \(Z_{\odot}\) at the same radius (see the dot-dashed magenta line in Fig. 8). After inspection, we realized that the model in Boissier et al. (2016) was actually using the Kroupa et al. (1993) initial mass function (IMF), while in Junais et al. (2022) we adopted the Kroupa (2001) IMF. Differences in the IMF can indeed generate large differences in the net yield integrated over the IMF (e.g., Vincenzo et al., 2016). The Kroupa et al. (1993) IMF being much poorer in massive stars than that of Kroupa (2001), the metallicity profile here is about 5 times higher than the one published in Boissier et al. (2016). The overall metallicity level of the model is consistent with the observations within their scatter in the inner region at \(\lesssim 20\) kpc (see Fig. 8), especially if we take into account the large systematic uncertainties of the model in the IMF and yields. However, the metallicity gradient of the model fails to reproduce the metallicity plateau beyond 20 kpc and is steeper than in the observations. This indicates that this simple model (in which the system evolves in isolation without radial migration) is probably insufficient to reproduce all the properties of the galaxy (see e.g., Kubryk et al. 2013 for the effect of radial migration on chemical abundance gradients).

Figure 7: Radial variation of the metallicity of Malin 1 using the N2 calibrator (left panel) and the O3N2 calibrator (right panel), based on Marino et al. (2013). The blue circles and the brown hexagons are the central regions and the extended disk regions, respectively, as discussed in Sect. 2.2. The green horizontal dotted line marks the solar metallicity. Note that the N2 and O3N2 calibrations from Marino et al. (2013) have an additional calibration uncertainty of 0.16 dex and 0.18 dex, respectively.
The bottom panel of Fig. 8 shows the radial variation of the \(g\)-band and \(i\)-band optical surface brightness and the color profile of Malin 1 from Boissier et al. (2016). We can clearly notice a correlation of the radial surface brightness and color profiles with the metallicity gradient shown in the top panel of Fig. 8. For both the surface brightness and color profiles, we observe a steep decline in the inner part, out to \(\sim\) 20 kpc, followed by a flattening, consistent with the trend in the metallicity. This is similar to the observation of Marino et al. (2016) for the extended disks of local spiral galaxies from the CALIFA survey. Based on the shape of the surface brightness profile, Malin 1 has an up-bending, or anti-truncated, Type III profile (Erwin et al. 2005), where the inner disk has a steeper surface brightness slope than the extended outer disk (see the bottom panel of Fig. 8). Marino et al. (2016) found that Type III galaxies have a positive correlation between the change in color and the metallicity gradient, followed by a flattening in metallicity. We find the same trend in Malin 1. There are several scenarios proposed in the literature for the formation of Type III profiles. Younger et al. (2007) found that minor mergers can produce Type III surface brightness profiles. On the other hand, Ruiz-Lara et al. (2017) suggested that Type III profiles form as a result of the radial migration of material from the inner to the outer disk, as well as the accretion of material from the outskirts. This notion was challenged by Tang et al. (2020), who found that stellar migration alone cannot form Type III profiles. Therefore, it is likely that most of the anti-truncated disk of Malin 1 resulted from the accretion of material from the outskirts (e.g., pre-enriched gas from a minor merger). The flattening of the color and metallicity profiles, and the rather high metallicity (0.6 \(Z_{\odot}\)) in the extended disk, also point in this direction. Using the mass-metallicity scaling relations for gas and stars in isolated Local Group dwarf galaxies (see Hidalgo 2017), the outer disk gas abundance of Malin 1 would correspond to an LMC-type dwarf galaxy with a stellar and gas mass of \(\sim\) 10\({}^{9.5-10}\)\(M_{\odot}\) each. While the gas metallicity traces recent star formation, the photometric colors trace older stellar population ages (and abundances). The similarity of their radial profiles in Figure 8, together with the break at 20 kpc in both, points to distinct evolutionary histories of the inner and outer galaxy systems over the epoch of the tracers, i.e., from Gyr in the past to the recent dozens of Myr traced by the gas. This in turn means that the mechanisms of star formation and feedback/enrichment in either of the components were not significantly influenced by the other, nor were both components altered by radial migration. In that sense, one could speak of Malin 1 as a composite galaxy with morphological sub-components that have evolved independently over many Gyr. The quantitative implications of this picture will be explored in future work.
The apparent correlation between the abundance and surface brightness profiles also echoes the fact that abundance gradients expressed per disk scale length (\(R_{d}\)) tend to have a universal value with only a small scatter for disk galaxies (Prantzos & Boissier 2000; Sanchez et al. 2014). In Malin 1, we separated the inner disk (\(<\) 20 kpc) and the outer disk (\(>\) 20 kpc) as shown in Fig. 8. Junais et al. (2020) showed that the inner and outer disks of Malin 1 are separated at \(\sim\) 20 kpc, with scale lengths of 5.3 kpc and 41.8 kpc, respectively. We performed separate linear fits to the abundance gradients of both disks. We found that, for the inner and the outer disk, the slope of the abundance gradient normalized by the corresponding disk scale length has a value of \(-\)0.09 dex/\(R_{d}\) and 0.04 dex/\(R_{d}\), respectively. Sanchez et al. (2014) showed that for local disk galaxies this value is in the range of \(-\)0.06 \(\pm\) 0.05 dex/\(R_{d}\). Therefore, the normalized abundance gradient of the inner disk of Malin 1 has a negative value, close to the mean value found in normal galaxies. The outer disk, however, is consistent with a small positive gradient, albeit within the 2\(\sigma\) scatter of Sanchez et al. (2014), suggesting that the extended disk of Malin 1 lies along the extreme tail of "standard" galaxies. This may point toward a particular mode of star formation at relatively low densities.

Figure 8: _Top:_ Metallicity gradient in Malin 1. The brown hexagons are the Malin 1 extended disk regions discussed in Sect. 3.3. The observed metallicities shown here are based on the Marino et al. (2013) N2 calibrator. The magenta dot-dashed line is the best-fit model of Malin 1, with a circular velocity of 380 km s\({}^{-1}\) and a spin parameter of 0.58, obtained after a re-fitting following Boissier et al. (2016). _Bottom:_ Optical surface brightness profile of Malin 1 in the \(g\)-band (black solid line) and the \(i\)-band (black dot-dashed line) from Boissier et al. (2016). The secondary axis on the right shows the \(g-i\) color profile from Boissier et al. (2016) as the red dotted line. Note that the stellar profiles are shown only up to the radial range where we have a metallicity estimate. The vertical dashed blue line is the \(I\)-band surface brightness break radius from Junais et al. (2020).
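The normalized gradients quoted above reduce to a linear fit of 12 + log(O/H) against radius, rescaled by the disk scale length. A sketch on mock data (the radii and abundances below are assumed values that merely mimic the outer-disk plateau; they are not the measurements of Table 1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock outer-disk data (assumed): a near-flat abundance plateau around 8.47
r_kpc = np.sort(rng.uniform(20.0, 80.0, 15))
oh = 8.47 + 0.001 * (r_kpc - 20.0) + rng.normal(0.0, 0.02, r_kpc.size)

R_d = 41.8  # outer-disk scale length in kpc (Junais et al. 2020)

# The linear fit gives the slope in dex/kpc; multiplying by R_d normalizes it
slope_per_kpc = np.polyfit(r_kpc, oh, 1)[0]
print(f"normalized gradient = {slope_per_kpc * R_d:+.2f} dex/R_d")
```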
Even if our data is only obtained in one-quarter of the galaxy, we can guess this comparison because the gas distribution is expected to be symmetric (see Fig. 6 of Lelli et al., 2010). We see that, at a given H i gas surface density, \(\Sigma_{\rm gas}\), the \(\Sigma_{\rm SFR}\) level of Malin 1 falls much below the level and scatter expected for normal star-forming spiral galaxies from de los Reyes & Kennicutt (2019). This indicates that the Malin 1 disk has a very low star formation efficiency. This effect is dominant in the outermost part of the disk (\(r>60\) kpc) where we see a large difference of \(\sim\)2 dex in \(\Sigma_{\rm SFR}\) with respect to normal star-forming galaxies, whereas it is \(\sim\)1 dex for the inner regions. Within this region, the \(\Sigma_{\rm SFR}\) of Malin 1 is also consistent with what is observed for other LSBs and GLSBs in the literature (Bossier et al., 2008; Wyder et al., 2009; Saburova et al., 2021). Our estimates agree with the average value of \(\Sigma_{\rm SFR}\) for Malin 1 found by Wyder et al. (2009) based on the GALEX UV data. Overall, we can clearly see that the Malin 1 disk lies along the extreme end of \(\Sigma_{\rm SFR}\) level than for any other known LSB galaxy. We point out that the gas surface density of normal spiral galaxies shown in Fig. 9 from de los Reyes & Kennicutt (2019) includes the total of atomic H i and molecular H\({}_{2}\) gas whereas the LSBs from Wyder et al. (2009) and our Malin 1 data points only include H i. However, it is reasonable to assume that the molecular gas fraction is negligible in LSBs and GLSBs (Braine et al., 2000; Galaz et al., 2008; Wyder et al., 2009). Moreover, any additional H\({}_{2}\) gas in Malin 1 would move the points shown in Fig. 9 towards the right, i.e. further away from the relation for normal galaxies. The fact that, on the one hand, we are detecting huge regions of young stars surrounded by ionized gas emitting in H\(\alpha\), and on the other hand, no CO emission was detected in the past efforts, neither recent ones with mm observations, points toward a quite peculiar interstellar medium (ISM) in Malin 1. As suggested by some authors (Galaz et al., 2022, and references therein), not only could be ISM be at a very low density, as all these studies seem to indicate, but also at a higher temperature than the ISM observed in high surface brightness spirals. Following Boissier et al. (2004), we made an estimate of the gas-to-dust ratio in the extended disk of Malin 1 by using the ratio of our observed \(V\)-band attenuation A\({}_{V}\) (from Eq. 1) and the H i column density (\(N_{H}\)) from Lelli et al. (2010). We found a mean gas-to-dust ratio of \(\log(M_{\rm gas}/M_{\rm dust})=2.06\). For our average gas metallicity of \(\sim\)0.6 \(Z_{\odot}\) in the extended disk, such a gas-to-dust ratio is consistent with the lower limit, but within the scatter found for normal galaxies (e.g., Boissier et al., 2004; Remy-Ruyer et al., 2014). ## 5 Conclusions In this work, we present VLT/MUSE IFU observations of the giant low surface brightness galaxy Malin 1. We extracted several ionized gas emission lines using this data and performed a detailed analysis of the star formation rate, dust attenuation, and gas metallicity within this galaxy. Our main results are summarized as follows. * For the first time, we observe strong H\(\alpha\) emission in numerous regions along the extended disk of Malin 1 up to a radial distance of \(\sim\)100 kpc. 
Other emission lines ([N ii]\({}_{6583}\), H\(\beta\) and [O iii]\({}_{5007}\)) were also observed, but in fewer regions. This indicates that recent star formation is ongoing in several regions of the large diffuse disk of Malin 1.
* We estimated the Balmer decrement and dust attenuation in several regions of the galaxy and found that the Malin 1 disk has a mean Balmer ratio of 3.26 and an H\(\alpha\) attenuation of 0.36 mag, assuming case-B gas conditions. This is also true at several tens of kpc from the galaxy center, where we measured it in a bright star-forming region.
* Malin 1 has a steep decline in the star formation rate surface density (\(\Sigma_{\rm SFR}\)) within the inner 20 kpc, followed by a shallow decline in the extended disk. We also see a peak in \(\Sigma_{\rm SFR}\) around the 60 kpc radius. Our radially averaged \(\Sigma_{\rm SFR}\) estimates based on H\(\alpha\) are consistent with the measurements from UV as well as other works from the literature.
* The gas metallicity in Malin 1 shows a steep decline in the central region, similar to the radial H\(\alpha\) profile. However, we observe a flattening of the metallicity in the extended disk, with a rather high value of around 0.6 \(Z_{\odot}\) within a radius range between 20 kpc and 80 kpc. We found that the outer disk abundance gradient in Malin 1, normalized by its scale length, has a value close to zero, which is flatter than most normal disk galaxies in the literature. The abundance measurements in these very extended regions confirm that the gas is not primordial and that the gradient is flatter than expected for very simple models. Together with similar radial trends from photometric colors, this result suggests distinct star formation histories for the inner and outer disks of Malin 1, with little radial migration or interactions at play during the formation of its very extended disk.
* Comparison of our estimated star formation rate surface density and the gas surface density shows that, unlike normal spiral galaxies, Malin 1 lies in the regime of very low star formation efficiency, as found in other LSBs, but at the extreme lower limit.

Figure 9: Star formation rate surface density versus gas surface density. The diamond symbols mark the regions along the disk of Malin 1 based on the radially averaged \(\Sigma_{\rm SFR}\) we estimated, as in Fig. 5. The \(\Sigma_{\rm gas}\) for Malin 1 is from the eight H i data points of Lelli et al. (2010), after correcting for helium by a factor of 1.4. The color bar indicates the radius along the disk of Malin 1. The black open circles are normal spiral galaxies from de los Reyes & Kennicutt (2019) (their \(\Sigma_{\rm gas}\) uses the total atomic and molecular gas mass). The black dashed line and the grey shaded region are the de los Reyes & Kennicutt (2019) best-fit relation and 1\(\sigma\) scatter. The black squares are the LSB galaxies from Wyder et al. (2009), along with Malin 1, marked as the open green star symbol, based on the UV \(\Sigma_{\rm SFR}\) estimate from Wyder et al. (2009).

###### Acknowledgements.

J and KM are grateful for support from the Polish National Science Centre via grant UMO-2018/30/E/ST900082. PMW gratefully acknowledges support by the German BMBF from the ExtW1 program (project VLT-BlueMUSE, grant 05A20BA). GG, EJJ, and THP gratefully acknowledge support by the ANID BASAL project FB210003. THP gratefully acknowledges support through a FONDECYT Regular grant (No. 1201016).
EJJ acknowledges support from FONDECYT Iniciación en Investigación 2020, Project 11200263.
2304.05740
Possibility-theoretic statistical inference offers performance and probativeness assurances
Statisticians are largely focused on developing methods that perform well in a frequentist sense -- even the Bayesians. But the widely-publicized replication crisis suggests that these performance guarantees alone are not enough to instill confidence in scientific discoveries. In addition to reliably detecting hypotheses that are (in)compatible with data, investigators require methods that can probe for hypotheses that are actually supported by the data. In this paper, we demonstrate that valid inferential models (IMs) achieve both performance and probativeness properties and we offer a powerful new result that ensures the IM's probing is reliable. We also compare and contrast the IM's dual performance and probativeness abilities with that of Deborah Mayo's severe testing framework.
Leonardo Cella, Ryan Martin
2023-04-12T09:49:29Z
http://arxiv.org/abs/2304.05740v2
# Possibility-theoretic statistical inference offers performance and probativeness assurances+

###### Abstract

Statisticians are largely focused on developing methods that _perform_ well in a frequentist sense--even the Bayesians. But the widely-publicized replication crisis suggests that these performance guarantees alone are not enough to instill confidence in scientific discoveries. In addition to reliably detecting hypotheses that are (in)compatible with data, investigators require methods that can _probe_ for hypotheses that are actually supported by the data. In this paper, we demonstrate that valid inferential models (IMs) achieve both performance and probativeness properties and we offer a powerful new result that ensures the IM's probing is reliable. We also compare and contrast the IM's dual performance and probativeness abilities with that of Deborah Mayo's severe testing framework.

_Keywords and phrases:_ Bayesian; frequentist; imprecise probability; inferential model; p-value; severity; validity.

## 1 Introduction

Important decisions affecting our everyday experiences are becoming increasingly data-driven. But is data helping us make better decisions? In many ways, the answer is obviously yes; but in other ways the answer is less clear. The widely-publicized replication crisis in science is one issue that raises serious concerns, so much so that, in 2019, the American Statistical Association's president commissioned a formal _Statement on Statistical Significance and Replicability_ that appeared in 2021.1 As with most official statements, in almost any context, this one says very little, e.g.,

_Different measures of uncertainty can complement one another; no single measure serves all purposes._

Footnote 1: [https://magazine.amstat.org/blog/2021/08/01/task-force-statement-p-value/](https://magazine.amstat.org/blog/2021/08/01/task-force-statement-p-value/)

While this assertion is politically (and perhaps technically) correct, it offers nothing to help improve the state of affairs. The lack of any clear guidance in this official statement reveals that there are important and fundamental questions concerning the foundations of statistics and inductive inference that remain unanswered:

_Should probability enter to capture degrees of belief about claims?... Or to ensure we won't reach mistaken interpretations of data too often in the long run of experience?_ (Mayo 2018, p. xi)

The two distinct roles of probability highlighted in the quote above correspond to the classical frequentist and Bayesian schools of statistical inference, which have two fundamentally different priorities, referred to here as _performance_ and _probativeness_, respectively. Over the last 50+ years, however, the lines between the two perspectives and their distinct priorities have been blurred. Indeed, both Bayesians and frequentists now focus almost exclusively on performance. These performance considerations are genuinely important for the logic of statistical inference:

_even if an empirical frequency-based view of probability is not used directly as a basis for inference, it is unacceptable if a procedure...of representing uncertain knowledge would, if used repeatedly, give systematically misleading conclusions_ (Reid and Cox 2015, p. 295).

As the replication crisis has taught us, however, there is more to statistical inference than achieving, say, Type I and II error probability control.
Beyond performance, we are also concerned with probativeness, i.e., methods' ability to probe for hypotheses that are genuinely supported by the observed data. Modern statistical methods cannot achieve both the performance and probativeness objectives, so a fully satisfactory framework for scientific inferences requires new perspectives.

Section 2.1 gives the problem setup and briefly describes the Bayesian versus frequentist _two-theory problem_. There we justify our above claim that modern statistical methods fail to meet both the performance and probativeness objectives. This includes the default-prior Bayes solution that aims to strike a balance between the two theories. What holds the default-prior Bayes solution back from meeting the performance and probativeness objectives is its lack of calibration, which is directly related to the constraint that the posterior distribution be a precise probability. Fortunately, the relatively new, possibility-theoretic _inferential model_ (IM) framework, reviewed in Section 2.2 below, is able to achieve greater flexibility by embracing a certain type and degree of imprecision in its construction. We present here a key result, namely, Theorem 1, that drives the IM's reliability, even into the new probativeness territory considered here.

Our main contribution here, in Section 3, is a demonstration of the IM's ability to simultaneously achieve both _performance_ and _probativeness_. On the performance side, we show in Section 3.1 that procedures, e.g., hypothesis tests and confidence sets, derived from the IM's necessity and possibility measure output control the frequentist error rates at the nominal level. Of particular interest is that there are no restrictions on the kinds of questions that the IM can address, so it is at least conceptually straightforward to eliminate nuisance parameters and obtain provably reliable marginal inference.

We enter new territory in Section 3.2, where we consider the question of probativeness. First: _what is probing?_ In classical hypothesis testing, typically a null hypothesis is offered and a decision is made to either reject that hypothesis or not. Often this null hypothesis represents a scientific status quo, e.g., that a new mental health treatment program has no effect on patients' well-being. Those who follow the mechanical _NHST_ (null hypothesis significance test) guidelines would believe that all the statistical analysis offers is a reject-or-not decision; in that case, if the investigator's data leads to a reject conclusion, then apparently he/she has made a psychological discovery. Of course, that logic is flawed because all the statistical test has determined is that the data are incompatible with the status quo. More specifically, the test does not imply that the data actually support the complementary hypothesis that there is an appreciable benefit to the new treatment, which is the bar for claiming a scientific discovery. Probing aims to dig deeper than (in)compatibility and look for genuine support. None of the standard statistical tools offer this probing, so something new is needed. Fortunately, possibility measures, and imprecise probabilities more generally, contain lots of relevant information and, in particular, they return a pair of numbers for each relevant hypothesis. As discussed in Section 3.1, only one of those numbers is used for the usual performance-focused developments.
That is, we reject a hypothesis if its degree of possibility or plausibility is small, since that is an indication of incompatibility. The other number is commonly understood as measuring a degree of necessity, belief, or support, so a natural question is whether this feature of the IM output can be used for probing. In Section 3.2 we give an affirmative answer to this question and, furthermore, offer some strong theoretical support for the claim that the IM's probing is provably reliable.

The probativeness conclusion is a direct consequence of the IM output's imprecision. That the additional flexibility of imprecision creates opportunities for more nuanced judgments is one of the motivations for accounting for imprecision, so this is no big surprise. But our contribution here is valuable for several reasons. First, the statistical community is aware of this need to see beyond basic performance criteria, but general, easy-to-follow guidance is still lacking. In Section 4 below we summarize a relatively recent proposal in Mayo (2018) and compare it to what the IM framework offers. There we argue that a difficulty with supplementing the standard testing machinery with a probing add-on, as Mayo and others have proposed, is that frequentism lacks an appropriate language to describe anything beyond (in)compatibility. To clearly articulate what probing or support means, we need a richer language than what frequentist statistics offers. The possibility-theoretic IM formulation allows for this, but without sacrificing the frequentist-like performance guarantees. That is, IMs offer a simpler interpretation based on possibilistic reasoning, where the necessary but complicated frequentist considerations are hidden under the hood in a calibration engine (Theorem 1) that powers the IM. Second, our contribution showcases the important role played by imprecise probability, by reinforcing the key point that imprecision is _not_ due to an inadequate formulation of the problem, but, rather, is an essential part of a complete and fully satisfactory solution to the statistical inference problem.

Following our comparison of IMs and Mayo's theory of severe testing, we provide several illustrations of the IM solution in Section 5, focusing primarily on its probing abilities. This is followed by some concluding remarks in Section 6 and a few relevant details concerning hypothesis testing and connections to the IM theory in Appendix A.

## 2 Background

### Two-theory problem

To set the scene, denote the observable data by \(Y\). The statistical model for \(Y\) will be denoted by \(\{\mathsf{P}_{\theta}:\theta\in\mathbb{T}\}\) and the unknown true value of the model parameter will be denoted by \(\Theta\). Note that the setup here is quite general: \(Y\), \(\Theta\), or both can be scalars, vectors, or something else. We focus here on the typical case where _no genuine prior information is available/assumed_. So, given only the model \(\{\mathsf{P}_{\theta}:\theta\in\mathbb{T}\}\) and the observed data \(Y=y\), the goal is to quantify uncertainty about \(\Theta\) for the purpose of making inference. For concreteness, we will interpret "making inference" as making (data-driven) judgments about hypotheses concerning \(\Theta\). In particular, we seek to assign numerical values--which could be p-values, posterior probabilities, etc.--to hypotheses \(H\subseteq\mathbb{T}\) concerning \(\Theta\) or some feature thereof. In a nutshell, the two dominant schools of thought in statistics are as follows.
**Bayesian.** Uncertainty is quantified directly through specification of a prior probability distribution for \(\theta\), representing the data analyst's _a priori_ degrees of belief. Bayes's theorem is then used to update the prior to a data-dependent posterior distribution for \(\theta\). The posterior probability of a hypothesis \(H\) represents the analyst's degree of belief in the truthfulness of \(H\), given data, and would be essential for inference concerning \(H\). That is, the magnitudes of the posterior probabilities naturally drive the data analyst's judgments about which hypotheses are supported by the data and which are not.

**Frequentist.** Uncertainty is quantified indirectly through the use of reliable procedures that control error rates. Consider, e.g., a p-value for testing a hypothesis \(H\). What makes such a p-value meaningful is that, by construction, it tends to be not-small when \(H\) is true. Therefore, observing a small p-value gives the data analyst reason to doubt the truthfulness of \(H\): _The force with which such a conclusion is supported is logically that of the simple disjunction: Either an exceptionally rare chance has occurred, or_ [the hypothesis] _is not true_ (Fisher 1973, p. 42). The p-value _does not_ represent the "probability of \(H\)" in any sense. So, a not-small (resp. small) p-value cannot be interpreted as direct support for \(H\) (resp. \(H^{c}\)) or any sub-hypothesis thereof.

So, at least in principle, the Bayesian framework focuses on probativeness whereas the frequentist framework focuses on performance. But the line between frequentist and modern Bayesian practice is not especially clear. Even Bayesians typically assume little or no prior information, as we have assumed here, so default priors are the norm (e.g., Berger 2006; Jeffreys 1946). With an artificial or default prior, however, the "degree of belief" interpretation of the posterior probabilities is lost,

[Bayes's theorem] _does not create real probabilities from hypothetical probabilities_ (Fraser 2014, p. 249)

and, along with it, the probative nature of inferences based on them,

_...any serious mathematician would surely ask how you could use_ [Bayes's theorem] _with one premise missing by making up an ingredient and thinking that the conclusions of the_ [theorem] _were still available_ (Fraser 2011, p. 329).

The default-prior Bayes posterior probabilities could still have performance assurances _if_ they were suitably calibrated. But the _false confidence theorem_ (Balch et al. 2019) shows that this is not the case: there exist false hypotheses to which the posterior tends to assign large probabilities. In particular, let \(\mathsf{Q}_{y}\) denote a data-dependent probability distribution for \(\Theta\), e.g., default-prior Bayes, fiducial, etc. Then the false confidence theorem states that, for any \((\rho,\tau)\in(0,1)^{2}\), there exist hypotheses \(H\subseteq\mathbb{T}\) such that

\[H\not\ni\Theta\quad\text{and}\quad\mathsf{P}_{\Theta}\{\mathsf{Q}_{Y}(H)>\tau\}>\rho.\]

This implies that inferences based on the magnitudes of these probabilities--i.e., if \(\mathsf{Q}_{y}(H)\) is small, then infer \(H^{c}\)--are at risk of being "systematically misleading" (cf. Reid and Cox). This explains why modern Bayesian analysis focuses less on probabilistic reasoning based on the posterior probabilities themselves and more on the performance of procedures (tests and credible sets) derived from the posterior distribution.
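To see the false confidence phenomenon in the simplest possible setting, consider the following minimal simulation sketch (entirely ours, not taken from Balch et al. 2019; the model and hypothesis are chosen only for simplicity): for \(Y\sim\mathsf{N}(\Theta,1)\) with a flat prior, the posterior is \(\mathsf{N}(y,1)\), and the false hypothesis that \(\Theta\) lies at least \(\varepsilon\) away from its true value receives posterior probability near one for every \(y\).

```python
import numpy as np
from scipy.stats import norm

def false_confidence_demo(Theta=0.0, eps=0.01, tau=0.9, reps=100_000, seed=1):
    # Flat-prior posterior for Y ~ N(Theta, 1) is N(y, 1); the hypothesis
    # H = "theta is at least eps away from Theta" is false (it excludes Theta),
    # yet Q_y(H) = 1 - [Phi(Theta + eps - y) - Phi(Theta - eps - y)] is near 1
    # no matter what y is observed
    rng = np.random.default_rng(seed)
    y = rng.normal(Theta, 1.0, reps)
    post_H = 1 - (norm.cdf(Theta + eps - y) - norm.cdf(Theta - eps - y))
    return np.mean(post_H > tau)   # estimates P_Theta{Q_Y(H) > tau}

print(false_confidence_demo())     # ~1.0: high confidence in a false hypothesis
```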
Hence modern Bayesians and frequentists are not so different. The key take-away message is as follows. Pure frequentist methods focus on detecting incompatibility between data and hypotheses (performance), so they do not offer any guidance on how to identify hypotheses actually supported by the data (probativeness). Default-prior Bayesian methods are effectively no different, so this critique applies to them too. More specifically, the default-prior Bayes posterior probabilities lack the calibration necessary to reliably check for either incompatibility or support. Therefore, at least when prior information is vacuous, neither of the mainstream schools of thought in statistics can simultaneously achieve both the performance and probativeness objectives.

### Inferential models overview

The inferential models (IM) framework was first developed in Martin and Liu (2013, 2015) as a fresh perspective on Fisher's fiducial argument (Fisher 1935; Zabell 1992) and on the Dempster-Shafer theory of belief functions (Dempster 1968, 2008, 2014; Shafer 1976) in the context of statistical inference. It aimed to balance the Bayesians' desire for belief assignments and the frequentists' desire for error rate control. A key distinction between IMs and the familiar Bayesian and frequentist frameworks is that the output is an _imprecise probability_ or, more specifically, a _necessity-possibility measure_ pair.

_Possibility is an entirely different idea from probability, and it is sometimes, we maintain, a more efficient and powerful uncertainty variable, able to perform semantic tasks which the other cannot_ (Shackle 1961, p. 103).

Unlike in other applications of imprecise probability, the imprecision that enters into the picture here is _not_ the result of the data analyst's inability or unwillingness to precisely specify a statistical model, etc., although partially identified models (e.g., Manski 2003) could be a source of additional imprecision. Instead, it has been shown that a certain degree of imprecision is necessary for inference to be _valid_ in a specific statistically-relevant sense that we explain below. Moreover, it has also been shown that a possibility measure is the "right" imprecise probability model for quantifying this unique form of imprecision, as opposed to more general belief functions, lower previsions, etc. that are designed for modeling ignorance-driven imprecision, or Knightian uncertainty. Below is a quick summary of the IM construction and explanation of the claims just made.

The IM construction summarized here is the likelihood-driven construction, recently advanced in Martin (2022), which is based on a holistic view of the statistical inference problem, i.e., it aims to answer the question _what does the data have to say about \(\Theta\)?_ This is our preferred construction, but it is worth pointing out here that this is not the only available option. Appendix A outlines an alternative construction, following Martin (2021), that starts with a specific hypothesis testing problem to be solved; since this starts with a specific rather than general question about \(\Theta\) to answer, we consider this test-based construction "less holistic" than the likelihood-based construction mentioned above and described in more detail below. The two different constructions have their advantages and, in particular, the test-based construction helps us make connections to earlier attempts to achieve probativeness.
The likelihood-based construction here is motivated by the probability-to-possibility transform in, e.g., Dubois et al. (2004), Hose and Hanss (2021), and Hose (2022), and is driven by the likelihood function of the posited model. Let \(\theta\mapsto L_{y}(\theta)\) denote the likelihood function for \(\Theta\) based on data \(y\), and define the _relative likelihood_

\[R(y,\theta)=\frac{L_{y}(\theta)}{L_{y}(\hat{\theta}_{y})},\quad\theta\in\mathbb{T},\]

where \(\hat{\theta}_{y}\) is a maximum likelihood estimator. This relative likelihood has been used by several authors (e.g., Denoeux 2014; Shafer 1982; Wasserman 1990) to construct a plausibility function for \(\Theta\). To achieve the desired performance guarantees, however, we need to go one step further. Next, define the function

\[\pi_{y}(\theta)=\mathsf{P}_{\theta}\{R(Y,\theta)\leq R(y,\theta)\},\quad\theta\in\mathbb{T}. \tag{1}\]

This is the p-value function for a likelihood ratio test, but it is also a possibility contour, since it attains a maximum value of 1 at \(\hat{\theta}_{y}\). Then the corresponding IM for \(\Theta\) is the possibility and necessity measure pair determined by the contour in (1), i.e.,

\[\overline{\Pi}_{y}(H)=\sup_{\theta\in H}\pi_{y}(\theta)\quad\text{and}\quad\underline{\Pi}_{y}(H)=1-\overline{\Pi}_{y}(H^{c}),\quad H\subseteq\mathbb{T}. \tag{2}\]

It is easy to verify from the above definition that

\[\underline{\Pi}_{y}(H)\leq\overline{\Pi}_{y}(H),\quad\text{for all $H\subseteq\mathbb{T}$ and all $y\in\mathbb{Y}$}, \tag{3}\]

which explains the lower- and upper-bar notation and why, in some cases, these are referred to as lower and upper probabilities.

A feature of the IM's output \((\underline{\Pi}_{y},\overline{\Pi}_{y})\), or necessity-possibility measure pairs more generally, is that there are some inherent constraints on the values that \(\underline{\Pi}_{y}(H)\) and \(\overline{\Pi}_{y}(H)\) can take for a given \(H\). In particular,

\[\overline{\Pi}_{y}(H)<1\implies\underline{\Pi}_{y}(H)=0\quad\text{and}\quad\underline{\Pi}_{y}(H)>0\implies\overline{\Pi}_{y}(H)=1. \tag{4}\]

The intuition (cf. Shackle 1961) behind this is as follows: if there is any doubt about \(H\), so that \(\overline{\Pi}_{y}(H)<1\), then there cannot be even a shred of support for \(H\) and, similarly, if there is a shred of support for \(H\), so that \(\underline{\Pi}_{y}(H)>0\), then there can be no doubt that \(H\) is possible. These constraints come from the necessity-possibility pair being determined by maximizing the contour function in (2). Some might view these constraints as a real restriction and, indeed, there are some non-statistical applications in which the _consonance_ structure that necessity-possibility measures have would not be appropriate. However, our views align with those of Shafer:

_... specific items of evidence can often be treated as consonant, and there is at least one general type of evidence that seems well adapted to such treatment. This is inferential evidence--the evidence for a cause that is provided by an effect_ (Shafer 1976, p. 226).

Statistical inference problems, like those in consideration here, are of the form Shafer is referring to, so the adoption of a consonant belief structure for quantifying uncertainty is quite natural. In addition, the particular property (Theorem 1) that we need to ensure both performance and probativeness can only be satisfied if the IM has this consonant structure, i.e., if its output is a necessity-possibility measure pair.
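To make the construction in (1) concrete, here is a minimal Python sketch (ours; the normal-mean model is chosen only for simplicity) that approximates the contour by Monte Carlo. For this model the exact value is \(2\{1-\Phi(\sqrt{n}\,|\bar{y}-\theta|)\}\), which can be used to check the approximation.

```python
import numpy as np
from scipy.stats import norm

def rel_lik(ybar, theta, n):
    # relative likelihood R(y, theta) for a N(theta, 1) sample with mean ybar:
    # the MLE is ybar, so R reduces to exp(-n (ybar - theta)^2 / 2)
    return np.exp(-0.5 * n * (ybar - theta) ** 2)

def contour_mc(ybar, theta, n, reps=100_000, seed=1):
    # pi_y(theta) = P_theta{ R(Y, theta) <= R(y, theta) }, eq. (1), by Monte Carlo
    rng = np.random.default_rng(seed)
    Ybar = rng.normal(theta, 1 / np.sqrt(n), reps)  # sampling dist. of the mean
    return np.mean(rel_lik(Ybar, theta, n) <= rel_lik(ybar, theta, n))

# exact counterpart for this model, for comparison:
# 2 * (1 - norm.cdf(np.sqrt(n) * abs(ybar - theta)))
```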
Suppose interest is in some feature \(\Phi=f(\Theta)\), where \(f\) is a function defined on \(\mathbb{T}\). We can easily obtain a marginal IM for \(\Phi\) from that for \(\Theta\) using possibility calculus. Indeed, the extension principle of Zadeh (1975) gives the possibility contour function for \(\Phi\):

\[\pi_{y}^{f}(\phi)=\sup_{\theta:f(\theta)=\phi}\pi_{y}(\theta),\quad\phi\in f(\mathbb{T}). \tag{5}\]

Using this contour, just as before, we can obtain the possibility and necessity measure pair that determines the marginal IM for \(\Phi\):

\[\overline{\Pi}_{y}^{f}(K)=\sup_{\phi\in K}\pi_{y}^{f}(\phi)\quad\text{and}\quad\underline{\Pi}_{y}^{f}(K)=1-\overline{\Pi}_{y}^{f}(K^{c}),\quad K\subseteq f(\mathbb{T}).\]

This is not the only marginalization strategy. The one above is consistent with our holistic perspective on statistical inference, but the price paid for its broad flexibility is efficiency. If it were known that _only_ the feature \(\Phi=f(\Theta)\) is of interest, then a different and more efficient marginalization strategy can be carried out, one that is tailored specifically to that feature; see, e.g., Martin (2022).

A relevant question is: _how to interpret the IM output?_ Since the IM output corresponds to an imprecise probability, all the standard interpretations of imprecise probabilities can be taken, e.g., degrees of belief, bounds on prices for gambles, etc. In particular, for fixed \(y\), the IM output \((\underline{\Pi}_{y},\overline{\Pi}_{y})\) defines a coherent lower and upper probability for \(\Theta\). While IMs are compatible with the theory developed in Walley (1991), that is not the perspective we take here. We should also emphasize that there does not exist an underlying "true conditional probability distribution of \(\Theta\), given \(Y=y\)," so it would not make sense to think of these imprecise probabilities as bounds on some "true" probabilities, or to think that this "true" probability distribution is contained in the IM output's credal set. Instead, we see the IM output as facilitating what we call _possibilistic reasoning_--a sort of unidirectional version of the more familiar, bidirectional probabilistic reasoning. That is, in the latter case, both small and large probability values carry inferential weight whereas, in the former case, only small possibility values and large necessity values carry inferential weight. To conclude that data \(y\) supports a hypothesis \(H\), it is not enough that \(\overline{\Pi}_{y}(H)\) is large; we need \(\underline{\Pi}_{y}(H)\) to be large, which implies that \(\overline{\Pi}_{y}(H)\) is also large, by (3). In fact, Shafer (1976, Ch. 11) refers to \(H\mapsto\underline{\Pi}_{y}(H)\) as a _support function_, which is how we propose to use it here. This is just a mathematization of the commonsense notion that a lack of support for \(H\) does not imply support for \(H^{c}\).

A certain mathematical structure is not enough to give the IM output the aforementioned "inferential weight." Following Cournot's principle (e.g., Shafer 2007), this requires establishing that true hypotheses tend not to be assigned small possibility values; equivalently, false hypotheses tend not to be assigned large necessity values. The following basic but important result establishes a key connection between the IM and the "real world" (relative to the posited model), through the magnitudes of its possibility assignments to true hypotheses.
This will serve as the jumping-off point for both the performance- and probativeness-specific properties in the coming section.

**Theorem 1**.: _An IM for \(\Theta\) whose output takes the form of a necessity-possibility measure pair as described above, determined by a contour function \(\pi_{y}(\theta)\) as in (1), is strongly valid in the sense that_

\[\sup_{\Theta\in\mathbb{T}}\mathsf{P}_{\Theta}\{\pi_{Y}(\Theta)\leq\alpha\}\leq\alpha,\quad\alpha\in[0,1]. \tag{6}\]

If one is thinking in terms of p-values, then the result in Theorem 1 will look familiar. It also closely resembles what Walley (2002) calls the _fundamental frequentist principle_, so, despite its familiarity, this result must be important. We will discuss below, in Section 3, the arguably striking implications this has when it comes to the IM's performance and probativeness properties.

## 3 Two P's in a possibility-theoretic pod

### Performance

As discussed above, what modern statisticians value most is _performance_, i.e., that the procedures developed for the purpose of making inference-related decisions (e.g., accept or reject a hypothesis) have frequentist error rate control guarantees. These error control properties are genuinely important: if statistical methods are not even reliable, then it is difficult to imagine how they could help advance science. This explains why even the Bayesians are concerned with frequentist properties. Most of the previous IM developments have focused primarily on the performance aspect, so the results presented below are not new. For completeness, however, we give a quick summary of the available results and offer some new perspectives.

The two standard procedures found in the statistics literature are hypothesis testing and confidence set procedures. Corollary 1 below describes the corresponding procedures derived from the IM output and the error rate control guarantees they enjoy.

**Corollary 1**.: _Consider an IM for \(\Theta\) that, for \(Y=y\), returns the necessity-possibility measure pair \((\underline{\Pi}_{y},\overline{\Pi}_{y})\) determined by a possibility contour function \(\pi_{y}\) as described in Section 2.2. Then the following properties hold for all \(\alpha\in[0,1]\)._

1. _For any given_ \(H\)_, the test "reject_ \(H\) _if and only if_ \(\overline{\Pi}_{Y}(H)\leq\alpha\)_" has frequentist Type I error probability no more than_ \(\alpha\)_, i.e.,_ \[\sup_{\Theta\in H}\mathsf{P}_{\Theta}\{\overline{\Pi}_{Y}(H)\leq\alpha\}\leq\alpha.\]
2. _The set_ \(C_{\alpha}(Y)=\{\theta:\pi_{Y}(\theta)>\alpha\}\) _has frequentist coverage probability at least_ \(1-\alpha\)_, making it a_ \(100(1-\alpha)\)_% confidence set. That is,_ \[\sup_{\Theta\in\mathbb{T}}\mathsf{P}_{\Theta}\{C_{\alpha}(Y)\not\ni\Theta\}\leq\alpha.\]

Proof.: Both of these results are immediate consequences of (6). For Part (a), note that monotonicity of the possibility measure implies that \(\overline{\Pi}_{Y}(H)\geq\pi_{Y}(\Theta)\) for any \(\Theta\in H\). Therefore, combined with (6), we get

\[\mathsf{P}_{\Theta}\{\overline{\Pi}_{Y}(H)\leq\alpha\}\leq\mathsf{P}_{\Theta}\{\pi_{Y}(\Theta)\leq\alpha\}\leq\alpha. \tag{7}\]

For Part (b), observe that \(C_{\alpha}(Y)\not\ni\Theta\) if and only if \(\pi_{Y}(\Theta)\leq\alpha\). And since the latter event has probability \(\leq\alpha\) by (6), so too does the former.
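In the same toy normal-mean model as before, the two procedures in Corollary 1 are read directly off the contour; a minimal sketch (ours, with hypotheses represented by grids of \(\theta\) values):

```python
import numpy as np
from scipy.stats import norm

def pi_contour(ybar, theta, n):
    # exact contour for the N(theta, 1) model based on the mean of n observations:
    # pi_y(theta) = 2 * (1 - Phi(sqrt(n) * |ybar - theta|))
    return 2 * (1 - norm.cdf(np.sqrt(n) * np.abs(ybar - theta)))

def possibility(ybar, n, H_grid):
    # upper probability of H, with H represented by a grid of theta values
    return np.max(pi_contour(ybar, H_grid, n))

def test_reject(ybar, n, H_grid, alpha=0.05):
    # Corollary 1(a): reject H if and only if its possibility is <= alpha
    return possibility(ybar, n, H_grid) <= alpha

def confidence_set(ybar, n, grid, alpha=0.05):
    # Corollary 1(b): the level set C_alpha(y) = {theta : pi_y(theta) > alpha}
    return grid[pi_contour(ybar, grid, n) > alpha]
```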
We are not aware of Fisher ever making such a statement, but we can imagine that Fisher's disdain for the Neyman-style behavioral approach to statistical inference at least partially stemmed from the fact that the frequentist error rate control properties would be immediate consequences of the kind of calibration needed to make his "logical disjunction" argument sound. That is, if Fisher's necessary calibration (6) is satisfied, then Neyman's error rate control is a corollary. This is effectively what Corollary 1 shows and, it is in this sense that a strongly valid IM offers performance guarantees. Recall that we explained in Section 2.2 how uncertainty about a relevant feature \(\Phi=f(\Theta)\) could be quantified based solely on the IM for \(\Theta\). An immediate consequence of Corollary 1 and the possibility calculus--the extension principle specifically--is that the corresponding test and confidence set procedures for making decisions pertaining to \(\Phi\) inherit the performance guarantees that the IM for \(\Theta\) enjoys. The kind of performance properties that the IM achieves might remind some readers of confidence distributions (e.g., Nadarajah et al. 2015; Schweder and Hjort 2002; Xie and Singh 2013). For a scalar parameter \(\Theta\), a confidence distribution is a data-dependent cumulative distribution function \(\theta\mapsto G_{y}(\theta)\) such that \[\sup_{\Theta\in\mathbb{T}}\mathsf{P}_{\Theta}\{G_{Y}(\Theta)\leq\alpha\} \leq\alpha,\quad\alpha\in[0,1].\] From here, one can construct hypothesis tests and confidence sets similar to how we did with the IM output above in Corollary 1; see the above references for details. However, the testing error rate control can only be achieved for hypotheses \(H\) that take the form of half-lines, e.g., \((-\infty,\theta]\) or \([\theta,\infty)\). For other kinds of hypotheses, e.g., bounded intervals, the frequentist error rates might not be controlled. Corollary 1 shows that the IM controls error rates for any hypotheses \(H\), and not just for scalar \(\Theta\). For a certain class of models, Fisher's fiducial distribution and the default-prior Bayes posterior distribution are confidence distributions, and Martin (2023) characterizes these as members of the IM output's credal set. This characterization explains why a confidence distribution's probability assignments are calibrated only in the tails, i.e., for half-line hypotheses. ### Probativeness The current literature on IMs has focused largely on the performance-related questions as in the previous subsection. This is understandable given that performance is the top priority for modern statisticians and that other performance-related features (e.g., Theorem 1) are crucial to Fisher's brand of inductive inference. But we claim that the IMs described above have even more to offer, so the goal of this section is to unearth those previously underappreciated features of the IM framework. That the IM output offers more than what has been discussed in the extant literature is obvious: the focus has been on the performance of derived statistical methods, which only involves certain features, such as the contour function, its level sets, and the possibility measure evaluated at pre-determined hypotheses. This is just a small fraction of what a full-blown imprecise probability distribution--or even a necessity-possibility measure pair--can do. 
What we are particularly interested in here is the use of the IM output to _probe_, that is, to naturally proceed with the analysis, to dig deeper, after the first (often trivial) question has been answered. On the performance side, one thinks of the test in Corollary 1(a) as a one-and-done prospect: if \(H\) is rejected, then infer \(H^{c}\) and pack up to go home. In reality, such a test is just the first step in the analysis, so we ought to consider the follow-up questions and analyses too. This is especially true in the IM case because there is an opportunity to tap into those aspects of the necessity-possibility measure pair that are currently being ignored.

Consider the _common_ situation where the data \(y\) is incompatible with the hypothesis \(H\) in the sense that \(\overline{\Pi}_{y}(H)\) is small; the other case of probing, when \(\overline{\Pi}_{y}(H)\) is not small, is more challenging and will be discussed in detail in Section 4. We emphasized "common" because the initial \(H\), or _null hypothesis_, is often an overly simplistic scientific default that isn't expected to be true--otherwise, the resources needed to collect the data \(y\) probably would not have been invested. If \(\overline{\Pi}_{y}(H)\) is small, then we know by the discussion in Section 2.2 that \(\overline{\Pi}_{y}(H^{c})=1\) and, consequently, there is ample room for certain sub-hypotheses in \(H^{c}\) to have non-trivial necessity values, i.e., \(\underline{\Pi}_{y}(A)>0\) for some \(A\subseteq H^{c}\). We follow Shafer and interpret \(\underline{\Pi}_{y}\) as a measure of support, so this probing exercise is about finding sub-hypotheses whose truthfulness is directly supported by data \(y\). To fix ideas, consider the case where \(\overline{\Pi}_{y}(H)\) is small and \(A\) is a fixed sub-hypothesis contained in \(H^{c}\). There are roughly two cases worth exploring:

* if \(\underline{\Pi}_{y}(A)\) is large, then we can conclude that \(A\) is supported by data \(y\), and
* if \(\underline{\Pi}_{y}(A)\) is small, then data \(y\) is mostly uninformative about \(A\) and no conclusion about \(A\) is warranted; that is, both \(A\) and \(A^{c}\) are compatible with \(y\) or, equivalently, the "don't know probability" (Dempster 2008), \(\overline{\Pi}_{y}(A)-\underline{\Pi}_{y}(A)\), is large.

This process can be repeated, in principle, for all sub-hypotheses \(A\) of \(H^{c}\). The data analyst will find some \(A\)s that are supported by data and others about which the data are mostly uninformative. Stitching all of these \(A\)-specific analyses together creates a complete IM tapestry that details what the data can reliably say about \(\Theta\).

Mayo (2018, p. 13 and elsewhere) argues at a high level that "probabilism" does not imply probativeness. But the shortcoming of probability as a tool for probing also becomes clear here in the mathematical details. Take, for example, a confidence distribution as discussed briefly above. Since the probabilities assigned by the confidence distribution to \(H\) and \(H^{c}\), respectively, must sum to 1, we find that a lack of support for one implies support for the other, which we know is logically incorrect--this is exactly why the probing task is challenging. Therefore, imprecision seems necessary to achieve the probing goal, and below we will argue why the IM framework suggested here is the appropriate formulation.
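A minimal sketch (ours) of this probing recipe in the normal-mean toy model, with numbers that anticipate the example of Section 5: the necessity of a candidate \(A=(\theta,\infty)\) is one minus the supremum of the contour over \(A^{c}\).

```python
import numpy as np
from scipy.stats import norm

def pi_contour(ybar, theta):
    # likelihood-based contour for ybar ~ N(theta, 1): 2 * (1 - Phi(|ybar - theta|))
    return 2 * (1 - norm.cdf(np.abs(ybar - theta)))

def necessity_gt(ybar, theta, m=20_001):
    # support for A = (theta, inf): 1 - sup of the contour over A^c = (-inf, theta];
    # the sup is approximated on a wide grid ending at theta
    Ac = np.linspace(theta - 50, theta, m)
    return 1 - np.max(pi_contour(ybar, Ac))

ybar = 152.0                        # hypothetical observed mean, cf. Section 5
print(necessity_gt(ybar, 151.5))    # ~0.38: modest support for "Theta > 151.5"
print(necessity_gt(ybar, 153.0))    # ~0: data are uninformative about "Theta > 153"
```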
Although Mayo does not specifically mention imprecise probabilities,2 we will show below that the measure she proposes is, in fact, non-additive and agrees with our proposed IM solution in the contexts where she described it; see Section 4.

That the IM framework facilitates probing as described above does not directly imply that the probing process we described is _reliable_. We do get some comfort from (7), i.e., the possibilities assigned to true hypotheses do not tend to be small. Thanks to the duality \(\underline{\Pi}_{y}(H)=1-\overline{\Pi}_{y}(H^{c})\) between the two measures, this also implies

\[\sup_{\Theta\not\in H}\mathsf{P}_{\Theta}\{\underline{\Pi}_{Y}(H)\geq 1-\alpha\}\leq\alpha,\quad\text{all $\alpha\in[0,1]$, all $H\subseteq\mathbb{T}$}. \tag{8}\]

That is, the necessities (or support) assigned to false hypotheses do not tend to be large. It is problematic, however, that probing is dynamic and there is no way to predict what kind of follow-up questions the data analyst might want to ask. In fact, those questions might be determined by the data itself, so the aforementioned comfort--derived from hypothesis-wise error rate control--is not all that comforting. Mayo and Cox (2006, Sec. 4.2) discuss these and related issues concerning selection. Fortunately, there are stronger consequences of Theorem 1 that are not captured by (7) and have not been elucidated in the previous works on IMs and their performance properties.

**Corollary 2**.: _An IM for \(\Theta\) whose output \((\underline{\Pi}_{y},\overline{\Pi}_{y})\) is determined by a possibility contour \(\pi_{y}\) via (2) has the following uniform validity property:_

\[\sup_{\Theta\in\mathbb{T}}\mathsf{P}_{\Theta}\{\overline{\Pi}_{Y}(H)\leq\alpha\text{ for some true $H$, i.e., $H\ni\Theta$}\}\leq\alpha,\quad\alpha\in[0,1]. \tag{9}\]

_Equivalently, in terms of necessity/support:_

\[\sup_{\Theta\in\mathbb{T}}\mathsf{P}_{\Theta}\{\underline{\Pi}_{Y}(H)\geq 1-\alpha\text{ for some false $H$, i.e., $H\not\ni\Theta$}\}\leq\alpha,\quad\alpha\in[0,1].\]

The proof is almost immediate so we get it out of the way now. But the reader might want to first skip ahead to the discussion below to better understand the result.

Proof of Corollary 2.: The claim (9) follows from (6) and the fact that there exists an \(H\) such that \(H\ni\Theta\) and \(\overline{\Pi}_{Y}(H):=\sup_{\theta\in H}\pi_{Y}(\theta)\leq\alpha\) if and only if \(\pi_{Y}(\Theta)\leq\alpha\).

The "for some true \(H\)" statement in (9) is potentially confusing so here is a more detailed explanation. The reader is surely accustomed to interpreting "for some" in terms of a union operation, and that is precisely the interpretation we have in mind here. That is, the event in (9) can be written as

\[\bigcup_{H\subseteq\mathbb{T}:H\ni\Theta}\{\overline{\Pi}_{Y}(H)\leq\alpha\}.\]

This is clearly a much larger event than \(\{\overline{\Pi}_{Y}(H)\leq\alpha\}\) for a fixed \(H\) and, therefore, the probability bound in (9) is significantly stronger than the analogous bound in (7). Aside from being mathematically stronger, there are key practical implications of this. The result implies that no matter how the data analyst proceeds with his/her probing, the probability that even one of the IM's suggestions is misleading--small \(\overline{\Pi}_{Y}\) to a true hypothesis or large \(\underline{\Pi}_{Y}\) to a false hypothesis--is controlled at the specified level.
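The uniformity can also be checked empirically: per the proof, the event that some true hypothesis receives possibility \(\leq\alpha\) is exactly \(\{\pi_{Y}(\Theta)\leq\alpha\}\), so it suffices to simulate the contour at the truth. A minimal sketch (ours, in the normal-mean toy model again):

```python
import numpy as np
from scipy.stats import norm

def uniform_validity_check(Theta=150.0, n=100, alpha=0.05, reps=100_000, seed=1):
    # By the proof of Corollary 2, "some true H has possibility <= alpha" occurs
    # iff pi_Y(Theta) <= alpha; its probability should be <= alpha
    rng = np.random.default_rng(seed)
    ybar = rng.normal(Theta, 1 / np.sqrt(n), reps)
    pi_at_truth = 2 * (1 - norm.cdf(np.sqrt(n) * np.abs(ybar - Theta)))
    return np.mean(pi_at_truth <= alpha)

print(uniform_validity_check())   # ~0.05, matching the bound in (9)
```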
To put the result more in line with the explanation of probing given above, consider the special case of a fixed \(H\) and then a collection of sub-hypotheses \(\{A_{r}:r\geq 1\}\) of \(H^{c}\). So, if \(\Theta\in H\), then \(\Theta\in A_{r}^{c}\) for \(r\geq 1\). Then Corollary 2 implies that

\[\sup_{\Theta\in H}\mathsf{P}_{\Theta}\{\overline{\Pi}_{Y}(H)\leq\alpha\text{ or }\underline{\Pi}_{Y}(A_{1})\geq 1-\alpha\text{ or }\underline{\Pi}_{Y}(A_{2})\geq 1-\alpha\text{ or}...\}\leq\alpha,\]

i.e., the probability that the IM points the data analyst in the wrong direction concerning even one of these hypotheses is no more than \(\alpha\). Moreover, that union perspective reveals that the result remains true even if the hypotheses chosen in the probing step happen to be data-dependent in some way that would be too complicated for those of us here on the data analysis sidelines to specify in advance. So, the uniformity baked into the result of Corollary 2 is exactly what is needed to ensure that the commonsense probing that the IM framework suggests can indeed be carried out reliably.

## 4 Comparison with Mayo's severity

### Background

In Section 1, we mentioned recent efforts by statisticians to supplement the standard significance tests, etc. with measures designed to _probe_ for hypotheses that are supported by the data. In particular, what Mayo (2018) refers to as _severe testing_ aims to capture this notion of probing. We do not assume that the reader is familiar with Mayo's work, so here we give a relatively brief introduction. Modulo some minor changes in notation and terminology, here is how Mayo (2018, p. 23) explains her notion of severity:

_Severity (weak):_ If data \(y\) agree with a claim \(H\) but the method was practically incapable of finding flaws with \(H\) even if they exist, then \(y\) is poor evidence of \(H\).

_Severity (strong):_ If \(H\) passes a test that was highly capable of finding flaws or discrepancies from \(H\), and yet none or a few are found, then the passing result, \(y\), is an indication of, or evidence for, \(H\).

At least conceptually, we think most readers would find these basic severity principles uncontroversial. The idea goes back to Popper's falsificationist program which says science progresses by subjecting the status quo to severe tests, tests that are capable of detecting departures from the status quo. Hypotheses that are able to withstand a series of severe tests have "proved their mettle" (Popper 1959, p. 10). The challenge in the context of statistical hypothesis testing is that Popper's very strict standard for falsification cannot be met based on a (necessarily limited) set of empirical data \(y\). This is where the work of Fisher, Neyman-Pearson, Mayo-Cox, and others comes in.

As we attempt to dig deeper, to get beyond just a high-level conceptual understanding of these ideas in hopes of putting them into practice, things become less clear. Mayo (2018, p. 148-150) presents her two-part _Principle of Frequentist Evidence_ (FEV), whose starting point is a particular null hypothesis \(H_{0}\) and a given procedure for testing that hypothesis based on data \(y\). We present here an inconsequentially modified version of FEV from Mayo and Cox (2006, p.
82-84):

_FEV 1._ \(y\) is (strong) evidence against \(H_{0}\), i.e., (strong) evidence of a discrepancy from \(H_{0}\), if and only if, were \(H_{0}\) a correct description of the mechanism generating \(y\), then, with (very) high probability, this would have resulted in a less discordant result than is exemplified in \(y\).

_FEV 2._ A moderate p-value is evidence of the absence of a discrepancy \(\delta\) from \(H_{0}\), only if there is high probability the test would have given a worse fit with \(H_{0}\) (i.e., smaller p-value) were a discrepancy \(\delta\) to exist.

As Mayo (2018, p. 149) admits, "this sounds wordy and complicated." The complication, we claim, stems from trying to justify fixed-data conclusions based on frequentist-style performance probabilities. On the one hand, FEV1 is familiar even if it is not easy to follow: this is just a different way to say that a small p-value is interpreted as evidence in \(y\) against \(H_{0}\). In this case, the data analyst is in a situation like that described in Section 3.2 and the probing question concerns support for subsets of \(H_{0}^{c}\). On the other hand, FEV2 goes in an unfamiliar--but important, challenging, and exciting--direction, towards something far beyond what one finds in the cookbook-style NHST literature. For both parts of FEV, something more than just the p-value associated with the pair \((y,H_{0})\) is needed, something that can probe for genuine support. What this "something more" might look like is discussed below.

Note that FEV1 and FEV2 are "if and only if" and "only if" claims, respectively. The point is that effectively the only way we can interpret incompatibility or discordance between \(y\) and \(H_{0}\) is as evidence against \(H_{0}\), but compatibility alone between \(y\) and \(H_{0}\) need not be evidence supporting \(H_{0}\). There are many reasons why a p-value might be large, reasons that do not indicate genuine support in the data for the hypothesis. This aligns with our aforementioned commonsense understanding, which draws a first connection between what Mayo aims to achieve and what our proposed IM offers.

After a preview in Chapter 3, Chapter 5 of Mayo (2018) lays out some details of her proposal for what the aforementioned "something more" ought to look like. She explains that the origins of her idea are in the early works on power analysis, in particular, Neyman (1955) and insights sprinkled throughout Cox (2006). Just as p-values can be interpreted as an _attained significance level_--we used this interpretation in the procedure-driven IM construction, (13) in Appendix A.1--one can consider a corresponding _attained power_. Mayo's presentation focuses solely on simple, albeit important/common testing scenarios, so we will do the same here, both for concreteness and to avoid potentially misrepresenting Mayo's proposal.

Suppose \(\Theta\) is a scalar parameter, and the null hypothesis is \(H_{0}:\Theta\leq\theta_{0}\), for some fixed \(\theta_{0}\) value. The starting point is a particular test of this hypothesis, and suppose that this test rejects \(H_{0}\) if and only if \(S(Y,\theta_{0})\) is large, where \(S\) is a given test statistic that (explicitly) depends on the null value \(\theta_{0}\).
More specifically, a test that controls the Type I error at a specified significance level \(\alpha\in(0,1)\) rejects \(H_{0}\) if and only if \(S(Y,\theta_{0})\geq s_{\alpha}\), where the critical value \(s_{\alpha}\) is defined to satisfy the condition

\[\sup_{\theta\leq\theta_{0}}\mathsf{P}_{\theta}\{S(Y,\theta_{0})\geq s_{\alpha}\}=\alpha.\]

It will often be the case that the probability in the above display is increasing in \(\theta\), which implies the supremum is attained at the boundary \(\theta_{0}\). We will assume that this is the case here, so \(s_{\alpha}\) satisfies \(\mathsf{P}_{\theta_{0}}\{S(Y,\theta_{0})\geq s_{\alpha}\}=\alpha\). Note that the Type I error probability is a property of the test procedure and, therefore, does not depend on the observed data \(Y=y\). The p-value, or attained significance level, just replaces the fixed critical value \(s_{\alpha}\) with the value of the test statistic computed based on the observed data \(y\):

\[\mathsf{pval}_{y}(\theta_{0};\theta_{0})=\mathsf{P}_{\theta_{0}}\{S(Y,\theta_{0})\geq S(y,\theta_{0})\}.\]

The first argument to this function is the most important one: it states which probability is used to carry out the calculation, which is the subscript on \(\mathsf{P}_{\theta_{0}}\); the second is just to remind the reader that the hypothesis being tested depends on the fixed \(\theta_{0}\).

Next, the power of the test is defined as the probability of rejecting \(H_{0}\) when it is actually false, i.e., where the probability is with respect to \(\mathsf{P}_{\theta}\) for some \(\theta>\theta_{0}\):

\[\mathsf{pow}(\theta;\theta_{0})=\mathsf{P}_{\theta}\{S(Y,\theta_{0})\geq s_{\alpha}\},\quad\theta>\theta_{0}.\]

Like above, note that the power is a feature of the test procedure itself and, therefore, does not depend on the observed data \(Y=y\). The _attained power_ is defined by replacing the fixed \(s_{\alpha}\) by the value of the test statistic computed based on \(y\). This is closely related to the p-value above so we do not introduce a new notation (yet):

\[\mathsf{pval}_{y}(\theta;\theta_{0})=\mathsf{P}_{\theta}\{S(Y,\theta_{0})\geq S(y,\theta_{0})\},\quad\theta>\theta_{0}. \tag{10}\]

The attained power can hardly be considered a "new" mathematical or statistical object, so its utility is determined by what it can be successfully used for. Towards this, we consider the two cases determined by FEV below.

**Case 1:**: _Small p-value, probing for genuine support in (subsets of) the alternative_. If the p-value is small, then we might be inclined to reject the null. But does the data genuinely support any subsets of the alternative? To answer this question, we would like to probe by considering various subsets of the alternative. Mayo's claim is that a severe test can offer up support for subsets of the alternative in this case. To assess these subsets, Mayo proposes a measure that she calls _severity_ which, as we describe below, is a \(y\)-dependent function that maps subsets of \(H_{0}^{c}\) to values in \([0,1]\). In particular, for the context described above, Mayo defines

\[\mathsf{sev}_{y}(\{\Theta>\theta\})=1-\mathsf{pval}_{y}(\theta;\theta_{0}),\quad\theta>\theta_{0}.\]

Large values of \(\mathsf{sev}_{y}(H)\) are to be interpreted as stronger support in \(y\) for the hypothesis \(H\). Since the right-hand side of the above display is a decreasing function of \(\theta\), we get the intuitive property that bolder claims, i.e., larger values of \(\theta\), are given less support from the data.
**Case 2:**: _Not-small p-value, probing for potential support in (supersets of) the null_. If the p-value is not small, then we would be inclined to _tentatively_ accept the null. But despite the null or non-significant conclusion, there is still information available in the data about \(\Theta\)--"no evidence of risk is not evidence of no risk" (Mayo 2018, p. 3). In particular, there are hypotheses more inclusive of the null, i.e., \(H\) that are implied by \(H_{0}\), with which the data might be highly compatible. This high degree of compatibility, Mayo argues, is an indication of support and is therefore useful information. We admit, as does Mayo, that teasing out "support" just from compatibility is shakier logical ground, but we must understand the boundaries of what is possible so that we can push our statistical inference tools to their limits. Towards this, Mayo defines her severity in this case as a function that maps supersets of \(H_{0}\) to numbers in \([0,1]\) as follows:

\[\mathsf{sev}_{y}(\{\Theta\leq\theta\})=\mathsf{pval}_{y}(\theta;\theta_{0}),\quad\theta>\theta_{0}.\]

Again, large values of \(\mathsf{sev}_{y}(H)\) are to be interpreted as stronger potential support in \(y\) for \(H\supseteq H_{0}\), i.e., a stronger _indication_ for \(H\). The right-hand side above is increasing in \(\theta\), so less-bold claims, corresponding to larger values of \(\theta\), are more strongly indicated/potentially supported by the data.

To summarize, Mayo's severity measure is derived from the p-value function determined by the underlying test. One can plot the severity function to visualize the details explained above (see Section 5 below), and Mayo refers to these as _severity curves_. The close connection between Mayo's severity and p-values suggests a similarly close connection between severity and our proposed IM construction, as we explain below.

### IMs versus severity: special case

Consider the same testing setup as in the discussion above. For the special class of testing problems involving half-line null and alternative hypotheses like Mayo exclusively considers, it can be shown (see Appendix A.3) that the aforementioned test-based IM construction determines the following possibility contour for \(\Theta\):

\[\pi_{y}(\theta)=\mathsf{pval}_{y}(\theta;\theta_{0}), \tag{11}\]

the p-value function in (10). Let \(\underline{\Pi}_{y}\) and \(\overline{\Pi}_{y}\) denote the corresponding necessity and possibility measures defined in (2) and reconsider the two cases examined above.

**Case 1:**: _Small p-value, probing for genuine support in (subsets of) the alternative_. By monotonicity of the p-value, Mayo's severity measure reduces to

\[\mathsf{sev}_{y}(\{\Theta>\theta\})=\underline{\Pi}_{y}(\{\Theta>\theta\}),\quad\theta>\theta_{0},\]

which is just the IM's support for the hypothesis "\(\Theta>\theta\)" based on data \(y\). That this is decreasing in \(\theta\), which controls the boldness of the claim, follows immediately from the general monotonicity property of necessity measures.

**Case 2:**: _Not-small p-value, probing for potential support in (supersets of) the null_. Mayo's severity measure in this case reduces to

\[\mathsf{sev}_{y}(\{\Theta\leq\theta\})=\overline{\Pi}_{y}(\{\Theta\leq\theta\})=1-\underline{\Pi}_{y}(\{\Theta>\theta\}),\quad\theta>\theta_{0}.\]

We offered two expressions above because there are two equivalent interpretations of Mayo's severity in terms of the IM output.
First, severity is measuring the degree of possibility assigned to hypotheses \(H\) that are implied by the compatible-with-\(y\) null \(H_{0}\), and can be treated as an indication or measure of potential support for \(H\) in \(y\). Second, severity is measuring the lack of support for \(H^{c}\), with the interpretation as a double-negative: a lack of support for \(H^{c}\) is an indication of \(H\).

The connection between Mayo's severe testing and our IM framework is undeniable. For the testing problems Mayo considers, if the IM construction is based on the same test, then both it and Mayo's severity function are determined by the underlying test's p-value function. This new connection between the two theories is of practical and foundational importance, for several reasons, including the following.

* Mayo's objectives are clear, but we know from our own experience that it is easy to get lost on the long winding road from an objective to an actual implementation that achieves the objective. The obstacle is that the frequentist-style reasoning is awkward to apply for single-case use. If Mayo's logic could somehow be cast in terms of (something like) a probability, then the road would be much clearer. As we know, performance and probativeness cannot be simultaneously achieved by a probability, so a more robust uncertainty quantification framework is needed, which is exactly what IMs offer. Instead of justifying severity indirectly through the performance of the test, one can speak directly in terms of (data-dependent) necessity/possibility, confidence/plausibility, or lower/upper probability values assigned to hypotheses, keeping the necessary but potentially confusing frequentist aspects of the logic in the theory behind the scenes (in Theorem 1).
* There is appeal in a framework for statistical inference being grounded in a rigorously examined theory of uncertainty quantification. IMs fall under the umbrella of possibility theory, which has a very long history. At least on the surface, severity appears to be built on top of the already not-so-well-understood concept of p-values. But the connection to IMs established here shows that Mayo's framework is also possibility-theoretic, so it too enjoys all the relevant properties.
* In light of this connection to IMs, Mayo's proposal inherits some very strong theoretical support as established here in Corollaries 1 and 2. It should also be of general interest to the imprecise probability community that certain kinds of imprecision are necessary in order to achieve the performance and probativeness properties that statisticians and scientists want and need.

We should remind the reader that the direct connection just made between the IM formulation and Mayo's approach applies only for the special class of testing problems Mayo considers in her text. It is possible that the connection can be made more broadly, but--to our knowledge--she has not offered a general description of her severity-based analysis, so we do not want to speculate on how she would address, e.g., a two-sided testing problem like \(H_{0}:\Theta=\theta_{0}\) versus \(H_{1}:\Theta\neq\theta_{0}\). See Section 5.1 for more details. Fortunately, the more holistic likelihood-based IM construction has no such restrictions, and provides reliable probing regardless of whether or how closely it lines up with Mayo's proposal. Several illustrations of this are provided in Section 5 below.

## 5 Illustrations

### Normal mean

Mayo (2018, p.
142) describes a hypothetical water plant where the water it discharges is intended to be roughly 150 degrees Fahrenheit. More specifically, water temperature measurements are assumed to be normally distributed with mean \(\Theta\) degrees and standard deviation 10 degrees and, under ideal conditions, \(\Theta\) is no more than 150 degrees. To test the water plant's settings, a sample \(Y=(Y_{1},\ldots,Y_{n})\) of \(n=100\) water temperature measurements is taken. Then the sampling distribution of the sample mean, \(\bar{Y}\), is \(\mathsf{N}(\Theta,1)\). Since water temperatures higher than 150 degrees might harm the ecosystem, of primary interest is testing the null hypothesis \(H_{0}:\Theta\leq\theta_{0}\), where \(\theta_{0}=150\), versus the alternative \(H_{1}:\Theta>\theta_{0}\). After this primary question is addressed, we have the option to probe other hypotheses of the form \((-\infty,\theta]\) or \((\theta,\infty)\), for \(\theta\) near 150.

A most powerful test is available in this example, and it rejects \(H_{0}:\Theta\leq\theta_{0}\) when \(\bar{Y}-\theta_{0}\) is large. Then it follows easily that the p-value function is
\[\mathsf{pval}_{y}(\theta;\theta_{0})=1-\mathsf{pnorm}(\bar{y}-\theta),\quad\theta\in\mathbb{R},\]
where \(\mathsf{pnorm}\) denotes the standard normal distribution function. Since this does not depend on \(\theta_{0}\), we drop the second argument from the notation. As discussed above, this p-value function determines both the IM and Mayo's severity.

Suppose we observe \(\bar{y}=152\), which is potentially incompatible with the null hypothesis \(H_{0}:\Theta\leq 150\). Indeed, a plot of \(\theta\mapsto\mathsf{pval}_{y}(\theta)=\overline{\Pi}_{y}((-\infty,\theta])\) is shown in Figure 1(a) and we see that, at \(\theta=\theta_{0}=150\), the possibility is smaller than 0.05, so we would be inclined to reject the null hypothesis. To probe for support of subsets of the alternative hypothesis, we also plot the severity/necessity
\[\underline{\Pi}_{y}\big{(}(\theta,\infty)\big{)}=\mathsf{pnorm}(\bar{y}-\theta),\quad\theta\in\mathbb{R},\]
and we see that there is, in fact, non-negligible support in the data for, say, the hypothesis \((151,\infty)\). These results agree exactly with the severity-based analysis presented in Mayo (2018). The claim is that, for those \(\theta\) whose value on the red curve in Figure 1(a) is relatively large, e.g., values near \(\theta=151\) and perhaps up to \(\theta=152\), the hypothesis \((\theta,\infty)\) garners non-negligible support from the data.

Next, consider the case where \(\bar{y}=151\), which is too small to have grounds for rejecting the null hypothesis. In such cases, the goal would be to probe for potential support for hypotheses implied by the null. Figure 1(b) shows a plot of \(\theta\mapsto\mathsf{pval}_{y}(\theta)=\overline{\Pi}_{y}((-\infty,\theta])\), similar to that in Panel (a). The claim is that, for those \(\theta\) whose value on the curve in Figure 1(b) is relatively large, e.g., values near \(\theta=151\) and perhaps up to \(\theta=152\), the hypothesis \((-\infty,\theta]\) garners non-negligible potential support or indication from the data.
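The computations behind these two curves are elementary; the following is a minimal sketch (ours, not from the paper) in Python with numpy and scipy, where the function names `pval` and `severity` are simply our labels for the displayed formulas.

```python
import numpy as np
from scipy.stats import norm

# sampling standard deviation of ybar is 10 / sqrt(100) = 1 in this example
def pval(theta, ybar):
    """Possibility of (-inf, theta]: the p-value for testing Theta <= theta."""
    return 1 - norm.cdf(ybar - theta)

def severity(theta, ybar):
    """Necessity of (theta, inf): support for the claim 'Theta > theta'."""
    return norm.cdf(ybar - theta)

ybar = 152.0
print(pval(150.0, ybar))      # ~0.023, so we lean towards rejecting the null
print(severity(151.0, ybar))  # ~0.841, non-negligible support for Theta > 151
theta_grid = np.linspace(148, 156, 401)  # a grid for drawing curves like Figure 1
```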
Next, we take a step back from the test-based focus to approach the problem with a more holistic perspective. This is intended to highlight the differences between the general IM framework and Mayo's severe testing. The likelihood-based IM has possibility contour
\[\pi_{y}(\theta)=1-|2\,\mathsf{pnorm}(|\bar{y}-\theta|)-1|,\quad\theta\in\mathbb{R},\]
and a plot of this function, based on \(\bar{y}=152\), is shown in Figure 2(a). Note that the maximum value 1 is attained at \(\bar{y}\), the maximum likelihood estimator. Furthermore, intervals that contain \(\bar{y}=152\) will have possibility 1 and necessity \(>0\), and intervals bounded away from \(\bar{y}=152\) will have necessity 0 and possibility \(>0\). In particular,
\[\underline{\Pi}_{y}(\Theta>151.5)=0.383,\qquad\overline{\Pi}_{y}(\Theta>151.5)=1,\]
\[\underline{\Pi}_{y}(\Theta\leq 151.5)=0,\qquad\overline{\Pi}_{y}(\Theta\leq 151.5)=0.617,\]
where the first value follows from the duality \(\underline{\Pi}_{y}(A)=1-\overline{\Pi}_{y}(A^{c})\) in (2).

Since the likelihood-based IM analysis above coincided with a test-based IM analysis when the hypotheses being tested are singletons, it might make sense to compare the results to Mayo's severe testing of the same singleton hypotheses. Based on \(\bar{y}=152\), we would be inclined to reject the point null \(H_{0}:\Theta=150\) based on the test statistic \(S(y,\theta_{0})=|\bar{y}-\theta_{0}|\); recall that the standard deviation of \(\bar{Y}\) is 1 in this example. Then we are in "Case 1" as above--and, since \(\bar{y}\gg\theta_{0}\), we want to probe to the right of \(\theta_{0}\)--so the severity function is
\[\mathsf{sev}_{y}(\{\Theta>\theta\})=1-\mathsf{P}_{\theta}\{S(Y,\theta_{0})\geq S(y,\theta_{0})\}=\mathsf{pnorm}(\theta_{0}-\theta+|\bar{y}-\theta_{0}|)-\mathsf{pnorm}(\theta_{0}-\theta-|\bar{y}-\theta_{0}|).\]
A plot of this severity function is shown in Figure 2(b) along with the corresponding IM necessity function \(\theta\mapsto\underline{\Pi}_{y}(\{\Theta>\theta\})\). Even though, at the end of the day, both of these are IMs, and so both enjoy all the properties exposed in Section 3, they cannot be compared directly. This is because the former depends on a specified hypothesis/test combination, while the latter does not. But it is still worth seeing the differences between them. In particular, note that the IM is more conservative in how it allocates its support, e.g., the IM does not support "\(\Theta>153\)" at all, while Mayo's severity does. This is because the IM must be prepared for many other potential questions the data analyst might ask--not just those that Mayo's severity can probe for--so this conservatism is needed to achieve the strong/uniform error rate control in Corollary 2.

Figure 1: Results for the normal mean example. Panel (a) shows a case where the null would be rejected and we probe for support in the alternative; black line is the possibility/p-value and the red line is the complementary necessity/severity for probing subsets of the alternative, with the red dots corresponding to the severity values in Table 3.1 of Mayo (2018). Panel (b) shows a case where the null is not rejected, and the curve is the possibility/severity function probing for potential support for supersets of the null; dots are the severity values in Table 3.3 of Mayo (2018).

### Binomial proportion

Suppose an individual claims to possess psychic abilities. To test the validity of his claim, we set up the following experiment. From a collection of five fixed symbols, a computer will generate one of these at random, and the claimed psychic will be asked to guess which of the five symbols the computer generated. This will be repeated \(n=20\) times and the result is a number \(Y\) of correct guesses. Then \(Y\) has a binomial distribution with parameters \(n=20\) and \(\Theta\in[0,1]\) unknown.
Of course, given \(Y=y\) the likelihood function for \(\Theta\) is \(L_{y}(\theta)\propto\theta^{y}(1-\theta)^{n-y}\), for \(\theta\in[0,1]\). As a first step, we can carry out the likelihood-based construction in Section 2.2 to get an IM for \(\Theta\). Figure 3(a) shows a plot of the possibility contour function based on an observed \(y=8\) correct guesses out of \(n=20\) trials. The level set determined by the horizontal line at \(\alpha=0.05\) determines a \(95\%\) confidence interval for \(\Theta\). To test the psychic's claim, the null hypothesis is \(H_{0}:\Theta\leq\theta_{0}\), with \(\theta_{0}=0.2\). We can see from the possibility contour plot that \(\overline{\Pi}_{y}(H_{0})<0.05\), so we are inclined to reject the null. But is there any support in the data for the psychic's claim? For this, we probe hypotheses "\(\Theta>\theta\)" for \(\theta>\theta_{0}\). Figure 3(b) shows a plot of the necessity \(\theta\mapsto\underline{\Pi}_{y}((\theta,1])\), which is akin to Mayo's severity although we are working in a more general context where we have not specified a test upon which the construction is based. In this case, we find that hypotheses consisting of bold claims like "\(\Theta>\theta\)" for \(\theta\) near \(0.4\) or even \(0.5\) are well supported by the data.
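Because \(Y\) has finite support in this example, the likelihood-based contour can be evaluated exactly rather than by Monte Carlo. Here is a minimal sketch (ours; the function names are not from the paper) that enumerates the \(n+1\) possible outcomes:

```python
import numpy as np
from scipy.stats import binom

n, y_obs = 20, 8

def log_rel_lik(y, theta):
    """log R(y, theta) = log L_y(theta) - log L_y(y/n); the binomial
    coefficient cancels, so pmf differences are safe to use here."""
    theta_hat = np.clip(y / n, 1e-12, 1 - 1e-12)
    return binom.logpmf(y, n, theta) - binom.logpmf(y, n, theta_hat)

def contour(theta):
    """pi_y(theta) = P_theta{ R(Y, theta) <= R(y_obs, theta) }, exactly."""
    ys = np.arange(n + 1)
    keep = log_rel_lik(ys, theta) <= log_rel_lik(y_obs, theta) + 1e-12
    return binom.pmf(ys, n, theta)[keep].sum()

print(contour(0.2))  # small: the boundary of "Theta <= 0.2" has low plausibility
print(contour(0.4))  # equals 1 at the maximum likelihood estimate 8/20
```

Maximizing this contour over \(\theta\leq 0.2\) then gives \(\overline{\Pi}_{y}(H_{0})\) for the null above.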
Figure 3: IM results for the binomial example in Section 5.2.

### Bivariate normal correlation

Suppose that \(Y\) consists of \(n\) independent and identically distributed pairs \(Y_{i}=(Y_{1,i},Y_{2,i})\) having a bivariate normal distribution with zero means, unit variances, and correlation \(\Theta\in[-1,1]\). If there were a hypothesis \(H_{0}:\Theta\leq\theta_{0}\), then an asymptotic pivot based on the maximum likelihood estimator, \(\hat{\theta}\), could be constructed and the corresponding Wald test would look very similar to the classical z-test used in Section 5.1 above. This bivariate normal correlation problem, however, corresponds to a so-called _curved exponential family_, so \(\hat{\theta}\) is not a sufficient statistic and, consequently, some information/efficiency is lost in the aforementioned Wald test for finite \(n\). So we consider the more holistic approach and the likelihood-based IM construction from Section 2.2. The relative likelihood function has no closed-form expression, but it can be readily evaluated numerically. Then the corresponding IM output, which requires probability calculations with respect to the bivariate normal model, can be found numerically using Monte Carlo.

As an illustration of the ideas presented above, consider the law school admissions data analyzed in Efron (1982), which consists of \(n=15\) data pairs with \(Y_{1}=\text{LSAT}\) scores and \(Y_{2}=\text{undergrad GPA}\). For our analysis, we standardize these so that the zero-mean, unit-variance assumption is appropriate. Of course, this standardization has no effect on the correlation, which is our object of interest. In this case, the sample correlation is \(0.776\); the maximum likelihood estimator, which has no closed-form expression, is \(\hat{\theta}=0.789\). A plot of the plausibility contour \(\pi_{y}\) for this data is shown in Figure 4(a). The horizontal line at \(\alpha=0.05\) determines the \(95\%\) plausibility interval for \(\Theta\), which is an exact \(95\%\) confidence interval. Clearly, the data shows virtually no plausibility for \(\Theta=0\), but there is some marginal support for the hypothesis "\(\Theta>0.5\)." To probe this further, consider the class of sub-hypotheses \(H_{\theta}=(\theta,1]\), \(\theta>0.5\). A plot of the function \(\theta\mapsto\underline{\Pi}_{y}(H_{\theta})\) is shown in Figure 4(b). As expected from Panel (a), the latter function is decreasing in \(\theta\) and we clearly see no support for \(H_{\theta}\) as soon as \(\theta\geq\hat{\theta}\). But there is non-negligible support for \(H_{\theta}\) with \(\theta\) less than, say, \(0.65\)-\(0.70\).

Figure 4: Summary of the IM analysis of Efron's law school admissions data.

### Contingency tables

Data \(Y=(Y_{00},Y_{01},Y_{10},Y_{11})\) represents the observed frequencies for each of the four combinations of two binary categorical variables \(W\) and \(X\), as shown in the \(2\times 2\) contingency table below. The understanding here is that \(W\in\{0,1\}\) is the response and \(X\in\{0,1\}\) is the explanatory variable.

\begin{tabular}{c c|c c|c} & & \multicolumn{2}{c|}{\(W\)} & \\ & & 0 & 1 & Total \\ \hline \(X\) & 0 & \(y_{00}\) & \(y_{01}\) & \(y_{0\cdot}\) \\ & 1 & \(y_{10}\) & \(y_{11}\) & \(y_{1\cdot}\) \\ \hline \multicolumn{2}{c|}{Total} & \(y_{\cdot 0}\) & \(y_{\cdot 1}\) & \(n\) \\ \end{tabular}

The goal is to quantify uncertainty regarding the association between \(W\) and \(X\). In other words, to what extent does knowledge of the value of \(X\) help us predict the value of \(W\)? Let \(\Theta=(\Theta_{0},\Theta_{1})\), \(\Theta\in[0,1]^{2}\), denote the conditional probabilities of \(W=1\) given \(X=0\) and \(X=1\), respectively; that is,
\[\Theta_{x}=\mathsf{P}(W=1\mid X=x),\quad x\in\{0,1\}.\]
The association between \(W\) and \(X\) can be stated in terms of the difference \(\Theta_{0}-\Theta_{1}\). A difference of zero implies no association, and the bigger the difference, positive or negative, the stronger the association.

To construct an IM for \(\Theta_{0}-\Theta_{1}\), we can leverage the marginalization properties of the IM. Our approach involves first constructing an IM for \(\Theta\), and then mapping it to a marginal IM for \(\Phi=f(\Theta)=\Theta_{0}-\Theta_{1}\). It is important to note that the method for collecting data is fundamental to the specification of the likelihood function \(L_{y}(\theta)\) and, consequently, to the holistic IM construction from Section 2.2. Here, we will consider the scenario where the row totals are fixed. This means that random samples of sizes \(y_{0\cdot}\) and \(y_{1\cdot}\) are drawn from the populations corresponding to \(X=0\) and \(X=1\), respectively, and then each observation is classified as either \(W=0\) or \(W=1\). This experiment produces two independent sequences of Bernoulli trials and, therefore, a product of binomial likelihoods:
\[L_{y}(\theta)\propto\theta_{0}^{y_{01}}(1-\theta_{0})^{y_{0\cdot}-y_{01}}\times\theta_{1}^{y_{11}}(1-\theta_{1})^{y_{1\cdot}-y_{11}},\quad\theta=(\theta_{0},\theta_{1})\in[0,1]^{2}.\]

Consider a hypothetical clinical trial in which \(n=50\) participants are randomly and equally divided into two groups. One group receives a drug, the other a placebo. After a year, the participants undergo an evaluation to determine whether a specific aspect of their health has improved. Table 1 shows the data and Figure 5(a) shows a contour plot of the possibility contour for \(\Theta\). The dotted lines correspond to \(\hat{\theta}_{y}\), the maximum likelihood estimator.

\begin{table} \begin{tabular}{c|c c|c} & \multicolumn{2}{c|}{Disease} & \\ Group & No & Yes & Total \\ \hline Placebo & 11 & 14 & 25 \\ Drug & 17 & 8 & 25 \\ \hline \end{tabular} \end{table} Table 1: Hypothetical clinical trial data.

Recall that a marginal possibility contour for \(\Phi\) can be obtained from the possibility contour for \(\Theta\) through (5).
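To make the marginalization step concrete, the following sketch (ours, written under the fixed-row-totals model and the Table 1 counts) evaluates the joint contour exactly over the \(26\times 26\) outcome grid and then, as we read (5), maximizes over the level set \(\{\theta:\theta_{0}-\theta_{1}=\phi\}\); the grid resolution is illustrative only.

```python
import numpy as np
from scipy.stats import binom

n0, y0 = 25, 14   # placebo row of Table 1 (count with W = 1)
n1, y1 = 25, 8    # drug row

def log_rel_lik(a, b, t0, t1):
    """log of the product-binomial relative likelihood at (t0, t1)."""
    h0 = np.clip(a / n0, 1e-12, 1 - 1e-12)
    h1 = np.clip(b / n1, 1e-12, 1 - 1e-12)
    return (binom.logpmf(a, n0, t0) - binom.logpmf(a, n0, h0)
            + binom.logpmf(b, n1, t1) - binom.logpmf(b, n1, h1))

A, B = np.meshgrid(np.arange(n0 + 1), np.arange(n1 + 1))

def contour(t0, t1):
    """pi_y(theta) = P_theta{ R(Y, theta) <= R(y, theta) }, exactly."""
    keep = log_rel_lik(A, B, t0, t1) <= log_rel_lik(y0, y1, t0, t1) + 1e-12
    return (binom.pmf(A, n0, t0) * binom.pmf(B, n1, t1))[keep].sum()

def marginal_contour(phi, grid=np.linspace(0.001, 0.999, 199)):
    """Extension principle: sup of pi_y over {theta0 - theta1 = phi}."""
    vals = [contour(t1 + phi, t1) for t1 in grid if 0 < t1 + phi < 1]
    return max(vals)

print(marginal_contour(0.0))  # the marginal contour at phi = 0; compare Figure 5
```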
Figure 5(b) shows the corresponding necessity and possibility measures for hypotheses \(H_{\phi}=(-1,\phi]\) and \(H_{\phi}^{c}=(\phi,1)\), respectively. Note that \(\overline{\Pi}_{y}^{f}(H_{0})\) is small, which would make us inclined to reject the hypothesis "\(\Phi\leq 0\)." Non-negligible necessity measures for \(H_{\phi}^{c}\) with \(\phi<0.1\) can be observed.

The public-health importance of raw differences like this may be hard to grasp in some applications. In such cases, data analysts often prefer to use the concept of _relative risk_, \(\Phi=g(\Theta)=\Theta_{0}/\Theta_{1}\). Once again, it is straightforward to obtain a possibility contour for \(\Phi\) from that for \(\Theta\). Figure 5(c) shows this possibility contour. Agresti (2007) points out the difficulties in deriving confidence intervals for the relative risk because of the highly skewed distribution of the plug-in estimator. An advantage of our marginal IM is that a 95% confidence interval for \(\Phi\) is readily available from the possibility contour in Figure 5(c), being the level set determined by the horizontal line at \(\alpha=0.05\). As expected, this interval does not contain \(\phi=1\), so the null hypothesis of no association can be rejected. Figure 5(d) shows the marginal IM necessity measures for hypotheses \(H_{\phi}=(\phi,\infty)\). Evidently there is reasonably strong support for the alternative sub-hypothesis that the risk of disease is at least 20% higher in the placebo group.

## 6 Conclusion

Here we showed that there is more to the IM framework than what has been presented in the existing literature. Specifically, the validity property, together with its inherent imprecision, implies not only performance, but also probativeness assurances. These insights position IMs as a compelling solution to the long-standing Bayesian versus frequentist two-theory problem, which will undoubtedly benefit the statistical community. They are also of special interest to the belief function/possibility theory community, as they showcase the fundamental importance of its brand of imprecision.

We also compared IMs and Mayo's severe testing framework, and found an important connection between the two. Despite the fact that this connection is precise only for the special class of testing problems Mayo considers in her text, we believe that the holistic IM construction exposed in Section 2.2 is very beneficial to severe testers. If they accept the IM output as capable of probing for hypotheses that are actually supported by the data, as we believe they will given all the strong theoretical support presented in Section 3.2, they now have a general recipe for assessing severity in a wide range of modern applications. We also find it attractive that the IM framework has this notion of probativeness built in, as opposed to being an add-on to classical testing. Illustrations in cases beyond the simple, low-dimensional problems considered in Section 5 above will be reported elsewhere, and we are hopeful that an extension of the notion of probativeness/severity to model-agnostic statistical learning problems is within reach.

## Acknowledgments

This work is partially supported by the U.S. National Science Foundation, SES-2051225.
## Appendix A Hypothesis testing details

### Test-based IM construction

As an alternative to the likelihood-based IM construction in Section 2.2, Martin (2021) first showed how to construct an IM driven by a given statistical procedure, i.e., a hypothesis test or a confidence region. Since that statistical procedure is often tailored to a specific task or question, e.g., testing a particular (form of) hypothesis, this construction would tend to be "less holistic" than the likelihood-based construction mentioned above. It can happen, however, that the two constructions agree, as we will demonstrate below. To us, the "more holistic" likelihood-driven construction is preferred, but the procedure-driven strategy has its own advantages. Here we focus the procedure-based construction on cases where a family of hypothesis testing problems is given.

Start with a class of hypotheses \(\{H_{\theta}:\theta\in\mathbb{T}\}\) about \(\Theta\) indexed by the parameter space \(\mathbb{T}\). These could be singleton/point-null hypotheses, \(H_{\theta}=\{\theta\}\), half-line hypotheses, \(H_{\theta}=(-\infty,\theta]\), in the case of scalar \(\Theta\), or other things. Next, consider a collection \(\{\delta^{\theta}_{\alpha}:\alpha\in[0,1],\theta\in\mathbb{T}\}\) of decision rules, where \(\delta^{\theta}_{\alpha}:\mathbb{Y}\rightarrow\{0,1\}\), with the interpretation that \(\delta^{\theta}_{\alpha}(y)=1\) means reject \(H_{\theta}\) and \(\delta^{\theta}_{\alpha}(y)=0\) means do not reject. For example, in a simple scalar location parameter setting, where \(H_{\theta}=(-\infty,\theta]\), the testing rule might take the form \(\delta^{\theta}_{\alpha}(y)=1(y-\theta>c_{\alpha})\), where \(c_{\alpha}\) is a specified threshold and \(1(\cdot)\) is the indicator function. The index \(\alpha\) controls the size or Type I error probability of the test:
\[\sup_{\Theta\in H_{\theta}}\mathsf{P}_{\Theta}\{\delta^{\theta}_{\alpha}(Y)=1\}\leq\alpha,\quad\text{for all $\alpha\in[0,1]$, $\theta\in\mathbb{T}$}. \tag{12}\]
The reader might be asking him/herself why a _family_ of tests indexed by \(\theta\in\mathbb{T}\) would be needed. Keep in mind that the IM returns a full-blown imprecise probability defined over the parameter space, so it would be unrealistic to expect that anything meaningful could be obtained based on a test of, say, a single hypothesis. In any case, often there is a structure inherent in the problem that suggests a particular form of hypothesis, e.g., \(H:\Theta\leq\theta_{0}\), and that the same testing procedure would have been used if \(\theta_{0}\) were changed to \(\theta_{0}+\eta\). So even if one has just one specific hypothesis/test procedure in mind, often that one belongs to a family like the one described above, so it is no additional burden on the data analyst to specify a family. In any case, from here, define the function
\[\pi_{y}(\theta)=\inf\{\alpha\in[0,1]:\delta_{\alpha}^{\theta}(y)=1\},\quad\theta\in\mathbb{T}. \tag{13}\]
This is just the p-value function corresponding to the collection of tests (Lehmann and Romano 2005, Eq. 3.11). Under certain conditions on the test (see below), \(\theta\mapsto\pi_{y}(\theta)\) is a genuine possibility contour function on \(\mathbb{T}\), i.e., \(\sup_{\theta}\pi_{y}(\theta)=1\) for each \(y\). In that case, the IM's possibility and necessity measures are defined via optimization exactly like in (2) and will enjoy all the same properties, e.g., (3).
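For intuition, here is a sketch (ours) that instantiates (13) for the textbook z-test family \(\delta^{\theta}_{\alpha}(y)=1\{\bar{y}-\theta>z_{1-\alpha}\,\sigma/\sqrt{n}\}\) for \(H_{\theta}=(-\infty,\theta]\) and confirms numerically that the infimum over \(\alpha\) recovers the familiar p-value; it assumes Python with scipy.

```python
import numpy as np
from scipy.stats import norm

sigma, n = 10.0, 100
se = sigma / np.sqrt(n)

def reject(alpha, theta, ybar):
    """z-test of H_theta: Theta <= theta at level alpha."""
    return ybar - theta > norm.ppf(1 - alpha) * se

def contour_from_tests(theta, ybar):
    """Numerical version of (13): smallest alpha at which H_theta is rejected."""
    alphas = np.linspace(1e-4, 1.0, 10001)
    rejected = reject(alphas, theta, ybar)
    return alphas[rejected].min() if rejected.any() else 1.0

ybar = 152.0
print(contour_from_tests(150.0, ybar))    # ~0.0228
print(1 - norm.cdf((ybar - 150.0) / se))  # the p-value: the same number
```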
What conditions are required of the collection of tests to ensure that the function defined in (13) is a possibility contour? Basically, the collection of tests needs to satisfy a certain "nestedness" condition. The concept itself is pretty simple--for each data set \(y\), there is a hypothesis \(H_{\theta}\) that cannot be rejected at any level \(\alpha\)--but a precise mathematical statement is complicated. In the simplest case, suppose that for each \(y\), there exists \(\theta\) such that \(\delta_{\alpha}^{\theta}(y)=0\) for all \(\alpha\in[0,1]\), i.e., that there is a hypothesis \(H_{\theta}\) that cannot be rejected based on data \(y\). For example, if \(Y\) denotes a sample of size \(n\) from a normal distribution with mean \(\Theta\) and known variance \(\sigma^{2}\), and if the hypotheses \(H_{\theta}=\{\theta\}\) are singletons, then the usual z-test cannot reject \(H_{\theta}\) with \(\theta=\bar{y}\) at any significance level \(\alpha\). In general, suppose that for each \(y\), there exists a net \(\alpha\mapsto\theta_{y}(\alpha)\), for \(\alpha\in[0,1]\), such that \(\delta_{\alpha}^{\theta_{y}(\alpha)}(y)=0\) for all \(\alpha\) sufficiently close to \(1\). In the previous normal illustration, for any data \(y\), the half-line hypothesis \(H_{\theta}=(-\infty,\theta]\), with
\[\theta=\theta_{y}(\alpha)=\bar{y}-c\,z_{1-\alpha}\,\sigma\,n^{-1/2},\quad\text{any }c\in(0,1],\]
would not be rejected for any \(\alpha\in[0,1]\).

### When do the two IM constructions agree?

The two IM constructions above will agree when the procedure-driven construction is based on the likelihood ratio test for the class \(H_{\theta}=\{\theta\}\) of singleton hypotheses. In this case, the test procedure could be described by the rule
\[\delta_{\alpha}^{\theta}(y)=1\{R(y,\theta)\leq c_{\alpha}(\theta)\},\quad y\in\mathbb{Y},\quad\theta\in\mathbb{T},\quad\alpha\in[0,1],\]
where \(1(\cdot)\) is the indicator function and \(c_{\alpha}(\theta)\) is chosen to ensure that (12) holds. As is well known, thanks to the definition of \(c_{\alpha}(\theta)\) through the sampling distribution of \(R(Y,\theta)\) under \(\mathsf{P}_{\theta}\), it follows that the testing rule can be equivalently expressed as
\[\delta_{\alpha}^{\theta}(y)=1\{\pi_{y}(\theta)\leq\alpha\},\quad y\in\mathbb{Y},\quad\theta\in\mathbb{T},\quad\alpha\in[0,1],\]
where \(\pi_{y}(\theta)\) is the p-value/contour in (1). It is now clear that
\[\inf\{\alpha:\delta_{\alpha}^{\theta}(y)=1\}=\inf\{\alpha:\pi_{y}(\theta)\leq\alpha\}=\pi_{y}(\theta),\]
so the contour function defined in (13) agrees with that in (1).

### Simplification in Mayo's context

To our knowledge, Mayo's developments focus on a special class of problems involving a scalar location parameter \(\Theta\) and one-sided null hypotheses like \(H_{0}:\Theta\leq\theta_{0}\). There are lots of problems that fit this setting, at least asymptotically, so she has grounds to make these cases her focus. The relevant structure below also holds in more general--but still scalar parameter--cases where the model admits a _monotone likelihood ratio_ property (e.g., Casella and Berger 1990; Karlin and Rubin 1956). Recall the setup described in Section 4.1, where the test procedure rejects the null hypothesis \(H_{0}:\Theta\leq\theta_{0}\) based on data \(y\) if and only if \(S(y,\theta_{0})\) exceeds some specified threshold.
What the above structure implies is that the test statistic can be written as \(S(y,\theta_{0})=S(y,0)-\theta_{0}\) and, consequently, the p-value function satisfies \[\mathsf{pval}_{y}(\theta;\theta_{0}):=\mathsf{P}_{\theta}\{S(Y,\theta_{0})\geq S (y,\theta_{0})\}=\mathsf{P}_{\theta}\{S(Y,\theta)\geq S(y,\theta)\}.\] This connects the single test \(\delta_{\alpha}\equiv\delta_{\alpha}^{\theta_{0}}\) of a single null "\(\Theta\leq\theta_{0}\)" to a family of tests \(\delta_{\alpha}^{\theta}\) for a family of nulls "\(\Theta\leq\theta\)" indexed by \(\theta\). From here it is a standard exercise to show that the right-hand side in the above display is the p-value for testing "\(\Theta\leq\theta\)," so it agrees with the possibility contour function (13) corresponding to the test-based IM construction as described above. This justifies the claim (11).
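As a quick numerical sanity check (ours, assuming Python with scipy), one can verify in the location model that the left-hand side above is free of \(\theta_{0}\), which is what lets a single test generate the whole contour:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
se, ybar, theta = 1.0, 152.0, 151.0   # evaluate the contour at theta = 151

def pval_mc(theta0, m=400_000):
    """Monte Carlo P_theta{ S(Y, theta0) >= S(y, theta0) }, S(y, t) = ybar - t."""
    Ybar = rng.normal(theta, se, m)
    return np.mean(Ybar - theta0 >= ybar - theta0)

print(pval_mc(150.0), pval_mc(148.0))     # agree up to Monte Carlo error
print(1 - norm.cdf((ybar - theta) / se))  # ~0.159, free of theta0
```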
2303.01581
Variable Blue Straggler Stars in Open Cluster NGC 6819 Observed in the Kepler 'Superstamp' Field
NGC 6819 is an open cluster of age 2.4 Gyr that was in the NASA Kepler spacecraft field of view from 2009 to 2013. The central part of the cluster was observed in a 200 x 200 pixel `superstamp' during these four years in 30-minute cadence photometry, providing a unique long time-series high-precision data set. The cluster contains 'blue straggler' stars, i.e., stars on the main sequence above the cluster turnoff that should have left the main sequence to become red giants. We present light curves and pulsation frequency analyses derived from custom photometric reductions for five confirmed cluster members--four blue stragglers and one star near the main-sequence turnoff. Two of these stars show a rich spectrum of $\delta$ Scuti pulsation modes, with 236 and 124 significant frequencies identified, respectively, while two stars show mainly low-frequency modes, characteristic of $\gamma$ Doradus variable stars. The fifth star, a known active x-ray binary, shows only several harmonics of two main frequencies. For the two $\delta$ Scuti stars, we use a frequency separation--mean-density relation to estimate mean density, and then use this value along with effective temperature to derive stellar mass and radius. For the two stars showing low frequencies, we searched for period-spacing sequences that may be representative of gravity-mode or Rossby-mode sequences, but found no clear sequences. The common age for the cluster members, considered along with the frequencies, will provide valuable constraints for asteroseismic analyses, and may shed light on the origin of the blue stragglers.
Joyce A. Guzik, Andrzej S. Baran, Sachu Sanjayan, Péter Németh, Anne M. Hedlund, Jason Jackiewicz, Lori R. Dauelsberg
2023-03-02T21:14:28Z
http://arxiv.org/abs/2303.01581v1
# Variable Blue Straggler Stars in Open Cluster NGC 6819 Observed in the _Kepler_ 'Superstamp' Field

###### Abstract

NGC 6819 is an open cluster of age 2.4 Gyr that was in the NASA _Kepler_ spacecraft field of view from 2009 to 2013. The central part of the cluster was observed in a 200 x 200 pixel 'superstamp' during these four years in 30-minute cadence photometry, providing a unique long time-series high-precision data set. The cluster contains 'blue straggler' stars, i.e., stars on the main sequence above the cluster turnoff that should have left the main sequence to become red giants. We present light curves and pulsation frequency analyses derived from custom photometric reductions for five confirmed cluster members--four blue stragglers and one star near the main-sequence turnoff. Two of these stars show a rich spectrum of \(\delta\) Scuti pulsation modes, with 236 and 124 significant frequencies identified, respectively, while two stars show mainly low-frequency modes, characteristic of \(\gamma\) Doradus variable stars. The fifth star, a known active x-ray binary, shows only several harmonics of two main frequencies. For the two \(\delta\) Scuti stars, we use a frequency separation--mean-density relation to estimate mean density, and then use this value along with effective temperature to derive stellar mass and radius. For the two stars showing low frequencies, we searched for period-spacing sequences that may be representative of gravity-mode or Rossby-mode sequences, but found no clear sequences. The common age for the cluster members, considered along with the frequencies, will provide valuable constraints for asteroseismic analyses, and may shed light on the origin of the blue stragglers.

Stars: \(\delta\) Scuti variables; Stars: \(\gamma\) Doradus variables; Stars: blue stragglers; NGC 6819; Stars: evolution; Stars: pulsation

Joyce A. Guzik, Andrzej S. Baran, Sachu Sanjayan, Péter Németh, Anne M. Hedlund, Jason Jackiewicz, Lori R. Dauelsberg

## 1 Introduction

NGC 6819 is an open star cluster in the constellation Cygnus discovered by Caroline Herschel in 1784.1 NGC 6819 is about 2.4 billion years old, half the age of the Sun, and around 8000 light years away (Basu et al., 2011; Balona et al., 2013; Brewer et al., 2016). This cluster was in the NASA _Kepler_ spacecraft (Borucki et al., 2010; Gilliland et al., 2010) continuous field of view from 2009-2013 (Fig. 1, left). The central part of the cluster (Fig. 1, right) was observed during these four years in 30-minute cadence photometry, providing a unique long time-series high-precision data set for asteroseismology (Kuehn et al., 2015). Studying clusters is advantageous for asteroseismology because the cluster members formed together, providing additional modeling constraints such as a common age and element abundances. Since the cluster is younger than the Sun, the stars at the cluster main-sequence turnoff are somewhat more massive than the Sun, near the expected mass range for \(\gamma\) Doradus-type pulsating variables, which pulsate in high-order gravity modes with periods of around 1 day (frequencies 0.3-3 c/d; Aerts et al., 2010; Li et al., 2020). This cluster also contains "blue straggler" stars, i.e., stars on the main sequence above the cluster turnoff that should have already left the main sequence to become red giants (see Fig. 2 from Deliyannis et al., 2019).
Blue stragglers are believed to have formed either via stellar mergers or mass transfer from a companion sometime in the star's past (Rain et al., 2021). The NGC 6819 blue stragglers have the right temperatures to show \(\delta\) Scuti-type pulsations, i.e., low-order acoustic mode (\(p\)-mode) or gravity-mode (\(g\)-mode) pulsations with periods of around 2 hours (frequencies 5-50 c/d; Aerts et al., 2010; Balona et al., 2015). If pulsations are found, stellar modeling and asteroseismic analysis may help to better understand the origins of these blue stragglers. We discuss light curves derived from _Kepler_ NGC 6819 superstamp data and pulsation frequency analyses for five confirmed cluster members. Four stars are blue stragglers, and one is near the cluster turnoff.

## 2 _Kepler_ Data Analysis and Results

The superstamp field centered on NGC 6819 was viewed nearly continuously for the four years (17 quarters) of the original _Kepler_ mission. These data span barycentric Julian Days 131.5 to 1591.0 after Julian Date 2454833.0. Gaps in the data of around 90 days for Quarters 6, 10, and 14 arise because _Kepler_ CCD module #3 and later #7 (out of the total of 21) failed during the mission. We used simple aperture photometry (SAP) pixel data for the light curves and prepared final light curves using our custom scripts and PyKE software (Kinemuchi et al., 2012). See also Sanjayan et al. (2022) for details of choosing the apertures and optimizing them for analysis. We searched each superstamp pixel for variability in the right range to be \(\delta\) Scuti or \(\gamma\) Doradus variable star candidates. This search resulted in five cluster members and eight non-members (not discussed here) for follow-up. Cluster membership probabilities were derived using astrometry data from Gaia Data Release 3 (Gaia Collaboration et al., 2021). Table 1 summarizes the parallax, distance, and cluster membership probability for each star.

Figure 1: (a) Zoom-in on _Kepler_ original mission field of view showing location of NGC 6819 in the lower center CCD (https://commons.wikimedia.org/wiki/File:Kepler_FOV_hiRes.jpg, NASA/Ames/JPL-Caltech, Image credit: Software Bisque, Public Domain); (b) 200 \(\times\) 200 pixel _Kepler_ superstamp image of the center of NGC 6819 (Kuehn et al., 2015, reproduced with permission). _Kepler_ pixel sizes are 3.98 arcsec per pixel.
Figure 2: Color-magnitude diagram of NGC 6819 from Deliyannis et al. (2019) with overlayed isochrones. © AAAS. Reproduced with permission. Magnitudes and colors have been corrected for interstellar reddening. The blue oval, not in the original figure, encircles stars in the blue straggler region, but these are not necessarily the same stars discussed in our paper. The stars with black symbols are likely field stars and not cluster members. \(\mu\) is the apparent distance modulus. Two of the five member stars discussed here, KIC 5024084 and KIC 5024455, have 14 quarters of _Kepler_ 30-min data, and 1 month of 2-min cadence data that can be found in the Mikulski Archive for Space Telescopes (MAST).2 We compared amplitude spectra produced using light curves from our custom analysis and those using the Pre-search Data Conditioning Simple Aperture Photometry (PDC_SAP) 30-min-cadence light curves from MAST. We find that the signal-to-noise ratio (S/N) is marginally better using the PDC_SAP data, whereas the frequencies found were the same. For thees two stars, we therefore used the PDC_SAP long-cadence data for the amplitude spectra and frequency tables presented here. Footnote 2: MAST, [https://archive.stsci.edu/](https://archive.stsci.edu/) Some of these stars also have been observed by the Transiting Exoplanet Survey Satellite (_TESS_) mission (Ricker et al., 2015). The _Kepler_ pixel dimensions are 4 \(\times\) 4 arcsec, considerably smaller than the 21 arcsec pixels of _TESS_, reducing the risk and consequences of contamination of the light curves by nearby stars in the crowded field in the cluster center. To determine significant frequencies, the light curves were processed by Fourier analysis, and the successive highest-amplitude peaks removed from the light curve by pre-whitening until only noise remained. We used a detection threshold S/N around 5 as discussed by Baran et al. (2015). This level is the average over the entire _Kepler_ frequency spectrum from 0 c/d to the Nyquist frequency limit of 24.4695 c/d for 30-min cadence data. Generally, the noise level is greater for low frequencies than for higher frequencies in this range. For _Kepler_ time series longer than about one year, frequencies higher than the Nyquist limit can be found by taking advantage of the shift in arrival time of the signal caused by the light travel time to the spacecraft in orbit around the Sun (Baran et al., 2012; Murphy et al., 2013). The true frequencies have higher amplitudes than their Nyquist-reflected frequencies in the amplitude spectrum (see examples in sub-sections 2.1 and 2.4). Table 2 summarizes properties of the five cluster member stars discussed in the remainder of this Section. The effective temperature, log surface gravity, radius, mass, and luminosity are from the _TESS_ Input Catalog (TIC) version 8.2 (Stassun et al., 2019) available on MAST. The stellar quantities are approximate as many are derived from color photometry and stellar model grids. Spectroscopy and asteroseismic modeling making use of the stellar pulsation frequencies should improve the accuracy of these quantities. While finalizing this paper, we found that Colman et al. (2022) presented light curves of stars in the NGC 6819 superstamp field using Increased Resolution Image Subtraction photometry. The Ph.D. thesis of Colman (2020) shows amplitude spectra derived for the five stars we discuss here and classifies them as \(\delta\) Sct or \(\gamma\) Dor variables. Our work has been performed independently from Colman et al. 
### KIC 5024468

KIC 5024468 was known pre-_Kepler_ as a \(\delta\) Sct variable in the blue straggler region of the NGC 6819 color-magnitude diagram (Talamantes et al., 2010). The _Kepler_ light curve for this star is contaminated by light from a nearby eclipsing binary, KIC 5024450, with a period of 3.05 days. Figure 3 (left) shows a 5-day zoom-in on the KIC 5024468 light curve; Figure 3 (center) shows the amplitude spectrum, revealing many modes in the \(\delta\) Sct frequency range; the highest-amplitude modes have frequencies around 12 c/d. Pre-whitening analysis reveals 236 significant frequencies with S/N \(>\) 6, including 7 real frequencies above the Nyquist limit (Table A1). Figure 3 (right) zooms in on the low-amplitude portion of the spectrum, showing several frequencies in the \(\gamma\) Dor frequency range, with frequencies \(<\) 5 c/d. We therefore categorize this star as a \(\delta\) Sct/\(\gamma\) Dor hybrid candidate. Figure 4 shows the light curve and amplitude spectrum of the eclipsing binary KIC 5024450, which is contaminating the light curve of KIC 5024468. As can be seen in Fig. 4(b), the \(\delta\) Sct oscillations of KIC 5024468 around 12 c/d contaminate the KIC 5024450 spectrum; the binary frequency and its harmonics were removed when determining the KIC 5024468 frequencies in Table A1.

### KIC 5024084

KIC 5024084 is listed as a blue straggler in the SIMBAD3 database. The _Kepler_ light curve available via MAST for this star was studied using long-cadence data through Quarter 12 and one month of short-cadence data by Balona et al. (2013). According to their description, "KIC 5024084 shows very clear variations with irregular amplitudes but a distinct period of 2.07 d which is probably the rotation period of the star with spots." Figure 5 (left) shows a 50-day zoom-in of the KIC 5024084 light curve extracted from the superstamp pixels. Figure 5 (center) shows the amplitude spectrum, revealing many low-frequency modes, including the two highest-amplitude modes with periods 2.038 and 2.058 days that may correspond to the period identified as a probable rotation period by Balona et al. (2013). Reinhold & Gizon (2015) list KIC 5024084 in their catalog of 12,319 _Kepler_ stars with multiple peaks, which they interpret as differential rotation. Their catalog lists 2.038 and 2.060 d as the minimum and maximum of the period range, coinciding with the two highest-amplitude peaks that we find.

Figure 4: (a) Zoom-in on 5-day portion of light curve of eclipsing binary KIC 5024450, showing eclipses with period 3.05 d. This light curve is contaminating the light curve of KIC 5024468, and the binary orbital frequency and its harmonics were removed when determining the frequencies of KIC 5024468 in Table A1. (b) Amplitude spectrum of KIC 5024450, showing characteristic comb-like structure for Fourier analysis of an eclipsing binary. The \(\delta\) Sct oscillations of KIC 5024468 around 12 c/d also contaminate the KIC 5024450 spectrum.

Figure 3: (a) Zoom-in on 5-day portion of KIC 5024468 light curve. (b) Amplitude spectrum for KIC 5024468 showing \(\delta\) Sct modes.
The Nyquist frequency for 30-min _Kepler_ cadence data is 24.4695 c/d, but the spectrum extends to 28 cycles/day as there are real super-Nyquist frequencies that have higher amplitudes than their reflections. (c) Zoom-in on low-amplitude portion of KIC 5024468 spectrum showing many significant low-amplitude modes in both the \(\gamma\) Dor and \(\delta\) Sct frequency ranges.

It is not straightforward to determine the origin of low-frequency peaks in the amplitude spectrum. Saio et al. (2018) offer an alternative explanation for 'hump and spike' features in amplitude spectra seen in some \(\gamma\) Dor stars as a rotation frequency (spike) accompanied by a lower-frequency cluster of global Rossby modes (hump). Other groupings of modes, e.g., some higher than the rotation frequency, may be \(\gamma\) Dor gravity modes. For KIC 5024084, there may also be harmonics of the largest-amplitude modes (around 0.5 c/d) at around 1 c/d and 1.5 c/d. Pre-whitening analysis revealed 53 modes (Table A2). Figure 5 (panel c) shows the low-amplitude portion of the spectrum extended to 12 c/d; the mode visible in the \(\delta\) Sct frequency range at 11.2 c/d has S/N = 25. Three additional \(\delta\) Sct modes with lower S/N ratio are revealed in the pre-whitening analysis and listed in Table A2. We categorize this star as a \(\gamma\) Dor/\(\delta\) Sct hybrid candidate.

A very close inspection of Figure 5 (panel c) shows a comb of low-amplitude peaks with frequency spacing around 1/3 c/d. There is a known artifact in the _Kepler_ data at this frequency from thruster firings every 3.0 days to desaturate angular momentum buildup in the reaction wheels (see _Kepler_ Data Release 3 Notes, KSCI-19043-001).4 For this star, we analyzed the PDC_SAP light curve from MAST, and it is possible that this artifact was not completely cleaned from the data. This low-amplitude comb is no longer visible in the residual after pre-whitening the highest-amplitude frequency.

Footnote 4: https://ntrs.nasa.gov/api/citations/20100027540/downloads/20100027540.pdf

### KIC 5024455

KIC 5024455 is also listed as a blue straggler in SIMBAD. This star's _Kepler_ light curve data were studied using early data releases by Uytterhoeven et al. (2011), who categorize it as a \(\gamma\) Dor star, and later by Balona et al. (2013) using long-cadence data up through Quarter 12 and 1 month of short-cadence data, who categorize it as a suspected \(\gamma\) Dor star. Milliman et al. (2014) list KIC 5024455 (also known as WOCS 014012) as a single-line spectroscopic binary with a 762-day orbital period. Figure 6 (left) shows a 20-day zoom-in on a portion of the light curve. The amplitude spectrum (Fig. 6, right) shows only modes with frequencies \(<\) 5 c/d, in the \(\gamma\) Dor frequency range. Pre-whitening analysis reveals 84 frequencies with S/N \(>\) 8.4 (Table A3). There may be additional significant frequencies, but it is difficult to be sure they are real because of higher noise levels at low frequencies. While some of these low-frequency groupings may be gravity modes, some may also be global Rossby modes, and more analysis will be needed in conjunction with stellar models to understand and identify the mode patterns. Pulsation modes cannot be distinguished from signatures of rotation and star spots using the light curve alone. We categorize this star as a \(\gamma\) Dor candidate.

Figure 5: (a) Zoom-in on 50-day portion of KIC 5024084 light curve.
(b) Low-frequency portion of KIC 5024084 amplitude spectrum, showing many low-frequency modes. (c) Zoom-in on low amplitudes of KIC 5024084 amplitude spectrum, with frequency range extended to 12 c/d. A single 11.2 c/d mode is visible in the \(\delta\) Sct frequency range.

### KIC 5113357

KIC 5113357 does not have _Kepler_ data available in MAST, and it is not categorized as a variable star in SIMBAD. Its temperature and luminosity do place it in the blue straggler region for the NGC 6819 cluster. Figure 7 (left) shows a 10-day zoom-in on the KIC 5113357 light curve derived from the superstamp pixel data. There is an overall modulation at 0.24755 c/d (period around 4 days); this low frequency is the 2nd highest peak in the amplitude spectrum (see Fig. 8). This peak could be a binary orbital frequency, and this star could be a contact eclipsing binary. This frequency could also be a rotational frequency. Table A4 lists this frequency first, followed by three harmonics that are also found in the amplitude spectrum.

Figure 6: (a) Zoom-in on 20-day portion of KIC 5024455 light curve. (b) KIC 5024455 amplitude spectrum.

Figure 7: (a) Zoom-in on 10-day portion of KIC 5113357 light curve. (b) Zoom-in on 2-day portion of KIC 5113357 light curve.

Figure 7 (right) shows a 2-day zoom-in on the light curve, revealing higher-frequency oscillations in the \(\delta\) Sct frequency range. Figure 8 (left) shows the amplitude spectrum out to 50 c/d. It is evident that most frequencies are reflected about the Nyquist frequency limit of 24.4695 c/d. However, some super-Nyquist frequencies have larger amplitudes than their reflected counterparts, and are the real frequencies. Apart from the 0.24755 c/d frequency mentioned above, pre-whitening of the spectrum shows 120 additional modes with S/N \(>\) 5.3, most in the \(\delta\) Sct frequency range (Table A4). 36 of the 120 frequencies are above the Nyquist limit, with the highest of these, 36.496 c/d, still in the \(\delta\) Sct range. Figure 8 (right) shows a zoom-in on the low-amplitude and low-frequency portion of the amplitude spectrum. There are a few modes in the \(\gamma\) Dor frequency range. We therefore categorize this star as a \(\delta\) Sct/\(\gamma\) Dor hybrid candidate.
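The super-Nyquist bookkeeping used for a star like KIC 5113357 amounts to comparing each candidate peak with its reflection about the Nyquist limit; a minimal sketch (ours) of that comparison is below. It relies on the suppression of the reflected peak in a long-baseline, barycentric-corrected time series, as described in Section 2.

```python
import numpy as np

F_NY = 24.4695  # c/d, Nyquist limit for Kepler 30-min cadence

def resolve_alias(freqs, amps, f_peak):
    """Given a peak below F_NY, return whichever of f_peak and its
    super-Nyquist counterpart 2*F_NY - f_peak has the larger measured
    amplitude; freqs/amps must come from a spectrum computed past F_NY."""
    f_alt = 2 * F_NY - f_peak
    a_peak = amps[np.argmin(np.abs(freqs - f_peak))]
    a_alt = amps[np.argmin(np.abs(freqs - f_alt))]
    return f_peak if a_peak >= a_alt else f_alt
```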
### KIC 5112843

The last of the five NGC 6819 members we discuss is another interesting and mysterious star, KIC 5112843. This star does not have processed _Kepler_ data available in MAST. It can be found in SIMBAD under its TIC catalog number, and it is listed as an eclipsing binary. Talamantes et al. (2010) discuss the ground-based light curve of this star from pre-_Kepler_ data, and they note that sometimes the light curve shows one shallow, almost nonexistent, dip that causes it to resemble a detached eclipsing binary. At other times, the light curve resembles that of a contact binary. Concerning these latter phases, Talamantes et al. (2010) write, "the system showed some of its deepest eclipses and showed variations identifying it as an EW [also known as W UMa] contact binary system, with gravitationally distorted stars of nearly equal temperature."

The _Kepler_ light curve derived from the superstamp pixel data shows the signal of two close frequencies beating against each other, combining constructively or destructively to create the pattern seen in Figure 9. The amplitude spectrum shows the two largest-amplitude modes that are close in frequency at 5.2071 and 5.7357 c/d, but also two lower-amplitude frequencies at half these values, 2.6036 and 2.8678 c/d. Pre-whitening analysis of the light curve reveals only harmonics of these two frequencies, six of them of the 2.6036 c/d mode, and nine of the 2.8678 c/d mode (Table A5). Talamantes et al. (2010) identify the binary period as 0.348687 d (frequency 2.8679 c/d), corresponding to one of the lower-amplitude frequencies in the amplitude spectrum. It is not known why there are pairs of frequencies, and why the amplitudes of the parent frequencies are smaller than their first harmonic.

KIC 5112843 also was the target of x-ray observations by the XMM-Newton space telescope (Gosnell et al., 2012). Gosnell et al. (2012) find that this star is an x-ray source, and list it as an active binary. Figure 10 from Gosnell et al. (2012) shows the location of NGC 6819 x-ray sources on the color-magnitude diagram, including KIC 5112843 labeled as X9, superimposed on photometry from Kalirai et al. (2001). Gosnell et al. (2012) speculate that this star is a possible sub-subgiant binary system, similar to RS CVn binary systems, which are defined as close but detached binaries with active chromospheres that can cause large star spots (Eaton & Hall, 1979).

Figure 8: (a) KIC 5113357 amplitude spectrum, extended to 50 c/d. Some frequencies above the Nyquist frequency of 24.4695 c/d have higher amplitudes than their reflections and are real frequencies. (b) Zoom-in on low-amplitude and low-frequency portion of KIC 5113357 amplitude spectrum. A few low-amplitude frequencies are present in the \(\gamma\) Dor frequency range.

Figure 9: (a) 5-day zoom-in on KIC 5112843 light curve. The dominant feature is two close frequencies beating against each other. (b) KIC 5112843 amplitude spectrum.

Figure 10: Figure 5a from Gosnell et al. (2012), which shows NGC 6819 x-ray binaries (red symbols) in the color-magnitude diagram. X9 is KIC 5112843, located near the main-sequence turnoff. © AAS. Reproduced with permission.

## 3 Spectroscopy

We did not find stellar parameters derived from spectroscopy for these five stars in the literature, for example, from the LAMOST ROTFIT pipeline (Frasca et al., 2022). We have taken new low-resolution spectra (R \(\sim\) 2000) for three of the stars, KIC 5024468, KIC 5113357, and KIC 5112843, using the ALFOSC spectrograph mounted on the 2.56-meter Nordic Optical Telescope (NOT) at the Roque de los Muchachos Observatory on La Palma.
\begin{table} \begin{tabular}{l c c c c c} \hline \hline & KIC 5024468 & KIC 5024084 & KIC 5024455 & KIC 5113357 & KIC 5112843 \\ \hline RA (deg) & 295.3227668 & 295.2647409 & 295.3210065 & 295.4430993 & 295.3576433 \\ DEC (deg) & 40.18431117 & 40.14515408 & 40.10113114 & 40.2755384 & 40.20623482 \\ TIC ID & 1880383370 & 139109448 & 139109202 & 184010448 & 139154029 \\ V (mag) & 12.983 \(\pm\) 0.046 & 14.874 \(\pm\) 0.15 & 14.943 \(\pm\) 0.046 & 14.971 \(\pm\) 0.183 & 15.772 \(\pm\) 0.126 \\ \(T_{\rm eff}\) (K) & 7059 \(\pm\) 130 & 6501 \(\pm\) 123 & 6701 \(\pm\) 122 & 7328 \(\pm\) 122 & 5493 \(\pm\) 126 \\ \(\log\) (g/cm s\({}^{-2}\)) & 3.442 \(\pm\) 0.096 & 3.802 & 4.246 & 4.142 & 3.777 \\ Radius (R\({}_{\odot}\)) & 3.93 \(\pm\) 0.26 & 2.40 & 1.49 & 1.81 & 2.10 \\ Mass (M\({}_{\odot}\)) & 1.56 \(\pm\) 0.25 & 1.33 & 1.42 & 1.66 & 0.96 \\ Luminosity (L\({}_{\odot}\)) & 34.57 \(\pm\) 4.03 & 9.267 & 4.014 & 8.519 & 3.607 \\ \hline HRD location & Blue straggler & Blue straggler & Blue straggler & Blue straggler & Near main-sequence turnoff \\ Number of Frequencies & 236 & 53 & 84 & 124 & 17 \\ \hline Classification & \(\delta\) Sct/\(\gamma\) Dor & \(\gamma\) Dor/\(\delta\) Sct & \(\gamma\) Dor & \(\delta\) Sct/\(\gamma\) Dor & Eclipsing \\ & hybrid candidate & hybrid candidate & candidate & hybrid candidate & binary \\ \hline \end{tabular} \end{table} Table 2: Summary of properties of five NGC 6819 stars observed in _Kepler_ superstamp field. Effective temperature, log surface gravity, radius, mass, luminosity, and distance are from the _TESS_ Input Catalog (TIC) version 8.2 (Stassun et al., 2019) available on MAST.

\begin{table} \begin{tabular}{l c c c} \hline \hline KIC & \(T_{\rm eff}\) (K) & \(\log\) (g/cm s\({}^{-2}\)) & [M/H] \\ \hline 5024468 & 7770 \(\pm\) 90 & 4.381 \(\pm\) 0.081 & -0.196 \(\pm\) 0.306 \\ 5113357 & 7270 \(\pm\) 90 & 3.709 \(\pm\) 0.039 & -0.529 \(\pm\) 0.324 \\ 5112843 & 5600 \(\pm\) 150 & 4.37 \(\pm\) 0.05 & -1.424 \(\pm\) 0.174 \\ \hline \end{tabular} \end{table} Table 3: Summary of spectroscopic results.

The observed spectra were reduced using standard longslit and echelle data processing techniques and IRAF packages. All spectra were modeled with interpolated local thermodynamic equilibrium (LTE) synthetic spectra drawn from the BOSZ (Bohlin et al., 2017) spectral library to determine the fundamental atmospheric parameters. The BOSZ library was calculated for scaled solar metallicity with carbon and \(\alpha\)-element enhancement; therefore, individual abundance patterns cannot be investigated with our method. Table 3 summarizes the results for \(T_{\rm eff}\), log g, and [M/H] from the spectral analysis. Our fitting procedure (XTgrid; Nemeth et al., 2012) is based on a steepest-gradient chi-square minimizing method, which was developed to model hot stars. To improve its performance for cool stars, we added a grid-search preconditioning to the procedure. We step through a set of models to search for the best starting model for the steepest-descent part. Next, the descent part takes over in driving the fit and converges on the best solution. Once a convergence is achieved, the procedure explores the parameter errors by stepping through a set of points around the best solution. If a better solution is found during error calculations, then the procedure returns to the descent part, hence pushing the solution towards the global minimum.
XTgrid fits the radial velocity and projected rotation velocity of each spectrum along with the stellar surface parameters, such as the effective temperature (\(T_{\rm eff}\)), surface gravity (log g), and [M/H]. In addition, the procedure acquires photometric data from the VizieR Photometry Viewer5, distance data from the Gaia EDR3 database, and extinction values from the NED online services. The spectroscopic surface parameters combined with these measurements allow us to reduce systematics and derive absolute stellar parameters, such as mass, radius, and luminosity. An anti-correlation is observed between \(T_{\rm eff}\) and [Fe/H]. Fortunately, the spectral energy distribution (SED) helps in resolving this bias by restricting the \(T_{\rm eff}\). Another bias is observed in surface gravity, in particular at low temperature, where the spectrum is insensitive to the surface gravity. Therefore, the derived log g for KIC 5112843, having the lowest \(T_{\rm eff}\), is particularly uncertain.

Footnote 5: http://vizier.u-strasbg.fr/vizier/sed/

## 4 Steps toward Asteroseismology for NGC 6819 Blue Stragglers

Although we have found a rich spectrum of modes in the _Kepler_ data for NGC 6819 blue stragglers, and these stars have the added constraints of common age and possibly initial element abundances, asteroseismology for blue stragglers in general, and for these stars in particular, is difficult in practice. The pulsation modes in these blue stragglers are not obviously amenable to mode identification. In addition, modeling of binary interactions and mergers with unknown history requires exploration of many additional parameters. We summarize some efforts from the literature toward asteroseismology of blue stragglers and take a few first steps toward deriving properties of the NGC 6819 blue straggler stars using asteroseismic techniques.

### Blue straggler modeling and asteroseismology

Photometrically variable cluster and field blue stragglers were known before the contributions of space missions such as _Kepler_. Blue straggler asteroseismology attempts have focused on high-amplitude \(\delta\) Sct (HADS) or SX Phe stars, for which the radial fundamental and/or first-overtone modes are expected. Mode identification can sometimes be confirmed from period ratios; the expected first-overtone to fundamental period ratio is \(\sim 0.77\) (see, e.g., Gilliland et al., 1998); a simple automated check of this ratio is sketched below. Mateo (1993) reviews known photometrically variable blue stragglers in stellar systems older than 2-3 Gyr. Mateo estimates masses for cluster Dwarf Cepheid (a.k.a. HADS) blue stragglers, given as a ratio of the blue-straggler mass to that of the cluster RR Lyr stars having similar color. Mateo also estimates masses of blue stragglers in eclipsing binary systems based on their light curve, and outlines expectations for blue stragglers forming as a result of stellar mergers. Gilliland et al. (1998) calculate evolution and linear nonadiabatic pulsation models to derive theoretical relationships for evolutionary and pulsation masses for SX Phe variables and apply these to estimate masses for four double-mode SX Phe blue stragglers in the globular cluster 47 Tuc. Templeton et al. (2002) evolve grids of single-star normal-helium and high-helium models for blue-straggler SX Phe stars with metallicity representative of globular cluster M55. They find that period-luminosity relations are unaffected by blue-straggler formation if blue stragglers are fully mixed stellar mergers.
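The period-ratio diagnostic mentioned above is easy to automate. The sketch below (ours) scans a list of frequencies for pairs consistent with the expected first-overtone-to-fundamental period ratio of about 0.77; the tolerance is illustrative only.

```python
import itertools

def radial_pairs(freqs_cd, ratio=0.77, tol=0.01):
    """Pairs (f_fund, f_overtone) with P1/P0 = f_fund/f_overtone near ratio."""
    pairs = []
    for f_lo, f_hi in itertools.combinations(sorted(freqs_cd), 2):
        if abs(f_lo / f_hi - ratio) < tol:   # P1/P0 = f_lo/f_hi since P = 1/f
            pairs.append((f_lo, f_hi))
    return pairs

# toy usage with made-up frequencies (c/d): 10.0/12.99 ~ 0.770
print(radial_pairs([10.00, 12.99, 18.30]))   # -> [(10.0, 12.99)]
```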
Fiorentino et al. (2014) use Hubble Space Telescope images to characterize SX Phe variability in the Galactic globular cluster NGC 6541. They estimate pulsation masses using linear nonadiabatic models, finding good agreement with predictions of single-star evolution tracks. Fiorentino et al. (2015) calculate a grid of nonlinear radial pulsation models with metallicities representative of SX Phe variables in Galactic globular clusters and dwarf spheroidal galaxies, and use these to investigate the topology of the SX Phe instability strip. Bruntt et al. (2007) discuss results of a multi-site campaign to identify \(\delta\) Sct pulsations in the open cluster M67, and find two blue stragglers, EW Cnc and EX Cnc, with 46 and 21 frequencies, respectively. They calculate a grid of pulsation models taking rotation into account, and compare frequency predictions with observations, but conclude that further progress cannot be made without mode identification from spectroscopy or multicolor photometry. ### Blue straggler asteroseismology using Kepler or TESS data Asteroseismology has been attempted using _Kepler_ or _TESS_ data for only a handful of blue stragglers. Hatta et al. (2021) calculate nonstandard models for the field \(\delta\) Sct star KIC 11145123, which is a possible blue straggler. They explore models with artificially modified envelope composition representing the effects of interaction with another star in forming the blue straggler. They find the best fit to observed frequencies for models with enhanced envelope helium abundances. Hatta et al. (2021) make use of rotational frequency splittings seen in \(p\), \(g\), and mixed modes for this star to constrain mode identification, and point out that only two other main-sequence stars studied using _Kepler_ data have been found so far that show such well-resolved frequency splittings. Leiner et al. (2016) and Leiner (2018) use solar-like oscillations found in _Kepler K2_ data and asteroseismic scaling relations to determine the mass and radius of S1237, a 'yellow' straggler in M67 which presumably evolved from a blue straggler. The derived mass of S1237 is 2.9 \(\pm\) 0.2 M\({}_{\odot}\), more than twice the mass of the main-sequence turnoff stars. Antoci et al. (2019) use _TESS_ data and single-star evolution models to derive stellar parameters of the low-metallicity, high-amplitude \(\delta\) Sct star SX Phe, which is a possible field blue straggler. The likely 1st-overtone mode frequency was identified by its characteristic ratio with the fundamental mode. The _TESS_ data enable frequencies to be measured to high accuracy. The frequency ratio was fit to the 5th decimal place by a model of 1.05 M\({}_{\odot}\), initial hydrogen mass fraction \(X_{o}\)=0.667, \(Z\)=0.002, and age 2.8 Gyr. This initial hydrogen abundance is lower than normally used for single-star evolution models, indicating that SX Phe could have experienced a prior stellar interaction or merger event. ### Blue straggler formation and evolution modeling Many researchers have explored blue straggler formation scenarios, including mass transfer from a companion or binary merger, without consideration of asteroseismic constraints.
Sills (2015) introduces methods to model blue straggler evolution and discusses the success of various approaches in describing observations. Sandquist et al. (1997) use a smoothed particle hydrodynamics code to compare the properties of blue stragglers formed by direct collision with those resulting from binary merger, and examine the subsequent evolution of the merger products, including mass loss. They conclude that color distribution is an important constraint for blue-straggler formation scenarios, and that observed color distributions appear to rule out fully mixed models. Gosnell et al. (2019) use the binary modeling capabilities in the MESA (Paxton et al., 2013, 2015) stellar evolution code to constrain mass-transfer scenarios for two blue straggler + white dwarf binary systems in NGC 188. Portegies Zwart and Leigh (2019) use MESA and other codes to model stellar mergers and investigate the origin of two populations of blue stragglers in the globular cluster M30. They suggest that the redder population is a result of continuous formation of blue stragglers during 10 Gyr via mass transfer and mergers, while a bluer population is the result of stellar collisions during cluster core collapse around 3.2 Gyr ago. Y. Götberg, in a module for the 2021 MESA summer school available at Zenodo,6 outlines methods inspired by Renzo et al. (2020) to use MESA binary capabilities (Paxton et al., 2015) to model mass transfer, merger, envelope ejection, and subsequent stellar evolution. This treatment should be applicable for generating blue straggler models and subsequently calculating their pulsation properties to compare with observations. Footnote 6: doi: 10.5281/zenodo.523, [https://zenodo.org/record/5234616#.Y63zRsHMId0](https://zenodo.org/record/5234616#.Y63zRsHMId0) ### Outline for evolution modeling of NGC 6819 blue stragglers While it is beyond the scope of this paper to calculate asteroseismic models for the four NGC 6819 blue straggler pulsators discussed here, we attempt to outline the scope of the evolution modeling problem. One procedure, following Götberg (2021), would be:

1. Use the MESA binary module to model the evolution of detached stars in a binary system; initial masses and separations are free parameters.
2. Evolve the binary through the mass-transfer phase and up to the common-envelope phase, at which point the code will stop.
3. Estimate how much mass is ejected during the merger (another free parameter).
4. Decide the entropy profile of the merged component (there are many reasonable choices).
5. Calculate the remaining evolution to 2.4 Gyr, the age of NGC 6819.
6. Calculate the pulsation frequencies of the resulting models and compare them to observations.

Steps 1 and 2 are done using the MESA binary module (Paxton et al., 2015). For steps 3-5 above, given a number of assumptions, MESA can build the merged, relaxed model for further evolution. Discussion and example MESA inlists are available at either the Götberg Zenodo link or at the Zenodo link in the Appendix of Renzo et al. (2020). The evolution of the merger product can be continued with MESA (step 5). The model frequencies can be calculated (step 6) with the GYRE in MESA capabilities, or using the GYRE stellar oscillation code (Townsend and Teitler, 2013) separately. There are of course many choices and settings in MESA to consider, for example initial metallicity and helium abundance, initial stellar rotation rates, the treatment of convection and convective overshooting, opacities and abundance mixture, etc.
Besides a binary merger, there are other possible formation scenarios for the NGC 6819 blue stragglers. For example, these stars could have accreted mass from an undetected binary companion, or from a close interaction or collision with a passing star. There is evidence that at least one of the stars discussed here, KIC 5113357, may show a binary orbital period in its spectrum. There is also the remote possibility that these blue stragglers are interlopers to the cluster that happen to have nearly the same kinematic properties as the other cluster members. In the latter case, it might be appropriate to model the stars using single-star evolution, ignoring the cluster age constraint. Perhaps such models would help to rule out this scenario. ### Applying asteroseismic scaling relations to NGC 6819 \(\delta\) Scuti blue stragglers We apply asteroseismic 'scaling relations' developed for \(\delta\) Sct stars that yield a mean density and effective temperature for KIC 5024468 and KIC 5113357, which can then be used to estimate other stellar properties. For the two \(\gamma\) Dor candidates, KIC 5024084 and KIC 5024455, we search for period spacing sequences, which may hold information on near-core rotation rates. #### 4.5.1 Frequency separation-mean-density relation Unlike the case for solar-like oscillators, \(\delta\) Sct pulsations have low radial order and are not in the asymptotic regime, and therefore do not show uniform frequency separations between modes of the same angular degree \(\ell\) and consecutive radial order \(n\) as do solar-like oscillators. Nevertheless, regularities in frequency separations have been found in \(\delta\) Sct stars (see, e.g., Suárez et al., 2014; Paparó et al., 2016, 2020; Bedding et al., 2020) that could be associated with a large separation. Suárez et al. (2014) developed a frequency separation (\(\Delta\nu\))-mean-density relation for \(\delta\) Sct stars based on stellar modeling. García Hernández et al. (2015) confirmed this relation observationally using \(\delta\) Sct stars in eclipsing binaries: \[\frac{\bar{\rho}}{\bar{\rho}_{\odot}}=(1.55^{+1.07}_{-0.68})\left(\frac{\Delta\nu}{\Delta\nu_{\odot}}\right)^{2.035\pm 0.095} \tag{1}\] The relation is normalized to solar values, with \(\Delta\nu_{\odot}=134.8\)\(\mu\)Hz (Kjeldsen, 2008). For the two \(\delta\) Sct stars with many significant frequencies, KIC 5024468 and KIC 5113357, we calculated the frequency difference between each mode and all of the other modes and created a histogram of frequency spacings. This method was used to find characteristic spacings for \(\delta\) Sct stars by, e.g., Breger et al. (2008), Breger et al. (2009), and Zwintz et al. (2011). We excluded from this procedure frequencies less than 3 c/d, which are not likely to be \(\delta\) Sct modes, and also experimented with excluding modes below a S/N threshold, since these may be higher-degree modes or less-reliable detections. Figure 11 shows the results for these two stars, excluding modes with S/N \(<\) 10. Ignoring the peak near 0 c/d, which is a consequence of closely spaced modes that may occur for several reasons (e.g., modes of different angular degree with coincidentally nearly the same frequency, or rotational splitting), we find peaks at 4.0-4.5 c/d for KIC 5024468, and 5.5-6 c/d for KIC 5113357. To estimate a common frequency spacing among modes, we also applied the Kolmogorov-Smirnov (K-S) test as discussed by Kawaler (1988) to test the significance of a perceived uniform spacing.
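Both procedures are straightforward to sketch. The snippet below assumes `freqs` is an array of extracted frequencies in c/d with the low-frequency (\(<\) 3 c/d) and low-S/N peaks already removed; the bin width and trial-spacing grid are illustrative choices rather than values from our analysis, and the K-S step is a simplified variant of the Kawaler (1988) test in which the frequencies are folded modulo each trial spacing and the resulting phases are compared with a uniform distribution.

```python
import numpy as np
from scipy import stats

def spacing_histogram(freqs, bin_width=0.1, max_spacing=15.0):
    """Histogram of all pairwise frequency differences (c/d)."""
    f = np.sort(np.asarray(freqs))
    diffs = np.abs(f[:, None] - f[None, :])
    diffs = diffs[np.triu_indices_from(diffs, k=1)]   # unique pairs only
    bins = np.arange(0.0, max_spacing + bin_width, bin_width)
    return np.histogram(diffs, bins=bins)

def ks_q(freqs, trial_spacings):
    """Q for each trial spacing: probability that the folded frequencies are
    uniformly distributed. Minima in Q flag a non-random (common) spacing."""
    f = np.asarray(freqs)
    return np.array([stats.kstest(np.mod(f, d) / d, "uniform").pvalue
                     for d in trial_spacings])
```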
We plotted the results in Figure 12, again excluding frequencies with S/N \(<\) 10. Non-random period spacings will appear as minima in Q, where Q is the probability that the spacings are randomly distributed. The 'confidence level' that a given spacing is significant is (1-Q) \(\times\) 100%. There are minima in these two plots coinciding well with the histogram results, giving 4.4 c/d for KIC 5024468 and 5.5 c/d for KIC 5113357. Note that for KIC 5113357 the deepest minima occur at around 12 c/d, or around twice the selected frequency spacing, but physically we expect that the smallest frequency spacing is the actual one, and that multiples of this spacing should also appear in the analysis. Likewise, for KIC 5024468, while the deepest minimum in the K-S test is at 4.4 c/d, there is a shallower but significant minimum around half this value, at 2.3 c/d, and so this value is more likely to be the actual frequency spacing corresponding to the mean density. We also do not expect the spacing to be exact for these modes in the non-asymptotic regime, so there should be a spread in frequency separations, which will make the dips shallower. Using these frequency spacings in the asteroseismic scaling relation (Equation 1) results in a mean density of 0.0571 \(\bar{\rho}_{\odot}\) for KIC 5024468 and 0.3367 \(\bar{\rho}_{\odot}\) for KIC 5113357. Taking into account the uncertainties in the asteroseismic scaling relation, the mean densities of the two stars are 0.037-0.083 \(\bar{\rho}_{\odot}\) and 0.203-0.530 \(\bar{\rho}_{\odot}\), respectively.
Figure 11: Distribution of frequency spacings for \(\delta\) Sct stars KIC 5024468 (a) and KIC 5113357 (b).
Figure 12: Kolmogorov-Smirnov test results for frequency separations of \(\delta\) Sct stars KIC 5024468 (a) and KIC 5113357 (b).
#### 4.5.2 \(\bar{T}_{\rm eff}-\nu_{max}\) scaling relation Barceló Forteza et al. (2018) derive a mean \(T_{\rm eff}\)-\(\nu_{max}\) scaling relation for \(\delta\) Sct stars based on data from over 1000 stars observed during the _CoRoT_ (Baglin et al., 2006) and _Kepler_ missions: \[\bar{T}_{\rm eff}(K)=(2.94\pm 0.24)\nu_{max}(\mu Hz)+(6980\pm 50) \tag{2}\] The frequencies with maximum amplitude for KIC 5024468 and KIC 5113357 are 147.05 \(\mu\)Hz and 198.93 \(\mu\)Hz, respectively, giving \(T_{\rm eff}\) values, according to this scaling relation, of 7412 \(\pm\) 85 K and 7565 \(\pm\) 98 K. For KIC 5024468, the scaling-relation \(T_{\rm eff}\) is between the higher \(T_{\rm eff}\) obtained from spectroscopy in Table 3 and the lower \(T_{\rm eff}\) of the _TESS_ Input Catalog in Table 2. For KIC 5113357, the \(T_{\rm eff}\) from the scaling relation is slightly hotter than, but not inconsistent with, the \(T_{\rm eff}\) values in Table 2 and Table 3. The \(T_{\rm eff}\) and mean density derived from these asteroseismic scaling relations can in principle be used to calculate the stellar mass, radius, and luminosity of these two stars. These estimates can be approached in two ways: 1) First, adopting a log g value, the mass and radius are constrained by the mean density. Then, using \(T_{\rm eff}\), the luminosity can be obtained. This method is very sensitive to the value of the surface gravity, g, because the mass is proportional to g\({}^{3}\). 2) Alternatively, assuming the TIC luminosity value (Table 2) is reasonably accurate, one could use \(T_{\rm eff}\) to derive the radius, and use the radius and mean density to derive the stellar mass. This second method does not make use of the surface gravity value.
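The chain of inferences in method 1 is short enough to check directly: Equation 1 gives the mean density from \(\Delta\nu\), Equation 2 gives \(T_{\rm eff}\) from \(\nu_{max}\), an adopted log g then fixes the mass and radius through M \(\propto\) g\({}^{3}\)/\(\bar{\rho}^{2}\) and R \(\propto\) g/\(\bar{\rho}\), and the Stefan-Boltzmann law gives the luminosity. The sketch below reproduces the values adopted for the two \(\delta\) Sct stars in Table 4 further down; the solar reference values are standard choices and their exact values shift only the final digits.

```python
import numpy as np

DNU_SUN = 134.8      # solar large separation, muHz
LOGG_SUN = 4.438     # solar log g, cgs
TEFF_SUN = 5777.0    # solar effective temperature, K

def infer_parameters(dnu_cd, nu_max_muhz, logg):
    """Method 1: mean density from Eq. (1), Teff from Eq. (2), then
    M, R, L (solar units) from an adopted surface gravity."""
    dnu = dnu_cd * 1e6 / 86400.0                   # c/d -> muHz
    rho = 1.55 * (dnu / DNU_SUN) ** 2.035          # Eq. (1)
    teff = 2.94 * nu_max_muhz + 6980.0             # Eq. (2)
    g = 10.0 ** (logg - LOGG_SUN)                  # g in solar units
    mass = g ** 3 / rho ** 2                       # from M ~ g R^2 and rho ~ M / R^3
    radius = g / rho
    lum = radius ** 2 * (teff / TEFF_SUN) ** 4     # Stefan-Boltzmann
    return rho, teff, mass, radius, lum

# KIC 5024468: Delta nu = 2.3 c/d, nu_max = 147.05 muHz, adjusted log g = 3.72
print(infer_parameters(2.3, 147.05, 3.72))   # ~(0.057, 7412 K, 2.15, 3.35, 30.4)
# KIC 5113357: Delta nu = 5.5 c/d, nu_max = 198.93 muHz, adjusted log g = 4.19
print(infer_parameters(5.5, 198.93, 4.19))   # ~(0.337, 7565 K, 1.59, 1.68, 8.3)
```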
We can compare the TIC luminosity values with those found in the Gaia DR2 archive (Gaia Collaboration et al., 2016, 2018): the Gaia DR2 luminosities for KIC 5024468 and KIC 5113357 are 32.3 L\({}_{\odot}\) and 7.2 L\({}_{\odot}\), respectively, very close to the TIC values of 34.6 \(\pm\) 4.0 L\({}_{\odot}\) and 8.5 L\({}_{\odot}\). The uncertainties in the derived mean density and measured log g are large enough that the parameters of the two stars are not well constrained. Nevertheless, we can adopt the mean density and \(T_{\rm eff}\) from the asteroseismic scaling relations and apply both methods, adjusting log g to attain consistency with the TIC luminosity value. The results of this exercise are summarized in Table 4. For KIC 5024468, the derived luminosity can be made consistent with the TIC luminosity value if log g = 3.72 is adopted. This value is slightly higher than that of the TIC, but lower than the value obtained from the analysis of the low-resolution spectroscopy of Table 3. The resulting stellar mass is 2.15 M\({}_{\odot}\), reasonable for a blue straggler \(\delta\) Sct star cluster member. If we had instead adopted a mean frequency separation \(\Delta\nu\) = 4.4 c/d, a high log g value of 4.3, consistent with the value in Table 3, would have been required to attain consistency with the TIC luminosity. The inferred stellar mass, however, would have been 8.4 M\({}_{\odot}\), much higher than the estimate in the TIC, and unreasonably high for a \(\delta\) Sct star and probably also for a blue straggler cluster member resulting from stellar interactions or a merger. \begin{table} \begin{tabular}{l c c} \hline \hline \multicolumn{1}{c}{ KIC} & 5024468 & 5113357 \\ \hline \(\Delta\nu\) (c/d) & 2.3 & 5.5 \\ \(\nu_{max}\) (\(\mu\)Hz) & 147.05 & 198.93 \\ \hline Mean Density (\(\bar{\rho}_{\odot}\)) & 0.0571 & 0.3367 \\ \(T_{\rm eff}\) (K) & 7412 & 7565 \\ Adjusted log g & 3.72 & 4.19 \\ Mass (M\({}_{\odot}\)) & 2.15 & 1.59 \\ Radius (R\({}_{\odot}\)) & 3.35 & 1.68 \\ Resulting Luminosity (L\({}_{\odot}\)) & 30.4 & 8.28 \\ \hline \end{tabular} \end{table} Table 4: Parameters inferred for NGC 6819 blue straggler \(\delta\) Sct stars from the \(\Delta\nu\)–\(\bar{\rho}\) and \(T_{\rm eff}\)–\(\nu_{max}\) scaling relations. For KIC 5113357, the derived luminosity can be made approximately consistent with the TIC luminosity value if a log g of 4.19 is adopted, which is close to the TIC value, but higher than the value from spectroscopy in Table 3. If the Table 3 log g value, 3.7, is adopted instead, the derived stellar mass is unreasonably low, 0.054 M\({}_{\odot}\). Breger et al. (2009) use theoretical models to correlate mean separations of \(\delta\) Sct radial modes with log g values. According to their Figure 8, models with a mean separation of 2.3 c/d have log g = 3.7, and models with a mean separation of 5.5 c/d have log g = 4.2, confirming our log g inferences in Table 4, columns 2 and 3. However, there is a caveat that the Breger et al. (2009) results are based on single-star evolution models. ### Period spacings for \(\gamma\) Doradus stars As discussed by, e.g., Van Reeth et al. (2015a,b, 2016, 2018); Saio et al. (2018); Li et al. (2019, 2020), gravity-mode and global Rossby-mode period-spacing patterns in \(\gamma\) Dor variables can be used to probe internal rotation and even to detect differential rotation. Li et al. (2020) found clear period spacing sequences in 611 out of 2085 \(\gamma\) Dor variables observed for four years by _Kepler_.
The slope of Rossby-mode period spacing vs. period is positive, as these modes propagate retrograde to the rotation, while the slope of prograde and zonal gravity modes is negative. We attempted to identify period spacings for the blue straggler \(\gamma\) Dor candidates KIC 5024084 and KIC 5024455. Figure 13 shows period spacing vs. period for consecutive modes for these two stars. The two plots appear by eye as scatter plots, and clear sequences with positive or negative slope are not evident. Perhaps this result is not unexpected, because Li et al. (2020) were only able to find clear sequences in 30% of their original sample. Also, Li et al. (2020) retained frequencies with S/N \(>\) 3 in their light-curve analysis, whereas we have retained frequencies with S/N \(>\) 25 for KIC 5024084 and S/N \(>\) 8.4 for KIC 5024455. In addition, sequences can deviate from a linear fit because of differential rotation or mode trapping, making them more difficult to identify.
Figure 13: Spacing between consecutive periods vs. period for \(\gamma\) Dor candidates KIC 5024084 (a) and KIC 5024455 (b).
We ended pre-whitening for low-frequency peaks at S/N \(\sim\) 25 for KIC 5024084 and S/N = 8.4 for KIC 5024455 because it became unreliable to pre-whiten additional peaks in the low-frequency region where the noise level increases. Nevertheless, we attempted further pre-whitening for these two stars to identify additional low-frequency modes. This process increased the number of low-frequency modes from 49 to about 130 for KIC 5024084, and from 84 to 210 for KIC 5024455. Figure 14 shows the resulting period spacings vs. period for these two stars, zooming in on the shorter-period region. No clear sequences are discernible, although there are some sequences of more than 10 points that could represent partial sequences. Even though this procedure is very subjective, and intended to be illustrative only, we proceeded to identify some hypothetical sequences and consider the implications for mode identification and near-core rotation rates. For KIC 5024084, we chose a hypothetical positive-slope sequence represented by the light-blue points in Figure 14(a). This sequence has 18 points; a linear regression fit gives slope = 647.3 s (0.00749 d), \(y\)-intercept = -235.34 s, and correlation coefficient R = 0.990. The mean period of this sequence is 1.115 d, and the average period spacing is 487 s. Comparing to the values for the 611 stars in Figures 8 and 14 of Li et al. (2020), these values are similar to those of stars showing \(k\) = -2, \(m\) = -1 Rossby-mode sequences, although the mean spacing is a little low, and the slope a little shallow, compared to the stars in the Li et al. (2020) sample. The implied near-core rotation rate is around 0.5 d\({}^{-1}\). Interestingly, this rotation rate coincides with the highest-amplitude frequency suggested as the surface rotation rate by Balona et al. (2013). Li et al. (2020) were able to identify surface rotation rates for 58 of the 611 stars in their sample, and found that the difference between near-core and surface rotation rates is no larger than 5%.
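The quoted sequence statistics follow from an ordinary least-squares fit of period spacing against period. A minimal sketch is given below; `periods` stands for a hypothetical array holding the periods (in days) of one candidate sequence, and pairing each spacing with the shorter period of the pair is one convention for the fit.

```python
import numpy as np
from scipy.stats import linregress

def sequence_stats(periods):
    """Slope (s/d), intercept (s), correlation coefficient R, mean period (d)
    and mean spacing (s) for one candidate period-spacing sequence."""
    p = np.sort(np.asarray(periods))       # periods of consecutive modes, days
    spacings = np.diff(p) * 86400.0        # consecutive spacings, seconds
    fit = linregress(p[:-1], spacings)
    return fit.slope, fit.intercept, fit.rvalue, p.mean(), spacings.mean()
```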
For KIC 5024455, we highlighted two hypothetical negative-slope sequences represented by the light-blue and red points in Figure 14(b). The first sequence, with 11 points marked in red, has slope = -1748.4 s (-0.0265 d), \(y\)-intercept = 1125.2 s, and correlation coefficient R = 0.992. The mean period of the sequence is 0.459 d, and the average period spacing is 323 s. These values are consistent with an \(\ell\)=2 \(g\)-mode sequence according to Figure 8 of Li et al. (2020). The second sequence, with 12 points marked in light blue, has slope = -2466.1 s (-0.0285 d), \(y\)-intercept = 2879 s, and correlation coefficient R = 0.994. The mean period of the sequence is 0.869 d, and the average period spacing is 736 s. These values are consistent with an \(\ell\)=1, \(m\)=1 \(g\)-mode sequence according to Figures 8 and 14 of Li et al. (2020), and imply a near-core rotation rate of around 0.6 d\({}^{-1}\).
Figure 14: Spacing between consecutive periods vs. period for KIC 5024084 (a) and KIC 5024455 (b) resulting from further pre-whitening of the light curve to identify additional low frequencies. The points in light blue or red represent hypothetical spacing sequences. The lines show linear regression fits to these sequences.
## 5 Conclusions
Our search for variable stars using pixel data from the _Kepler_ NGC 6819 superstamp field resulted in the identification of five stars that were determined to be cluster members, showing multimode variability in the frequency ranges characteristic of \(\gamma\) Dor (\(<\) 5 c/d) and/or \(\delta\) Sct (\(>\)5 to \(\sim\)40 c/d) stars. Two of these stars, KIC 5024468 and KIC 5113357, show a rich spectrum of \(\delta\) Scuti pulsation modes, with 236 and 124 significant frequencies identified, respectively, while two stars show mainly low-frequency modes characteristic of either \(\gamma\) Dor gravity-mode pulsations or global Rossby modes. Low-frequency variability caused by rotation and star spots cannot be ruled out. The fifth star has an unusual spectrum with several harmonics of two main frequencies. This star shows X-ray activity and may be an RS CVn variable. We identified frequency separations in the two \(\delta\) Sct stars KIC 5024468 and KIC 5113357 that are likely associated with the large frequency spacing \(\Delta\nu\) between modes of the same angular degree \(\ell\) and consecutive radial order \(n\). Making use of the \(\delta\) Sct \(\Delta\nu-\bar{\rho}\) and \(\bar{T}_{\rm eff}-\nu_{max}\) asteroseismic scaling relations, we were able to estimate the mean density and \(T_{\rm eff}\). We then used these values in conjunction with log g and luminosity values to estimate stellar masses and radii. For the two stars showing many low frequencies, KIC 5024084 and KIC 5024455, we could not identify clear sequences of consecutive period spacings as were found by Li et al. (2020) for many \(\gamma\) Dor variables. Nevertheless, we analyzed a hypothetical Rossby-mode sequence for KIC 5024084 and two hypothetical \(g\)-mode sequences for KIC 5024455, and compared properties of these sequences with those of the Li et al. sample, deriving near-core rotation rates of 0.5 d\({}^{-1}\) and 0.6 d\({}^{-1}\), respectively. We have presented new low-resolution spectroscopic results for only three of the five stars discussed here. However, some of these spectroscopic results are inconsistent with determinations from asteroseismic inferences for log g and \(T_{\rm eff}\), as well as with the estimates found in the _TESS_ Input Catalog. Further analysis, as well as additional spectra, particularly high-resolution spectra, would be useful to resolve these discrepancies and to constrain other parameters, for example, stellar rotation rates.
Time-series spectra could be examined for radial-velocity variations and used to detect binarity and characterize binary orbits for KIC 5024455, KIC 5113357, and KIC 5112843, possibly providing additional stellar modeling constraints. Time-series spectroscopy and color photometry could also be of use to distinguish frequencies resulting from rotation and star spots from pulsations. These results from long time-series _Kepler_ photometry, combined with the common age and element abundances of the cluster members, should provide useful constraints for stellar modeling. These data could be useful for determining the origins and internal structure of the four blue straggler stars. We thank the anonymous reviewer for suggestions which greatly improved this paper. We are grateful for data from the NASA _Kepler_ spacecraft. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France, and the Mikulski Archive for Space Telescopes (MAST). This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. J.G. and A.H. acknowledge a Los Alamos National Laboratory Center for Space and Earth Sciences grant CSES XX8P and support from LANL, managed by Triad National Security, LLC for the U.S. DOE's NNSA, Contract #89233218CNA000001. J.G. thanks the Society for Astronomical Sciences for the opportunity to present results at their 2022 Symposium, and the American Association of Variable Star Observers for the opportunity to present results at their 110th Annual Meeting. Funding from the National Science Centre in Poland No. UMO-2017/26/E/ST9/00703 and UMO-2017/25/B/ST9/02218 is acknowledged. P.N. acknowledges support from the Grant Agency of the Czech Republic (GACR 22-34467S). The Astronomical Institute in Ondrejov is supported by the project RVO:67985815.
2306.03548
Learning Dynamical Systems from Noisy Data with Inverse-Explicit Integrators
We introduce the mean inverse integrator (MII), a novel approach to increase the accuracy when training neural networks to approximate vector fields of dynamical systems from noisy data. This method can be used to average multiple trajectories obtained by numerical integrators such as Runge-Kutta methods. We show that the class of mono-implicit Runge-Kutta methods (MIRK) has particular advantages when used in connection with MII. When training vector field approximations, explicit expressions for the loss functions are obtained when inserting the training data in the MIRK formulae, unlocking symmetric and high-order integrators that would otherwise be implicit for initial value problems. The combined approach of applying MIRK within MII yields a significantly lower error compared to the plain use of the numerical integrator without averaging the trajectories. This is demonstrated with experiments using data from several (chaotic) Hamiltonian systems. Additionally, we perform a sensitivity analysis of the loss functions under normally distributed perturbations, supporting the favorable performance of MII.
Håkon Noren, Sølve Eidnes, Elena Celledoni
2023-06-06T09:50:38Z
http://arxiv.org/abs/2306.03548v1
# Learning Dynamical Systems from Noisy Data with Inverse-Explicit Integrators ###### Abstract We introduce the mean inverse integrator (MII), a novel approach to increase the accuracy when training neural networks to approximate vector fields of dynamical systems from noisy data. This method can be used to average multiple trajectories obtained by numerical integrators such as Runge-Kutta methods. We show that the class of mono-implicit Runge-Kutta methods (MIRK) has particular advantages when used in connection with MII. When training vector field approximations, explicit expressions for the loss functions are obtained when inserting the training data in the MIRK formulae, unlocking symmetric and high-order integrators that would otherwise be implicit for initial value problems. The combined approach of applying MIRK within MII yields a significantly lower error compared to the plain use of the numerical integrator without averaging the trajectories. This is demonstrated with experiments using data from several (chaotic) Hamiltonian systems. Additionally, we perform a sensitivity analysis of the loss functions under normally distributed perturbations, supporting the favorable performance of MII. ## 1 Introduction Recently, many deep learning methodologies have been introduced to increase the efficiency and quality of scientific computations [1; 2; 3; 4]. In physics-informed machine learning, deep neural networks are purposely built to enforce physical laws. As an example, Hamiltonian neural networks (HNNs) [5] aim at learning the Hamiltonian function from temporal observations. The Hamiltonian formalism was derived from classical mechanics for modeling a wide variety of physical systems. The temporal evolution of such systems is fully determined when the Hamiltonian function is known, and it is characterized by geometric properties such as the preservation of energy, the symplectic structure and the time-reversal symmetry of the flow [6; 7]. Numerical integrators that compute solutions preserving such properties are studied in the field of geometric numerical integration [7; 8]. Thus, deep learning, classical mechanics and geometric numerical integration are all relevant to the development of HNNs. In this work, we try to identify the optimal strategy for using numerical integrators when constructing loss functions for HNNs that are trained on noisy and sparse data. Generally, we aim at learning autonomous systems of first-order ordinary differential equations (ODE) \[\frac{d}{dt}y=f(y(t)),\quad y:[0,T]\rightarrow\mathbb{R}^{n}. \tag{1}\] In the traditional setting, solving an initial value problem (IVP) means computing approximated solutions \(y_{n}\approx y(t_{n})\) when the vector field \(f(y)\) and an initial value \(y(t_{0})=y_{0}\) are known. The focus of our study is the corresponding inverse problem; assuming knowledge of multiple noisy samples of the solution, \(S_{N}=\{\tilde{y}_{n}\}_{n=0}^{N}\), the aim is to approximate the vector field \(f\) with a neural network model \(f_{\theta}\). We will assume that the observations originate from a (canonical) Hamiltonian system, with a Hamiltonian \(H:\mathbb{R}^{2d}\rightarrow\mathbb{R}\), where the vector field is given by \[f(y)=J\nabla H(y(t)),\quad J:=\begin{bmatrix}0&I\\ -I&0\end{bmatrix}\in\mathbb{R}^{2d\times 2d}. \tag{2}\] This allows for learning the Hamiltonian function directly by setting \(f_{\theta}(y)=J\nabla H_{\theta}(y)\), as proposed initially in [5]. 
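Since \(f_{\theta}=J\nabla H_{\theta}\), the learned vector field is obtained from a scalar network output by automatic differentiation. A minimal PyTorch sketch of this construction is given below; the network `H_net` is a placeholder, not the architecture used in the experiments reported later.

```python
import torch

def hamiltonian_vector_field(H_net, y):
    """f_theta(y) = J grad H_theta(y) for y = (q, p) with q, p in R^d."""
    y = y.clone().requires_grad_(True)
    H = H_net(y).sum()                       # scalar Hamiltonian summed over the batch
    grad = torch.autograd.grad(H, y, create_graph=True)[0]
    d = y.shape[-1] // 2
    dHdq, dHdp = grad[..., :d], grad[..., d:]
    return torch.cat([dHdp, -dHdq], dim=-1)  # apply J = [[0, I], [-I, 0]]
```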
Recently, many works highlight the benefit of using symplectic integrators when learning Hamiltonian neural networks [9; 10; 11; 12]. Here, we study what happens if, instead of using symplectic methods, efficient and higher-order MIRK methods are applied for inverse problems. We develop different approaches and apply them to learn highly oscillatory and chaotic dynamical systems from noisy data. The methods are general: they are not limited to separable Hamiltonian systems, and could indeed be used to learn any first-order ODE. However, we focus our study on Hamiltonian systems, in order to build on the latest research on HNNs. Specifically, we compare our methods to the use of symplectic integrators to train Hamiltonian neural networks. Our contributions can be summarized as follows:

* We introduce the **mean inverse integrator** (MII), which efficiently averages trajectories of MIRK methods in order to increase accuracy when learning vector fields from noisy data (Definition 5.1).
* We present an **analysis of the sensitivity** of the loss function to perturbations, giving insight into when the MII method yields an improvement over a standard one-step scheme (Theorem 5.2).
* We show that **symplectic MIRK** methods have at most order \(p=2\) (Theorem 4.4). In particular, the second-order implicit midpoint method is the symplectic MIRK method with the minimal number of stages.

Finally, numerical experiments on several Hamiltonian systems benchmark MII against one-step training and symplectic recurrent neural networks (SRNN) [10], which rely on the Störmer-Verlet integrator. The structural difference between these three approaches is presented in Figure 2. Additionally, we demonstrate that substituting Störmer-Verlet with the classic Runge-Kutta method (RK\(4\)) in the SRNN framework yields a significant reduction in error and allows accurate learning of non-separable Hamiltonian systems. ## 2 Related work Hamiltonian neural networks were introduced in [5]. The numerical integration of Hamiltonian ODEs and the preservation of the symplectic structure of the ODE flow under numerical discretization have been widely studied over several decades [8; 7]. The symplecticity property is key and could inform the neural network architecture [13] or guide the choice of numerical integrator, yielding a theoretical guarantee that the learning target is actually a (modified) Hamiltonian vector field [14; 9], building on the backward error analysis framework [8]. Discrete gradient methods form an approach to numerical integration that guarantees exact preservation of the (learned) Hamiltonian, and an algorithm for training Hamiltonian neural networks using discrete gradient integrators is developed in [15] and extended to higher order in [16]. Since, for the inverse problem, we want to approximate the time derivative of the solution, \(f\), using only \(\tilde{y}_{n}\), we need to use a numerical integrator when specifying the neural network loss function. For learning dynamical systems from data, explicit methods such as RK\(4\) are widely used [5; 17; 18]. However, explicit methods cannot in general preserve time-symmetry or symplecticity, and they often have worse stability properties compared to implicit methods [19]. Assuming that the underlying Hamiltonian is separable allows for explicit integration with the symplectic Störmer-Verlet method, which is exploited in [10; 20]. Symplecticity could be achieved without the limiting assumption of separability by training using the implicit midpoint method [12].
As pointed out in [12], this integrator could be turned into an explicit method in training by inserting sequential training data \(\tilde{y}_{n}\) and \(\tilde{y}_{n+1}\). In fact, the MIRK class [21; 22] contains all Runge-Kutta (RK) methods (including the midpoint method) that could be turned into explicit schemes when inserting the training data. This is exploited in [23], where high-order MIRK methods are used to train HNNs, achieving accurate interpolation and extrapolation of a single trajectory with large step size, few samples and assuming zero noise. The assumption of noise-free data limits the potential of learning from physical measurements or applications on data sets from industry. This issue is addressed in [10], presenting symplectic recurrent neural networks (SRNN). Here, Störmer-Verlet is used to integrate multiple steps and is combined with initial state optimization (ISO) before computing the loss. ISO is applied after training \(f_{\theta}\) for a given number of epochs and aims at finding the optimal initial value \(\hat{y}_{0}\), such that the distance to the subsequent observed points \(\tilde{y}_{1},\ldots,\tilde{y}_{N}\) is minimized when integrating over \(f_{\theta}\). While [10] is limited by only considering separable systems, [24] aims at identifying the optimal combination of third-order polynomial basis functions to approximate a cubic non-separable Hamiltonian from noisy data, using a Bayesian framework. ## 3 Background on numerical integration Some necessary and fundamental concepts on numerical integration and the geometry of Hamiltonian systems are presented below to inform the discussion on which integrators to use in inverse problems. Further details can be found in Appendix C. **Fundamental concepts:** An important subclass of the general first-order ODEs (1) is the class of Hamiltonian systems, as given by (2). Often, the solution is partitioned into the coordinates \(y(t)=[q(t),p(t)]^{T}\), with \(q(t),p(t)\in\mathbb{R}^{d}\). A separable Hamiltonian system is one where the Hamiltonian can be written as the sum of two scalar functions, often representing the kinetic and potential energy, depending only on \(q\) and \(p\) respectively; that is, \(H(q,p)=H_{1}(q)+H_{2}(p)\). The \(h\)-flow of an ODE is a map \(\varphi_{h,f}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) sending an initial value \(y(t_{0})\) to the solution of the ODE at time \(t_{0}+h\), given by \(\varphi_{h,f}(y(t_{0})):=y(t_{0}+h)\). A numerical integration method \(\Phi_{h,f}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is a map approximating the exact flow of the ODE, so that \[y(t_{1})\approx y_{1}=\Phi_{h,f}(y_{0}).\] Here, \(y(t_{n})\) represents the exact solution and we denote by \(y_{n}\) the approximation at time \(t_{n}=t_{0}+nh\). It should be noted that the flow map satisfies the following group property: \[\varphi_{h_{1},f}\circ\varphi_{h_{2},f}\big{(}y(t_{0})\big{)}=\varphi_{h_{1},f}\big{(}y(t_{0}+h_{2})\big{)}=\varphi_{h_{1}+h_{2},f}(y(t_{0})). \tag{3}\] In other words, a composition of two flows with step sizes \(h_{1},h_{2}\) is equivalent to the flow map over \(f\) with step size \(h_{1}+h_{2}\). This property is not shared by numerical integrators for general vector fields.
The order of a numerical integrator \(\Phi_{h,f}\) characterizes how the error after one step depends on the step size \(h\) and is given by the integer \(p\) such that the following holds: \[\|y_{1}-y(t_{0}+h)\|=\|\Phi_{h,f}(y_{0})-\varphi_{h,f}(y(t_{0}))\|=\mathcal{O}(h^{p+1}).\] **Mono-implicit Runge-Kutta methods:** Given vectors \(b,v\in\mathbb{R}^{s}\) and a strictly lower triangular matrix \(D\in\mathbb{R}^{s\times s}\), a MIRK method is a Runge-Kutta method where \(A=D+vb^{T}\)[25; 26], and we assume that \([A]_{ij}=a_{ij}\) is the stage-coefficient matrix. This implies that the MIRK method can be written in the form \[\begin{split} y_{n+1}&=y_{n}+h\sum_{i=1}^{s}b_{i}k_{i},\\ k_{i}&=f\big{(}y_{n}+v_{i}(y_{n+1}-y_{n})+h\sum_{j=1}^{s}d_{ij}k_{j}\big{)}.\end{split} \tag{4}\] Specific MIRK methods and further details on Runge-Kutta schemes are discussed in Appendix C.2. **Symplectic methods:** The flow map of a Hamiltonian system is symplectic, meaning that its Jacobian \(\Upsilon_{\varphi}:=\frac{\partial}{\partial y}\varphi_{h,f}(y)\) satisfies \(\Upsilon_{\varphi}^{T}J\Upsilon_{\varphi}=J\), where \(J\) is the same matrix as in (2). As explained in [8; Ch. VI.2], this is equivalent to the preservation of a projected area in the phase space of \([q,p]^{T}\). Similarly, a numerical integrator is symplectic if its Jacobian \(\Upsilon_{\Phi}:=\frac{\partial}{\partial y_{n}}\Phi_{h,f}(y_{n})\) satisfies \(\Upsilon_{\Phi}^{T}J\Upsilon_{\Phi}=J\). It is possible to prove [8; Ch. VI.4] that a Runge-Kutta method is symplectic if and only if the coefficients satisfy \[b_{i}a_{ij}+b_{j}a_{ji}-b_{i}b_{j}=0,\quad i,j=1,\ldots,s. \tag{5}\] ## 4 Numerical integration schemes for solving inverse problems We will now consider different ways to use numerical integrators when training Hamiltonian neural networks and present important properties of MIRK methods, a key component of the MII that is presented in Section 5. **Inverse ODE problems in Hamiltonian form:** We assume that we have potentially noisy samples \(S_{N}=\{\tilde{y}_{n}\}_{n=0}^{N}\) of the solution of an ODE with vector field \(f\). The inverse problem can be formulated as the following optimization problem: \[\operatorname*{arg\,min}_{\theta}\sum_{n=0}^{N-1}\Big{\|}\tilde{y}_{n+1}-\Phi_{h,f_{\theta}}(\tilde{y}_{n})\Big{\|}, \tag{6}\] where \(f_{\theta}=J\nabla H_{\theta}\) is a neural network approximation with parameters \(\theta\) of a Hamiltonian vector field \(f\), and \(\Phi_{h,f_{\theta}}\) is a one-step integration method with step length \(h\). In the setting of inverse ODE problems, the availability of sequential points \(S_{N}\) can be exploited when a numerical method is used to form interpolation conditions for \(f_{\theta}\approx f\) at each \(n\) in the optimization problem (6). For example, \(\tilde{y}_{n}\) and \(\tilde{y}_{n+1}\) could be inserted in the implicit midpoint method, turning a method that is implicit for IVPs into an explicit method for inverse problems: \[\Phi_{h,f_{\theta}}(\tilde{y}_{n},\tilde{y}_{n+1})=\tilde{y}_{n}+hf_{\theta}\big{(}\frac{\tilde{y}_{n}+\tilde{y}_{n+1}}{2}\big{)}. \tag{7}\] We denote this as the inverse injection, which defines an inverse explicit property for numerical integrators.
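In training, the residual of (6) with the inverse-explicit midpoint formula (7) is fully explicit in the data. A minimal PyTorch sketch of the resulting one-step loss (a squared-norm variant) is given below; `f_theta` can be any vector-field network, for instance the Hamiltonian construction sketched earlier.

```python
import torch

def midpoint_one_step_loss(f_theta, Y, h):
    """Explicit midpoint loss built from Eqs. (6)-(7).
    Y has shape (N+1, dim); its rows are the noisy samples y_0, ..., y_N."""
    y_n, y_next = Y[:-1], Y[1:]
    y_hat = y_n + h * f_theta(0.5 * (y_n + y_next))   # Eq. (7)
    return torch.mean(torch.sum((y_next - y_hat) ** 2, dim=-1))
```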
**Definition 4.1** (Inverse injection).: Assume that \(\tilde{y}_{n},\tilde{y}_{n+1}\in S_{N}\). Let the _inverse injection_ for the integrator \(\Phi_{h,f}(y_{n},y_{n+1})\) be given by the substitution \((\tilde{y}_{n},\tilde{y}_{n+1})\rightarrow(y_{n},y_{n+1})\) such that \[\hat{y}_{n+1}=\Phi_{h,f}(\tilde{y}_{n},\tilde{y}_{n+1}).\] **Definition 4.2** (Inverse explicit).: A numerical one-step method \(\Phi\) is called _inverse explicit_ if it is explicit under the inverse injection. This procedure is utilized successfully by several authors when learning dynamical systems from data, see e.g. [12, 27]. However, this work is the first attempt at systematically exploring numerical integrators under the inverse injection, by identifying the MIRK methods as the class of inverse explicit Runge-Kutta methods. **Proposition 4.3**.: _MIRK-methods are inverse explicit._ Proof.: Since the matrix \(D\) in (4) is strictly lower triangular, the stages are given by \[k_{1}=f(y_{n}+v_{1}(y_{n+1}-y_{n}))\] \[k_{2}=f(y_{n}+v_{2}(y_{n+1}-y_{n})+hd_{21}k_{1})\] \[\vdots\] \[k_{s}=f(y_{n}+v_{s}(y_{n+1}-y_{n})+h\sum_{j=1}^{s-1}d_{sj}k_{j}),\] meaning that if \(y_{n}\) and \(y_{n+1}\) are known, all stages, and thus the next step \(\hat{y}_{n+1}=y_{n}+h\sum_{i=1}^{s}b_{i}k_{i}\), can be computed explicitly. Because of their explicit nature when applied to inverse ODE problems, MIRK methods are an attractive alternative to explicit Runge-Kutta methods; in contrast to explicit RK methods, they can be symplectic or symmetric, or both, without requiring the solution of systems of nonlinear equations, even when the Hamiltonian is non-separable. Figure 1 illustrates the relation between the various subclasses, and the specific methods are described in Table 1 in Appendix C.
Figure 1: Venn diagram of Runge–Kutta (RK) subclasses: explicit RK (ERK), symplectic RK (SympRK), mono-implicit RK (MIRK) and symmetric RK (SymRK).
In addition, for \(s\)-stage MIRK methods, it is possible to construct methods of order \(p=s+1\)[22]. This is in general a higher order than what is possible to obtain with \(s\)-stage explicit Runge-Kutta methods. Further, computational gains could also be made by reusing evaluations of the vector field between multiple steps, which MIRK methods allow for, as explained in Appendix I. The dependency structure on the data \(S_{N}\) of explicit RK (ERK) methods, MIRK methods and the SRNN method [10] is illustrated in Figure 2.
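The stage recursion in this proof translates directly into code. The sketch below evaluates one inverse-explicit MIRK step for given coefficients \((b,v,D)\) with \(D\) strictly lower triangular, assuming plain Python lists or numpy arrays and an autonomous vector field `f`; the implicit midpoint method is recovered as the one-stage case.

```python
def mirk_inverse_explicit_step(f, y_n, y_np1, h, b, v, D):
    """One MIRK step (Eq. 4) with the data (y_n, y_{n+1}) injected:
    since D is strictly lower triangular, each stage k_i only needs
    the already-computed stages k_1, ..., k_{i-1}."""
    s = len(b)
    k = []
    dy = y_np1 - y_n
    for i in range(s):
        stage_arg = y_n + v[i] * dy + h * sum(D[i][j] * k[j] for j in range(i))
        k.append(f(stage_arg))
    return y_n + h * sum(b[i] * k[i] for i in range(s))

# The implicit midpoint method is the s = 1 case with b = [1], v = [1/2], D = [[0]]:
# y_hat = mirk_inverse_explicit_step(f, y0, y1, h, [1.0], [0.5], [[0.0]])
```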
**Maximal order of symplectic MIRK methods:** From the preceding discussion, it is clear that symplectic MIRK methods are of interest when learning Hamiltonian systems from data, since they combine computational efficiency with the ability to preserve useful geometric properties. Indeed, symplectic integrators in the training of HNNs have been considered in [9, 10, 11, 12, 13]. The subclass of symplectic MIRK methods is represented by the middle, dark blue field in the Venn diagram of Figure 1. The next result gives an order barrier for symplectic MIRK methods that was, to the best of our knowledge, not known up to this point. **Theorem 4.4**.: _The maximum order of a symplectic MIRK method is \(p=2\)._ Proof.: This is a shortened version of the full proof, which can be found in Appendix F. A MIRK method is a Runge-Kutta method with coefficients \(a_{ij}=d_{ij}+v_{i}b_{j}\). Requiring \(d_{ij}\), \(b_{i}\) and \(v_{i}\) to satisfy the symplecticity conditions of (5), in addition to \(D\) being strictly lower triangular, yields the following restrictions: \[b_{i}d_{ij}+b_{i}b_{j}(v_{j}+v_{i}-1)=0,\quad\text{if }i\neq j,\] \[b_{i}=0\;\text{ or }\;v_{i}=\frac{1}{2},\quad\text{if }i=j, \tag{8}\] \[d_{ij}=0,\quad\text{if }i>j.\] These restrictions result in an RK method that can be reduced to choosing a coefficient vector \(b\in\mathbb{R}^{s}\) and stages of the form \(k_{i}=f\big{(}y_{n}+\frac{h}{2}\sum_{j=1}^{s}b_{j}k_{j}\big{)}\) for \(i=1,\ldots,s\). It is then trivial to check that this method can be of order at most \(p=2\). Note that for \(s=1\) and \(b_{1}=1\) we get the midpoint method. **Numerical integrators outside the RK class:** While this paper is mainly concerned with MIRK methods, several other types of numerical integrators could be of interest for inverse problems. _Partitioned Runge-Kutta methods_ are an extension and not a subclass of RK methods, and can be symplectic and symmetric, while also being explicit for separable Hamiltonian systems. The Störmer-Verlet integrator of order \(p=2\) is one example. Higher-order methods of this type are derived in [28] and used for learning Hamiltonian systems in [29, 30]. _Discrete gradient methods_ [31, 32] are inverse explicit and well suited to train Hamiltonian neural networks using a modified automatic differentiation algorithm [15]. This approach can be extended to higher-order methods as shown in [16]. In contrast to symplectic methods, discrete gradient methods preserve the Hamiltonian exactly, up to machine precision. A third option is _elementary differential Runge-Kutta methods_ [33], where for instance [34] show how to use backward error analysis to construct higher-order methods from modifications to the midpoint method. This topic is discussed further in Appendix H, where we also present a novel, symmetric discrete gradient method of order \(p=4\).
Figure 2: Differences of observation dependency, assuming \(N=2\), for explicit and mono-implicit one-step training, and explicit multi-step training with initial state optimization (green node \(\hat{y}_{0}\)).
## 5 Mean inverse integrator for handling noisy data
**Noisy ODE sample:** It is often the case that the samples \(S_{N}\) are not exact measurements of the system, but are perturbed by noise. In this paper, we model the noise as independent, normally distributed perturbations
This is demonstrated in the analysis presented in Theorem 5.2 and in Figure 4. **Averaging multiple trajectories:** In the inverse ODE problem, we assume that there exists an _exact_ vector field \(f\) whose flow interpolates the discrete trajectories \(S_{N}\), and the flow of this vector field satisfies the group property (3). The numerical flow \(\Phi_{h,f}\) for a method of order \(p\) satisfies this property only up to an error \(\mathcal{O}(h^{p+1})\) over one step. In the presence of noisy data, compositions of one-step methods can be used to obtain multiple different approximations to the same point \(y(t_{n})\), by following the numerical flow from different nearby initial values \(\tilde{y}_{j},j\neq n\), and thus reduce the noise by averaging over these multiple approximations. Accumulation of the local truncation error is expected when relying on points further away from \(t_{n}\). However, for sufficiently small step sizes \(h\) compared to the size of the noise \(\sigma\), one can expect increased accuracy when averaging over multiple noisy samples. As an example, assume that we know the points \(\{\tilde{y}_{0},\tilde{y}_{1},\tilde{y}_{2},\tilde{y}_{3}\}\). Then \(y(t_{2})\) can be approximated by computing the mean of the numerical flows \(\Phi_{h,f}\) starting from different initial values: \[\begin{split}\overline{y}_{2}&=\frac{1}{3}\big{(} \Phi_{h,f}(\tilde{y}_{1})+\Phi_{h,f}\circ\Phi_{h,f}(\tilde{y}_{0})+\Phi_{-h,f} ^{*}(\tilde{y}_{3})\big{)}\\ &\approx\frac{1}{3}\big{(}\tilde{y}_{0}+\tilde{y}_{1}+\tilde{y} _{3}+h(\Psi_{0,1}+2\Psi_{1,2}-\Psi_{2,3})\big{)},\end{split} \tag{10}\] where we by \(\Phi^{*}\) mean the adjoint method of \(\Phi\), as defined in [8, Ch. V], and we let \(\Psi_{n,n+1}\) be the increment of an inverse-explicit numerical integrator, so that \[\Phi_{h,f}(\tilde{y}_{n},\tilde{y}_{n+1})=\tilde{y}_{n}+h\Psi_{n,n+1}.\] For example, for the midpoint method, we have that \(\Psi_{n,n+1}=f\big{(}\frac{\tilde{y}_{n}+\tilde{y}_{n+1}}{2}\big{)}\). When stepping in negative time in (10), we use the adjoint method in order to minimize the number of vector field evaluations, also when non-symmetric methods are used (which implies that we always use e.g. \(\Psi_{1,2}\) and not \(\Psi_{2,1}\)). Note that in order to derive the approximation in (10), repeated use of the inverse injection allows the known points \(\tilde{y}_{n}\) to form an explicit integration procedure, where the composition of integration steps are approximated by summation over increments \(\Psi_{n,n+1}\). This approximation procedure is presented in greater detail in Appendix D. **Mean inverse integrator:** The mean approximation over the whole trajectory \(\overline{y}_{n}\), for \(n=0,\ldots,N\), could be computed simultaneously, reusing multiple vector field evaluations in an efficient manner. This leads to what we call the mean inverse integrator. For example, when \(N=3\) we get \[\begin{bmatrix}\overline{y}_{0}\\ \overline{y}_{1}\\ \overline{y}_{2}\\ \overline{y}_{3}\end{bmatrix}=\frac{1}{3}\begin{bmatrix}0&1&1&1\\ 1&0&1&1\\ 1&1&0&1\\ 1&1&1&0\\ \end{bmatrix}\begin{bmatrix}\tilde{y}_{0}\\ \tilde{y}_{1}\\ \tilde{y}_{2}\\ \tilde{y}_{3}\end{bmatrix}+\frac{h}{3}\begin{bmatrix}-3&-2&-1\\ 1&-2&-1\\ 1&2&-1\\ 1&2&3\end{bmatrix}\begin{bmatrix}\Psi_{0,1}\\ \Psi_{1,2}\\ \Psi_{2,3}\end{bmatrix},\] and the same structure is illustrated in Figure 3. 
**Definition 5.1** (Mean inverse integrator).: For a sample \(S_{N}\) and an inverse-explicit integrator \(\Psi_{n,n+1}\), the mean inverse integrator is given by \[\overline{Y}=\frac{1}{N}\bigg{(}U\tilde{Y}+hW\Psi\bigg{)} \tag{11}\] where \(\tilde{Y}:=[\tilde{y}_{0},\dots,\tilde{y}_{N}]^{T}\in\mathbb{R}^{(N+1)\times m}\), \(\Psi:=[\Psi_{0,1},\dots,\Psi_{N-1,N}]^{T}\in\mathbb{R}^{N\times m}\). Finally, \(U\in\mathbb{R}^{(N+1)\times(N+1)}\) and \(W\in\mathbb{R}^{(N+1)\times N}\) are given by \[[U]_{ij}:=\begin{cases}0&\text{if}\quad i=j\\ 1&\text{else}\end{cases}\qquad\text{and}\qquad[W]_{ij}:=\begin{cases}j-1-N&\text{if}\quad j\geq i\\ j&\text{else}\end{cases}.\] By substituting the known vector field \(f\) with a neural network \(f_{\theta}\) and denoting the matrix containing the vector field evaluations by \(\Psi_{\theta}\), such that \(\overline{Y}_{\theta}:=\frac{1}{N}(U\tilde{Y}+hW\Psi_{\theta})\), we can formulate an analogue of the inverse problem (6) by \[\operatorname*{arg\,min}_{\theta}\big{\|}\tilde{Y}-\overline{Y}_{\theta}\big{\|}. \tag{12}\] **Analysis of sensitivity to noise:** Consider the optimization problems using integrators either as one-step methods or within MII, given by (6) and (12), respectively. We want to investigate how uncertainty in the data \(\tilde{y}_{n}\) introduces uncertainty in the optimization problem. Assume, for the purpose of analysis, that the underlying vector field \(f(y)\) is known. Let \[\mathcal{T}_{n}^{\text{OS}}:=\tilde{y}_{n}-\Phi_{h,f}(\tilde{y}_{n-1},\tilde{y}_{n}),\qquad\mathcal{T}_{n}^{\text{MI}}:=\tilde{y}_{n}-[\overline{Y}]_{n}\] be the _optimization targets_, that is, the expressions one aims to minimize using a one-step method (OS) and the MII, where \(\overline{Y}\) is given by Definition 5.1. For a matrix \(A\) with eigenvalues \(\lambda_{i}(A)\), the spectral radius is given by \(\rho(A):=\max_{i}|\lambda_{i}(A)|\). An analytic expression that approximates \(\rho(\mathcal{T}_{n}^{\text{OS}})\) and \(\rho(\mathcal{T}_{n}^{\text{MI}})\) by linearization of \(f\) for a general MIRK method is provided below. **Theorem 5.2**.: _Let \(S_{N}=\{\tilde{y}_{n}\}_{n=0}^{N}\) be a set of noisy samples, equidistant in time with step size \(h\), with Gaussian perturbations as defined by (9) with variance \(\sigma^{2}\). Assume that a MIRK integrator \(\Phi_{h,f}\) is used as a one-step method. Then the spectral radius is approximated by_ \[\rho_{n}^{\text{OS}}:=\rho\bigg{(}\text{Var}\big{[}\mathcal{T}_{n}^{\text{OS}}\big{]}\bigg{)}\approx\sigma^{2}\bigg{\|}2I+hb^{T}(1-2v)\big{(}f^{\prime}+f^{\prime T}\big{)}+h^{2}Q^{\text{OS}}\bigg{\|}_{2}\,, \tag{13}\] \[\rho_{n}^{\text{MI}}:=\rho\bigg{(}\text{Var}\big{[}\mathcal{T}_{n}^{\text{MI}}\big{]}\bigg{)}\approx\frac{\sigma^{2}}{N}\bigg{\|}(1+N)I+hP_{nn}+\frac{h}{N}\sum_{\begin{subarray}{c}j=0\\ j\neq n\end{subarray}}^{N}P_{nj}+\frac{h^{2}}{N}Q^{\text{MI}}\bigg{\|}_{2}\,, \tag{14}\] _where \(f^{\prime}:=f^{\prime}(y_{n})\) and \(P_{nj},Q^{\text{OS}}\) and \(Q^{\text{MI}}\) (defined in (24) in Appendix G) are matrices independent of the step size \(h\)._ The proof is found in Appendix G. Let \(\alpha:=b^{T}(1-2v)\) denote the coefficient of the first-order term in \(h\) of Equation (13). For any explicit RK method we have that \(v=0\), and since \(b^{T}1=1\) (method of at least order one) we find that \(\alpha_{\text{ERK}}=1\). Considering the Butcher tableau of MIRK4 in Figure 9, we find that \(\alpha_{\text{MIRK4}}=0\). Thus, as \(h\to 0\) we would expect quadratic convergence of MIRK4 and linear convergence of RK4 for \(\rho_{n}^{\text{OS}}\) to \(2\sigma^{2}\). Considering MII (14), one would expect linear convergence for \(\rho_{n}^{\text{MI}}\) to \(\sigma^{2}\) if \(N\) is large, as \(h\to 0\). A numerical approximation of \(\rho_{n}^{\text{OS}}\) and \(\rho_{n}^{\text{MI}}\) can be realized by a Monte-Carlo estimate. We compute the spectral radius \(\hat{\rho}_{n}\) of the empirical covariance matrix of \(\mathcal{T}_{n}^{\text{OS}}\) and \(\mathcal{T}_{n}^{\text{MI}}\) by sampling \(5\cdot 10^{3}\) normally distributed perturbations \(\delta_{n}\) with \(\sigma^{2}=2.5\cdot 10^{-3}\) for each point \(y_{n}\) in a trajectory of \(N+1\) points and step size \(h\).
Figure 3: Illustration of the structure of the mean inverse integrator for \(N=3\).
Figure 4: Average of \(\overline{\rho}\) over \(10\) trajectories. The shaded area represents one standard deviation.
Thus, as \(h\to 0\) we would expect quadratic convergence of MIRK4 and linear convergence of RK4 for \(\rho_{n}^{\text{OS}}\) to \(2\sigma^{2}\). Considering MII (14) one would expect linear convergence for \(\rho_{n}^{\text{MI}}\) to \(\sigma^{2}\) if \(N\) is large, as \(h\to 0\). A numerical approximation of \(\rho_{n}^{\text{OS}}\) and \(\rho_{n}^{\text{MI}}\) could be realized by a Monte-Carlo estimate. We compute the spectral radius \(\hat{\rho}_{n}\) of the empirical covariance matrix of \(\mathcal{T}_{n}^{\text{OS}}\) and \(\mathcal{T}_{n}^{\text{MI}}\) by sampling \(5\cdot 10^{3}\) normally distributed perturbations \(\delta_{n}\) with \(\sigma^{2}=2.5\cdot 10^{-3}\) to each point \(y_{n}\) in a trajectory of \(N+1\) points and step size \(h\). We then compute the Figure 4: Average of \(\overline{\rho}\) over \(10\) trajectories. The shaded area represent one standard deviation. Figure 3: Illustration of the structure of the mean inverse integrator for \(N=3\). trajectory average \(\overline{\rho}=\frac{1}{N+1}\sum_{n=0}^{N}\hat{\rho}_{n}\), fix the end time \(T=2.4\), repeat the approximations for decreasing step sizes \(h\) and increasing \(N\) and compute the average of \(\overline{\rho}\) for \(10\) randomly sampled trajectories \(S_{N}\) from the double pendulum system. The plot in Figure 4 corresponds well with what one would expect from Theorem 5.2 and confirms that first MIRK (with \(v\neq 0\)) and secondly MII reduces the sensitivity to noise in the optimization target. ## 6 Experiments Methods and test problems: We train HNNs using different integrators and methods in the inverse problem (6). We use MIRK4 together with the MII method and compare to the implicit midpoint method, RK4 and MIRK4 applied as one-step methods, as well as ISO followed by Stormer-Verlet and RK4 integrated over multiple time-steps. The latter strategy, illustrated in Figure 2, was suggested in [10], where Stormer-Verlet is used. Separable networks \(H_{\theta}(q,p)=H_{1,\theta}(q)+H_{2,\theta}(p)\) are trained on data from the Fermi-Pasta-Ulam-Tsingou (FPUT) problem and the Henon-Heiles system. For the double pendulum, which is non-separable, a fully connected network is used for all methods except Stormer-Verlet, which requires separability in order to be explicit. The Hamiltonians are described in Appendix A and all systems have solutions \(y(t)\in\mathbb{R}^{4}\). After using the specified integrators in training, approximated solutions are computed for each learned vector field \(f_{\theta}\) using the Scikit-learn implementation of DOP853 [35], which is also used to generate the training data. The error is averaged over \(M=10\) points and we find what we call the flow error by \[e(f_{\theta}) =\frac{1}{M}\sum_{n=1}^{M}\|\hat{y}_{n}-y(t_{n})\|_{2},\quad y(t_{ n})\in S_{M}^{\text{test}}, \tag{15}\] \[\hat{y}_{n+1} =\Phi_{h,f_{\theta}}(y_{n}).\] Training data: Training data is generated by sampling \(N_{2}=300\) random initial values \(y_{0}\) requiring that \(0.3\leq\|y_{0}\|_{2}\leq 0.6\). The data \(S_{N_{1},N_{2}}=\{\hat{y}_{n}^{(j)}\}_{n=0,j=0}^{N_{1},N_{2}}\) is found by integrating the initial values with DOP853 with a tolerance of \(10^{-15}\) for the following step sizes and number of steps: \((h,N_{1})=(0.4,4),(0.2,8),(0.1,16)\). The points in the flow are perturbed by noise where \(\sigma\in\{0,0.05\}\). Error is measured in \(M=10\) random points in the flow, within the same domain as the initial values. 
Furthermore, experiments are repeated five times with a new random seed for the generation of data and the initialization of neural network parameters, in order to compute the standard deviation of the flow error. The flow error is shown in Figure 6. Additional results are presented in Appendix B.

Neural network architecture and optimization: For all test problems, the neural networks have \(3\) layers with a width of \(200\) neurons and \(\tanh(\cdot)\) as the activation function. The algorithms are implemented using PyTorch [36] and the code for performing ISO is a modification of the implementation by [10]1. Training is done using the quasi-Newton L-BFGS algorithm [37] for \(20\) epochs without batching. Further details are provided in Appendix E and the code can be found at github.com/hakonnonr/learning_hamiltonian_noise.

Footnote 1: [https://github.com/zhengdao-chen/SRNN](https://github.com/zhengdao-chen/SRNN) (CC-BY-NC 4.0 License)

Results: As observed in Figure 6 and supported by the analytical result illustrated in Figure 4, the MII approach facilitates more accurate training from noisy data than one-step methods. However, training with multiple integration steps in combination with ISO yields lower errors when RK4 is used for the Hénon-Heiles problem and similar performance to MII on the double pendulum. We notice that the SRNN approach, i.e. ISO with Störmer-Verlet, is improved when switching to RK4, which means sacrificing symplecticity to achieve higher order.

Figure 5: Roll-out in time obtained by integrating over the learned vector fields when training on data from the double pendulum Hamiltonian.

The results for FPUT stand out in Figure 6, since both ISO methods have large errors here. The roll-out in time of the learned vector fields is presented in Figure 8 in Appendix B, where the same can be observed. As can also be seen there, the FPUT Hamiltonian gives rise to highly oscillatory trajectories, and the errors observed in Figure 6 might indicate that ISO is ill-suited for this kind of dynamical system. Two observations can be made regarding the one-step methods without averaging or ISO. First, it is likely that the midpoint method has weaker performance for large step sizes due to its lower order, compared to both RK4 and MIRK4, despite the fact that it is a symplectic method. The same is clear from Figure 7 in Appendix B, which displays the flow error when training on data without noise. Secondly, building on the sensitivity analysis, we observe that MIRK4 consistently attains higher accuracy than RK4, as expected from the Monte-Carlo simulation found in Figure 4.

## 7 Conclusion

In this work we present the mean inverse integrator, which allows both chaotic and oscillatory dynamical systems to be learned with high accuracy from noisy data. Within this method, integrators of the MIRK class are a key component. To analyse how noise is propagated when training with MII and MIRK, compared to widely used explicit methods such as RK4, we developed a sensitivity analysis that is verified both by a Monte-Carlo approximation and reflected in the error of the learned vector fields. Finally, we build on the SRNN [10] by replacing Störmer-Verlet with RK4, and observe increased performance. When also considering the weak performance of the implicit midpoint method, this tells us that order might be of greater importance than preserving the symplectic structure when training HNNs.
The MIRK methods, the mean inverse integrator and initial state optimization all form building blocks that could be combined into novel approaches for solving inverse problems and learning from noisy data.

**Limitations:** The experiments presented here assume that both the generalized coordinates \(q_{n}\) and the generalized momenta \(p_{n}\) can be observed. In a setting where HNNs are to model real and not simulated data, the observations might lack generalized momenta [38] or follow Cartesian coordinates, requiring the enforcement of constraints [17; 39]. Combining approaches that are suitable for data that is both noisy and follows less trivial coordinate systems is a subject for future research.

Figure 6: The flow error when learning vector fields using one-step methods directly (Midpoint, RK4 and MIRK4), ISO and multiple time-steps (ISO Störmer and ISO RK4) and MII (MII MIRK4). The error bars display the standard deviation after rerunning 5 experiments on data with \(\sigma=0.05\). The right subplot shows the computational time used in training against the flow error.
2303.11792
Searching for Time-Dependent Axion Dark Matter Signals in Pulsars
Axion dark matter can be converted into photons in the magnetospheres of neutron stars leading to a spectral line centred on the Compton wavelength of the axion. Due to the rotation of the star and the plasma effects in the magnetosphere the signal is predicted to be periodic with significant time variation - a unique smoking gun for axion dark matter. As a proof of principle and to develop the methodology, we carry out the first time domain search of the signal using data from PSR J2144$-$3933 taken as part of the MeerTIME project on MeerKAT telescope. We search for specific signal templates using a matched filter technique and discuss when a time-domain analysis (as is typically the case in pulsar observations) gives greater sensitivity to the axion-coupling to photons in comparison to a simple time-averaged total flux study. We do not find any candidate signals and, hence, impose an upper limit on the axion-to-photon coupling of $g_{a\gamma\gamma}<4\times 10^{-11}\,{\rm GeV}^{-1}$ over the mass range $m_{\rm a}=3.9-4.7\,\mu{\rm eV}$ using this data. This limit relies on PSR J2144$-$3933 not being an extremely aligned rotator, as strongly supported by simple arguments based on the observed pulse profile width. We discuss the possibilities of improving this limit using future observations with MeerKAT and also SKA1-mid and the possibility of using other objects. Finally, to evade modelling uncertainties in axion radio signals, we also carry out a generic ``any periodic-signal search" in the data, finding no evidence for an axion signal.
R. A. Battye, M. J. Keith, J. I. McDonald, S. Srinivasan, B. W. Stappers, P. Weltevrede
2023-03-21T12:22:31Z
http://arxiv.org/abs/2303.11792v2
# Searching for Time-Dependent Axion Dark Matter Signals in Pulsars ###### Abstract Axion dark matter can be converted into photons in the magnetospheres of neutron stars leading to a spectral line centred on the Compton wavelength of the axion. Due to the rotation of the star and the plasma effects in the magnetosphere the signal is predicted to be periodic with significant time variation - a unique smoking gun for axion dark matter. As a proof of principle and to develop the methodology, we carry out the first time domain search of the signal using data from PSR J2144\(-\)3933 taken as part of the MeerTIME project on MeerKAT telescope. We search for specific signal templates using a matched filter technique and discuss when a time-domain analysis (as is typically the case in pulsar observations) gives greater sensitivity to the axion-coupling to photons in comparison to a simple time-averaged total flux study. We do not find any candidate signals and, hence, impose an upper limit on the axion-to-photon coupling of \(g_{a\gamma\gamma}<4\times 10^{-11}\,\mathrm{GeV}^{-1}\) over the mass range \(m_{\mathrm{a}}=3.9-4.7\,\mathrm{\mu eV}\) using this data. This limit relies on PSR J2144\(-\)3933 not being an extremely aligned rotator, as strongly supported by simple arguments based on the observed pulse profile width. We discuss the possibilities of improving this limit using future observations with MeerKAT and also SKA1-mid and the possibility of using other objects. Finally, to evade modelling uncertainties in axion radio signals, we also carry out a generic "any periodic-signal search" in the data, finding no evidence for an axion signal. Axions; Dark matter; Neutron stars pacs: 95.35.+d; 14.80.Mz; 97.60.Jd + Footnote †: preprint: IRMP-CP3-23-14 ## I Introduction The search for dark matter in the form of axions [1; 2; 3; 4; 5; 6; 7; 8; 9; 10] is continuing to gather momentum at a dramatic pace. Of particular interest is the mass range \(m_{\mathrm{a}}\approx 0.1-1000\,\mathrm{\mu eV}\) where plausible scenarios [11; 12; 13; 14; 15; 16; 17; 18] have been proposed to realise the Cold Dark Matter abundance \(\Omega_{\mathrm{c}}h^{2}\approx 0.12\) that is found to be compatible with cosmological observations, for example, those of the CMB [19]. There are a number of haloscope experiments [20] which have placed constraints on the axion coupling to photons. These include early cavity experiments [21; 22; 23] and their modern incarnations including ADMX [24; 25; 26; 27; 28] and its various upgrades and pathfinders [29; 30; 31]. This has spawned a plethora of active and proposed experiments including CAPP [32; 33; 34; 35], HAYSTAC [36; 37; 38], QUAX [39; 40], ORGAN [41; 42], CAST-RADES [43], TASEH [44] and GrA-Hal [45] aiming to detect axions using similar techniques. Complementing this, there are also a number of proposed experiments which go beyond the cavity paradigm, allowing laboratory searches to achieve broad frequency coverage across previously challenging frequency ranges. These include plasma haloscope designs like ALPHA [46], broadband reflectors envisaged in the BREAD [47] collaboration, and dielectric haloscopes such as MADMAX [48].
There are also novel designs which aim to detect magnetic fields induced by axion sources such as DMRadio [49], which will operate at lower frequencies (\(m_{\mathrm{a}}\lesssim\mu\)eV) and those seeking to use novel materials at higher frequencies [50; 51; 52]. The largest magnetic fields currently used in laboratory searches for axion dark matter typically do not exceed \(\sim 10^{5}\mathrm{G}\,(10\,\mathrm{T})\), a limiting factor in such searches. By contrast, astrophysical magnetic fields in neutron stars can be as high as \(\sim 10^{15}\mathrm{G}\,(10^{11}\,\mathrm{T})\), making them excellent targets for indirect searches of axion dark matter [53; 54; 55]. In addition, neutron stars are surrounded by a magnetosphere whose varying plasma frequency matches the axion mass across a broad range of masses. This degeneracy leads to a dramatic resonant enhancement of the signal emanating from regions with \(m_{a}\simeq\omega_{\mathrm{p}}\), where \(\omega_{\mathrm{p}}\) is the plasma frequency. As a result, neutron stars can act as broadband axion dark matter detectors. Based on a simple but representative model for a neutron star magnetosphere and the density of axions around the star, Refs. [54; 55; 56] predicted signals that could be easily detected using current and future telescopes operating in the radio-mm waveband, which corresponds to the Compton wavelength of the dark matter axion scenarios referred to above. Spurred on by this, great progress has recently been made in characterising the signal properties using sophisticated ray-tracing methods [57; 58; 59] which are capable of computing the line width induced from plasma effects and the precise time-variation and angular dependence of the signal. Early attempts have also been made to address axion-photon mixing in 3D [60; 61], though this remains an ongoing area of research. Various searches have been carried out to detect radio signals produced by axion dark matter converting into photons in the magnetospheres of neutron stars using the Goldreich-Julian (GJ) model [62] for the magnetosphere and estimates of the local density of dark matter, extrapolated to the location of the star in question. These searches have either looked for a background excess near the Galactic centre from populations of neutron stars [63; 64], or have focused on single objects such as the Galactic centre magnetar [65; 66; 67] or isolated neutron stars [68; 69], and have established bounds on the axion-to-photon coupling, \(g_{a\gamma\gamma}\), which are better than the bounds from the axion helioscope CAST [70]. Now that the first wave of searches has been carried out, a natural question to ask is what improved observational strategies are available to increase our sensitivity to the axion photon coupling and boost our chances of detecting dark matter axions from neutron stars. In this vein, one might also ask how our newly attained understanding of precise signal properties (including time and frequency information) might be leveraged to increase the power of such searches. To date, all the searches for axions using neutron stars have focused on looking for a spectral line in the frequency domain. The goal of the present work is twofold: (i) to establish a framework of time-domain searches for axion dark matter signals in radio data and (ii) to demonstrate this technique by carrying out time-domain observations of pulsars.
Our goal is to understand under what circumstances augmenting these searches to include time-domain information of the signal can improve sensitivity to \(g_{a\gamma\gamma}\) and, by injecting signal templates into radio data, demonstrate a practical route to obtaining limits on \(g_{a\gamma\gamma}\) which out-perform a simple line-search in the frequency domain. The structure of the paper is organised as follows. In sec. II we review the mechanism for photon production from axion dark matter, describe our ray-tracing procedure for modelling the radio signal and discuss the time-dependence of the expected signal. In sec. III we describe a procedure to search for time-dependent signals in data using a matched filter, and use this to outline what types of periodic signals lead to a strong gain from time-domain information. In sec. IV we apply our pipeline to MeerKAT observations of PSR J2144\(-\)3933 to search for axion dark matter. Our null result is used to place limits on the axion coupling to photons. In sections V and VI we explore possible future targets and perform a generalized search for periodic signals. In section VII we offer our conclusions.

## II Modelling the signal due to axions

The conversion between axions and photons in strong magnetic fields was laid out in the classic reference [71]. It was pointed out in [53] that this mixing could convert dark matter axions into radio photons1 in the strongly magnetised plasmas which surround neutron stars. In recent years, as axions have moved to the forefront as dark matter candidates, these ideas have been pursued with renewed vigour [73; 55] and this programme has led to a variety of observations [74; 75; 76; 64; 67] searching for radio lines from dark matter axions. These observational efforts have been accompanied by a more concerted effort to improve the modelling of the signal itself [77; 58; 78; 59; 61] which consists primarily of developing ray-tracing packages to precisely track the photons from their point of emission to the observer, thereby allowing one to derive signal templates which could be detected by a radio telescope. We now briefly review the basic features of the production mechanism and ray-tracing routine. More details can be found in [77; 58; 79; 59].

Footnote 1: See [72] for a similar mechanism with dark photons.

### Ray-tracing photons in magnetised plasmas

Resonant conversion between axions and photons occurs at points where \(k_{\gamma}^{\mu}=k_{a}^{\mu}\), where \(k_{\gamma}^{\mu}\) and \(k_{a}^{\mu}\) are the photon and axion 4-momenta, respectively. An attempt was made to understand the conversion probability for axions to photons \(p_{a\gamma}\) in [61] leading to \[p_{a\gamma}=\frac{\pi}{2}\frac{g_{a\gamma\gamma}^{2}\sin^{2}\theta_{B}\left|\mathbf{B}\right|^{2}}{\left|\mathbf{k}_{\gamma}\right|\left|\omega_{p}^{\prime}\right|}\cdot\frac{m_{a}^{5}}{\left(\left|\mathbf{k}_{\gamma}\right|^{2}+m_{a}^{2}\sin^{2}\theta_{B}\right)^{2}}, \tag{1}\] which attempts to incorporate 3D effects into the conversion probability. This characterises the ratio of the energy density between an axion wavepacket and a photon wavepacket, the latter being subsequently transported out of the magnetosphere along geodesics determined by the photon dispersion relation in the strongly magnetised plasma. Note in the present work we include simultaneously the effects of gravity (by incorporating the curved spacetime metric of the neutron star) and strongly magnetised fields.
This results in a covariant dispersion relation for photons in a magnetised plasma [80; 81] \[D(k)=g^{\mu\nu}k_{\mu}k_{\nu}\\ -(\omega^{2}-k_{\parallel}^{2})\sum_{s}\frac{4\pi q_{s}^{2}n_{s}}{\gamma_{s}^{2}[\mu_{s}(\omega-k_{\parallel})^{2}-c_{s}^{2}(\omega v_{s}-k_{\parallel})^{2}]}. \tag{2}\] Here, the sum \(s\) is over different charge carrier species, \(\gamma_{s}\) is a generalised Lorentz factor, \(k_{\parallel}\) gives the 4-momentum projected onto the magnetic field, \(v_{s}\) corresponds to the velocity of charge carriers and \(\mu_{s}\) the energy per particle. The number density of each species \(s\) is given by \(n_{s}\) and the charge by \(q_{s}\). Full definitions and detailed explanations of the various terms can be found in [80]. We will take the non-relativistic limit, setting \(v_{s}=0\), \(\gamma=1\), \(c_{s}=0\) and \(\mu_{s}=m_{s}\). We also consider a purely electron-positron plasma so that \(q_{s}=e\). Setting \(D(k)=0\) then gives the dispersion relations for photons in a non-relativistic plasma. The equations of motion for the photon rays are then given by Hamilton's equations \[\frac{dx^{\mu}}{d\lambda}=\frac{\partial D}{\partial k_{\mu}},\qquad\frac{dk_{\mu}}{d\lambda}=-\frac{\partial D}{\partial x^{\mu}}. \tag{3}\] To compute the power, we back-trace from the observer to the point of emission, following the equations of motion (3). This is analogous to the procedure used in [57; 58]. In addition, we also now include multiple axion-photon conversions arising from multiple reflections off the critical surface, as happens within "throats" of the plasma distribution around the neutron star. These throats are partially enclosed regions near the charge separation gap off whose walls the photon can be multiply-reflected due to plasma gradients. This can enhance the power of the signal relative to not including such effects as was done in [57; 58]. An extensive analysis of ray-tracing techniques which combines the physical effects considered across [58] and [59] is currently underway and will appear in a companion paper [79], where the full details of our scheme will be presented. This will include a systematic study of anisotropic plasmas. We do not consider the effects of so-called "de-phasing" conjectured in [59], which awaits a more robust physical description to see if the effect persists under a more mathematically rigorous formulation. We do not need to consider non-linear effects arising from very large conversion probabilities where photons may convert back into axions. This is safe for PSR J2144\(-\)3933, on which we performed our observations, which has sufficiently low magnetic fields that the conversion probability remains small.

### Signal templates

Having outlined the basic details of our ray-tracing scheme, we can now use it to derive signal templates. In particular, these simulations allow one to model the radio signal as a function of pulsar and axion input parameters: one can compute the profile of the signal in frequency and time. The frequency dependence of the profile is determined by the mass of the axion - which sets the central frequency of the radio line. The width is set by a combination of the velocity dispersion of dark matter and by line-broadening induced by the time-dependent nature of the plasma, which modifies photon frequencies as they move through the magnetosphere.
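To convey the structure of this computation, the sketch below (our own schematic code, not that of [79]) integrates Hamilton's equations (3) with a fixed-step Euler method, for a deliberately simplified static, isotropic dispersion \(D=|\mathbf{k}|^{2}+\omega_{\rm p}^{2}(\mathbf{x})-\omega^{2}\) in flat space; the production scheme instead uses the full magnetised, curved-spacetime dispersion (2) together with adaptive integration and the back-tracing described above.

```python
import numpy as np

def trace_ray(x0, k0, grad_wp2, n_steps=10000, dlam=1e-4):
    # Hamilton's equations (3) for D = |k|^2 + omega_p(x)^2 - omega^2:
    #   dx/dlam =  dD/dk =  2 k
    #   dk/dlam = -dD/dx = -grad(omega_p^2)(x),
    # so rays are refracted away from regions of increasing plasma frequency.
    x, k = np.asarray(x0, dtype=float), np.asarray(k0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        x = x + dlam * 2.0 * k
        k = k - dlam * grad_wp2(x)
        path.append(x.copy())
    return np.array(path)
```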
The time variation of the signal arises from the fact that the plasma surrounding the neutron star is not axisymmetric. In the present approximation we assume an electron-positron plasma which co-rotates with the star with regions of positive and negative charge separated according to the Goldreich-Julian density [62] \[n_{\rm GJ}=\frac{B_{0}\Omega}{2\,e}\left(\frac{R}{r}\right)^{3}\Big{[}\cos\alpha+3\cos\alpha\cos(2\theta)+3\sin\alpha\cos(\phi-\Omega t)\sin 2\theta\Big{]}\,, \tag{4}\] with the plasma frequency given by \(\omega_{\rm p}=\sqrt{4\pi|n_{\rm GJ}|/m_{e}}\). Here, \(\Omega\) is the frequency of the pulsar, \(\alpha\) is the angle between the magnetic axis of the co-rotating dipole and the rotation axis of the star, \(R\) is the stellar radius, and \(B_{0}\) is the magnetic field strength on the surface at the magnetic poles. The polar coordinate \(\theta\) and azimuthal angle \(\phi\) are defined with respect to the rotational axis of the star. It is obvious that whenever \(\alpha\neq 0\), the plasma is time-dependent with respect to a non-rotating observer. This results in time-dependent radio signals, as illustrated in Fig. 1. For a given pulsar, the remaining input parameters to determine the axion dark matter radio signal are then the distance to the pulsar \(D\), and the dark matter density \(\rho_{\rm DM}\) at the position of the pulsar. In our analysis, we will take \(P\), \(B_{0}\) and \(D\) to be their quoted measured values. In principle there are some extra uncertainties which would need to be taken into account. Pulsar periods \(P\) are one of the best-measured quantities in astronomy, while the magnetic field strength of the pulsar is inferred from measurements of \(P\) and \(\dot{P}\), the spin-down rate of the pulsar, combined with model-dependent parameters including the moment of inertia of the pulsar and its radius [82]. This calculation is standard but assumes that the energy released by the pulsar in the form of radio emission comes from the loss of rotational energy calculated from the spin-down rate. There is an unquantifiable uncertainty associated with it. For example, the comparatively large values of \(P\dot{P}\) observed for magnetars form the basis for their large inferred values of \(B_{0}\), but magnetars are also known to emit large X-ray fluxes whose luminosity cannot be explained by spin-down alone. The distance to the pulsar is inferred from the dispersion of the pulse as a function of frequency, since the photons emitted in the main beam of the pulsar traverse the galactic electron density along the line-of-sight. Given a model for the galactic electron density, one can estimate the distance to a pulsar. Galactic dark matter profiles allow one to predict the dark matter density at the position of the pulsar, but these models become highly uncertain as one gets closer to the galactic centre, where some models predict a spike in the density, while others predict a more cored profile. Based on these arguments, we conclude that the completely unknown quantities which parameterise the signal templates are \((\alpha,\theta)\). We, therefore, generate a simulated database2 of periodic flux profiles as a function of \((\alpha,\theta)\). Some of these profiles are displayed in Fig. 1 which indicates the \((\alpha,\theta)\) dependence of the time-variability of the signal.
Footnote 2: Formally we generate templates for discrete \((\alpha_{i},\theta_{i})\) and numerically interpolate to generate a template for a continuous range of \(\alpha\) and \(\theta\). See appendix A for a description of this procedure and an illustration of its accuracy.

In the next section, we describe how to harness the information and the larger time-variability of the signal to improve the prospects of detecting axion dark matter.

## III Searching for time-dependent signals

In order to search for time dependent signals we will employ a matched-filter template-fitting approach similar to that used in Gravitational Wave Astronomy to detect the waveforms of the late stages of binary black hole inspirals [83, 84]. In that case, once a detection of gravitational waves was made, this allowed estimates of physical parameters such as the black hole masses. Formally, if an axion were detected, one could use axion radio signals to fit model parameters of the pulsar magnetosphere. However, our ambition at this stage is much more conservative: we will use the signal-to-noise estimate from the matched filter, \(\hat{q}\), defined below, to quantify the likelihood of detection. In this sense, \(\hat{q}\) acts as a statistical test for whether the data is distinguishable from noise. Values of \(\hat{q}\) above a threshold then constitute a detection. Conversely, for values below this, by injecting would-be signals into the data, we obtain the expected value \(q_{\rm exp}=\langle\hat{q}\rangle\) (see eq. (15)) from an axion signal. By comparing this to the measured value \(\hat{q}\), we can exclude regions of axion parameter space. This procedure provides a means to derive limits on \(g_{a\gamma\gamma}\) as a function of \(m_{\rm a}\), and importantly allows us to take into account that for a fixed value of \(m_{\rm a}\) there are a wide range of templates for the expected signal due to the parameters of the particular neutron star system under consideration. These are the period of the pulsar, \(P\), its surface magnetic field flux density, \(B_{0}\), the radius of the neutron star, \(R\), the angle \(\alpha\) between the magnetic axis and spin axis of the star, and the angle \(\theta\) between the line of sight and the spin axis. As in previous attempts to derive constraints on \(g_{a\gamma\gamma}\) using neutron stars [64, 65, 66, 67, 68], for simplicity, as a demonstration of the filter, we do not, for instance, consider uncertainties in \(B_{0}\) or \(R\) (the period \(P\) is of course measured with tremendous accuracy). We leave a computationally intensive parameter scan for future work, but this would be a straightforward extension of the existing framework. Instead, our main focus here is on the sensitivity of the time-dependence of the signal to pulsar parameters, which is especially sensitive to the values of \(\alpha\) and \(\theta\). For each value of \(m_{\rm a}\), we therefore obtain a constraint on the value of \(g_{a\gamma\gamma}\) for every pair (\(\alpha\),\(\theta\)). We can then exclude certain ranges of \(\alpha\) and \(\theta\) with further modelling and observations of the pulsar signal, notably the pulse width. We then use the value of \((\alpha,\theta)\) from this remaining subset which gives the most conservative constraints on \(g_{a\gamma\gamma}\).

### Derivation of matched filter and mathematical properties

Matched filters are a standard technique in signal processing and they are often used in astronomy to search for signals with a known, or parameterizable, profile.
This can be done in the spatial, frequency or time domains. In this section, we will derive the standard matched filter before discussing some of its properties. In section III.2 we use it to quantify the pros and cons of time domain observations and in section III.3 we will show how it can be used to recover an injected signal in simulated radio data. There are a number of ways of formulating the matched filter. Here, we will use a discrete matched filter in which the data is represented by a vector of finite length. This can be generalised to continuous functions in which the vector inner products become convolution integrals over functions (see [84]). The discrete formulation has the advantage of simplifying notation. Our starting point is the so-called "data vector", \(\mathbf{d}\). We will assume the data is the sum of a signal \(\mathbf{S}=S_{0}\mathbf{F}(\mathbf{p})\) and some additive noise \(\hat{\mathbf{n}}\) \[\mathbf{d}=S_{0}\mathbf{F}(\mathbf{p})+\hat{\mathbf{n}}\,. \tag{5}\] Here, we have decomposed the signal according to \[|\mathbf{F}|\equiv\sqrt{\mathbf{F}\cdot\mathbf{F}}=1 \tag{6}\] so that \(S_{0}\) gives the root mean squared flux density of the signal \[S_{0}=\sqrt{\mathbf{S}\cdot\mathbf{S}}\,. \tag{7}\] The signal is further characterised by a "parameter vector" \(\mathbf{p}\). In section II we computed a number of templates \(\mathbf{F}(\mathbf{p})\) for the signal where \(\mathbf{p}=(m_{\mathrm{a}},\alpha,\theta)\).

Figure 1: Time-dependence of radio templates from axion dark matter as a function of pulse phase \(\phi\) for different values of \(\theta\) with a fixed value of \(\alpha=20^{\circ}\) and \(g_{a\gamma\gamma}=10^{-10}\)GeV\({}^{-1}\). As expected there is little time variation for \(\theta=10^{\circ}\), but it can be substantial for intermediate angles. When \(\theta=90^{\circ}\) the "throats" of the GJ model never cross the line-of-sight so, although there is some emission, it is relatively weak compared to cases where they are visible to the observer.

The noise vector \(\hat{\mathbf{n}}\) is assumed to be Gaussian with \(\langle\hat{\mathbf{n}}\rangle=0\) and \(\langle\hat{\mathbf{n}}\hat{\mathbf{n}}^{T}\rangle=C\) where \(\langle..\rangle\) denotes an ensemble average of noise realizations and \(C_{ij}\) is the covariance matrix with \(i,j=1,..,n_{\mathrm{d}}\) where \(n_{\mathrm{d}}\) is the total number of data points. Assuming a Gaussian likelihood, \(-2\log\mathcal{L}=\chi^{2}\), one can calculate the maximum likelihood estimate \(\hat{S}_{0}\) by minimizing \[\chi^{2}=(\mathbf{d}-S_{0}\mathbf{F})^{T}C^{-1}(\mathbf{d}-S_{0}\mathbf{F})\,. \tag{8}\] In the above equation, we assume the data vector \(\mathbf{d}\) contains the true signal \(\mathbf{S}=\mathbf{S}_{\mathrm{true}}\) so that minimising \(\chi^{2}\) above can be thought of as minimising a generalised least-squares difference (relative to the noise in each channel - hence the factor \(C^{-1}\)) between the true signal and possible templates \(S_{0}\mathbf{F}\). Viewing \(\chi^{2}\) as a function of the unknown \(S_{0}\), we can find the minimising value, \(\hat{S}_{0}\), given by \[\hat{S}_{0}=\frac{\mathbf{F}^{T}C^{-1}\mathbf{d}}{\mathbf{F}^{T}C^{-1}\mathbf{F}}\,. \tag{9}\] One can also deduce the "matched filter noise" \[\sigma_{\mathrm{MF}}=\left(\mathbf{F}^{T}C^{-1}\mathbf{F}\right)^{-1/2} \tag{10}\] and the signal-to-noise estimate \(\hat{q}\) may then be written as \[\hat{q}=\frac{\hat{S}_{0}}{\sigma_{\mathrm{MF}}}=\frac{\mathbf{F}^{T}C^{-1}\mathbf{d}}{\left(\mathbf{F}^{T}C^{-1}\mathbf{F}\right)^{1/2}}\,. \tag{11}\]
It is important to understand the difference between the noise in the data, characterised by \(C\), and the "matched filter noise", \(\sigma_{\mathrm{MF}}\). They are related, but \(\sigma_{\mathrm{MF}}\) also depends on the filter. We return to this issue in the next section. In order to understand properties of the matched filter we will assume a diagonal covariance matrix \[C_{ij}=\sigma_{N}^{2}\delta_{ij} \tag{12}\] where \(\delta_{ij}\) is the Kronecker delta and \(\sigma_{N}\) is the noise in each channel - all of what is said here can be adapted to the case of a general covariance matrix, but it is less simple to see. In general, the data has a number of dimensions, for example, space, time and/or frequency, in which case \(\mathbf{d}\) has \(n_{\mathrm{d}}=n_{1}\times...\times n_{k}\) entries where \(n_{i}\) for \(i=1,...,k\) are the number of points in each of the dimensions. In our case we will search in the frequency and time directions so the number of entries in the data vector will be \(n_{\mathrm{d}}=n_{\mathrm{f}}\times n_{\mathrm{t}}\) where \(n_{\mathrm{f}}\) is the number of frequency channels and \(n_{\mathrm{t}}\) is the number of time samples. In that case, the data vector could be written as \[\mathbf{d}=\Big{(}d_{t_{1}}^{\omega_{1}},\ldots,d_{t_{n_{\mathrm{t}}}}^{\omega_{1}},\ldots,d_{t_{1}}^{\omega_{n_{\mathrm{f}}}},\ldots,d_{t_{n_{\mathrm{t}}}}^{\omega_{n_{\mathrm{f}}}}\Big{)}, \tag{13}\] where \(d_{t_{j}}^{\omega_{i}}\) labels the data vector in the \(i\)th frequency bin and \(j\)th time-channel. The covariance matrix of the form (12) is then nothing more than the statement that the noise between all possible pairs of time and frequency bins is totally uncorrelated. Returning to our main discussion, it follows that for a covariance matrix of the form (12), we have \[\hat{q}=\frac{\mathbf{F}\cdot\mathbf{d}}{\sigma_{N}}\,, \tag{14}\] that is, the dot product of the filter \(\mathbf{F}\) with the data vector. When \(\mathbf{F}\) matches that of the true signal, we have \(\langle\hat{q}\rangle=S_{0}/\sigma_{N}\) (from eq. (5)). Now assume that \(\mathbf{p}_{\mathrm{true}}\) has entries which are the true parameters. The ensemble average of \(\hat{q}\) for a filter with arbitrary parameter, \(\mathbf{p}\) is \[\langle\hat{q}\rangle=\mathbf{F}(\mathbf{p})\cdot\mathbf{F}(\mathbf{p}_{\mathrm{true}})\frac{S_{0}}{\sigma_{N}}\,. \tag{15}\] Next, we note that as a trivial consequence of the Cauchy-Schwarz inequality, we have \(\mathbf{F}(\mathbf{p})\cdot\mathbf{F}(\mathbf{p}_{\mathrm{true}})\leq|\mathbf{F}(\mathbf{p})|\,|\mathbf{F}(\mathbf{p}_{\mathrm{true}})|\) with equality if and only if \(\mathbf{F}(\mathbf{p})=\mathbf{F}(\mathbf{p}_{\mathrm{true}})\). Assuming that the template is non-degenerate with respect to the values of \(\mathbf{p}\), this occurs only for \(\mathbf{p}=\mathbf{p}_{\mathrm{true}}\) for which \(\hat{q}\) is then maximal. Thus \(\hat{q}\) acts as a likelihood test for the values of \(\mathbf{p}\). In what follows, since the line width of the axion is typically less than the width of our frequency channels, the vector (13) is sparse for a given value of \(m_{a}\), with non-vanishing entries in only one frequency channel where \(\omega=m_{a}\). This means filters with different values of \(m_{\mathrm{a}}\) are orthogonal. More generally, with higher frequency resolution, we would expect to be able to probe both the time and frequency structure of the signal.
By contrast, filters with different values of \(\alpha\) and \(\theta\) are not orthogonal and will in general have overlap such that \(\mathbf{F}(\theta,\alpha,m_{a})\cdot\mathbf{F}(\theta^{\prime},\alpha^{\prime},m_{a})\neq 0\). In principle, this means an axion detection would allow us to determine likely values of the pulsar parameters in analogy to the way in which observable gravitational wave signals allow inference of the mass and spin of their associated black holes. However, our present approach will be to minimise the expected signal over a conservative subset of values of \((\alpha,\theta)\), thereby obtaining the most conservative constraints on \(g_{a\gamma\gamma}\) for allowed values of angles. In order to turn our continuous axion signals into discrete vectors we must perform some kind of coarse-graining. We therefore define a binning scheme for time-channels centered on the points \(t_{i}=(i-1/2)\Delta t\) where \(i=1,..,N\) and \(\Delta t=1/N\) so that the discretized signal is given by \[S_{t_{i}}^{\omega}=\frac{1}{\Delta t}\int_{t_{i}-\Delta t/2}^{t_{i}+\Delta t/2}dtS(t,\omega), \tag{16}\] where \(S(t,\omega)\) is the flux density of the axion signal as a function of time (and frequency) that we derive using our radio signal models.

### Why do a time domain analysis?

In this section we will discuss the advantages of doing a time domain analysis in terms of increasing the chances of detecting axions. We will also discuss the issue of whether subtraction of the pulse-average from the time-domain data (as is often the case in pulsar observations and as we have done in our data) will have a significant impact. In order to do this we will investigate some properties of the matched filter. Consider applying the matched filter to a given frequency channel whose signal consists of \(N\) time-channels: \[\mathbf{S}=(S_{1},\ldots,S_{N}). \tag{17}\] Then according to Eqs (5)-(7) and (14), the matched filter will return \[\langle\hat{q}\rangle=\frac{\sqrt{\mathbf{S}\cdot\mathbf{S}}}{\sigma_{N}}, \tag{18}\] where \(\sigma_{N}\) is the noise on each of the \(n_{t}=N\) time-channels. The noise amplitude \(\sigma_{N}\) averaged across all times is then given by \(\bar{\sigma}=\sigma_{N}/\sqrt{N}\). We can therefore re-write (18) as \[\langle\hat{q}\rangle_{\text{time}}=\frac{\left(\sigma_{S}^{2}+\mu_{S}^{2}\right)^{1/2}}{\bar{\sigma}} \tag{19}\] where \(\mu_{S}=\Sigma_{i}S_{i}/N\) and \(\sigma_{S}=\sqrt{\Sigma_{i}S_{i}^{2}/N-\mu_{S}^{2}}\) are the average and standard deviation of \(\mathbf{S}\), respectively. Now let us consider carrying out a measurement with no time resolution. This is the case when one simply uses the telescope to make a total flux measurement over a long observing time. In this case the noise is again \(\bar{\sigma}\) given by averaging the noise over all integration time, the signal \(\sqrt{\mathbf{S}\cdot\mathbf{S}}\) is simply given by the mean \(\mu_{S}\) so that \[\langle\hat{q}\rangle_{\text{flux-avg.}}=\frac{\mu_{S}}{\bar{\sigma}}. \tag{20}\] This can be thought of as a trivial matched filter with a single time channel, in which any fine-grained time information has been lost. This is in effect how all previous single pulsar observations for axion dark matter have been carried out [65; 66; 67]. By comparing the cases of a time-domain analysis (19) with a total-flux measurement (20) it becomes immediately apparent that since \(\sigma_{S}^{2}\geq 0\), the time-domain analyses will always equal or outperform the time-averaged measurement.
Thus time-domain information increases the potential to detect axions. In particular, when the relative time-variation is large (\(\sigma_{S}/\mu_{S}\gg 1\)), the time-domain search provides a gain in sensitivity by increasing the signal to noise by a factor \(\simeq\sigma_{S}/\mu_{S}\), i.e. the square-root of the relative variance. This is a simple consequence of the fact that \(\hat{q}\) is proportional to the root-mean-square of the signal, and so it implicitly encodes information about its variability. The same observation was also made in [57] but the matched filter allows this to be justified from first-principles. Although this is in general not necessary, the observations used later in the paper have had the average of the signal subtracted 3. Thus while we retain time-domain information, the baseline \(\mu_{S}\) of the signal is essentially re-normalised to zero. In this case, the time-domain analysis gives a baseline subtracted (BS) value

Footnote 3: Observations of pulsars are often taken in this way since they are probing the time-variable signal from the rotation of the neutron star. Ultimately, there is nothing to prevent the use of the averaged signal as well, but it can require extra work to calibrate it. Here, we see that if the expected time variation is significant, removing the average does not significantly affect sensitivity to axions.

\[\langle\hat{q}\rangle_{\text{time-BS}}=\frac{\sigma_{S}}{\bar{\sigma}}. \tag{21}\] Clearly, when there is only a small time variation (\(\mu_{S}\gg\sigma_{S}\)), subtracting the baseline leads to a lower value of \(\langle\hat{q}\rangle\), relative to retaining it as in eq (19). This is for the simple reason that if the average signal is large, one loses a lot of signal power by removing the average. Conversely, in the regime where there is large time variation, \(\mu_{S}\ll\sigma_{S}\), the time-domain analysis _with_ baseline subtraction performs almost as well as eq. (19).

### Demonstration on simulated data

In the previous subsections we have derived the main properties of the matched filter estimate of the signal-to-noise ratio. This is not particularly new to those with a background in astronomy, but may not be familiar to a general reader. The matched filter is the optimal filter for a Gaussian likelihood and would be the clearest way to identify a signal in the data with \(\hat{q}\) being a proxy for the signal to noise of detection. In this subsection, to aid understanding, we will demonstrate the performance of the matched filter in a toy example by injecting a signal for a neutron star with the same physical characteristics as PSR J2144\(-\)3933 into simulated data with similar noise to the observations that we have in hand, described in sections IV and IV.1. We will inject signals with a mass of \(m_{\rm a}=4.2\,\mu\)eV which corresponds to an observing frequency \(f_{\rm obs}\simeq 1.0\)GHz and \(g_{a\gamma\gamma}=10^{-10}\)GeV\({}^{-1}\). We will consider two different choices for the angles \(\alpha\) and \(\theta\). Case A with \((\alpha,\theta)=(0^{\circ},60^{\circ})\) is close to an aligned rotator and hence we would expect no time variation, whereas case B with \((\alpha,\theta)=(40^{\circ},60^{\circ})\) has a very strong time variation.
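The quantities entering this comparison are straightforward to compute; the sketch below (our own illustrative code, assuming uncorrelated Gaussian noise of standard deviation \(\sigma_{N}\) per channel) implements eq. (14) and the expected values of eqs. (19)-(21).

```python
import numpy as np

def matched_filter_snr(d, F, sigma_n):
    # Eq. (14): q-hat = F . d / sigma_N for a unit-norm template F.
    F = F / np.linalg.norm(F)
    return float(np.dot(F, d)) / sigma_n

def expected_snrs(S, sigma_bar):
    # Expected SNRs for a discretized signal S in one frequency channel with
    # N time channels, where sigma_bar = sigma_N / sqrt(N):
    mu, sig = S.mean(), S.std()
    q_time = np.sqrt(sig**2 + mu**2) / sigma_bar  # eq. (19): full time domain
    q_flux = mu / sigma_bar                       # eq. (20): total flux only
    q_bs = sig / sigma_bar                        # eq. (21): baseline subtracted
    return q_time, q_flux, q_bs
```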
We will consider two observations: one in which the pulse-averaged signal power is retained, and another where it is removed, which is more common in pulsar observations as we have explained earlier. We have already discussed the pros and cons of the two approaches in section III.2 and this is just an illustration of the specific point. The full results of these test cases are shown in Fig. 2. For a nearly aligned rotator (\(\alpha=0^{\circ}\)), the pulsar is axisymmetric about its rotation axis, and the signal has no time-dependence. Therefore, in this case, if one searches for the signal with the pulse-average removed, there is by definition no signal present in the effective data vector, leading to a non-detection. The filter is essentially scanning a particular noise realisation with zero signal. In the bottom panel, the input signal contains significant time-dependence (roughly an order of magnitude). Therefore, the input signal is detected with a SNR of the same order of magnitude as in the total power case, in accordance with the discussion comparing (19) and (20). In all the cases except the baseline-subtracted \(\alpha=0\) case, the filter successfully returns the maximal SNR for the input value of \(\theta\).

## IV Observations of PSR J2144\(-\)3933 with MeerKAT

In order to test this idea we selected PSR J2144\(-\)3933. We did this by considering the list of observed neutron stars [85]4 which provides estimates for \(B_{0}\), \(P\) and the pulsar distance \(D\). In order to make an estimate of the strength of the signal expected for a particular pulsar we use an analytic formula based on the radial trajectories approach [55]. Although this assumption has been shown to be not sufficiently correct to provide accurate predictions [58; 59], it is likely that it gives a reasonable figure of merit since it should have the correct scaling with the important parameters. We will discuss the issue of what is the optimal target in more detail again in section V in light of what we have learnt. Specifically, we have used the figure of merit

Footnote 4: [https://www.atnf.csiro.au/research/pulsar/psrcat/](https://www.atnf.csiro.au/research/pulsar/psrcat/)

\[{\rm FOM}=\rho_{\rm DM}\frac{B_{0}^{2/3}P^{7/3}}{D^{2}}\,, \tag{22}\] where \(\rho_{\rm DM}\) is the density of dark matter expected in the vicinity of the pulsar, to create a ranked list of pulsars. This formula can be derived from results presented in [55; 60]. We presume that all the dark matter in the Galactic halo is in the form of axions, the standard assumption when obtaining limits, and extrapolate the local density of \(\rho_{\rm DM}\approx 0.45\,{\rm GeV}\,{\rm cm}^{-3}\) using an NFW profile for dark matter in the galaxy. Except in the very centre, near the location of the Galactic Centre Magnetar (GCM) PSR J1745\(-\)29005, this is likely to give a reasonable estimate of the trade-off between \(\rho_{\rm DM}\) and \(D\) in the FOM.

Footnote 5: Most attempts to constrain axions using neutron stars have used the GCM as their target, attracted by the large magnetic field, and indeed we re-visit using it for this type of analysis in section V. Depending on the parameters of the NFW profile we used it varied from around 10th in our list to 1st. A clear reason to not use it is that there is an additional uncertainty created by the lack of knowledge of the dark matter density in the centre of the galaxy.
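The ranking itself is elementary; as an illustration (our own code, with catalogue values to be substituted), the figure of merit (22) can be evaluated as follows.

```python
def figure_of_merit(rho_dm, B0, P, D):
    # Eq. (22): FOM = rho_DM * B0^(2/3) * P^(7/3) / D^2. Only relative values
    # matter for ranking, so any consistent (e.g. catalogue) units suffice.
    return rho_dm * B0 ** (2.0 / 3.0) * P ** (7.0 / 3.0) / D ** 2
```

Evaluating this with \(\rho_{\rm DM}\approx 0.45\,{\rm GeV}\,{\rm cm}^{-3}\) and the catalogue values quoted below (\(B_{0}\approx 2.1\times 10^{12}\,{\rm G}\), \(P=8.51\,{\rm sec}\), \(D=0.16\,{\rm kpc}\)) gives the inputs that place PSR J2144\(-\)3933 third on the ranked list discussed next.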
PSR J2144\(-\)3933, which has \(B_{0}\approx 2.1\times 10^{12}\,{\rm G}\) estimated from the \(P\) and \(\dot{P}\) based on electromagnetic spin down, \(P=8.51\,{\rm sec}\) and \(D=0.16\,{\rm kpc}\), came third on the list6 and seems an ideal object. This object has a long period, and hence a strong axion signal, but is otherwise unremarkable. The fact that it is very nearby is also a significant advantage since it means that we can be more sure about the local value of \(\rho_{\rm a}\) used in our predictions.

Figure 2: An illustration of using the matched filter SNR to search for the signal. \(\hat{q}\) is shown as a function of observing angle \(\theta\) returned by the matched filter (14) with simulated Gaussian noise similar to that expected for the observations of PSR J2144\(-\)3933 discussed in this paper. We display the SNR for two scenarios for the input signal, one with \((\alpha,\theta)=(0^{\circ},60^{\circ})\) (case A, top panel) and another with \((\alpha,\theta)=(40^{\circ},60^{\circ})\) (case B, bottom panel). In both cases we choose a noise amplitude of \(\sim 2.5\,{\rm mJy}\) consistent with that observed in our data for \(m_{\rm a}=4.2\,\mu{\rm eV}\). We use an axion mass of \(3.7\,\mu{\rm eV}\) and \(g_{a\gamma\gamma}=10^{-10}\,{\rm GeV}^{-1}\).

Footnote 6: The local value of \(\rho_{\rm a}\) is the most important quantity in our calculations.

Within the GJ model there is a maximum axion mass [67] given by \[m_{\rm a}^{\rm max}\approx 85\,\mu{\rm eV}\left(\frac{B_{0}}{10^{14}{\rm G}}\right)^{1/2}\left(\frac{P}{1\,{\rm sec}}\right)^{-1/2}\times\left(1+\frac{1}{3}\cos\alpha\right)^{1/2}\,, \tag{23}\] which is \(\approx 4.7\,\mu{\rm eV}\) corresponding to \(f_{\rm obs}\approx 1.15\,{\rm GHz}\) for this object. It is not possible to use any observations above this frequency in obtaining a limit using the GJ model predictions, but we do use the data in our search for generalised periodic signals in section VI. The specific observation of PSR J2144\(-\)3933 used in this work was taken at 2020-07-13 02:20:47 as part of the MeerTime Large Survey Programme on MeerKAT. The observation was recorded as part of the Thousand Pulsar Array [86] census observations and hence used the 'full' MeerKAT array, specifically in this instance 58 of the antennas were used to form a single tied array beam pointed at the pulsar. For long-period pulsars the Thousand Pulsar Array census aims to record \(\gtrsim 512\) pulses from each pulsar, and hence the total observing duration was 4416 s, much longer than typical Thousand Pulsar Array observations. The data produced by the MeerKAT beamformer are processed in real time by the PTUSE instrument [87], folding the data with the known period of the pulsar. Post processing, including initial automated cleaning of radio frequency interference and flux calibration, is carried out on the Swinburne OzStar supercomputer using the MeerPipe pipeline developed by MeerTime. The calibration and cleaning procedure used for the Thousand Pulsar Array data is described in [88]. The output data have 1024 rotational phase bins and 928 frequency channels, each of width 0.8359375 MHz (total bandwidth 775.75 MHz), and centred at 1283.58203125 MHz.

### Modelling the noise

The observed data has already been processed to remove the effects of Radio Frequency Interference (RFI). This is sufficient to locate the main peak of the pulsar pulse, which is typically much stronger than the axion signal.
The first thing we do is remove the pulsar main beam signal from the time-domain, so that the remaining data is in the off-phase of the pulsar. We do this by excising 20 time channels from our data. In the top panel of Fig. 3, for the remaining data we present the average over the pulsar phase, \(\mu_{S}\), and the standard deviation, \(\sigma_{S}\), of the data as a function of the frequency. It is clear that, despite this procedure, there remains some low amplitude RFI in certain frequency channels, and the channels affected by this must be discarded for the purposes of locating axion signals. This RFI is typically due to mobile phone signals, \(f\sim 0.95\,{\rm GHz}\), and Global Navigation Satellite System (GNSS) signals, \(f\sim 1.2\,{\rm GHz}\) and \(f\sim 1.6\,{\rm GHz}\)7.

Footnote 7: We note that the Karoo site, where the MeerKAT telescope is situated, is one of the cleanest radio observation sites in the world and still there is low-level RFI in these bands which would likely

This residual RFI can be removed by excising any data with \(\sigma_{N}>3\,\)mJy, as seen in Fig. 3, with the excised data presented using a narrower flux scale.

Figure 3: Measurements of the mean, \(\mu_{S}\), and standard deviation, \(\sigma_{S}\), of the observed off-pulse flux density of PSR J2144\(-\)3933 used in this paper. The average is over pulsar phase for a fixed frequency. In the top panel, we show the \(\sigma_{S}\) and \(\mu_{S}\) for the full data-set. In the bottom panel, we show the equivalent after employing a cut of \(\sigma_{N}=3.0\,{\rm mJy}\) but with a different scale. As demonstrated in the figure this cut allows us to excise the frequency channels that are dominated by RFI contamination, with the remaining channels being compatible with \(\mu\approx 0\) and \(\sigma_{N}\approx 2.8\,{\rm mJy}\).
### Constraining the magnetic orientation \(\alpha\) and observing angle \(\theta\) We have already pointed out that the amplitude of the time dependence of the signal depends on the values of \(\alpha\) and \(\theta\) - this is also an issue when using the time averaged signal (see [67], for example). In particular, we have seen that there can be very little time variation when these angles are small. Therefore, we will need some further information on the pulsar geometry to enforce a constraint on \(g_{a\gamma\gamma}\). Obtaining precise values for the pulsar geometry relies in general on strong assumptions on the observed properties of the neutron star's radio pulse (e.g. [89]). Fortunately, from the point of view of the present discussion, we only need to rule out small angles, and when one observes a narrow pulse profile - which is the case here - it is unlikely that the magnetic and observation axes are aligned with the spin axis. In what follows we will describe a simple model for the pulsar beam geometry with very conservative assumptions and use it in the case of PSR J2144\(-\)3933 to argue that one can ignore the region of parameter space around \(\theta\approx\alpha\approx 0\). Let \(W\) be the pulse width corresponding to a fully illuminated circular radiation beam with half-opening angle \(\rho\). These parameters can be related to the parameters in our misaligned rotator model for the neutron star (\(\alpha,\theta\)) using [90; 91] \[\cos\rho=\cos\alpha\cos\theta+\sin\alpha\sin\theta\cos(W/2)\,. \tag{24}\] The width of the profile for PSR J2144\(-\)3933 is measured at 10% of the amplitude is \((2.1\pm 0.2)^{\circ}\)[92]. It is possible that the profile is asymmetric and hence assuming that the full open field line region is active is not necessarily true. This means that the \(W\) in (24) should be interpreted as the pulse width that would be observed if the full beam is active. In rare cases, the middle of the open Figure 4: Dynamical spectra (i.e. the data as function of frequency and pulsar phase) with the RFI dominated channels excised and also the pulse (around \(\phi=0\) and \(\phi=1\)) removed. The top panel is our noise model described in the text, while the bottom panel is the actual data. field line region is centred in between one of the profile peaks, and part of the otherwise maybe double profile is missing. Based on these two caveats, we conservatively take \(W\) to be in the range \(1.5^{\circ}\)-\(5.0^{\circ}\) for this object. We now turn to the estimation of \(\rho\), whose uncertainty mainly stems from a lack of knowledge of the height \(h_{\rm em}\) at which the emission occurs. The beam is bounded by the tangents to the last open magnetic field lines. Assuming that the field is dipolar, one finds that (e.g. [93]) \[\rho\approx\frac{3}{2}\sqrt{\frac{h_{\rm em}}{R_{\rm c}}}=\sqrt{\frac{9\pi h _{\rm em}}{2cP}}. \tag{25}\] where we have used the small angle approximation for \(\rho\). In the second expression we have replaced the light cylinder radius \(R_{\rm c}\) with the pulse period \(P=2\pi R_{\rm c}/c\). Estimation of \(h_{\rm em}\) is complicated by the fact that the beam is not necessarily filled, but across the pulsar population \(h_{\rm em}\) at 1.4 GHz has been constrained to be in the range of 200-400 km irrespective of pulse period [94]. We take a more conservative range of \(100\leq h_{\rm em}\leq 1000\) km, so as to ensure that we are not strongly wedded to the modelling assumptions in the pulse-beam simulations. 
Since parameters for a given pulsar are uncertain, values of \(\alpha\) all the way down to zero are allowed by (24), for which there would be no time-variation. We, therefore, appeal to further arguments which allow us to place a lower bound on \(\alpha\) by excluding implausible geometries. A problem with very small \(\alpha\) geometries is that in order to explain the very narrow \(W\) the line-of-sight needs to graze the very outer part of the beam such that most of the beam is invisible to us. This requires fine tuning which is not only unlikely [95], but is also contrived for two reasons. First of all, a small change in emission height as expected for different observing frequencies [96] would lead to a drastic change in the observed pulse width, which is not observed [92]. Secondly, a narrow pulse not only requires a grazing line of sight, but also a circular beam with a hard edge. In reality the pulsar beam does not have a hard edge, and hence the observed pulse shape from a grazing line of sight will be dominated by the intrinsic smoothness of the beam which will be much wider than predicted from the circular model. To quantify why small \(\alpha\) geometries are unlikely, we construct an effective probability distribution for \(\alpha\) and \(\beta\) which essentially measures the number of beam realisations associated to each pair \((\alpha,\beta)\) assuming a uniform distribution for \(W\). This is shown in Fig. 5. We constructed this distribution according to the following algorithm (i) uniformly sample W between \(1.5^{\circ}\) and \(5.0^{\circ}\) in 100 steps. (ii) For each given W, scan over a discrete grid of \((\alpha,\beta)\) values between 0 and \(\pi/2\). (iii) For each point \((\alpha,\beta)\) calculate \(\rho(W,\alpha,\beta)\) with Eq. (24). For each \((\alpha,\beta)\) (for the specific \(W\) under consideration) if the resulting \(\rho\) satisfies \(100\leq h_{\rm em}\leq 1000\) km and \(|\beta/\rho|<0.95\) record a value of 1. Otherwise assign it 0. Note the second constraint is designed to exclude a line-of-sight with an impact parameter \(\beta\simeq\rho\), which is both unlikely and implausible for the reasons above. At the end of this process, for each \(W\), one has an \(\alpha\)-\(\beta\) grid with entries that are 1 (implying an acceptable beam geometry exists) or 0. Since \(W\) is sampled with 100 steps, there are 100 grids. (iv) Fig. 5 then shows the sum of these grids (appropriately normalised). The main features that stand out are that \(|\beta|\) needs to be small in order for the line of sight to intersect the beam and small \(\alpha\) solutions are excluded. No solutions exist for \(\alpha\lesssim 10^{\circ}\). Furthermore, one can see in the top panel that solutions \(\alpha\lesssim 20^{\circ}\) are unlikely geometrical solutions, which is because they require a fine tuned (large) \(\beta\). It should be stressed that \(\alpha\lesssim 20^{\circ}\) geometries are not just unlikely, we have also argued them to be contrived8. In what follows we will impose \(\alpha>20^{\circ}\) and \(|\beta|<4^{\circ}\). We would expect to be able to apply similar arguments to a large fraction of pulsars that we might want to use to constrain \(g_{a\gamma\gamma}\). Footnote 8: A less conservative limit on the acceptable solutions such that \(|\beta/\rho|<0.90\) would lead to \(\alpha\gtrsim 25^{\circ}\), demonstrating that the conclusions are relatively insensitive to the precise limits chosen. 
### Constraints on \(g_{a\gamma\gamma}\) from PSR J2144\(-\)3933

We have shown in section III how one can compute the signal-to-noise (SNR) parameter \(q\) as a function of the input parameters \((m_{\rm a},\theta,\alpha)\) using the matched filter. In order to now derive constraints, we have carried out a parametric search over all possible profiles in our interpolated library (see Fig. 1) using the pulsar data presented in Fig. 4. This procedure then gives us a distribution of SNR values \(q_{\text{meas.}}\) associated with each profile.

Figure 5: The constraint on the \(\alpha-\beta\) plane based on the geometry of the beam, where \(\beta=\theta-\alpha\). As explained in the text, we define the likelihood to be the percentage of possible values of \(W\) for which solutions exist within the range of \(h_{\rm em}\) we have allowed. Improbable solutions with \(|\beta/\rho|>0.95\) are rejected. The top panel shows the vertical integration of the bottom panel and it demonstrates that solutions with \(\alpha<20^{\circ}\) are very unlikely because they would require fine tuning of \(\beta\). Therefore, we rule out \(\alpha<20^{\circ}\) and \(|\beta|>4^{\circ}\) as statistically unlikely, indicated by the red-dashed lines, when we calculate our limit on \(g_{a\gamma\gamma}\).

We then repeat this process using our noise model, which by definition has no signal present. Recall that this model assumes uncorrelated Gaussian noise, which simplifies the matched filter; the values of \(q_{\text{meas.}}\) should then be Gaussian distributed with zero mean and unit variance. We do find that they are compatible with a Gaussian distribution that has zero mean. However, we find that the standard deviation is \(\approx 1.45\), somewhat higher than the expected value, which points to the fact that the noise model we are using is not optimal9. We find that, out of the \(\approx 3\times 10^{4}\) templates, there are three that have \(q>5\) which, if the noise model were perfect, would suggest candidate detections; but in order to assess their statistical significance they should probably be scaled down by a factor of \(1/1.45\), reducing them to \(\approx 3.5\). This suggests that they are chance alignments with the templates, a conclusion that is further strengthened by the observation that they, and indeed the other higher values of \(q_{\text{meas.}}\), appear to be randomly distributed in \(m_{\text{a}}\). We are satisfied that our data are compatible with a null detection.

Footnote 9: We have assumed that the noise is uncorrelated in phase, which is unlikely to be precisely true, and this could easily lead to more structure than would be expected for a totally random realization of the noise. It is clear that one could achieve slightly tighter constraints by improving the noise model.

In the case where the baseline has been subtracted from the data, which is the case for the data being considered here, the constraining power of the matched filter is determined by (21). For the pulsar we have chosen, Fig. 1 shows the time-dependence of the profiles. Note that the relative variance vanishes at \(\alpha=0\) and can be larger than one for a range of values of \((\alpha,\theta)\); however, we have argued in section IV.2 that regions with \(\alpha<20^{\circ}\) are unlikely and contrived.
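As a toy illustration of the null-hypothesis check just described (ours, not the authors' pipeline), one can verify that for uncorrelated unit-variance Gaussian noise and a unit-norm, baseline-subtracted template, the matched-filter statistic of Eq. (26) below is indeed distributed as \(N(0,1)\):

```python
import numpy as np

rng = np.random.default_rng(0)
nbins, ntrials = 512, 2000
template = np.exp(-0.5 * ((np.arange(nbins) - 256) / 20.0)**2)  # stand-in profile
template -= template.mean()                                     # baseline-subtracted
F = template / np.sqrt(np.sum(template**2))                     # unit-norm filter

noise = rng.normal(0.0, 1.0, size=(ntrials, nbins))             # sigma_N = 1
q = noise @ F                                                   # Eq. (26) per realisation
print(f"mean(q) = {q.mean():+.3f}, std(q) = {q.std():.3f}  (expect ~0 and ~1)")
```

An excess of the measured standard deviation over unity, such as the factor \(\approx 1.45\) found above, signals residual correlations that the noise model does not capture.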
In the absence of a detection, one can derive constraints on \(g_{a\gamma\gamma}\) by comparing the measured value of \(q_{\text{meas.}}\) for a particular template with the expected value \(\langle q\rangle\) for that same template. The measured value is defined by \[q_{\text{meas.}}(\theta,\alpha)=\frac{\mathbf{F}(\theta,\alpha)\cdot\mathbf{d}}{\sigma_{N}}. \tag{26}\] This must be compared to the expected value \(\langle q(g_{a\gamma\gamma},\theta,\alpha)\rangle\) if a signal were present in the data, \[\langle q(g_{a\gamma\gamma},\theta,\alpha)\rangle=\frac{\sqrt{\mathbf{S}(g_{a\gamma\gamma},\theta,\alpha)\cdot\mathbf{S}(g_{a\gamma\gamma},\theta,\alpha)}}{\sigma_{N}}. \tag{27}\] In the perturbative limit of the conversion probability, which we have checked is always the case here, this is \(\propto g_{a\gamma\gamma}^{2}\). Therefore, in order to impose a limit we calculate \(\langle q(g_{a\gamma\gamma}^{\text{fid}},\theta,\alpha)\rangle\) for \(g_{a\gamma\gamma}^{\text{fid}}=10^{-10}\,\text{GeV}^{-1}\) and exclude any value such that \[g_{a\gamma\gamma}>g_{a\gamma\gamma}^{\text{fid}}\sqrt{\frac{2q_{\text{meas.}}(\theta,\alpha)}{\langle q(g_{a\gamma\gamma}^{\text{fid}},\theta,\alpha)\rangle}}. \tag{28}\] The factor of two in (28) corresponds to requiring the expected signal to be twice the observed level, meaning that this will be a \(2\sigma\) (\(\approx 95\%\) confidence) upper limit on the coupling constant. We have explained above that the typical values of \(q_{\text{meas.}}\) are higher than they should be due to the noise model not being perfect. This means that the upper limits we compute using (28) are not optimal and hence are slightly conservative. We show the constraints from this procedure in the left panel of Fig. 6 for \(m_{\text{a}}=4\,\mu\text{eV}\), along with the regions preferred by our analysis of the orientation angles. In the right panel, we quantify the advantage of the purely time-domain analysis compared to the purely frequency-domain analysis as a function of \((\alpha,\theta)\). We do this by taking the ratio of the theoretical signal-to-noise in the case where the baseline is subtracted, \(q_{\text{time}}\), with respect to the case where the time profile is averaged over the pulse period and integrated, \(q_{\text{freq.}}\). When this ratio is larger than 1, the time-domain analysis leads to stronger constraints on \(g_{a\gamma\gamma}\). We have performed the same analysis as presented in Fig. 6 over the mass range \(3.9\,\mu\text{eV}\leq m_{\text{a}}\leq 4.7\,\mu\text{eV}\) and then searched for the weakest upper limit in the range \(\alpha>20^{\circ}\) and \(|\beta|<4^{\circ}\). The results of this are presented in Fig. 7, and on the basis of this we conclude that we can exclude axions constituting the entire Galactic dark matter halo with \(g_{a\gamma\gamma}>4\times 10^{-11}\,\text{GeV}^{-1}\).
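A compact sketch of the limit-setting step, Eqs. (26)-(28), is given below (the fiducial coupling is the one used in the text; the data, template and noise arrays are placeholders of ours, and the abs() guard against downward noise fluctuations is our own crude choice):

```python
import numpy as np

G_FID = 1e-10  # fiducial coupling [GeV^-1]

def q_measured(F, d, sigma_N):
    """Eq. (26): matched-filter overlap of template F with data d."""
    return np.dot(F, d) / sigma_N

def q_expected(S, sigma_N):
    """Eq. (27): expected SNR for the signal template S at the fiducial coupling."""
    return np.sqrt(np.dot(S, S)) / sigma_N

def g_upper_limit(q_meas, q_exp_fid):
    """Eq. (28): ~95% (2 sigma) upper limit; assumes q_meas > 0."""
    return G_FID * np.sqrt(2.0 * q_meas / q_exp_fid)

# Placeholder usage with a toy template and pure-noise data:
rng = np.random.default_rng(1)
S = 1e-3 * np.exp(-0.5 * ((np.arange(256) - 128) / 10.0)**2)   # signal at G_FID
F = S / np.sqrt(np.sum(S**2))
d = rng.normal(0.0, 1.0, 256)
print(g_upper_limit(abs(q_measured(F, d, 1.0)), q_expected(S, 1.0)))
```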
## V Future Searches

In the previous section we have shown how one can derive a limit on \(g_{a\gamma\gamma}\) from baseline-subtracted radio pulsar data, using the variation in the time domain calculated by our ray-tracing algorithm. There we used a specific pulsar and telescope. An obvious question is what can be gained in future observations. In general, the limits obtained will scale as \[g_{a\gamma\gamma}^{\text{lim}}\propto\frac{1}{(\mu_{S}^{2}+\sigma_{S}^{2})^{1/4}}\left(\frac{A_{\text{eff}}}{T_{\text{sys}}}\right)^{-1/2}t_{\text{obs}}^{-1/4}. \tag{29}\] Hence, any improvement on \(g_{a\gamma\gamma}^{\text{lim}}\) will come from two avenues. The first is better observations: a lower system temperature \(T_{\text{sys}}\), a larger collecting area \(A_{\text{eff}}\) and an increased observing time \(t_{\text{obs}}\). The second is a better target: one with a larger mean signal power \(\mu_{S}\) or greater time variability as measured by \(\sigma_{S}\). Leveraging the latter is of course the key point of the present paper. Let us now come to each of these factors in turn.

In terms of improved observations with the current target, we can imagine future observations of PSR J2144\(-\)3933 with MeerKAT or other similar telescopes in the short term and the Square Kilometre Array (SKA) in the longer term. All other things being equal, eq. (29) implies that an increase in observation time from \(\approx 1\) hour, as we have now, to 100 hours would improve the limit by a factor \(\sim 3\). Moreover, going from MeerKAT with \(A_{\rm eff}/T_{\rm sys}\approx 450\) to \(A_{\rm eff}/T_{\rm sys}\approx 1800\) for SKA1-mid will yield a limit which is a factor \(\sim 2\) better. Combining these together, one might be able to obtain a limit of \(g_{a\gamma\gamma}<6\times 10^{-12}\,{\rm GeV}^{-1}\). PSR J2144\(-\)3933 only allows constraints to be imposed for \(m_{\rm a}<4.7\,\mu{\rm eV}\) within the framework of the GJ model. It might be interesting to perform lower-frequency observations of it, but perhaps it is more interesting to find a source with a larger value of \(m_{\rm a}^{\rm max}\) to probe mass ranges less accessible to terrestrial haloscopes.

An interesting point to clarify is how the signal scales as a function of pulsar parameters, in particular the period \(P\) and the magnetic field \(B_{0}\). The figure of merit based on radial trajectories is \(\propto B_{0}^{2/3}P^{7/3}\), while the maximum mass probed is \(\propto(B_{0}/P)^{1/2}\). At a fixed value of \(P\) we see that it will always be best to increase the value of \(B_{0}\), while at fixed \(B_{0}\) the FOM will increase with \(P\), but \(m_{\rm a}^{\rm max}\) will decrease, meaning that there is a trade-off between the two and the optimal target will be a compromise between the strength of the signal and the range of mass probed. We have checked the scaling of the FOM using the ray-tracing code to calculate the average signal, \(\mu_{S}\), summed over all frequencies for a range of values of \(B_{0}\) and \(P\). For \(B_{0}\), these calculations broadly confirm the approximate scaling of the FOM, with relatively weak dependence on \(\theta\) and \(\alpha\), although there can be significant deviations for extreme values. However, the dependence on \(P\) seems to be somewhat weaker than that predicted by the FOM from radial trajectories. For the specific choice of \(\alpha=60^{\circ}\) we find that \({\rm FOM}\propto B_{0}^{0.8}P^{1.2}\). Given the complications in constructing a FOM which depends on the orientation angle, we conclude that it is reasonable to continue to use the FOM based on radial trajectories as a rule of thumb, but we should not expect it to give quantitatively accurate predictions for the increase in constraining power. The argument above applies to all attempts to constrain \(g_{a\gamma\gamma}\) using neutron stars, not just a search for time-dependent signals.

Figure 6: In the left panel, we present the constraints on the axion-photon coupling for the average-subtracted case as a function of \((\theta,\alpha)\) for \(m_{\rm a}=4\,\mu{\rm eV}\). As expected, the derived limit is weaker in the limit \(\alpha\to 0\), since the time-dependence of the signal is negligibly small; in fact there is no limit for \(\alpha\equiv 0\), since there is no time dependence. Fortunately, the limits from the pulsar main-beam modelling require \(\alpha>20^{\circ}\) and \(|\beta|<4^{\circ}\), which are included as red lines on the plot, so we are able to achieve a limit \(g_{a\gamma\gamma}\lesssim 4\times 10^{-11}\,{\rm GeV}^{-1}\). In the right panel, we show the ratio of the signal-to-noise in the case where the average has been subtracted (i.e., where only time-domain data is used) to the case where the time-averaged flux is used in frequency space. In other words, this ratio quantifies the gain in working in the time-domain.
Figure 7: \(2\sigma\) upper limits on \(g_{a\gamma\gamma}\) as a function of \(m_{\rm a}\). This is determined by calculating the highest upper limit in the region of the \(\alpha-\theta\) plane allowed by \(|\beta|<4^{\circ}\) and \(\alpha>20^{\circ}\) from the equivalent of Fig. 6. On the basis of this figure we quote an upper limit of \(g_{a\gamma\gamma}<4\times 10^{-11}\,{\rm GeV}^{-1}\) over the mass range \(3.9\,\mu{\rm eV}\leq m_{\rm a}\leq 4.7\,\mu{\rm eV}\).

Let us come now to the question of other targets, focusing in particular on the time-dependence of their signals. Note that in our original target selection we used only the mean power; however, as demonstrated in Sec. III.1 and as is apparent from Eq. (29), the key parameter determining whether including a time-domain analysis can add value over just using the frequency domain is \(\sigma_{S}/\mu_{S}\). Clearly this will depend on the intrinsic pulsar parameters \(B_{0}\) and \(P\) and the axion mass \(m_{a}\), as well as on \((\alpha,\theta)\). In Fig. 8 we show that \(\sigma_{S}/\mu_{S}\) clearly increases with \(B_{0}\), with the amount being sensitive to the choice of \(\theta\) and \(\alpha\). We have also investigated the dependence on \(P\), but this is typically much weaker for the relevant range of parameters. Based on this, we can expect that time-domain observations offer the greatest enhancement over a total-flux measurement for larger values of \(B_{0}\). Hence, objects such as magnetars stand to gain most from a time-domain versus total-flux analysis. The GCM in particular has already been a popular target for searching for axion signals. In Fig. 9 we present some GCM profiles analogous to those in Fig. 1. The first thing to notice is that the signal is quite a bit larger, mainly due to the enhanced dark matter density assumed in the GC, although this is mitigated somewhat by it being further away. Clearly, at least for some choices of the orientation angles, the profiles are substantially more localised in pulsar phase - they are almost pulse-like, but they are still much narrower than the width of the main peak in the radio pulse profile for this object. The strong dependence on pulse phase in this case is due to the effects of the "throats" in the magnetosphere where axion production is enhanced. This is likely to also enhance the constraining power. Further to these points, the GCM offers both a large relative time-variance and a location in a region of larger dark matter density. Furthermore, the axion-to-photon conversion is enhanced due to the large magnetic field. More specifically, it has a large magnetic field \(B_{0}\approx 1.4\times 10^{14}\,\mathrm{G}\), a period of \(P=3.76\,\mathrm{s}\) and is at a distance of \(D\approx 8.3\,\mathrm{kpc}\).
In fact, the ratio of the FOM, Eq. (22), for PSR J2144\(-\)3933 relative to the GCM is given by \[\frac{\mathrm{FOM}|_{\mathrm{GCM}}}{\mathrm{FOM}|_{J2144}}\approx 1.3\times 10^{-4}\times\frac{\rho_{\mathrm{DM}}|_{\mathrm{GCM}}}{\rho_{\mathrm{DM}}|_{\mathrm{local}}}\,. \tag{30}\] If one assumes a standard NFW profile for the galaxy, one obtains an enhancement of \(\sim 10^{5}\) with respect to the local value10, which suggests that it will lead to a larger FOM by a factor of \(\sim 20\). If the dependence on \(B_{0}\) and \(P\) is slightly stronger than in the radial-trajectory-based FOM, as we have suggested above, we might expect a slightly stronger improvement than given by this simple argument. In addition, due to the larger value of \(B_{0}\), from Fig. 8 we would expect \(\sigma_{S}/\mu_{S}\) to be large enough to produce the most significant gains from a time-domain study compared to other targets.

Footnote 10: Note that without this assumption, the constraints from this pulsar are weaker than PSR J2144\(-\)3933.

Figure 8: The relative time variance, \(\sigma_{S}/\mu_{S}\), of the profiles as a function of the magnetic field \(B_{0}\) at the surface of the neutron star. We fix \(P=4\,\mathrm{s}\) and \(m_{\mathrm{a}}=1\,\mathrm{\mu eV}\); the value of \(g_{a\gamma\gamma}\) scales out in this ratio. Despite the time-variation between the maximum and minimum of the profiles increasing by orders of magnitude for \(B_{0}\sim 10^{14}\,\mathrm{G}\) compared to \(B_{0}\sim 10^{12}\,\mathrm{G}\), \(\sigma_{S}/\mu_{S}\) only goes up by a factor of a few.

Figure 9: Predicted pulse profiles for the GCM. In order to generate these profiles, we fix \(g_{a\gamma\gamma}=10^{-10}\,\mathrm{GeV}^{-1}\), \(m_{\mathrm{a}}=1\,\mathrm{\mu eV}\) and \(\alpha=60^{\circ}\) while varying the observing angle. As in Fig. 1 we have removed the average signal from the profiles. We can see very clearly that in some cases the profiles are very time variable - they almost look pulse-like for \(\theta=30^{\circ}\) and \(60^{\circ}\). Nonetheless these only correspond to \(\sigma_{S}/\mu_{S}\approx 3\), compatible with Fig. 8.

Prima facie, there seems to be an argument for reconsidering the GCM as a target. As an indication of what could be achieved for the GCM with similar observational resources to those currently available, we have simulated the equivalent of Fig. 7 using the GCM as the source, assuming a noise level similar to the present data, that is \(3\,\mathrm{mJy}\); this is presented in the top panel of Fig. 10\({}^{11}\). Making similar assumptions about constraints on \((\alpha,\beta)\) from the pulse profile, we obtain a projected upper limit of \(g_{a\gamma\gamma}<4\times 10^{-12}\,\mathrm{GeV}^{-1}\), which is a factor \(\sim 10\) stronger than that we obtained from PSR J2144\(-\)3933. In addition, the GCM allows a much wider range of masses, with \(m_{\mathrm{a}}^{\mathrm{max}}\approx 85\,\mu\mathrm{eV}\) (\(f_{\mathrm{obs}}^{\mathrm{max}}\approx 20\,\mathrm{GHz}\)). While there are many caveats to such an analysis, notably the noise levels that one might achieve in the direction of the GC, this suggests that recording time-domain information for this object is well motivated. Note that above we have argued that it might be possible to improve the limit by a factor \(\sim 6\) using a 100-hour observation with SKA1-mid, which would lead to a projected limit of \(g_{a\gamma\gamma}<6\times 10^{-13}\,\mathrm{GeV}^{-1}\).

Footnote 11: We note that at \(1.4\,\mathrm{GHz}\) there is a significant increase in the sky temperature at the location of the GCM and this will dominate \(T_{\mathrm{sys}}\). This means that to achieve this noise figure one would need to observe for \(\sim\)10 h with MeerKAT. In addition, the pulsar and the axion signal will be scattered [97]. However, as this can be completely modelled using the observed pulsar profile, it can be combined with the axion templates before the matched filtering is performed. Both these effects will fall off rapidly as a function of increasing frequency.
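The scalings quoted in this section can be checked with simple arithmetic; the numbers below are those quoted in the text, combined via Eqs. (29) and (30):

```python
# Consistency check of the quoted improvement factors (no new inputs).
t_gain   = (100.0 / 1.0) ** 0.25      # 1 h -> 100 h observation: t_obs^(1/4) ~ 3
aot_gain = (1800.0 / 450.0) ** 0.5    # MeerKAT -> SKA1-mid: (A_eff/T_sys)^(1/2) = 2
print(f"limit tightens by ~{t_gain:.1f}x (time) and ~{aot_gain:.1f}x (A_eff/T_sys)")

fom_ratio = 1.3e-4 * 1e5              # Eq. (30) with an NFW enhancement of ~1e5
print(f"FOM(GCM)/FOM(J2144) ~ {fom_ratio:.0f}, i.e. an order-of-magnitude gain")
```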
In the bottom panel of Fig. 10, similar to the right panel of Fig. 6, we show how much one can gain from doing the time-domain analysis as a function of \((\alpha,\theta)\). We see that the values of \(q_{\mathrm{time}}/q_{\mathrm{freq}}\) are larger, typically \(\approx 2-3\) compared to \(\approx 1\) for PSR J2144\(-\)3933. It seems reasonable to conclude that the factor \(\approx 20\) improvement in the constraining power seen in the case of the GCM comes from the combination of the FOM based on radial trajectories, a slightly stronger dependence of the signal on \(B_{0}\) and \(P\) than predicted by radial trajectories, and the use of the time-domain structure. Our conclusion is that there is a strong argument for attempting to apply this technique to the GCM.

Finally, we comment that it goes without saying that improved knowledge of the orientation angles will lead to improved constraints. The constraints which we have imposed on \(g_{a\gamma\gamma}\) from PSR J2144\(-\)3933 are strongly dependent on the constraints we have placed on \(\alpha\) and \(\beta\) from the pulse profile. These are conservative and, given the nature of the signal - there are some regions of the \(\alpha-\theta\) plane where the constraints are very weak and even non-existent - one is always dominated by the lower limit one imposes on \(\alpha\). However, if one were to know the actual angles and, indeed, if they were in the region where the signal is predicted to be strongest, then more optimal limits could be imposed. For example, in the case of the GCM, if the actual angles are in the region where the limit is strongest, one could reach \(g_{a\gamma\gamma}\lesssim 10^{-13}\,\mathrm{GeV}^{-1}\) for a 1-hour observation with MeerKAT. This could be a factor of 6 lower for 100 hours with the SKA.

## VI Search for generalized periodic signals

In the previous sections we have deduced limits on \(g_{a\gamma\gamma}\) using PSR J2144\(-\)3933 and have discussed how this might be improved in the future using the GCM. The key qualitative feature of the predicted signal profiles is that their timescale is given by the pulse period \(P\) - this is what allows us to use the pulsar data already folded at the pulse period. One might be concerned that the precise predictions using the GJ model might be too simplistic given the complicated nature of the pulsar magnetosphere.

Figure 10: In the top panel, we show the _expected_ constraints on \(g_{a\gamma\gamma}\) from simulated observations of the GCM with an r.m.s. noise level of \(3\,\mathrm{mJy}\) for \(m_{\mathrm{a}}=1\,\mu\mathrm{eV}\). We fix the pulsar parameters to be \(B_{0}=1.4\times 10^{14}\,\mathrm{G}\), \(P=3.76\,\mathrm{s}\) and \(\rho_{\mathrm{DM}}=5.4\times 10^{4}\,\mathrm{GeV}\,\mathrm{cm}^{-3}\), which is the value computed using a standard NFW profile for the galaxy. Note that, depending on the values of \(\theta\) and \(\alpha\), limits as low as \(g_{a\gamma\gamma}\lesssim 10^{-13}\,\mathrm{GeV}^{-1}\) might be possible. In the bottom panel, we quantify the effect of adding time-domain information over the parameter space \((\alpha,\theta)\). Note that this has a slightly different morphology to that for PSR J2144\(-\)3933, but also the values of \(q_{\mathrm{time}}/q_{\mathrm{freq}}\) are slightly larger, approaching \(\approx 3\), as it is indicated might be the case in Fig. 8.
However, taking the qualitative prediction that the signal is dominated by low harmonics of the period, one might perform a generalised search for periodic signals in the data. Of course, without a specific connection to the physics, such a search cannot yield an upper limit on \(g_{a\gamma\gamma}\), but it does allow us to search more specifically for periodic would-be axion signals which might otherwise be missed due to modelling uncertainties. Any time-periodic data \(d(t)\) can be written as a Fourier series \[d(t)=\sum_{k=-\infty}^{\infty}a_{k}e^{2\pi ikt/P}\,, \tag{31}\] where \(P\) is the signal period. The coefficients \(a_{k}\) are then given by \[a_{k}=\frac{1}{P}\int_{0}^{P}dt\ d(t)e^{-2\pi ikt/P}\,. \tag{32}\] Reverting to the discrete case considered in this text, where the data are defined on \(N\) discrete time-bins with entries \(d_{q}\), \(q=0,\ldots,N-1\), the Fourier coefficients in the discrete limit can be written as \[a_{k}=\frac{1}{N}\sum_{q=0}^{N-1}d_{q}\exp\left(-\frac{2\pi iqk}{N}\right). \tag{33}\] Note that in what follows we do not consider the \(k=0\) mode, associated with the time-average \(\mu_{S}\) of the profiles, which, in principle, has already been removed by the data processing.

In order to confirm our assumption that the profiles are dominated by low-\(k\) modes, we have computed the power spectrum12, \(|a_{k}|^{2}\), of two profiles with varying levels of time-dependence, i.e., a relatively flat profile with (\(\alpha=10^{\circ},\theta=10^{\circ}\)) and one with a relatively large time variation with (\(\alpha=60^{\circ},\theta=30^{\circ}\)). This is presented in the top panel of Fig. 11. We see that the power spectrum of both decreases rapidly as a function of the mode number \(k\). The point can be further reinforced by computing the inverse Fourier transform from the \(a_{k}\) while neglecting all modes above some value \(k=n_{\text{cut}}\). We have done this for \(n_{\text{cut}}=1,3,7\) and present the outcome in the middle and bottom panels of Fig. 11. It is clear that the profiles are reasonably well represented by \(n_{\text{cut}}=3\) and \(n_{\text{cut}}=7\) in each case. We find that this is true for a wide range of predicted templates, but not all, e.g. some of those presented in Fig. 9.

Footnote 12: We note that \(a_{k}\) depends on the relative phase of the main peak of the pulsar profile and the axion signal, but when one takes the power spectrum this information, which is encoded in the complex phase of \(a_{k}\), is removed. So our test using the power spectrum is independent of this assumption.

The advantage of this approach is that a periodic signal search can be carried out using only the first \(n_{\text{cut}}\) Fourier modes, reducing the computational overhead when scanning (see the sketch below).

Figure 11: In the top panel we present the power spectrum of two profiles, with (\(\alpha=60^{\circ},\theta=30^{\circ}\)) (blue) and (\(\alpha=10^{\circ},\theta=10^{\circ}\)) (red). The power spectrum has been normalised such that \(|a_{1}|=1\). It is clear that both power spectra are dominated by the lowest \(n\) modes. In the middle and bottom panels we illustrate this point by presenting profiles obtained from the inverse transform of the FFT of each profile with \(n_{\text{cut}}=(1,3,7)\) (red, green and blue, respectively) compared to the exact profile (black). The significance of the low-\(n\) modes is further demonstrated by the fact that the blue curves are remarkably close in shape to the black curves.
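A short sketch of the mode decomposition used here (Eqs. (33) and (35)), applied to a synthetic smooth profile standing in for a ray-traced template:

```python
import numpy as np

N = 256
phase = np.arange(N) / N
profile = np.exp(np.cos(2.0 * np.pi * phase))   # smooth periodic stand-in profile
profile -= profile.mean()                       # drop the k = 0 mode, as in the text

a = np.fft.fft(profile) / N                     # Eq. (33)
power = np.abs(a[1:N // 2])**2                  # one-sided power spectrum, k >= 1

def Q(n_cut):
    """Eq. (35): power summed over the first n_cut modes."""
    return np.sum(power[:n_cut])

for n in (1, 3, 7):
    print(f"Q({n})/Q(all) = {Q(n) / power.sum():.4f}")  # low modes dominate
```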
Furthermore, in what follows, rather than scanning over every available template parametrised by \((a_{1},\cdots,a_{n_{\rm cut}})\) with, say, a fixed total power (which would be very costly from a numerical point of view), we instead compare with the expected distribution of the \(a_{k}\) that would follow if \({\bf d}\) were pure Gaussian noise. In that case the expected PDF for the values of the squared amplitudes \(|a_{k}|^{2}\) is a \(\chi_{m}^{2}\) distribution, \[{\cal P}(x;m)=\frac{1}{2^{m/2}\Gamma\left(\frac{1}{2}m\right)}x^{m/2-1}\exp\left(-\frac{1}{2}x\right)\,, \tag{34}\] where \(\Gamma\) is the standard gamma function. For a given mode, we have \(m=1\) as the number of degrees of freedom, since the noise is real and hence the real and imaginary parts are correlated. Note that the distribution is the same for all \(k\), meaning the spectrum is _scale-invariant_. However, if there is an axion signal in the data, we expect this scale invariance to be broken and, in particular, a greater portion of the power to be contained in the low Fourier modes, providing a model-independent test for periodic axion signals in the data. Therefore, since the scale-invariance implies that all \(|a_{k}|^{2}\) have an identical distribution, by Fourier transforming the data \(d(t)\) and binning the corresponding values of \(|a_{k}|^{2}\) across all \(k\), we can see to what extent they are \(\chi_{1}^{2}\) distributed. This comparison is shown in the top panel of Fig. 12, where we present histograms of the data (together with a single noise realization) for a particular frequency channel corresponding to a value of \(m_{\rm a}=4\,\mu\)eV, both of which seem compatible with the theoretical \(\chi_{1}^{2}\) distribution. The sample is drawn from Fourier modes up to some \(k_{\rm max}\), beyond which the wavelength of the mode would be less than the temporal bin-width and the description breaks down owing to insufficient resolution. Having extracted spectral information from the data, we now want to analyse to what extent power is concentrated in the lower modes, as expected for an axion signal. To do this, we look at the power contained in the sum of the first \(n_{\rm cut}\) modes, given by \[Q(n_{\rm cut})=\sum_{k=1}^{n_{\rm cut}}|a_{k}|^{2}. \tag{35}\] We then want to understand whether the measured values of the low-mode power are again consistent with the scale-invariant spectrum. The PDF for \(Q(n_{\rm cut})\) is \(\chi_{n_{\rm cut}}^{2}\), since it is the sum of \(n_{\rm cut}\) independent modes, each \(\sim\chi_{1}^{2}\). This distribution can clearly be seen in the bottom two panels of Fig. 12, where for \(n_{\rm cut}=3\) and \(n_{\rm cut}=7\) we display simulated and analytic PDFs for the frequency channel corresponding to \(m_{\rm a}=4\,\mu\)eV. Note that we have also included the measured value of \(Q(n_{\rm cut})\), shown in red.
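The comparison against noise realisations shown in Fig. 12 can be sketched as follows (ours, not the authors' code): the "probability to exceed" discussed next is estimated empirically from simulated Gaussian noise, which sidesteps any normalisation convention for the \(\chi^{2}\) variables:

```python
import numpy as np

rng = np.random.default_rng(1)
N, NREAL = 512, 300                               # time bins; realisations, as in Fig. 12

def Q(d, n_cut):
    a = np.fft.fft(d - d.mean()) / d.size         # Eq. (33), with the k = 0 mode removed
    return np.sum(np.abs(a[1:n_cut + 1])**2)      # Eq. (35)

data = rng.normal(size=N)                         # placeholder for one frequency channel
for n_cut in (3, 7):
    q_data  = Q(data, n_cut)
    q_noise = np.array([Q(rng.normal(size=N), n_cut) for _ in range(NREAL)])
    pte = np.mean(q_noise > q_data)               # empirical probability to exceed
    print(f"n_cut = {n_cut}: PTE = {pte:.2f}")
```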
In order to make a more precise statistical statement about the presence of a signal dominated by low Fourier modes, we have calculated the probability, using the appropriate PDF, for there to be a larger value of \(Q(n_{\rm cut})\) than that which is measured. This gives a sense of whether or not the measured value happens by chance in a way consistent with our sample size, or whether it is a sufficient outlier to indicate the presence of a periodic signal in the data. For the case of \(n_{\rm cut}=3\) we find that there are three frequency channels where this "probability to exceed" is \(<0.05\) and one where it is \(<0.02\). However, since there are 213 individual frequency channels, it seems likely that this is a chance outcome in a few frequency channels, compatible with being a random noise realization. For \(n_{\rm cut}=7\) there are none with a probability \(<0.05\).

Figure 12: In the top panel we present a histogram of the power spectrum computed from the data for \(m_{\rm a}=4\,\mu\)eV in orange and a single Gaussian noise realization of the same variance and sample size. Each individual mode labelled by \(k\) is binned, so this corresponds to a single-mode search. We also plot the \(\chi_{1}^{2}\) PDF expected for \(|a_{k}|^{2}\) and observe that the data, the noise realization and the analytic PDF are clearly compatible with each other. This appears to suggest that the data are compatible with being pure noise. In the middle and bottom panels we compare the distribution of the sum of the first three modes (middle) and first seven modes (bottom) from 300 realisations of simulated Gaussian noise with the same standard deviation as the data. The measured value is represented with a red line. We use the same value of \(m_{\rm a}\) as in the top panel and have also included the theoretical PDFs, which are \(\chi_{3}^{2}\) and \(\chi_{7}^{2}\) respectively. By doing this, we are searching for signals that have more structure than just a single mode in the time-domain. Since the data value is not a significant outlier, we exclude the presence of such signals in the data.

## VII Discussion and Conclusions

In this paper we have presented a procedure for carrying out time-domain searches for radio signals produced by axion dark matter converting into photons in the magnetospheres of pulsars. We have developed a matched-filter formalism to define the signal-to-noise ratio of time-dependent signals and have used this to show that time-domain searches always improve the signal-to-noise ratio. The size of the improvement is determined by the relative variance of the signal (the ratio of the standard deviation to the mean of the signal over the pulse period13), and the matched-filter formalism provides a robust framework to understand why this is the case.

Footnote 13: Note that in pulsar astronomy, when applied to e.g. the main pulse, this quantity is referred to as the modulation index.

As a test case, we then applied the matched-filter formalism to real data on PSR J2144\(-\)3933 obtained using MeerKAT, searching for the expected periodic signal templates of the radio signatures produced by axion dark matter. This source was selected from a list of pulsars on the basis of a simple figure of merit for axion detection.
In the present analysis, for a fixed axion mass, these templates form a two-parameter family for each observing direction, set by the angle \(\theta\) between the star's rotation axis and the line of sight towards the pulsar, and \(\alpha\), the angle between the star's magnetic axis and its rotation axis. Using the morphology of the observed pulsar main-beam signal, we were able to exclude a range of values of \((\alpha,\theta)\), narrowing down the number of viable templates. Scanning over the allowed set of templates, we find no evidence for axion dark matter and obtain an upper limit of \(g_{a\gamma\gamma}<4\times 10^{-11}\,\mathrm{GeV}^{-1}\) over the mass range \(3.9\,\mu\mathrm{eV}\leq m_{\rm a}\leq 4.7\,\mu\mathrm{eV}\). Given the astrophysical uncertainties in modelling the templates, we also carried out a generic periodic-signal search independent of any modelling. This also returned no significant signal from axion dark matter.

In Fig. 13 we have placed the limits derived here in the context of other limits on \(g_{a\gamma\gamma}\) from the CAST helioscope [70] and haloscopes [24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75]. There are a number of other limits in the literature from neutron stars [64; 65; 66; 68; 69; 75]. We have included [64] as the present best limits obtained from observations that only use frequency information, but have not included the rest. For example, when it comes to GCM observations, the latest work by three of the present authors [60] was the most up-to-date, since that work includes ray-tracing; however, this was based on a previous version of the ray-tracing code which, unlike the present work, did not include the multiple reflections that increase the predicted signal, leading to overly conservative constraints. The GCM will be re-analysed in [79], combining the most up-to-date modelling from [58; 59] and data [65; 67; 75]. Ref. [69] also does not include ray-tracing, so we do not include it. Similarly, [68] (which we again do not include) has been superseded by the authors' follow-up work [64] (shown in Fig. 13), which uses the most up-to-date modelling. We have tried to be fair in displaying those results which use the most up-to-date modelling and conservative assumptions.

In this work, we set out with the intention of examining to what extent detailed time-domain information could be leveraged to increase the reach of radio searches for axions, relative to a simple radio-line search which simply averages the flux over a long time. We were able to quantify this precisely in terms of the time-variance of the signal, which we examined both for two specific pulsars and for a range of other pulsar magnetic field strengths. It seems that for characteristic pulsar parameters there is a modest enhancement to the signal-to-noise from including time-domain information; however, this marginal gain could be enough to tip a tentative detection in a total-flux measurement into a signal-to-noise level above 5 when time-domain information is included, making it well worthwhile to extract maximum leverage from pulsar observations. This is especially relevant as we look to future telescopes such as the SKA, where we want to use all possible tools at our disposal to enhance the prospects for detection.
Furthermore, given the rich variety of ever-increasing astrophysical probes of axions [104; 105; 106; 107; 108], events which are sharply peaked in time (or have other detailed features amenable to a matched-filter search) could benefit from more sophisticated search strategies.

**Acknowledgements**

JIM thanks Sam Witte for useful discussions and is supported by an FSR Fellowship. SS was supported formerly by a George Rigg Scholarship and more recently by the UK Science and Technology Facilities Council (STFC). Pulsar research at the Jodrell Bank Centre for Astrophysics is supported by a consolidated grant from the STFC. The data used in this project were obtained through the MeerTIME project. We would like to thank that collaboration for making this data available to us.

## Appendix A Interpolation of the ray-tracing results

Due to the computational cost of the ray-tracing simulations, which require \(\sim 24\) hours to produce a pulse profile when parallelized over 32 CPU cores, we require a faster alternative to predict the time-dependence of the signal for arbitrary input angles. Therefore, we generate a simulated database of flux profiles as a discrete function of \((\theta,\alpha)\), represented by the circular data points in the top panels of Fig. 14, with \(\Delta\theta=10^{\circ}\) and \(\Delta\alpha=5^{\circ}\). Based on these datasets we generate an interpolation routine (where we use the SciPy package scipy.interpolate) that can then predict the signal for arbitrary values of \(\alpha\) and \(\theta\); its performance can be seen in the bottom panels of Fig. 14, where we compare the prediction of our interpolation routine with the data points. For the purpose of our analysis, the level of agreement is sufficient.
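A minimal sketch of this interpolation step, assuming a regular \((\theta,\alpha)\) grid of simulated profiles (the appendix states that scipy.interpolate is used; the specific routine below is our own choice):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

thetas = np.arange(0.0, 91.0, 10.0)     # deg, matching Delta-theta = 10 deg
alphas = np.arange(0.0, 91.0, 5.0)      # deg, matching Delta-alpha = 5 deg
nphase = 128

# Placeholder library: profiles[i, j] is the simulated profile at (theta_i, alpha_j).
profiles = np.random.default_rng(2).random((thetas.size, alphas.size, nphase))

# Vector-valued linear interpolation over the (theta, alpha) grid.
interp = RegularGridInterpolator((thetas, alphas), profiles)
profile = interp([[33.0, 57.0]])[0]     # profile for an arbitrary (theta, alpha)
print(profile.shape)                    # (128,): one interpolated pulse profile
```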
2307.03661
An Investigation of the state changes of PSR J2021+4026 and the Vela pulsar
We investigate the high energy emission activities of two bright gamma-ray pulsars, PSR~J2021+4026 and Vela. For PSR~J2021+4026, state changes in the gamma-ray flux and spin-down rate have been observed. We report that the long-term evolution of the gamma-ray flux and timing behavior of PSR~J2021+4026 suggests a new gamma-ray flux recovery at around MJD~58910 and a flux decrease around MJD~59500. During this epoch, the staying time, the gamma-ray flux difference and the spin-down rate are smaller than in previous epochs of the same state. The waiting timescale of the quasi-periodic state changes is similar to the waiting timescale of the glitch events of the Vela pulsar. For the Vela pulsar, the quench of the radio pulse after the 2016 glitch occurred on a timescale of $\sim0.2$~s, and the glitch may disturb the structure of the magnetosphere. Nevertheless, we did not find any evidence for a long-term change in the gamma-ray emission properties using years of $Fermi$-LAT data, and therefore no long-term change of the magnetosphere structure. We also conduct a search for photons above 100~GeV using 15 years of $Fermi$-LAT data, and find none. Our results provide additional information on the relation between the state change of the gamma-ray emission and the glitch events.
H. -H. Wang, J. Takata, L. C. -C. Lin, P. -H. T. Tam
2023-07-07T15:33:34Z
http://arxiv.org/abs/2307.03661v2
# An Investigation of the state changes of PSR J2021+4026 and the Vela pulsar

###### Abstract

We investigate the high energy emission activities of two bright gamma-ray pulsars, PSR J2021+4026 and Vela. For PSR J2021+4026, state changes in the gamma-ray flux and spin-down rate have been observed. We report that the long-term evolution of the gamma-ray flux and timing behavior of PSR J2021+4026 suggests a new gamma-ray flux recovery at around MJD 58910 and a flux decrease around MJD 59500. During this epoch, the staying time, the gamma-ray flux difference and the spin-down rate are smaller than in previous epochs of the same state. The waiting timescale of the quasi-periodic state changes is similar to the waiting timescale of the glitch events of the Vela pulsar. For the Vela pulsar, the quench of the radio pulse after the 2016 glitch occurred on a timescale of \(\sim 0.2\) s, and the glitch may disturb the structure of the magnetosphere. Nevertheless, we did not find any evidence for a long-term change in the gamma-ray emission properties using years of \(Fermi\)-LAT data, and therefore no long-term change of the magnetosphere structure. We also conduct a search for photons above 100 GeV using 15 years of \(Fermi\)-LAT data, and find none. Our results provide additional information on the relation between the state change of the gamma-ray emission and the glitch events.

keywords: dense matter - pulsars: PSR J2021+4026, Vela

## 1 Introduction

Pulsars are highly magnetized and rapidly rotating neutron stars with a stable rotational period. However, there are two types of timing irregularities, namely, glitches and timing noise. A pulsar glitch is defined as a sudden increase in spin frequency. Although the first glitch was discovered over fifty years ago (Radhakrishnan & Manchester, 1969), it remains an area of active research and interest. Most of the glitch discoveries have been made through radio surveys, and nearly 670 glitches have been detected in 208 pulsars1. Two main models are commonly used to explain the glitch phenomenon: the starquake model (Ruderman, 1969) and the superfluid model (Packard, 1972). The starquake model suggests that glitches are caused by sudden adjustments in the shape or orientation of the pulsar's solid crust, which alters the moment of inertia of the star. In the superfluid model, glitches are the result of sudden angular momentum transfers from the interior superfluid to the solid crust of the neutron star.

Footnote 1: [http://www.jb.man.ac.uk/pulsar/glitches/gTable.html](http://www.jb.man.ac.uk/pulsar/glitches/gTable.html)

In addition to the timing irregularities, there are various intriguing phenomena in the observed pulsar emission, such as mode changing, nulling, intermittency and pulse-shape variability. It can be considered that some of these phenomena are intimately connected with, and arise from, alterations of the structure of the magnetosphere of the pulsar. For example, Lyne et al. (2010) report that several pulsars show mode changes between two different spin-down states, accompanied by a clear change in the radio pulse profiles. Evidence has accumulated that glitches are associated with these mode changes and pulse-profile variability, by causing perturbations of the structure of the magnetosphere (Kou et al., 2018; Liu et al., 2021; Zhou et al., 2023).
The \(Fermi\) Large Area Telescope (\(Fermi\)-LAT) has detected more than 250 gamma-ray pulsars with pulsed gamma-ray emission2. PSR J2021+4026 is a bright gamma-ray-emitting pulsar with a spin period of \(P_{s}\sim 265\) ms, and it is known as the gamma-ray pulsar that shows repeated state changes in its gamma-ray emission properties and spin-down rate. Allafort et al. (2013) report a sudden decrease of the gamma-ray flux by \(\sim 14\) % and an increase in the spin-down rate by \(\sim 7\) %. Subsequent studies (Ng et al., 2016; Zhao et al., 2017) confirmed the persistent-like state changes in the gamma-ray flux and spin-down rate. Takata et al. (2020) report that PSR J2021+4026 transits between two states with a timescale of several years, namely, a state with high gamma-ray flux and low spin-down rate (i.e., the HGF/LSD state) and a state with low gamma-ray flux and high spin-down rate (i.e., the LGF/HSD state). No significant change in the spectrum and pulse profile of the X-ray emission, which probably originates from the heated polar cap, has been reported (Wang et al., 2018). Unfortunately, since PSR J2021+4026 is a radio-quiet gamma-ray pulsar (Shaw et al., 2023), detailed timing studies around the state change cannot be carried out. Because the current timing solution derived using \(Fermi\)-LAT data of PSR J2021+4026 cannot confirm the existence of a glitch, it remains an unresolved issue whether the state change of PSR J2021+4026 is associated with a glitch of the neutron star. Long-term monitoring with \(Fermi\)-LAT is desired to understand the mechanism of the state change.

With the timing solutions obtained from radio observations or from \(Fermi\)-LAT data, about \(\sim 50\)\(Fermi\)-LAT pulsars show glitch activities and several sources show a state change in the spin-down rate. Although the long-term timing solution obtained from \(Fermi\)-LAT data has been utilized to constrain glitch mechanisms (e.g. Gugercinoglu et al. (2022)), no clear effect of the glitch or of a state change on the gamma-ray emission properties has been reported (e.g. Dang et al. (2021) for PSR J0740-2822 and Ge et al. (2020) for PSR J1124-5916).

The Vela pulsar is associated with the Vela supernova remnant (Tsuruta et al., 2009) and is one of the brightest gamma-ray sources in the sky; it was discovered over 50 years ago, in 1968 (Large et al., 1968). Its spin period and period derivative are \(\sim 89\) ms and \(\sim 1.25\times 10^{-13}\) s s\({}^{-1}\), respectively. The Vela pulsar is known as the first glitching pulsar (Radhakrishnan & Manchester, 1969) and 24 glitch events have been reported1. Among all glitching pulsars, the Vela pulsar shows frequent glitching activity, with a waiting time of \(\sim\)3 years (Dodson et al., 2007) and a glitch size of \(\Delta\nu/\nu\sim 10^{-6}\) (Espinoza et al., 2011). Interestingly, Palfreyman et al. (2018) reported a sudden change in the radio pulse shape coincident with the glitch activity on 2016 December 12; an evolution of the pulse profile and polarization was observed over several rotation cycles of the Vela pulsar. These results indicate that the glitch event of the Vela pulsar affects the structure of the magnetosphere. Since the Vela pulsar is one of the brightest gamma-ray pulsars, it is worthwhile to investigate whether the glitch activity causes any variation or state change of the gamma-ray emission.
Footnote 1: [http://www.astro.psu.edu/~pulsar/](http://www.astro.psu.edu/~pulsar/)

In this paper, we study the temporal evolution of the gamma-ray emission properties of two bright gamma-ray pulsars, PSR J2021+4026 and the Vela pulsar. We create a timing solution for each state of PSR J2021+4026 and for each glitch interval of the Vela pulsar, and compare the pulse profiles and spectra in different epochs. In section 2, we describe the data reduction process for the two pulsars observed by \(Fermi\)-LAT. In section 3, we present the results of our data analysis and report a new state change of PSR J2021+4026, for which the jump in the gamma-ray flux is smaller than those of previous events. In section 4, we provide a discussion of the implications of our results.

## 2 Data reduction

### Fermi-LAT data

\(Fermi\)-LAT is a gamma-ray imaging instrument that scans the whole sky every three hours and covers the energy band from \(\sim 20\) MeV to 300 GeV (Atwood et al., 2009). We selected Pass 8 data in the energy band of 0.1-300 GeV. We choose the position to be R.A. = \(20^{\rm h}21^{\rm m}30^{\rm s}.48\), decl. = \(+40^{\circ}26^{\prime}53^{\prime\prime}.5\) for PSR J2021+4026, and R.A. = \(08^{\rm h}34^{\rm m}00^{\rm s}.00\), decl. = \(-45^{\circ}49^{\prime}48^{\prime\prime}.0\) for the Vela pulsar. To avoid contamination from the Earth's limb, we only included events with zenith angles below 90 degrees. We limited our analysis to events of the point-source or Galactic-diffuse class (event class = 128) and used data from both the front and back sections of the tracker (evtype = 3). For our spectral analysis, we constructed a background emission model that incorporates both the Galactic diffuse emission (gll_iem_v07) and the isotropic diffuse emission (iso_P8R3_SOURCE_V3_v1) provided by the \(Fermi\) Science Support Center. For each bin of the spectra and of the flux evolution, we refit the data with the binned likelihood analysis (gtlike) and estimate the flux.

We have analyzed data from the \(Fermi\)-LAT instrument taken between 2008 August and 2023 May to study the gamma-ray emission of PSR J2021+4026. Our analysis focused on phase-averaged spectra, for which we divided the entire dataset spanning from 2008 to 2023 into six epochs: MJD 54710-55850 (high gamma-ray flux/low spin-down rate state: HGF/LSD 1), MJD 55850-56990 (low gamma-ray flux/high spin-down rate state: LGF/HSD 1), MJD 56990-58130 (HGF/LSD 2), MJD 58130-58970 (LGF/HSD 2), MJD 58970-59510 (HGF/LSD 3) and >MJD 59510 (LGF/HSD 3). To model the spectrum of PSR J2021+4026, we used a power law with an exponential cutoff, which can be expressed as: \[\frac{dN}{dE}=N_{0}\bigg{(}\frac{E}{E_{0}}\bigg{)}^{-\gamma_{1}}e^{-aE^{\gamma_{2}}} \tag{1}\] The spectral parameters of PSR J2021+4026 include the photon index (\(\gamma_{1}\)), \(a\), which is related to the cutoff energy (\(E_{c}\)), and \(\gamma_{2}\), the exponential index. To study the evolution of the gamma-ray flux above 0.1 GeV, we divided the data into 60-day time bins. The contribution of each background source is calculated with the spectral parameters obtained from the entire dataset. Then, we refit the data in each bin with the binned likelihood analysis (gtlike) and estimate the flux. We also performed the likelihood analysis of the Vela pulsar in the time range of 2008 August to 2022 December.
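For concreteness, a minimal sketch of the spectral model of Equation 1 (with placeholder parameter values of ours, not fitted ones) is:

```python
import numpy as np

def dnde(E, N0=1e-10, E0=1.0, gamma1=1.6, a=0.5, gamma2=1.0):
    """dN/dE = N0 (E/E0)^(-gamma1) exp(-a E^gamma2); E in GeV."""
    return N0 * (E / E0)**(-gamma1) * np.exp(-a * E**gamma2)

E = np.logspace(-1, 2, 200)       # 0.1-100 GeV
sed = E**2 * dnde(E)              # E^2 dN/dE, the usual SED representation
print(f"SED peaks near {E[np.argmax(sed)]:.2f} GeV")
```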
We use the power law with an exponential cutoff model mentioned above, and the data are divided into seven time segments separated by glitches: MJD 54606-55205, MJD 55422-56400, MJD 56600-56915, MJD 56925-57730, MJD 57740-58510, MJD 5825-59400 and MJD 59430-59900. By creating a timing ephemeris for each epoch, we study the evolution of the gamma-ray flux and examine the pulse profile in different energy bands. This allows us to determine whether there are any significant changes in the pulsar's high-energy emission associated with glitch events.

### Timing analysis

To ensure that we include the majority of significant source photons, we consider the photon events within a 1\({}^{\circ}\) aperture centered on the targets. We assign phases to these photons using the \(Fermi\) plug-in for TEMPO2. We barycentric-corrected the photon arrival times to TDB (Barycentric Dynamical Time) using the JPL DE405 ephemeris. This was done using the "gtbary" task, which provides accurate measurements of the positions and velocities of Solar System bodies. To obtain the timing ephemeris, we use the Gaussian kernel density estimation method (de Jager et al., 1986) provided in Ray et al. (2011) to build an initial template. We then use this template to cross-correlate with the unbinned geocentered data to determine the pulse time of arrival (TOA) for each pulse. Once we have obtained the pulse TOAs, we can fit them to an initial timing model that we assume from the semi-blind search. To obtain the temporal evolution of the spin-down rate, we divide the data into time bins of several tens of days and determine the first derivative of the frequency for each time bin. We also create the long-term ephemerides for the two states of PSR J2021+4026 and for each epoch between two glitch events of the Vela pulsar.

## 3 Results

### PSR J2021+4026

#### 3.1.1 Temporal evolution

We analyse the data collected by _Fermi_-LAT until 2023 May, covering about 15 years from 2008 to 2023. We divide the entire time range into 60-day time bins and obtain the temporal evolution of the flux in the \(0.1\,\rm{GeV}<E<300\,\rm{GeV}\) energy band and the timing ephemeris for each time bin. The top and bottom panels in Figure 1 show the temporal evolution of the flux and of the time derivative of the spin frequency, \(f_{1}\), respectively. In addition to the previous three events of state changes reported in Allafort et al. (2013) and Takata et al. (2020) (vertical black dashed lines in Figure 1), we find new events that occurred at around MJD 58910 and MJD 59510 (vertical red dashed lines). At around MJD 58910, the state transferred from the low gamma-ray flux/high spin-down rate state (LGF/HSD) to the high gamma-ray flux/low spin-down rate state (HGF/LSD). At around MJD 59510, a state change from HGF/LSD to LGF/HSD is implied by the _Fermi_-LAT data. Hereafter, we denote the newly found stages as HGF/LSD 3 and LGF/HSD 3, as indicated in Figure 1. As Figure 1 indicates, the time durations of LGF/HSD 2 and HGF/LSD 3 are shorter than those of the previous states. For example, LGF/HSD 2 continues for about 840 days, which is significantly shorter than the about 1140 days (\(>1140\) days) of HGF/LSD 2 (HGF/LSD 1). We also find that the average flux of the new state is different from the previous values. The flux of the new HGF/LSD 3, \(\sim 1.12(1)\times 10^{-6}\,\rm{photon\ cm^{-2}\ s^{-1}}\), is slightly smaller than the \(1.15-1.16\times 10^{-6}\,\rm{photon\ cm^{-2}\ s^{-1}}\) of the previous HGF/LSD 1 and 2.
The flux level of the new LGF/HSD 3 is higher than that in the previous two states LGF/HSD 1 and 2. The flux changes from HGF/LSD to LGF/HSD in the previous two events were \(\sim 12-14\%\) (cf. Table 1), while the fractional flux change of the new event is only \(\sim 4\%\). A sudden change of the first time derivative of the frequency, \(f_{1}\), on a timescale of \(\sim 10\) days was observed at the previous state changes from HGF/LSD to LGF/HSD. For the new event from HGF/LSD 3 to LGF/HSD 3, on the other hand, a sudden change in the timing behavior could not be confirmed, and \(f_{1}\) shows a more gradual evolution to HSD, as indicated in the bottom panel of Figure 1. According to the long-term timing track, we calculate the average time derivative for each state (i.e., the horizontal dashed lines in the bottom panel of Figure 1). We find that the average value of \(f_{1}=-7.81(5)\times 10^{-13}\,\rm{Hz\ s^{-1}}\) in the new HGF/LSD 3 is smaller than those in the previous HGF/LSD states, while \(f_{1}=-8.13(5)\times 10^{-13}\,\rm{Hz\ s^{-1}}\) in the new LGF/HSD 3 is consistent with the previous LGF/HSD states within errors.

Figure 1: Top: gamma-ray flux (\(>0.1\) GeV) evolution of PSR J2021+4026 from 2008 to 2023. Bottom: the spin-down rate of PSR J2021+4026 from 2008 to 2023. Each point was obtained from the data in a 60-day time bin. The black vertical lines represent the state changes reported in Allafort et al. (2013) and Takata et al. (2020). The vertical red dashed lines indicate the new state changes reported in this study. The horizontal dashed lines in the top panel indicate the averaged flux for each state.

#### 3.1.2 Timescale of state change

Next, we investigate the correlation between the spin-down rate and the gamma-ray flux. In order to quantify the correlation between the evolution of the gamma-ray flux and the spin-down rate of PSR J2021+4026 from 2008 to 2023, we adopted the method of the discrete correlation function (DCF), which measures correlation functions without interpolating in the temporal domain (Edelson & Krolik, 1988). The unbinned discrete correlation function is defined as: \[{\rm UDCF}_{ij}=\frac{(a_{i}-\overline{a})(b_{j}-\overline{b})}{\sigma_{a}\sigma_{b}}, \tag{2}\] where \(a_{i}\) and \(b_{j}\) represent the data lists of the gamma-ray flux and spin-down rate, respectively, in each time bin of Figure 1, and \(\overline{a}\) and \(\overline{b}\) represent the average values of the \(a_{i}\) and \(b_{j}\), respectively. In addition, \(\sigma_{a}\) and \(\sigma_{b}\) are the standard deviations of \(a\) and \(b\), respectively. We calculate the time-lag of each pair (\(a_{i},b_{j}\)) and group the pairs in bins of \(10^{7}\) seconds of time-lag (\(0<\triangle t_{1}<10^{7}\) s, \(10^{7}\) s \(<\triangle t_{1}<2\times 10^{7}\) s, ...). For each bin of the time-lag, we obtain the number of pairs, \(M\), and calculate the average of the UDCF, \[{\rm DCF}(\triangle t)=\frac{1}{M}\sum{\rm UDCF}_{ij}, \tag{3}\] where the standard error for the DCF is defined as \[\sigma_{\rm DCF}(\triangle t)=\frac{1}{M-1}\big{[}\sum({\rm UDCF}_{ij}-{\rm DCF}(t))^{2}\big{]}^{1/2}. \tag{4}\] In Figure 2, we present the DCF curve, which exhibits a maximum correlation coefficient of \(\sim 0.32\) at a time lag of 0 s. This indicates that the derivative of the frequency changes its state simultaneously with the LAT flux state. The values of the DCF show a periodicity with a period of \(2\times 10^{8}\) seconds, which is approximately 6.5 years. This result is consistent with that of Takata et al. (2020), which suggests that PSR J2021+4026 switches between different states with a timescale of 6-7 years.
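A sketch of the DCF computation of Equations (2)-(4) (ours, not the authors' code; \(\sigma_{a}\) and \(\sigma_{b}\) are taken as sample standard deviations):

```python
import numpy as np

def dcf(t_a, a, t_b, b, lag_bin=1e7, n_bins=40):
    """Binned DCF: returns rows of (lag centre, DCF value [Eq. 3], error [Eq. 4])."""
    udcf = np.outer(a - a.mean(), b - b.mean()) / (a.std(ddof=1) * b.std(ddof=1))
    lag = t_b[None, :] - t_a[:, None]            # time-lag of each pair (i, j)
    edges = np.arange(n_bins + 1) * lag_bin      # 1e7-s lag bins, as in the text
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (lag >= lo) & (lag < hi)
        m = sel.sum()
        if m < 2:
            continue
        d = udcf[sel].mean()                                   # Eq. (3)
        s = np.sqrt(np.sum((udcf[sel] - d)**2)) / (m - 1)      # Eq. (4)
        out.append((0.5 * (lo + hi), d, s))
    return np.array(out)

# Toy usage: two series sharing a ~6.5-yr periodicity give a DCF peak at zero lag.
t = np.arange(0.0, 5.4e8, 5.2e6)                 # ~60-day bins over ~17 yr
flux = np.sin(2*np.pi*t/2.05e8) + 0.3*np.random.default_rng(4).normal(size=t.size)
f1   = np.sin(2*np.pi*t/2.05e8) + 0.3*np.random.default_rng(5).normal(size=t.size)
print(dcf(t, flux, t, f1)[:3])                   # large positive DCF at small lags
```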
#### 3.1.3 Phase-averaged spectrum and pulse profile

In Figure 3, we present the observed spectra for the new HGF/LSD 3 (black dots) and LGF/HSD 3 (blue dots) stages. The solid lines show the fitting function of Equation 1. Table 2 summarizes the parameters of the best-fitting function. As indicated by the timing evolution (Figure 1 and Table 1), the flux change from HGF/LSD 3 to LGF/HSD 3 is only \(\sim 3\%\), which is much smaller than the \(12-14\%\) of the previous two events. The previous state changes from HGF/LSD to LGF/HSD made the spectrum softer: an increase of \(\gamma_{1}\) and a decrease of the cut-off energy were observed. In the new state change, both the power-law index \(\gamma_{1}\) and the cut-off energy increase. To investigate the changes in the pulse profile after each state change, we derive the ephemerides for the new states HGF/LSD 3 and LGF/HSD 3 (Table A1), and generate the corresponding pulse profiles (Figure 4). We fit the obtained pulse profiles with two Gaussian functions (a sketch of such a fit is given below). Table 1 summarizes the parameters of the fitting functions for each state.

\begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{2011} & \multicolumn{2}{c}{2018} & \multicolumn{2}{c}{2021} \\ \hline & HGF/LSD 1 & LGF/HSD 1 & HGF/LSD 2 & LGF/HSD 2 & HGF/LSD 3 & LGF/HSD 3 \\ \hline MJD & \(<55850\) & 55850-56990 & 56990-58130 & 58130-58970 & 58970-59510 & \(>59510\) \\ Flux\({}^{1}\) & 1.16(1) & 0.99(1) & 1.15(1) & 1.01(1) & 1.12(1) & 1.07(1) \\ \(\triangle F_{g}^{2}\) & 0.07(1) & 0.10(1) & 0.06(1) & 0.08(1) & 0.03(1) & 0.02(1) \\ \(\triangle F_{g}^{3}\) & & 14(1) & 16(1) & 12(1) & 11(1) & 4(1) \\ \(f_{1}\) (\(10^{-13}\) Hz s\({}^{-1}\)) & -7.70(4) & -8.17(4) & -7.63(4) & -8.14(4) & -7.81(5) & -8.13(5) \\ peak 1\({}^{a}\) & 0.19(2) & 0.13(2) & 0.19(2) & 0.11(2) & 0.16(3) & 0.16(1) \\ peak 2\({}^{a}\) & 0.176(7) & 0.174(6) & 0.15(1) & 0.16(1) & 0.164(1) & 0.18(3) \\ peak 1/peak 2\({}^{b}\) & 0.54(6) & 0.24(3) & 0.494(9) & 0.26(1) & 0.41(1) & 0.33(1) \\ \hline \hline \end{tabular} \({}^{1}\)Average flux in each state (\(10^{-6}\) photon cm\({}^{-2}\) s\({}^{-1}\)) \({}^{2}\)Difference between the average flux of each state and the averaged flux of the whole dataset (\(10^{-6}\) photon cm\({}^{-2}\) s\({}^{-1}\)) \({}^{3}\)Flux change from previous state (\(\%\)) \({}^{a}\)FWHM \({}^{b}\)Ratio of amplitude \({}^{c}\)Allafort et al. (2013), three-Gaussian components. \({}^{d}\)Allafort et al. (2013), two-Gaussian components. \({}^{e}\)Zhao et al. (2017), two-Gaussian components. \({}^{f}\)Takata et al. (2020), two-Gaussian components. \end{table} Table 1: Information on the average flux and timing parameters of PSR J2021+4026

\begin{table} \begin{tabular}{l c c} \hline \hline & HGF/LSD 3 & LGF/HSD 3 \\ \hline Flux\({}^{1}\) & 1.74(1) & 1.69(1) \\ \(\gamma_{1}\) & 1.60(3) & 1.74(4) \\ E\({}_{\rm cutoff}\) (GeV) & 2.2(5) & 2.4(2) \\ \hline \end{tabular} \({}^{1}\)Energy flux of each state in units of \(10^{-10}\) erg cm\({}^{-2}\) s\({}^{-1}\) \end{table} Table 2: Parameters of the phase-averaged spectra for PSR J2021+4026 in the two new stages.

Figure 2: The correlation of the \(Fermi\)-LAT flux and spin-down rate of PSR J2021+4026 examined by the DCF. The data are from \(Fermi\)-LAT in the time range of 2008 to 2023.
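A minimal sketch of the two-Gaussian profile fit (ours; the peak-ratio and FWHM quantities of Table 1 correspond to \(A_{1}/A_{2}\) and \(2\sqrt{2\ln 2}\,\sigma\) of such fits):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(phi, A1, mu1, s1, A2, mu2, s2, c):
    g = lambda A, mu, s: A * np.exp(-0.5 * ((phi - mu) / s)**2)
    return g(A1, mu1, s1) + g(A2, mu2, s2) + c

# Synthetic profile standing in for a folded Fermi-LAT light curve.
phi = np.linspace(0.0, 1.0, 100, endpoint=False)
truth = (0.4, 0.2, 0.05, 1.0, 0.6, 0.07, 0.1)
counts = two_gauss(phi, *truth) + np.random.default_rng(3).normal(0, 0.02, phi.size)

popt, _ = curve_fit(two_gauss, phi, counts, p0=(0.5, 0.2, 0.05, 1.0, 0.6, 0.07, 0.0))
A1, _, s1, A2, _, s2, _ = popt
print(f"peak1/peak2 = {A1/A2:.2f}, "
      f"FWHM1 = {2.3548*abs(s1):.3f}, FWHM2 = {2.3548*abs(s2):.3f}")
```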
In the previous state changes from the HGF/LSD state to the LGF/HSD state, the ratio of the height of the first peak (small peak) to the height of the second peak (main peak) decreased from about \(50\%\) to \(25\%\), as Table 1 indicates. In the new state change, we can also see a decrease of the peak-height ratio, but the magnitude of the change is smaller than that in the previous cases. This would be consistent with the smaller flux change in the new event compared to the previous cases. ### Vela pulsar #### 3.2.1 Spectral analyses Since the launch of the \(Fermi\) telescope, six glitches of the Vela pulsar have been observed. We search for permanent-like state changes of the GeV emission triggered by the glitches. Figure 5 presents the evolution of the spin-down rate and the gamma-ray flux of the Vela pulsar. We divide the whole \(Fermi\)-LAT data set into 7 epochs, which are bounded by the times of the glitches. The top panel of Figure 5 shows the evolution of the spin-down rate and the bottom panel shows the temporal evolution of the gamma-ray flux; the vertical dashed lines in the figure show the occurrence times of the glitches. Figure 6 compares the phase-averaged spectra for the 7 epochs. The timing ephemerides and the emission properties for each epoch are summarized in Table A2 and Table 3, respectively. Despite the frequent glitches of the Vela pulsar, we do not confirm any significant state change in the spectral properties associated with the glitches. In the bottom panel of Figure 5, the flux shows small fluctuations with an amplitude of several per cent. However, such fluctuations can be explained by the statistical fluctuation of the observation. Table 3 summarizes the information of the time-averaged spectrum for each epoch. We find that the spectral properties, namely the energy flux, the spectral index (\(\gamma_{1}\)) and the cut-off energy (\(E_{c}\)), in the different time epochs are consistent with each other within errors of several per cent. The Vela pulsar, therefore, has not experienced a permanent-like flux change with a magnitude of \(>10\%\) as observed for PSR J2021+4026. #### 3.2.2 Pulse profile at different epochs The brightness of the gamma-ray emission enables us to investigate the temporal evolution of the shape of the pulse profile. Using the timing ephemerides in Table A2, we create integrated pulse profiles with \(10^{4}\) photons each in the energy range of 0.1-300 GeV. We fit the pulse profiles with a four-Gaussian function, which provides a better fit to the plateau between the two main peaks seen in Figure 8. Figure 7 shows the temporal evolution of the phase separation between peak 1 and peak 2 (upper panel) and the Gaussian widths of peak 1 (middle panel) and peak 2 (bottom panel). There is no significant change in the long-term evolution of the pulse profile; that is, we see no evidence for a change of the magnetospheric structure. The glitch of the Vela pulsar may still cause some disturbance of the magnetosphere, as indicated by the radio data (Palfreyman et al., 2018), but its effect may be on a short timescale, or so small that it is not seen in the gamma-ray emission properties averaged over a timescale much longer than 10 days. #### 3.2.3 Searching for \(>\)100 GeV photons with \(Fermi\)-LAT data Leung et al. (2014) searched for pulsed emission above 50 GeV with about 5.2 years of \(Fermi\)-LAT data and reported 5 photons that probably originate from the Vela pulsar. They also reported the existence of an approximately 208 GeV photon at a significance level of 2.2\(\sigma\).
The results of H.E.S.S. II (H.E.S.S. Collaboration et al., 2018) also suggest that the spectrum of the pulsed emission from the Vela pulsar extends beyond 100 GeV. We revisit the search for very high-energy gamma-ray photons with \(Fermi\)-LAT data using the updated \(Fermi\)-LAT Pass 8 data, Galactic diffuse emission model (gll_iem_v07.fits) and isotropic background emission model (P8R3_SOURCE_V3), while Leung et al. (2014) used the reprocessed Pass 7 "Source" class (P7REP_SOURCE_V15 IRFs). Following Leung et al. (2014), we first select the \(Fermi\)-LAT data obtained within MJD 54710-56580 in the energy range of 50-300 GeV with a region of interest (ROI) of 4 degrees in radius. To select photons originating from the pulsar, we used the \(gtsrcprob\) tool to calculate the probability that each photon within the ROI is associated with the pulsar. We also calculated the probabilities of the photons coming from Vela X (P\({}_{PWN}\)) and the Galactic diffuse emission (P\({}_{GAL}\)). We selected only photons with P\({}_{Vela}>\) max(P\({}_{PWN}\), P\({}_{GAL}\)). Subsequently, the same methodology was applied to each state, utilizing the respective ephemeris obtained from the timing analysis. The results for MJD 54710-56580 are depicted in panel (e) of Figure 8, which illustrates the weighted pulse profile for energies above 50 GeV, and in Table 4. We detect 7 photons with an energy larger than 50 GeV. We find that the arrival times of the two photons detected at MJD 56149 and MJD 56317 are consistent with the results reported in Leung et al. (2014), but the energy of the photon detected at MJD 56317, \(\sim 54.6\) GeV, is significantly smaller than the \(\sim 79.5\) GeV of the previous result. Although we employ the same method as Leung et al. (2014) to investigate the emission in different time ranges, we do not confirm any photons above 100 GeV. Figure 3: The comparison of the spectra at the HGF/LSD 3 and LGF/HSD 3 stages. The best-fitting 'PLSuperExpCutoff2' models are shown by the solid lines. Figure 4: The comparison of the pulse profiles of PSR J2021+4026 at the epochs of MJD 54710-55800 (HGF/LSD 1), MJD 55900-56930 (LGF/HSD 1), MJD 57050-58070 (HGF/LSD 2), MJD 58215-58890 (LGF/HSD 2), MJD 58920-59200 (HGF/LSD 3) and MJD 59780-60000 (LGF/HSD 3). The pulse profiles are generated with photon energies \(>\)0.1 GeV. Figure 5: Spin-down rate and gamma-ray flux evolution of the Vela pulsar. The top panel shows the evolution of the spin-down rate, with each data point obtained from the \(Fermi\) telescope with a 40-day time bin. The bottom panel shows the evolution of the gamma-ray flux with a 60-day time bin. The red horizontal dashed lines are the average flux in each state, and the vertical blue lines indicate the glitch epochs.
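As an illustration of the probability-based photon selection described in this subsection, the sketch below applies the P\({}_{PSR}>\) max(P\({}_{PWN}\), P\({}_{GAL}\)) cut to per-photon source probabilities such as those computed by \(gtsrcprob\). The background probabilities here are hypothetical placeholders; the first two photons take their energy, arrival time and P\({}_{PSR}\) from Table 4, and the third is a dummy photon that fails the cut.

```python
import numpy as np

# Hypothetical per-photon columns read from the gtsrcprob output (gtsrcprob
# appends one probability column per model source to the event file).
energy = np.array([51.08, 57.41, 34.20])        # GeV
mjd    = np.array([55889.11, 56149.28, 56000.0])
p_psr  = np.array([0.98, 0.59, 0.40])           # probability: Vela pulsar
p_pwn  = np.array([0.01, 0.20, 0.45])           # probability: Vela X PWN (illustrative)
p_gal  = np.array([0.01, 0.21, 0.15])           # probability: Galactic diffuse (illustrative)

# Keep >50 GeV photons more likely associated with the pulsar than with
# either background component, as in the selection leading to Table 4.
keep = (energy > 50.0) & (p_psr > np.maximum(p_pwn, p_gal))
for e, t, p in zip(energy[keep], mjd[keep], p_psr[keep]):
    print(f"{e:6.2f} GeV  MJD {t:9.2f}  P_PSR = {p:.2f}")
```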
## 4 Discussion and Summary We have investigated the temporal evolution of the characteristics of the GeV emission from two young pulsars, PSR J2021+4026 and Vela. For PSR J2021+4026, we confirm a new state change from HGF/LSD to LGF/HSD, but the magnitude of the change in the gamma-ray flux and the spin-down rate is smaller than in the previous state changes. The structural change in the pulse profile is also small compared to the previous events. We speculate that a change of the global electric current circulating in the magnetosphere could be responsible for the state change of the emission and spin-down properties, with a larger (smaller) electric current produced in the LGF/HSD (HGF/LSD) state. We expect that the change of the gamma-ray flux is due to a change of the size of the particle acceleration/emission region, with the HGF/LSD (LGF/HSD) state having a larger (smaller) gap. There is a tendency in the previous state changes for the HGF/LSD state to have a larger cut-off energy in the spectrum and a wider pulse profile (Takata et al., 2020). This tendency can be understood if HGF/LSD has a larger acceleration/emission region. Although the magnitude of the electric current and the size of the acceleration/emission region are expected to be related, a positive or negative correlation will depend on the location of the acceleration/emission region and the geometry of the magnetosphere (Takata et al., 2006). In the new state-change event, the small change in the spin-down rate implies that the change of the electric current is smaller, which results in a small change in the structure of the acceleration/emission region. The mechanism of the state change of PSR J2021+4026 is still unknown (Takata et al., 2020), and our study provides additional information to understand the origin of the state changes of PSR J2021+4026. We found that the staying times in LGF/HSD 2 (\(\sim\) 840 days) and HGF/LSD 3 (\(\sim\) 530 days) were shorter than those of the previous corresponding states (\(\sim\) 1140 days), indicating that the state change is a quasi-periodic event. This quasi-periodicity favours the glitch-like interpretation and rules out some models, for example, the precession of the neutron star. We also found, as Figure 1 indicates, that the staying time in HGF/LSD 3 was shorter than in the previous two HGF/LSD states, and the changes in the flux and probably in the first derivative (\(f_{1}\)) from HGF/LSD 3 to LGF/HSD 3 were also smaller. This could indicate a possible correlation between the waiting time and the size of the change, although more accumulated data are required to confirm the correlation. Size-waiting-time correlations are rare among pulsar glitches. PSR J0537-6910 is known as a glitching pulsar that shows a strong correlation between the glitch size and the waiting time to the following glitch (Ferdman et al., 2018). \begin{table} \begin{tabular}{l c c c} \hline Energy (GeV) & Time (MJD) & Pulse phase & P\({}_{PSR}\) \\ \hline 51.08 & 55889.11 & 0.93 & 0.98 \\ 57.41 & 56149.28 & 0.57 & 0.59 \\ 54.586 & 56317.21 & 0.93 & 0.98 \\ 51.01 & 57092.19 & 0.91 & 0.81 \\ 87.16 & 57534.33 & 0.81 & 0.68 \\ 50.82 & 58316.77 & 0.90 & 0.89 \\ 67.08 & 59754.76 & 0.92 & 0.97 \\ \hline \end{tabular} \end{table} Table 4: Energy, arrival time and source probability of \(>\)50 GeV photons with P\({}_{PSR}\) larger than max(P\({}_{PWN}\), P\({}_{GAL}\)).
\begin{table} \begin{tabular}{l c c c c c c c} \hline Time range (MJD) & 54606-55205 & 55422-56400 & 56600-56915 & 56925-57730 & 57740-58510 & 58525-59400 & 59430-59900 \\ \hline Flux\({}^{\alpha}\) & 1.077\(\pm\)0.008 & 1.074\(\pm\)0.008 & 1.071\(\pm\)0.008 & 1.063\(\pm\)0.008 & 1.066\(\pm\)0.007 & 1.070\(\pm\)0.008 & 1.070\(\pm\)0.007 \\ Flux\({}^{1}\) & 9.40\(\pm\)0.028 & 9.49\(\pm\)0.014 & 9.37\(\pm\)0.038 & 9.35\(\pm\)0.020 & 9.43\(\pm\)0.11 & 9.39\(\pm\)0.018 & 9.25\(\pm\)0.025 \\ \(\gamma_{1}\) & 1.201\(\pm\)0.011 & 1.205\(\pm\)0.008 & 1.200\(\pm\)0.013 & 1.206\(\pm\)0.008 & 1.214\(\pm\)0.009 & 1.209\(\pm\)0.007 & 1.205\(\pm\)0.012 \\ \(a\) & 0.0479\(\pm\)0.0006 & 0.0484\(\pm\)0.0004 & 0.0485\(\pm\)0.0007 & 0.0484\(\pm\)0.0004 & 0.0476\(\pm\)0.0005 & 0.0477\(\pm\)0.0004 & 0.0475\(\pm\)0.0005 \\ peak 1\({}^{a}\) & 0.0226\(\pm\)0.0012 & 0.0236\(\pm\)0.0015 & 0.0238\(\pm\)0.0015 & 0.0245\(\pm\)0.0019 & 0.0221\(\pm\)0.0015 & 0.0245\(\pm\)0.0023 & 0.0222\(\pm\)0.0045 \\ peak 2\({}^{a}\) & 0.0283\(\pm\)0.0013 & 0.0285\(\pm\)0.0014 & 0.0278\(\pm\)0.0007 & 0.0274\(\pm\)0.0010 & 0.0294\(\pm\)0.0039 & 0.0264\(\pm\)0.0008 & 0.0245\(\pm\)0.0008 \\ peak 2/peak 1\({}^{b}\) & 0.803\(\pm\)0.083 & 0.739\(\pm\)0.071 & 0.718\(\pm\)0.050 & 0.721\(\pm\)0.068 & 0.799\(\pm\)0.156 & 0.688\(\pm\)0.065 & 0.655\(\pm\)0.100 \\ \hline \end{tabular} \({}^{\alpha}\) Average flux in each state (10\({}^{-6}\) photon cm\({}^{-2}\) s\({}^{-1}\)) \({}^{1}\) Energy flux of each state in units of 10\({}^{-8}\) erg cm\({}^{-2}\) s\({}^{-1}\) \({}^{a}\)FWHM \({}^{b}\)Ratio of amplitudes \end{table} Table 3: Parameters of the spectra of the Vela pulsar at different time epochs. Figure 6: Spectral energy distributions of the Vela pulsar in different glitch intervals. The spectra of Vela in different energy bands and different time epochs from 2008 to 2022; there are seven data segments bounded by six glitches. The best-fitting 'PLSuperExpCutoff2' models are shown by the solid lines. Melatos et al. (2018) pointed out that when a threshold trigger mechanism for the glitch is combined with the electromagnetic braking of the neutron star, a waiting-time-size correlation can appear. They predict a correlation between the size and the waiting time to the following glitch for a rapidly spinning-down pulsar (e.g. PSR J0537-6910), while a waiting-time-size correlation may be seen before the glitch for a slowly spinning-down pulsar. We may speculate that if the glitch triggers the state change of PSR J2021+4026, the size of the flux jump is related to the size of the frequency jump (i.e. a larger flux jump corresponds to a larger frequency jump). Hence, it would be worthwhile to investigate the possible correlation between the waiting time and the size of the flux jump with more accumulated data in the future. Regarding the Vela pulsar, Palfreyman et al. (2018) reported sudden changes in the radio pulse shape coincident with the 2016 glitch event, indicating that the glitch affected the structure of the magnetosphere. Bransgrove et al. (2020) suggested that the glitch launched Alfvén waves, which subsequently cause high-energy radiation and electron-positron pair creation. They suggest that the created pairs quenched the radio emission at the 2016 event of the Vela pulsar. Although an enhancement of the gamma-ray flux is expected during the propagation of the Alfvén wave in the magnetosphere, the predicted timescale of the existence of the wave, \(\sim 0.2\) s, is too short to be investigated with the \(Fermi\)-LAT data.
We did not find any evidence for a change of the magnetospheric structure on a timescale of years, as discussed in Section 3.2. Current observational results suggest that the state change of PSR J2021+4026 is likely related to a glitch event, while the glitch of the Vela pulsar would disturb the magnetosphere on a timescale of \(\sim 0.2\) s. It is not trivial to identify the main reason for the difference in the spin-down/high-energy emission evolution after the glitch events. Because the waiting timescale of the glitch events is several years for both pulsars, the glitch mechanism will be a sudden angular momentum transfer from the superfluid to the solid crust. Since the expected glitch size for PSR J2021+4026, \(\Delta\nu/\nu<10^{-7}\) (Allafort et al., 2013; Takata et al., 2020), is much smaller than the \(\Delta\nu/\nu\sim 1-3\times 10^{-6}\) of the Vela pulsar, the glitch size alone will not be the main cause of the observed state change of PSR J2021+4026. Because the gamma-ray emission is produced in the outer magnetosphere near the light cylinder, the state change probably requires a disturbance of the magnetic field lines that extend to the outer magnetosphere and of the structure of the polar cap. A theoretical model that unifies the state change of PSR J2021+4026 and the magnetospheric disturbance at the glitches of the Vela pulsar is desired. In summary, we have examined the evolution of the gamma-ray flux and the spin-down rate of two bright gamma-ray pulsars, PSR J2021+4026 and Vela. We confirmed that a new state change from HGF/LSD to LGF/HSD of PSR J2021+4026 occurred around MJD 59500. We found that the staying time, the change in the flux and the first time derivative of the frequency are smaller than in previous events. The change of the shape of the pulse profile is also smaller than in the previous events. Figure 7: The shape parameters of the pulse profile in different periods of the Vela pulsar; each black point covers \(10^{4}\) photons. The upper panel shows the phase separation between the two maxima of peak 1 and peak 2 (\(\Delta\Phi_{21}\)), where the peaks are defined in Figure 8, and the middle and lower panels show the Gaussian widths of peak 1 and peak 2 (W1 and W2). Our results confirm that the state change of PSR J2021+4026 is not strictly periodic but quasi-periodic, with different magnitudes of change of the gamma-ray flux and the spin-down rate. We speculate that this quasi-periodic state change favours a glitch-like mechanism over models predicting a periodic change, such as the precession of the neutron star. We also carried out the timing and spectral analysis of the Vela pulsar to investigate the effect of glitches on the observed gamma-ray emission properties, since the radio observations indicate that the glitch disturbs the structure of the magnetosphere. However, we did not confirm any significant state change of the gamma-ray emission triggered by the glitches of the Vela pulsar. Our results thus suggest that the effect of a glitch on the structure of the magnetosphere differs between pulsars. Finally, searching the 15-year \(Fermi\)-LAT data of the Vela pulsar, we did not find any photons above 100 GeV. We thus also find no evidence for a change of the magnetospheric structure on a timescale of years around the 2016 glitch, even though the created pairs may quench the radio emission. ## Acknowledgements J.T. appreciates useful discussions with Dr Kisaka and S.Q. Zhou on glitching pulsars.
This work made use of data supplied by the LAT data server of the Fermi Science Support Center (FSSC) and the archival data server of NASA's High Energy Astrophysics Science Archive Research Center (HEASARC). H.H.W. is supported by the Scientific Research Foundation of the Hunan Provincial Education Department (21C0343). J.T. is supported by the National Key Research and Development Program of China (2020YFC2201400). L.C.-C.L. is supported by NSTC through grants 110-2112-M-006-006-MY3 and 111-2811-M-006-012. H.H.W. and P.H.T. are supported by the National Natural Science Foundation of China (NSFC) grant 12273122 and a science research grant from the China Manned Space Project (No. CMS-CSST-2021-B11). ## Data Availability (i) The Fermi-LAT data used in this article are available in the LAT data server at [https://fermi.gsfc.nasa.gov/ssc/data/access/](https://fermi.gsfc.nasa.gov/ssc/data/access/). (ii) The Fermi-LAT data analysis software is available at [https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/](https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/). (iii) We agree to share the data derived in this article on reasonable request to the corresponding author. Figure 8: (a): Folded light curves in different time ranges (different colours) in the energy range 0.1-1 GeV. (b): Folded light curves in different time ranges (different colours) in the energy range 1-10 GeV with a \(1^{\circ}\) radius. (c): Folded light curves in different time ranges (different colours) in the energy range 10-100 GeV with a \(1^{\circ}\) radius. (d): Folded light curves in different time ranges (different colours) in the energy range 30-50 GeV with a \(0.4^{\circ}\) radius. (e): The weighted light curve in the energy range 50-300 GeV with a \(4^{\circ}\) radius in MJD 54686-56583. (f): Distribution of photons of different energies from the Vela pulsar within MJD 54686-56883, in comparison to Leung et al. (2014), with a \(4^{\circ}\) radius; pulse phase and source probability of \(>50\) GeV photons with P\({}_{PSR}>\) max(P\({}_{PWN}\), P\({}_{GAL}\)).
2305.16373
DeepGate2: Functionality-Aware Circuit Representation Learning
Circuit representation learning aims to obtain neural representations of circuit elements and has emerged as a promising research direction that can be applied to various EDA and logic reasoning tasks. Existing solutions, such as DeepGate, have the potential to embed both circuit structural information and functional behavior. However, their capabilities are limited due to weak supervision or flawed model design, resulting in unsatisfactory performance in downstream tasks. In this paper, we introduce DeepGate2, a novel functionality-aware learning framework that significantly improves upon the original DeepGate solution in terms of both learning effectiveness and efficiency. Our approach involves using pairwise truth table differences between sampled logic gates as training supervision, along with a well-designed and scalable loss function that explicitly considers circuit functionality. Additionally, we consider inherent circuit characteristics and design an efficient one-round graph neural network (GNN), resulting in an order of magnitude faster learning speed than the original DeepGate solution. Experimental results demonstrate significant improvements in two practical downstream tasks: logic synthesis and Boolean satisfiability solving. The code is available at https://github.com/cure-lab/DeepGate2
Zhengyuan Shi, Hongyang Pan, Sadaf Khan, Min Li, Yi Liu, Junhua Huang, Hui-Ling Zhen, Mingxuan Yuan, Zhufei Chu, Qiang Xu
2023-05-25T13:51:12Z
http://arxiv.org/abs/2305.16373v1
# DeepGate2: Functionality-Aware Circuit ###### Abstract Circuit representation learning aims to obtain neural representations of circuit elements and has emerged as a promising research direction that can be applied to various EDA and logic reasoning tasks. Existing solutions, such as DeepGate, have the potential to embed both circuit structural information and functional behavior. However, their capabilities are limited due to weak supervision or flawed model design, resulting in unsatisfactory performance in downstream tasks. In this paper, we introduce _DeepGate2_, a novel functionality-aware learning framework that significantly improves upon the original DeepGate solution in terms of both learning effectiveness and efficiency. Our approach involves using pairwise truth table differences between sampled logic gates as training supervision, along with a well-designed and scalable loss function that explicitly considers circuit functionality. Additionally, we consider inherent circuit characteristics and design an efficient one-round graph neural network (GNN), resulting in an order of magnitude faster learning speed than the original DeepGate solution. Experimental results demonstrate significant improvements in two practical downstream tasks: logic synthesis and Boolean satisfiability solving. The code is available at [https://github.com/cure-lab/DeepGate2](https://github.com/cure-lab/DeepGate2). ## I Introduction The application of Deep Learning (DL) techniques in Electronic Design Automation (EDA) has attracted a lot of attention, including routing [1, 2], synthesis [3, 4], and testing [5, 6]. Among learning-based EDA solutions, circuit representation learning [7, 8, 9, 10, 11, 12] has emerged as a promising research paradigm. This paradigm adopts a two-step approach instead of training individual models from scratch for each EDA task. First, a general model is pre-trained with task-agnostic supervision. Then, the model is fine-tuned for specific downstream tasks, resulting in improved performance. One representative technique, called _DeepGate_, embeds both the logic function and structural information of a circuit as vectors on each gate. DeepGate uses signal probabilities as the supervision task and applies an attention-based graph neural network (GNN) that mimics the logic computation procedure for learning. It has achieved remarkable results on testability analysis [5] and SAT solving problems [13]. However, we argue that its capabilities can still be dramatically enhanced. On the one hand, DeepGate falls short in circuit functionality supervision. Generally speaking, the circuit truth table represents functionality in its most direct form. However, obtaining a complete truth table through exhaustive simulation is infeasible due to the exponentially increasing time requirement in relation to the number of primary inputs (PIs). DeepGate, as an alternative, employs the ratio of logic-1 in the truth table as a functionality-related supervision metric. It then approximates this value as the probability of logic-1 under randomized simulations performed a limited number of times. This approach, however, has a notable caveat. For example, a NOT gate and its fan-in gate can both have a logic-1 probability of \(0.5\), but their truth tables are entirely opposite, thus highlighting the inadequacy of this supervision method. On the other hand, DeepGate assigns the same initial embedding to all the PIs.
Although these homogeneous embeddings reflect the equal logic probability of 0.5 for each PI under random simulation, they do not offer unique identifiers for individual PIs. Consequently, the model lacks the capacity to discern whether gates are reconvergent and driven by common PIs, information that is vital for circuit analysis. To preserve the logical correlation of gates, the model must execute multiple forward and backward propagation rounds to compensate for the absence of PI identification, which it achieves by revealing differences in local structure. Nevertheless, a model that involves many rounds of message passing is time-consuming and inefficient, particularly when dealing with large circuits. In response to these challenges, we present _DeepGate2_, an innovative functionality-aware learning framework that notably advances the original DeepGate solution in both learning effectiveness and efficiency. Specifically, we incorporate the pairwise truth table difference of logic gates as supplementary supervision. This involves obtaining an incomplete truth table via rapid logic simulation, and then calculating the Hamming distance between the truth tables of two logic gates, referred to as the 'pairwise truth table difference'. Subsequently, we construct a functionality-aware loss function with the following objective: to minimize the disparity between the pairwise node embedding distance in the embedding space and the pairwise truth table difference, which serves as the ground truth. As a result, our proposed supervision introduces authentic functional information, in stark contrast to the initial DeepGate model, which predominantly depended on statistical facets of functionality. Moreover, we introduce a single-round GNN architecture that efficiently encapsulates both structural and functional characteristics. Our GNN separates the node embeddings into two distinct components: functional embeddings and structural embeddings, each initialized differently. For the initial functional embeddings of the PIs, we assign a uniform vector to denote the equal logic probability shared among all PIs. For the initial structural embeddings of the PIs, we allocate a set of orthogonal vectors. These vectors, unique and uniformly spaced apart in the embedding space, mirror the functional independence of each PI. By transmitting these PI embeddings to the internal logic gates through two separate aggregation streams, our model effectively amalgamates both structural and functional information in a single round of forward message passing. We execute a range of experiments to highlight the effectiveness and efficiency of our proposed circuit representation learning framework. When compared to its predecessor, DeepGate, our model, DeepGate2, demonstrates a significant accuracy improvement and achieves an order of magnitude speedup in logic probability prediction. To further demonstrate the generalization capability of DeepGate2 in critical downstream applications that heavily rely on circuit functionality, we integrate it into EDA tools to aid logic synthesis [14] and SAT solving [15]. The experimental results further validate the efficacy of our DeepGate2 framework. The remainder of this paper is organized as follows. Section II surveys related work. We then detail the proposed DeepGate2 framework in Section III. We compare DeepGate2 with the original DeepGate and another functionality-aware solution [8] in Section IV. Next, we apply DeepGate2 to several downstream tasks in Section V.
Finally, Section VI concludes this paper. ## II Related Work ### _Circuit Representation Learning_ A prominent trend in the deep learning community is to learn a general representation from data first and then apply it to various downstream tasks; for example, GPT [16] and BERT [17] learn representations of natural language text that can be fine-tuned for a wide range of natural language processing tasks. Circuit representation learning has also emerged as an attractive research direction, which falls into two categories: structure-aware circuit representation learning [9, 11, 12] and functionality-aware circuit representation learning [7, 8]. Since a circuit can be naturally formulated as a graph, with gates as nodes and wires as edges, the GNN is a powerful tool to capture the interconnections of logic gates and has become the backbone model for learning circuit representations. For example, TAG [9] is a GNN-based model designed for analog and mixed-signal circuit representation learning and is applied to several physical design applications, such as layout matching prediction, wirelength estimation, and net parasitic capacitance prediction. ABGNN [12] learns representations of digital circuits and handles the arithmetic block identification task. However, these models tend to focus on structural encoding and are not suitable for functionality-related tasks. Consequently, functionality-aware circuit representation learning frameworks [7, 8] are designed to learn the underlying circuit functionality. For instance, FGNN [8] learns to distinguish between functionally equivalent and inequivalent circuits by contrastive learning [18]. However, such a self-supervised approach relies on data augmentation that perturbs the original circuit into logically equivalent circuits. If the perturbations are not strong and diverse, the model still identifies the functionally equivalent circuits based on the invariant local structure, resulting in a low generalization ability for capturing the underlying functionality. DeepGate [7] leverages the logic-1 probability under random simulation as supervision, which approximates a statistic of the most direct representation of functionality, i.e., the truth table. Despite achieving remarkable progress on testability analysis [5], there are limitations that affect the generalizability of DeepGate to other EDA tasks. We elaborate on DeepGate in the next subsection. ### _DeepGate Framework_ DeepGate [7] is the first circuit representation learning framework that embeds both structural and functional information of digital circuits. The model pre-processes the input circuits into a unified And-Inverter Graph (AIG) format and obtains rich gate-level representations, which can be applied to various downstream tasks. DeepGate treats the logic-1 probability as supervision to learn the functionality. Additionally, DeepGate consists of a GNN equipped with an attention-based aggregation function that propagates gate information in a levelized, sequential manner. The aggregation function learns to assign high attention weights to the controlling fan-ins of gates (e.g., a fan-in gate with logic-0 is a controlling fan-in of an AND gate), mimicking the logic computation process. Although it has been applied to testability analysis [5] and the SAT problem [13], we argue that the model still has two major shortcomings that limit its generalization ability. First, logic probability is not an appropriate supervision for learning functionality.
The most direct representation of functionality is the truth table; however, using it as a training label is impractical due to the prohibitive computational overhead. DeepGate proposes to supervise the model with the proportion of logic-1 entries in the truth table and approximates this proportion by the logic probability obtained through random simulation. However, the logic probability is only statistical information about the functionality, indicating the number of logic-1 values in the truth table rather than which PI assignments lead to logic-1. Consequently, DeepGate cannot differentiate the functional difference between two circuits if they have the same probability. Second, DeepGate is not efficient enough to deal with large circuits. Specifically, DeepGate needs to perform forward and backward message-passing operations for \(20\) rounds to embed rich representations. Fig. 1 illustrates the need for this multi-round GNN design in DeepGate, where the grey nodes represent PIs. The incoming messages of nodes \(5\), \(6\), \(5^{\prime}\), and \(6^{\prime}\) during forward propagation are noted in the figure, where \(h_{i}\) is the embedding vector of node \(i\). Since DeepGate uses the same initial embeddings for all nodes, the messages of nodes \(5\), \(6\), \(5^{\prime}\), and \(6^{\prime}\) in the first forward propagation round are identical. Thus, the model can only distinguish node embeddings based on their connections by repeatedly updating PIs through multiple rounds of forward and backward message propagation. Fig. 1: An example of a reconvergence structure. We emphasize that the limitations of DeepGate come from the lack of effective supervision and a weak model design in which the unique identification of the PIs is ignored. To address these issues, we propose an efficient one-round GNN design that maintains the unique identification of PIs and uses the pairwise truth-table difference of two gates as an effective supervision. ## III Methodology ### _Problem Formulation_ The circuit representation learning model aims to map both circuit structure and functionality into an embedding space, where structure refers to the interconnection of logic gates and functionality refers to the logical computational mapping from inputs to outputs. We conclude that the previous models still lack the ability to capture functional information. In this paper, we propose to improve the previous DeepGate model [7] so that circuits with similar functionality are represented by similar embedding vectors. In other words, these circuit representations should have a short distance in the embedding space. We take Circuits A, B, C, and D as examples in Fig. 2, where all of them have similar topological structures. Since Circuits A, B and C have the same logic probability, DeepGate [7] tends to produce similar embeddings for these three circuits. Hence, it is hard to identify logically equivalent circuits with DeepGate. Although FGNN [8] is trained to classify logically equivalent and inequivalent circuits by contrastive learning, it cannot differentiate relative similarity. As shown in the embedding space, the distance between A and B is equal to the distance between A and D. Nonetheless, as indicated in the truth table, Circuit A is equivalent to Circuit C, similar to Circuit B (with only \(2\) different bits), but dissimilar to Circuit D (with \(5\) different bits).
We expect the model to bring together or separate the circuits in the embedding space according to their truth tables (Fig. 2: Problem statement: the embedding vectors should be close if the circuit functions are similar). Therefore, the expected DeepGate2 model not only identifies logically equivalent nodes, but also predicts the functional similarity. Thus, we can apply such a functionality-aware circuit learning model to benefit real-world applications. ### _Dataset Preparation_ To train the circuit representation learning model, we first need to find a supervision signal that contains rich functional information and to prepare an effective dataset. The _truth table_, which records the complete logic computational mapping, provides the most direct supervision. However, the length of the truth table increases exponentially with the number of primary inputs, and obtaining a complete truth table requires a prohibitive amount of time. Therefore, a reasonable supervision signal should be easy to obtain and closely related to the truth table. Firstly, we use the Hamming distance between the truth tables of two logic gates as supervision. That is, in a way similar to metric learning [19], we map nodes to an embedding space and require that the distance between the embedding vectors is positively correlated with the Hamming distance between the truth tables. Formally, denoting the truth table vector of node \(i\) by \(T_{i}\) and the embedding vector of node \(i\) by \(h_{i}\), \[distance(h_{i},h_{j})\propto distance(T_{i},T_{j}) \tag{1}\] Secondly, to improve data efficiency, we regard each logic gate in a circuit as a new circuit (logic cone) with the current gate as output and the original PIs as inputs. By parsing a single original circuit, we obtain a large number of new circuits. Therefore, the task of graph-level learning becomes the task of learning node-level representations, and the difficulty of data collection is reduced. Thirdly, to ensure the quality of the sample pairs and to limit their number, we impose the following constraints when sampling node pairs (a sketch of this filtering is given at the end of this subsection): (1) The logic cones of the two nodes should have the same PIs, which is a necessary condition for comparing the truth table difference. (2) The logic probabilities, i.e., the percentage of logic-1 entries in the truth table, should be similar (within \(5\%\)); if the logic probabilities of two nodes are inconsistent, their functions are certainly inconsistent, whereas if the logic probabilities are consistent, their functions may be consistent. (3) The difference in logic levels between the two nodes should be within \(5\), because when the two nodes are far apart, their functions are unlikely to be correlated. (4) We only consider the extreme cases, namely pairs whose truth table difference is within \(20\%\) or above \(80\%\). We do not perform complete simulation, but set a maximum simulation time and take the recorded responses of each node as an incomplete truth table. It should be noted that we utilize the And-Inverter Graph (AIG) as the circuit netlist format, which is composed only of AND gates and NOT gates. Any other logic gate, including OR, XOR and MUX, can be transformed into a combination of AND and NOT gates in linear time.
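The pair-sampling constraints (2)-(4) above translate directly into a filter over candidate node pairs. Below is a minimal Python sketch of this procedure; the container names (`tt`, `level`, `prob`) are hypothetical, and constraint (1) is assumed to hold by construction because every logic cone is defined over the circuit's full PI set.

```python
from itertools import combinations

def hamming_frac(t1, t2):
    """Fraction of differing bits between two (incomplete) truth tables."""
    return sum(b1 != b2 for b1, b2 in zip(t1, t2)) / len(t1)

def sample_pairs(nodes, tt, level, prob):
    """Filter node pairs by constraints (2)-(4).

    tt[i]    : simulated response bits of node i (incomplete truth table)
    level[i] : logic level of node i
    prob[i]  : logic-1 probability of node i under random simulation
    """
    pairs = []
    for i, j in combinations(nodes, 2):
        if abs(prob[i] - prob[j]) > 0.05:        # (2) similar logic probability
            continue
        if abs(level[i] - level[j]) > 5:         # (3) close logic levels
            continue
        d = hamming_frac(tt[i], tt[j])
        if d <= 0.2 or d >= 0.8:                 # (4) keep only extreme cases
            pairs.append((i, j, d))
    return pairs
```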
### _Functionality-Aware Loss Function_ The primary objective of our proposed functionality-aware circuit learning model is to learn node embeddings such that two embedding vectors are similar if the functions of the corresponding nodes are similar. Having sampled the node pairs \(\mathcal{N}\) as in Section III-B, we obtain the Hamming distance of the truth tables, \(D^{T}\), for each node pair: \[D^{T}_{(i,j)}=\frac{HammingDistance(T_{i},T_{j})}{length(T_{i})},(i,j)\in \mathcal{N} \tag{2}\] According to Eq. (1), the distance between embedding vectors, \(D^{H}\), should be proportional to the Hamming distance of the truth tables, \(D^{T}\). We define the distance between embedding vectors in Eq. (3), where it is calculated based on cosine similarity. In other words, the similarity of the embedding vectors, \(S_{(i,j)}\), should be negatively related to the distance \(D^{T}_{(i,j)}\). \[\begin{split} S_{(i,j)}&=CosineSimilarity(h_{i},h_{j})\\ D^{H}_{(i,j)}&=1-S_{(i,j)}\end{split} \tag{3}\] Therefore, the training objective is to minimize the difference between \(D^{H}\) and \(D^{T}\). We propose the functionality-aware loss function \(L_{func}\) as below. \[\begin{split} D^{T^{\prime}}_{(i,j)}&=ZeroNorm(D^{T}_{(i,j)})\\ D^{H^{\prime}}_{(i,j)}&=ZeroNorm(D^{H}_{(i,j)})\\ L_{func}&=\sum_{(i,j)\in\mathcal{N}}(L1Loss(D^{T^{\prime}}_{(i,j)},D^{H^{\prime}}_{(i,j)}))\end{split} \tag{4}\]
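To make Eqs. (2)-(4) concrete, the following is a minimal PyTorch sketch of the functionality-aware loss over a batch of sampled pairs. Since \(ZeroNorm\) is not spelled out above, it is interpreted here as zero-mean, unit-variance normalization over the batch, which is an assumption of this sketch.

```python
import torch
import torch.nn.functional as F

def zero_norm(x, eps=1e-8):
    # Assumed interpretation of ZeroNorm: zero-mean, unit-variance over the batch.
    return (x - x.mean()) / (x.std() + eps)

def functionality_loss(hf, pairs, d_tt):
    """Functionality-aware loss of Eqs. (2)-(4).

    hf    : (num_nodes, dim) functional embeddings
    pairs : (num_pairs, 2) long tensor of sampled node indices (i, j)
    d_tt  : (num_pairs,) normalized truth-table Hamming distances D^T
    """
    hi, hj = hf[pairs[:, 0]], hf[pairs[:, 1]]
    sim = F.cosine_similarity(hi, hj, dim=-1)   # S_(i,j), Eq. (3)
    d_emb = 1.0 - sim                           # D^H_(i,j), Eq. (3)
    # Eq. (4); the mean reduction differs from the written sum only by a
    # constant factor that can be absorbed into the loss weight.
    return F.l1_loss(zero_norm(d_emb), zero_norm(d_tt))
```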
### _One-round GNN Model_ In this subsection, we propose a GNN model that can capture both functional and structural information for each logic gate through a single round of forward propagation. First, we propose to separate the functional embeddings \(hf\) from the structural embeddings \(hs\), and to initialize them in different ways. We assign uniform initial functional embeddings to the primary inputs (PIs), as they all have the same logic probability under random simulation. In contrast, we design a PI encoding (PIE) strategy that assigns a unique identifier to each PI as its initial structural embedding. Specifically, the initial PI structural embeddings \(hs_{i},i\in PI\), are orthogonal vectors. This means that the dot product of the embeddings of any two PIs is zero. Second, we design four aggregators: \(aggr^{s}_{AND}\) aggregates the message for the structural embedding \(hs\) of an AND gate, \(aggr^{f}_{AND}\) aggregates the message for the functional embedding \(hf\) of an AND gate, and \(aggr^{s}_{NOT}\) and \(aggr^{f}_{NOT}\) update \(hs\) and \(hf\) of a NOT gate, respectively. We implement each aggregator using the self-attention mechanism [20], as the output of a logic gate is determined by the controlling values of its fan-in gates. For example, an AND gate must output logic-0 if any of its fan-in gates has logic-0. By employing the attention mechanism, the model learns to assign greater importance to the controlling inputs [7]. As illustrated in Eq. (5), \(w_{q}\), \(w_{k}\) and \(w_{v}\) are three weight matrices and \(d\) is the dimension of the embedding vectors \(h\). \[\alpha_{j} =softmax(\frac{w_{q}^{\top}h_{i}\cdot(w_{k}^{\top}h_{j})^{\top}}{\sqrt{d}}) \tag{5}\] \[m_{j} =w_{v}^{\top}h_{j}\] \[h_{i} =aggr(h_{j}|j\in\mathcal{P}(i))=\sum_{j\in\mathcal{P}(i)}(\alpha_{j}*m_{j})\] Third, during forward propagation, the structural embeddings are updated only with the structural embeddings of the predecessors, as shown in Eq. (6), where gate \(a\) is an AND gate and gate \(b\) is a NOT gate. \[hs_{a} =aggr^{s}_{AND}(hs_{j}|j\in\mathcal{P}(a)) \tag{6}\] \[hs_{b} =aggr^{s}_{NOT}(hs_{j}|j\in\mathcal{P}(b))\] At the same time, a gate's function is determined by the functions and the structural correlations of its fan-in gates. Therefore, the functional embeddings are updated as in Eq. (7). \[hf_{a} =aggr^{f}_{AND}([hs_{j},hf_{j}]|j\in\mathcal{P}(a)) \tag{7}\] \[hf_{b} =aggr^{f}_{NOT}([hs_{j},hf_{j}]|j\in\mathcal{P}(b))\] As shown in Fig. 3, the GNN propagation process proceeds from the PIs to the POs level by level. For a node at level \(l\), its structural embedding \(hs_{L_{l}}\) is updated with the structural embeddings of the nodes at level \(l-1\). Additionally, its functional embedding \(hf_{L_{l}}\) is updated with both the structural embeddings \(hs_{L_{l-1}}\) and the functional embeddings \(hf_{L_{l-1}}\). The GNN propagation completes after processing all \(N\) levels.
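A minimal PyTorch sketch of this one-round model is given below: the attention aggregator of Eq. (5) and the levelized dual-stream update of Eqs. (6)-(7). Taking the query from the node's current embedding, using identity-matrix rows as the orthogonal PI structural embeddings, and the unbatched per-gate loop are simplifying assumptions of this sketch, not details fixed by the paper.

```python
import torch
import torch.nn as nn

class GateAggregator(nn.Module):
    """Self-attention aggregator of Eq. (5) over a gate's fan-in messages."""
    def __init__(self, q_dim, kv_dim, dim):
        super().__init__()
        self.wq = nn.Linear(q_dim, dim, bias=False)   # w_q
        self.wk = nn.Linear(kv_dim, dim, bias=False)  # w_k
        self.wv = nn.Linear(kv_dim, dim, bias=False)  # w_v
        self.scale = dim ** -0.5

    def forward(self, h_self, h_fanin):
        # h_self: (q_dim,) query; h_fanin: (num_fanin, kv_dim) keys/values
        alpha = torch.softmax(self.wq(h_self) @ self.wk(h_fanin).T * self.scale, dim=-1)
        return alpha @ self.wv(h_fanin)

def one_round(levels, fanins, gtype, hs, hf, agg):
    """Levelized single-round forward pass implementing Eqs. (6)-(7).
    In-place writes keep the sketch short; a trainable version would
    build fresh tensors per level."""
    for lv in levels[1:]:                        # level 0 holds the PIs
        for g in lv:
            p = torch.as_tensor(fanins[g])
            hs[g] = agg[("s", gtype[g])](hs[g], hs[p])                           # Eq. (6)
            hf[g] = agg[("f", gtype[g])](hf[g], torch.cat([hs[p], hf[p]], -1))  # Eq. (7)
    return hs, hf

dim, n_pi, n_nodes = 64, 4, 6
hs = torch.zeros(n_nodes, dim)
hs[:n_pi] = torch.eye(n_pi, dim)                 # orthogonal PIE: identity rows
hf = torch.zeros(n_nodes, dim)
hf[:n_pi] = 1.0                                  # uniform functional init for PIs
agg = {("s", "AND"): GateAggregator(dim, dim, dim),
       ("s", "NOT"): GateAggregator(dim, dim, dim),
       ("f", "AND"): GateAggregator(dim, 2 * dim, dim),
       ("f", "NOT"): GateAggregator(dim, 2 * dim, dim)}
levels = [[0, 1, 2, 3], [4], [5]]                # toy AIG: 4 = AND(0, 1), 5 = AND(4, 2)
fanins, gtype = {4: [0, 1], 5: [4, 2]}, {4: "AND", 5: "AND"}
hs, hf = one_round(levels, fanins, gtype, hs, hf, agg)
```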
### _Model Training Strategies_ To train the model, we employ a multi-stage training strategy, similar to training a model on an easy task and then on a harder task in curriculum learning [21]. During each stage, we train the model with multiple supervisions in a multi-task learning manner [22]. In the first stage, we train the one-round GNN model with two simple tasks. Task 1 involves predicting the logic probability, while Task 2 entails identifying the structural correlation. To achieve this, we read out the functional embedding \(hf_{i}\) to predict the logic probability \(\hat{P}_{i}\) with a multi-layer perceptron (MLP), denoted as \(MLP_{prob}\). In addition, we utilize the structural embeddings \(hs_{i}\) and \(hs_{j}\) to predict whether node \(i\) and node \(j\) can be reconvergent with \(MLP_{rc}\). \[\hat{P}_{i} =MLP_{prob}(hf_{i}) \tag{8}\] \[\hat{R}_{(i,j)} =MLP_{rc}(hs_{i},hs_{j})\] We define the loss function for Task 1 in Eq. (9), where \(P_{i}\) is the ground-truth logic probability obtained through random simulation. \[L_{prob}=L1Loss(P_{i},\hat{P}_{i}) \tag{9}\] Besides, we define the loss function for Task 2 in Eq. (10). The binary ground truth, denoted as \(R_{(i,j)}\), indicates whether node pair \(i\) and \(j\) have a common predecessor. \[L_{rc}=BCELoss(R_{(i,j)},\hat{R}_{(i,j)}) \tag{10}\] Consequently, the loss function for Stage 1 is presented in Eq. (11), where \(w_{prob}\) and \(w_{rc}\) are the weights for Task 1 and Task 2, respectively. \[L_{stage1}=L_{prob}\times w_{prob}+L_{rc}\times w_{rc} \tag{11}\] The second training stage involves a third, more difficult task: Task 3, functionality-aware learning, as described in Section III-C. The loss function for Stage 2 is defined below, where \(w_{func}\) represents the loss weight of Task 3. \[L_{stage2}=L_{prob}\times w_{prob}+L_{rc}\times w_{rc}+L_{func}\times w_{func} \tag{12}\] Overall, the model learns to differentiate gates with varying probability in Stage 1. As logically equivalent pairs only occur among nodes with the same probability, the model in Stage 2 learns to predict the functional similarity within each probability-equivalence class. The effectiveness of the above training strategies is demonstrated in Section IV-E. ## IV Experiments In this section, we demonstrate the ability of our proposed DeepGate2 to learn functionality-aware circuit representations. Firstly, Section IV-A provides the preliminaries of our experiments, including details on dataset preparation, evaluation metrics and model settings. Secondly, we compare the effectiveness and efficiency of our DeepGate2 against DeepGate [7] and FGNN [8] on two function-related tasks: logic probability prediction (see Section IV-B) and logic equivalence gate identification (see Section IV-C). Thirdly, we investigate the effectiveness of the model design and the training strategies in Section IV-D and Section IV-E, respectively. ### _Experiment Settings_ #### IV-A1 Dataset Preparation We use the circuits in DeepGate [7], which are extracted from ITC'99 [23], IWLS'05 [24], EPFL [25] and OpenCore [26]. These circuits consist of \(10,824\) AIGs with sizes ranging from \(36\) to \(3,214\) logic gates. To obtain the incomplete truth tables, we generate \(15,000\) random patterns and record the corresponding responses. Following the data preparation method described in Section III-B, we construct a dataset comprising \(894,151\) node pairs. We create 80/20 training/test splits for model training and evaluation. Fig. 3: One-round GNN propagation process. #### IV-A2 Evaluation Metrics We assess our DeepGate2 with two tasks. The first task is to predict the logic probability of each logic gate. We calculate the average prediction error (PE) as in Eq. (13), where the set \(\mathcal{V}\) includes all logic gates. \[PE=\frac{1}{|\mathcal{V}|}\sum_{i\in\mathcal{V}}|P_{i}-\hat{P}_{i}| \tag{13}\] The second task is to identify the logically equivalent gates within a circuit. A gate pair \((i,j)\) is considered a positive pair if the two logic gates \(i\) and \(j\) have the same function, i.e., the pairwise Hamming distance of their truth tables \(D_{(i,j)}^{T}=0\). If the similarity \(S_{(i,j)}\) between the two embedding vectors \(hf_{i}\) and \(hf_{j}\) exceeds a certain threshold, the model recognizes the gate pair \((i,j)\) as equivalent. The optimal threshold \(\theta\) is determined based on the receiver operating characteristic (ROC). The evaluation metrics are formally defined in Eq. (14), where \(TP\), \(TN\), \(FP\), \(FN\) are the true positive, true negative, false positive and false negative rates, respectively, and \(M\) is the total number of gate pairs. In the following experiments, the performance on logic equivalence gate identification is measured in terms of Recall, Precision, F1-score and area under the curve (AUC). \[TP=\frac{\sum((D_{(i,j)}^{T}==0)\ \&\ (S_{(i,j)}>\theta))}{M} \tag{14}\] \[TN=\frac{\sum((D_{(i,j)}^{T}>0)\ \&\ (S_{(i,j)}<\theta))}{M}\] \[FP=\frac{\sum((D_{(i,j)}^{T}>0)\ \&\ (S_{(i,j)}>\theta))}{M}\] \[FN=\frac{\sum((D_{(i,j)}^{T}==0)\ \&\ (S_{(i,j)}<\theta))}{M}\] We conduct the following performance comparisons on \(10\) industrial circuits, with circuit sizes ranging from \(3.18k\) to \(40.50k\) gates. #### IV-A3 Model Settings In the one-round GNN model configuration, the dimension of both the structural embedding \(hs\) and the functional embedding \(hf\) is \(64\). Both \(MLP_{prob}\) and \(MLP_{rc}\) contain \(1\) hidden layer with \(32\) neurons and a ReLU activation function. The other models [7, 8] mentioned in the following experiments keep their original settings and are trained until they converge. We train all models for \(80\) epochs with batch size \(16\) on a single Nvidia V100 GPU. We adopt the Adam optimizer [27] with learning rate \(10^{-4}\) and weight decay \(10^{-10}\). ### _Comparison with DeepGate on Probability Prediction_ We compare the probability prediction error (PE, see Eq. (13)) and runtime (Time) with the previous DeepGate. The previous DeepGate is denoted as _DeepGate_ and our proposed model with the novel loss function and GNN design is named _DeepGate2_ in Table I. Based on the results presented in the table, we make two observations. First, our proposed DeepGate2 exhibits more accurate predictions of the logic probability compared to the previous version. On average, the probability prediction error (PE) of DeepGate2 is \(13.08\%\) lower than that of DeepGate.
This suggests that using the novel model architecture with the embedding initialization strategy benefits logic representation learning and leads to better results in logic probability prediction. Second, our DeepGate2 is more efficient than DeepGate. Taking circuit D1 as an example, DeepGate requires \(36.89\) seconds for inference, but our DeepGate2 only needs \(2.23\) s, which is \(16.56\)x faster than the previous DeepGate. Moreover, compared to the previous model, DeepGate2 achieves an order of magnitude speedup (\(16.43\)x on average) in model runtime. This is attributed to the fact that the GNN model in DeepGate relies on \(10\) forward and \(10\) backward message-propagation rounds, whereas the proposed one-round GNN model in DeepGate2 performs forward propagation only once. Therefore, the new circuit representation learning model is more effective and efficient than DeepGate, and demonstrates generalization ability on large-scale circuits. Fig. 4: Functionality-aware circuit learning framework. ### _Comparison with other Models on Logic Equivalence Gate Identification_ This section compares the functionality-aware accuracy, as defined in Section IV-A2, of DeepGate2 with that of two other models: DeepGate [7] and FGNN [8]. The DeepGate [7] model treats the logic probability as supervision, since it contains statistical information about the truth table. FGNN [8] is trained to differentiate between logically equivalent and inequivalent circuits using contrastive learning. Table II presents the performance of the three models on the task of logic equivalence gate identification. Firstly, our proposed approach outperforms the other two models on all circuits with an average F1-score of \(0.9434\), while DeepGate and FGNN only achieve F1-scores of \(0.6778\) and \(0.4402\), respectively. For instance, on circuit D7, our proposed functionality-aware circuit learning approach achieves an F1-score of \(0.9831\) and accurately identifies \(99.15\%\) of the logic equivalence gate pairs with a precision of \(97.48\%\), indicating a low false positive rate. In contrast, DeepGate only achieves an F1-score of \(0.6778\), while FGNN fails on most of the pairs. Secondly, although DeepGate has an average recall of \(91.46\%\), its precision is only \(54.00\%\), indicating a large number of false positive identifications. This is because DeepGate can only identify logically equivalent pairs by predicting the logic probability, which leads to the incorrect identification of gate pairs with merely similar logic probability. According to a further experiment, in \(80.83\%\) of the false positive pairs, the model incorrectly identifies gate pairs with similar logic probability as functionally equivalent. Thirdly, FGNN achieves the lowest performance of the three models, with an F1-score of only \(0.4402\). The poor performance of FGNN is attributed to the lack of effective supervision. While FGNN learns to identify logically equivalent circuits generated by slightly perturbing local structures, the model tends to consider circuits with similar structures to have the same functionality. However, in the validation dataset and in practical applications, two circuits may have the same function even if their topological structures are extremely different. Therefore, the self-supervised approach limits the effectiveness of FGNN in identifying logic equivalence gates.
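As a concrete reading of Eq. (14), the sketch below applies the similarity threshold \(\theta\) to the predicted pair similarities and computes Recall, Precision and F1-score; the array names are illustrative, and the sketch assumes at least one true and one predicted positive pair.

```python
import numpy as np

def equivalence_metrics(sim, d_tt, theta):
    """sim: cosine similarities S_(i,j); d_tt: truth-table distances D^T;
    theta: decision threshold chosen from the ROC."""
    pos = d_tt == 0                  # truly equivalent pairs
    pred = sim > theta               # pairs predicted equivalent
    tp = np.sum(pos & pred)          # counts; dividing by len(sim) gives
    fp = np.sum(~pos & pred)         # the rates written in Eq. (14)
    fn = np.sum(pos & ~pred)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, f1
```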
### _Effectiveness of the PI Encoding Strategy_ To demonstrate the effectiveness of our proposed PI encoding (PIE) strategy, we train another model without assigning unique identifiers to the PIs, which we refer to as _w/o PIE_. The results are presented in Table III, which shows that disabling the PIE reduces the F1-score of identifying logic equivalence gates from \(0.9434\) to \(0.7541\), an average reduction of \(20.07\%\). This reduction can be attributed to the fact that, as demonstrated by the failure case in Section II and Fig. 1, the one-round GNN model without the PIE strategy cannot model the structural information of the circuit. More specifically, the accuracy of the reconvergence structure identification task with the w/ PIE model is \(93.22\%\), while the w/o PIE model only achieves \(74.56\%\). The functionality of a logic gate is affected both by the functionality of its fan-in gates and by whether there is reconvergence among its fan-in gates. If the reconvergence structure cannot be accurately identified, node functionality cannot be modeled accurately. ### _Effectiveness of Training Strategies_ To investigate the effectiveness of our multi-stage training strategy, we train another model (denoted as the w/o multi-stage model) with all loss functions in a single stage, instead of adding the functionality-aware loss function in the second stage. The original model with the multi-stage training strategy is denoted as the w/ multi-stage model. The w/ multi-stage model learns to predict the logic probability and the structural correlation in the first stage, and learns the more difficult task of predicting functionality in the second stage. The results are shown in Table IV, where the w/ multi-stage model achieves an F1-score of \(0.9434\) on average and the w/o multi-stage model achieves only \(0.7137\). We analyze the reason as follows. The cost of comparing each pair of logic gates in the functionality learning task is extremely high, being proportional to the square of the circuit size. We limit the dataset and train the model to learn functional similarity only among pairs with similar logic probability, which is a necessary condition for functional equivalence. Therefore, without the multi-stage strategy, the model cannot be effectively supervised with this simplified dataset, leading to poor performance in learning functionality. As shown in Table V, the differences between the two models in the loss values for predicting the logic probability (\(L_{prob}\)) and identifying reconvergence structures (\(L_{rc}\)) are not significant, indicating that they perform similarly on these two tasks. However, the w/ multi-stage model outperforms the w/o multi-stage model in the functionality learning task, with a significantly lower \(L_{func}\) of \(0.0594\), which is \(51.47\%\) smaller than that of the latter. ## V Downstream Tasks In this section, we combine our DeepGate2 with open-source EDA tools and apply our model to practical EDA tasks: logic synthesis and Boolean satisfiability (SAT) solving. Logic synthesis tools aim to identify logic equivalence gates as quickly as possible. In Section V-A, our proposed functionality-aware circuit learning model provides the logic synthesis tool with guidance about logic similarity.
Additionally, in Section V-B, we apply the learnt functional similarity to SAT solving, where variables with dissimilar functionality are assigned the same decision value. This approach efficiently shrinks the search space by enabling solvers to encounter conflicts earlier. ### _Logic Synthesis_ This subsection shows the effectiveness of our proposed functionality-aware circuit learning framework in SAT sweeping [28], a common logic synthesis technique. Fig. 5 illustrates the components of a typical ecosystem for a SAT-sweeping engine (also called a SAT sweeper), including the _equivalence class (EC) manager_, the _SAT-sweeping manager_, the _simulator_, and the _SAT solver_. All computations are coordinated by the SAT-sweeping manager [29]. The SAT sweeper starts by computing candidate ECs using several rounds of initial simulation and stores the ECs in the EC manager. In the next step, the SAT-sweeping manager selects two gates within an EC and calls the SAT solver to check whether they are equivalent. If so, the EC manager merges the two gates. Otherwise, the SAT solver returns a satisfying assignment as a counterexample, which is used in incremental simulation to refine the candidate ECs. To the best of our knowledge, most SAT-sweeping managers select ECs based only on the circuit structure, without an efficient heuristic strategy that considers the functionality of the candidate gates. We introduce this functional information into the SAT-sweeping manager to further improve efficiency. #### V-A1 Experiment Settings We integrate our DeepGate2 into the SAT sweeper to guide EC selection. To be specific, the updated manager sorts all candidate equivalence classes by computing the cosine similarity of their embeddings. Unlike traditional SAT sweepers, our proposed SAT sweeper does not need to validate the equivalence of all candidate ECs in one pass. Instead, node pairs with higher similarity have higher priority for SAT solver calls. If a gate pair is formally proved to be equivalent, the two gates are merged. Otherwise, the generated counterexample should contain more conflicts than in the baseline method, resulting in better efficiency for refining the candidate ECs. Our model is incorporated into ABC [30] as a plug-in and integrated into the SAT sweeper '&fraig' [14], which is one of the most efficient and scalable SAT sweepers publicly available at this time. The AIGs derived by merging the resulting equivalent nodes are verified with the '&cec' command in ABC to ensure functional correctness. All experiments are performed on a 2.40 GHz Intel(R) Xeon(R) Silver 4210R CPU with 64 GB of main memory. A single core and less than 1 GB of memory were used for every test case considered in this subsection. The proposed SAT sweeper (denoted _Our_) is compared against the original engine, _&fraig_. #### V-A2 Results We validate the performance of our SAT sweeper on 6 industrial circuits. As shown in Table VI, the section "Statistics" lists the number of PIs and POs (PI/PO), the logic levels (Lev) and the internal AND-nodes in the original AIG (And). To ensure the fairness of the comparison, the circuits after sweeping should have the same size. The section "SAT calls" lists the number of satisfiable SAT calls performed by the solver employed in each engine. The data show that our proposed engine decreases the number of satisfiable SAT calls, which explains why it obtains better results, since more resources are used to prove equivalent gates. In addition, the section "Total runtime" compares the runtime, and the section "Red." shows the runtime reduction from &fraig to Our.
The number of "SAT calls" can get an average reduction of \(53.37\%\) (\(95.88\%\) maximum) through the integration of our DeepGate2 model. As for "Total runtime", this experiment shows that the DeepGate2-based SAT sweeper outperforms state-of-the-art engines, while reducing the average runtime by \(49.46\%\) (\(57.77\%\) maximum). Thus, the sweeper formally verifies the equivalence of func Fig. 5: The proposed SAT-sweeping ecosystem. with the guidance from DeepGate2, thereby reducing the number of invalid SAT calls and improving efficiency of SAT sweeping. Take C1 as a representative example, the baseline &fraig selects gates in EC for formal verification without considering their behaviour, thus, many solver calls return satisfiable results, and few gates can be merged. However, with the guidance of DeepGate2, the sweeper can prioritize the selection of gates with similar behavior, resulting in a significant reduction of \(95.88\%\) in SAT calls and \(57.77\%\) in runtime. ### _Boolean Satisfiability Solving_ Boolean satisfiability (SAT) solving is a long-standing and fundamental NP-complete problem with applications in many areas, especially in electronic design automation (EDA) [31, 32, 33]. The existing SAT solvers are designed to incorporate efficient heuristics [34, 35, 36] to expedite the solving process. For instance, [34] proposes to utilize the correlation of logic gate functionality to enforce variable decision for solving circuit-based SAT instances. Although the solution achieves remarkable speedup over SAT solvers, it still relies on the time-consuming logic simulation to obtain the functionality. Based on [34], we demonstrate how the DeepGate2 models functional correlation efficiently and accelerates SAT solving. #### V-B1 Experiment Settings We integrate our DeepGate2 into a modern SAT solver, CaDiCal [15] to solve the instances from logic equivalence checking (LEC) task. Firstly, we obtain gate-level embeddings of the original circuit and predict the pairwise functional similarity between these gates. Given the one-to-one mapping [37] between circuits and conjunctive normal form (a problem format required by SAT solvers), we can easily transfer the gate functional similarity to variable behavioral similarity. If two logic gates have similar representations (and therefore similar functionality), their corresponding variables should be correlated and grouped together during the variable decision process. Secondly, we incorporate the learnt knowledge into the SAT solver. As shown in Algorithm 1, when the current variable \(s\) is assigned a value \(v\), we identify all unassigned variables \(s^{\prime}\) in the set \(\mathcal{S}\) that contains correlated variables with \(s\). As modern SAT solvers reduce searching space by detecting conflicts as much as possible [38], we assign the reverse value \(\bar{v}\) to \(s^{\prime}\) to promptly cause conflict for joint decision. Besides, the threshold \(\delta\) in Algorithm 1 is set to \(1e-5\). Thirdly, to evaluate the efficacy of our model in accelerating SAT solving, we compare the aforementioned hybrid solver (labeled as Our) with original CaDiCal [15] (labeled as Baseline) on \(5\) industrial instances. All experiments are conducted with a single 2.40GHz Intel(R) Xeon(R) E5-2640 v4 CPU. #### V-B2 Results The runtime comparison between Baseline and Our are listed in Table VII. To ensure a fair comparison, we aggregate the DeepGate2 model inference time (Model) and SAT solver runtime (Solver) as the Overall runtime. 
#### V-B2 Results

The runtime comparison between Baseline and Our is listed in Table VII. To ensure a fair comparison, we aggregate the DeepGate2 model inference time (Model) and the SAT solver runtime (Solver) into the Overall runtime. We have the following observations. First, our method achieves a substantial reduction in total runtime for all test cases, with an average runtime reduction of \(40.05\%\). Taking I1 as an example, the plain solver requires \(88.01\)s to solve the problem, but by combining it with our model, the new solver produces results in only \(32.02\)s, reducing the runtime by \(63.62\%\). Second, our model only takes a few seconds to obtain embeddings, occupying less than \(10\%\) of the overall runtime on average. It should be noted that DeepGate2 inference runs in polynomial time in the size of the instance. Third, while the two largest instances, I4 and I5, show less reduction than the others, this does not necessarily mean that our model is unable to generalize to larger instances. As evidenced by the results for I2, an instance of similar size to I4 and I5 can also demonstrate a significant reduction. The reduction achieved by our model appears to be determined by the characteristics of the instance. In summary, our model is effective in speeding up downstream SAT solving.

## VI Conclusion

This paper introduces DeepGate2, a novel functionality-aware framework for circuit representation learning. Our approach leverages the pairwise truth table differences of logic gates as a supervisory signal, providing rich functionality supervision and proving scalable for large circuits. Moreover, DeepGate2 differentiates and concurrently updates structural and functional embeddings in two dedicated flows, acquiring comprehensive representations through a single round of GNN forward message-passing. In comparison to its predecessor, DeepGate2 demonstrates enhanced performance in logic probability prediction and logic-equivalent gate identification, while simultaneously improving model efficiency tenfold. The applications of DeepGate2 to multiple downstream tasks further demonstrate its effectiveness and potential utility in the EDA field.
2305.19040
Motivation and needs of informal physics practitioners
Physicists engage with the public to varying degrees at different stages of their careers. However, their public engagement covers many activities, events, and audiences, making their motivations and professional development needs not well understood. As part of ongoing efforts to build and support community in the informal physics space, we conducted interviews with physicists with a range of different experiences in public engagement. We use personas methodology and self-determination theory to articulate their public engagement motivation, challenges, and needs. We present our set of three personas: the physicist who engages in informal physics for self-reflection, the physicist who wants to spark interest and understanding in physics, and the physicist who wants to provide diverse role models to younger students and inspire them to pursue a STEM career. Needs covered a range of resources including science communication training, community building among informal physics practitioners, and mechanisms to recognize, elevate and value informal physics. By bringing user-centered design methodology to a new topical area of physics education research, we expand our understanding of motivations and needs of practitioners in physics public engagement. Therefore, departments, organizations and institutions could draw upon the personas developed to consider the ways to better support physicists in their respective environment.
Shams El-Adawy, Alexandra C. Lau, Eleanor C. Sayre, Claudia Fracchiolla
2023-05-30T13:58:11Z
http://arxiv.org/abs/2305.19040v1
# Motivation and needs of informal physics practitioners

###### Abstract

Physicists engage with the public to varying degrees at different stages of their careers. However, their public engagement covers many activities, events, and audiences, making their motivations and professional development needs not well understood. As part of ongoing efforts to build and support community in the informal physics space, we conducted interviews with physicists with a range of different experiences in public engagement. We use persona methodology and self-determination theory to articulate their public engagement motivation, challenges, and needs. We present our set of three personas: the physicist who engages in informal physics for self-reflection, the physicist who wants to spark interest and understanding in physics, and the physicist who wants to provide diverse role models to younger students and inspire them to pursue a STEM career. Needs covered a range of resources including science communication training, community building among informal physics practitioners, and mechanisms to recognize, elevate and value informal physics. By bringing user-centered design methodology to a new topical area of physics education research, we expand our understanding of motivations and needs of practitioners in physics public engagement. Therefore, departments, organizations and institutions could draw upon the personas developed to consider the ways to better support physicists in their respective environment.

## I Introduction

Informal physics education refers to activities and events centered on engagement with physics outside the formal classroom. Public engagement has been defined as encompassing "the myriad of ways in which the activity and benefits of higher education and research can be shared with the public. Engagement is by definition a two-way process, involving interaction and listening, with the goal of generating mutual benefit" [1]. We refer to informal physics and public engagement interchangeably, as informal physics activities play an important role in the public's general understanding of physics and science. Many types of activities, platforms and programs fall under informal physics education, such as after-school programs, public talks, demonstration presentations, open houses, science festivals, planetariums, social media, websites, popularized books, movies and games [2]. While many of these activities can be specific to physics and astronomy, some of them include a broader sense of education across science fields or all of STEM. Despite the wide variety of possible activities, a common characteristic they share is that participation is voluntary and activities are meant to provide participants the freedom to explore and be curious about how the world works. Research in informal physics, often referred to as IPER, has focused on physics identity development, development of informal education programs, skill development for facilitators, impact of engagement in informal physics on audiences and the landscape of practices undertaken in this space [3]. Research shows that participation in informal physics programs significantly enhances facilitators' communication skills, teamwork capacity and confidence [4; 5; 6]. Moreover, participation in these programs has the added benefit of increasing sense of belonging to the field of physics for both facilitators and audience.
In particular, for individuals from underrepresented populations, engagement with physics in these informal spaces allows them to develop their physics identity as they bring their whole selves to these spaces [7; 8; 9; 10; 11]. In turn, informal physics increases the interest in and relevance of physics and science as a potential career path [12]. Furthermore, informal education programs provide opportunities for significant numbers of individuals in various geographic locations and diverse demographics to hear and engage with physics and physicists [13]. The dimensions at play in informal physics programs are varied, rich and nuanced. In a study about the landscape of informal physics, Izadi _et al._ provide an overview of all possible components of informal programs: personnel (volunteers and paid staff), resources (funding and
2302.04908
Certified simultaneous isotopic approximation of curves via subdivision
We present a certified algorithm based on subdivision for computing an isotopic approximation to any number of curves in the plane. Our algorithm is based on the certified curve approximation algorithm of Plantinga and Vegter. The main challenge in this algorithm is to correctly and efficiently identify and isolate all intersections between the curves. To overcome this challenge, we introduce a new and simple test that guarantees the global correctness of our output. A main step in our algorithm for approximating any number of curves is to correctly approximate a pair of curves. In addition to developing the details of this special case, we provide complexity analyses for both the number of steps and the bit-complexity of this algorithm using both worst-case bounds as well as those based on continuous amortization.
Michael Burr, Michael Byrd
2023-02-09T19:30:05Z
http://arxiv.org/abs/2302.04908v2
# Certified simultaneous isotopic approximation of pairs of curves via subdivision

###### Abstract.

We present a certified algorithm based on subdivision for computing an isotopic approximation to a pair of curves in the plane. Our algorithm is based on the certified curve approximation algorithm of Plantinga and Vegter. The main challenge in this computation is to correctly and efficiently compute the intersections of the curves. To address this issue, we introduce a new, but simple test that guarantees the global correctness of our output.

## 1. Introduction

In [10, 11], Plantinga and Vegter introduced an algorithm to construct topologically correct piecewise-linear approximations to smooth and bounded real hypersurfaces in two and three dimensions. Their algorithm is particularly interesting as it is a symbolic-numeric algorithm based on subdivision whose predicates are simple and easy to implement. On singular input, however, the Plantinga and Vegter algorithm does not terminate, as both of their predicates fail on regions containing singular points. The current paper presents an algorithm in the spirit of the original Plantinga and Vegter algorithm for correctly approximating the union of two smooth curves in the plane with simple transverse crossings.

**Main question**.: _Suppose that \(f,g\in\mathbb{Z}[x,y]\) define two smooth curves in the real plane and their corresponding varieties \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\) intersect transversely in simple crossings. Our goal is to construct a pair of approximations \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\) to \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\), respectively, such that \(\mathcal{A}(f)\cup\mathcal{A}(g)\) is a topologically correct piecewise-linear approximation to \(\mathcal{V}(f)\cup\mathcal{V}(g)\), see Figure 1(b)._

In our setting, topologically correct means that there is an ambient isotopy that deforms space while taking both \(\mathcal{A}(f)\) to \(\mathcal{V}(f)\) and \(\mathcal{A}(g)\) to \(\mathcal{V}(g)\). In particular, the crossings of the approximations form a topologically correct approximation to the intersection points of \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\). The approximation and the varieties can also be made as close as desired in Hausdorff distance by further subdivision. The main challenge is that while the Plantinga and Vegter algorithm computes the individual approximations \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\), the algorithm only guarantees the existence of two ambient isotopies. One of the ambient isotopies takes \(\mathcal{A}(f)\) to \(\mathcal{V}(f)\) and the other takes \(\mathcal{A}(g)\) to \(\mathcal{V}(g)\), but there is no guarantee that these isotopies are compatible in any sense, see Figure 1(a). This can cause \(\mathcal{A}(f)\cap\mathcal{A}(g)\) to fail to include intersections between \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\), as well as for \(\mathcal{A}(f)\cap\mathcal{A}(g)\) to include extraneous intersections, which do not correspond to intersections between \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\). An extension of the Plantinga and Vegter algorithm was introduced in [1] to handle unbounded and singular input. This approach can solve the current problem by computing an approximation to the variety \(\mathcal{V}(fg)\), but it relies on separation bounds between singular points. These separation bounds are typically so pessimistic that it is questionable whether this algorithm is practical.
On the other hand, the algorithm introduced in [6] also studies the problem considered here, but their algorithm uses more restrictive tests than what we propose, and they may require a significant number of subdivisions to characterize the local behavior of curves within a region. For instance, their algorithm has more topological requirements on boxes that contain intersections of curves than our approach. This makes our correctness statement a little weaker than the correctness statement in Lien _et al._, but it is still quite strong and more in line with the statement appearing in the original work of Plantinga and Vegter. Our main contribution is the design and correctness of Algorithm 2. This algorithm is a certified symbolic-numeric subdivision-based algorithm for solving the Main question, with the following correctness statement:

**Theorem 1**.: _Suppose that \(f,g\in\mathbb{Z}[x,y]\) and \(R=[a,b]\times[c,d]\) is a rectangular subset of \(\mathbb{R}^{2}\) such that \(a,b,c,d\in\mathbb{Z}\). In addition, suppose that \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\) define smooth curves in \(R\) which intersect simply and_

[MISSING_PAGE_POST]

but this behavior does not appear in the approximation, see Figure 1 and Definition 2. Instead, the ambient isotopy stretches space so that the approximation is moved into the neighboring box. This global correctness without local correctness is a key feature of this family of algorithms. In many cases, it leads to many fewer boxes being created, since these algorithms do not need to resolve the behavior of small excursions. This key feature makes the problem of approximating a pair of curves given by \(f,g\in\mathbb{Z}[x,y]\) more challenging, since an excursion may involve an intersection between the varieties \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\), but the ambient isotopies separate the curves and remove the intersection from the approximations \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\).

The complexity of the Plantinga and Vegter algorithm was first studied in [2] using continuous amortization, see, e.g., [4, 3]; there, the authors found both adaptive and worst-case complexity bounds for the number of regions formed by subdivision as well as the bit-complexity of the algorithm. In addition, the authors found examples which were guaranteed to exhibit the worst-case exponential complexity bounds. In [5], a smoothed-analysis based approach was used to show that the average complexity of the algorithm is polynomial. In [12], a condition number-based approach also showed that the average complexity of the algorithm is polynomial, but for a larger class of random polynomials including some sparse families.

### Predicate details

The two predicates in the Plantinga and Vegter algorithm are typically implemented using interval arithmetic, see, e.g., [9] for more details. Interval arithmetic extends the standard arithmetic operations to intervals. For instance,

\[[a,b]+[c,d]=[a+c,b+d],\]
\[[a,b]-[c,d]=[a-d,b-c],\text{ and }\]
\[[a,b][c,d]=[\min\{ac,ad,bc,bd\},\max\{ac,ad,bc,bd\}].\]

These interval operations can be extended to the evaluation of functions, and we use the symbol \(\square\) to denote any such extension. In particular, for a polynomial \(f\in\mathbb{Z}[x,y]\) and a region \(B\), \(\square f(B)\) is an interval containing the image \(f(B)\).
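As a concrete illustration of these operations (not the paper's implementation, which evaluates exactly on dyadic points), here is a naive floating-point sketch of interval addition, subtraction, and multiplication, together with an interval extension of the toy polynomial \(f=x^{2}+y^{2}-1\):

```python
from typing import Tuple

Interval = Tuple[float, float]

def iadd(x: Interval, y: Interval) -> Interval:
    return (x[0] + y[0], x[1] + y[1])

def isub(x: Interval, y: Interval) -> Interval:
    return (x[0] - y[1], x[1] - y[0])

def imul(x: Interval, y: Interval) -> Interval:
    p = (x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1])
    return (min(p), max(p))

def contains_zero(x: Interval) -> bool:
    return x[0] <= 0.0 <= x[1]

# A (possibly over-wide) interval extension of f = x^2 + y^2 - 1 on a box
# B = X x Y; evaluating each occurrence of a variable independently is what
# makes the result an over-approximation of the true image f(B).
def box_f(X: Interval, Y: Interval) -> Interval:
    return isub(iadd(imul(X, X), imul(Y, Y)), (1.0, 1.0))

print(contains_zero(box_f((2.0, 3.0), (0.0, 1.0))))  # False: V(f) misses the box
```

Since \(0\notin\square f(B)\) in this example, the box \([2,3]\times[0,1]\) provably avoids the unit circle; this is exactly the kind of certificate the predicates below are built from.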
The interval \(\square f(B)\) is often larger than \(f(B)\), but significantly easier to compute. The \(C_{0}\) test is implemented as \(C_{0}(B)=\textsc{True}\) if and only if \(0\not\in\square f(B)\). Since \(\square f(B)\) is an over-approximation to \(f(B)\), if \(0\not\in\square f(B)\), then \(0\not\in f(B)\), so the variety \(\mathcal{V}(f)\) cannot intersect \(B\). The \(C_{1}\) test is slightly more complicated, as \(C_{1}(B)=\textsc{True}\) if and only if \(0\not\in\square\langle\nabla f,\nabla f\rangle(B\times B)\). In this formulation, each of the factors of \(B\times B\) is the argument to one \(\nabla f\). If \(0\not\in\square\langle\nabla f,\nabla f\rangle(B\times B)\), then there cannot be a pair of points \((x_{1},y_{1}),(x_{2},y_{2})\in B\) such that \(\langle\nabla f(x_{1},y_{1}),\nabla f(x_{2},y_{2})\rangle=0\), i.e., the gradient vectors cannot be perpendicular. We call this family of algorithms symbolic-numeric algorithms for two reasons: the predicates perform exact computations using the coefficients of \(f\), i.e., they do not merely treat \(f\) as a function, and the computations themselves are performed using arbitrary-precision floating point computations on dyadic points. In other words, the evaluations are exact, but leverage the speed of floating point calculations.

### Topological details

We collect some key facts from [10, 11] and provide a description of the ambient isotopy in the Plantinga and Vegter algorithm. These facts are used throughout our correctness proofs. Given a subdivision of \(R\) into boxes, we use the word _side_ to denote one of the four sides of a box in the subdivision. We define an _edge_ of that subdivision to be a side of a box that is not composed of a union of sides of smaller neighboring boxes. In particular, if \(B\) is a box of the final subdivision, then the edges of \(B\) are either the sides of \(B\) or a half-side of \(B\) when \(B\)'s neighbor in that direction is smaller than \(B\).

**Definition 2**.: _Let \(B\) be a box of a subdivision and \(f\in\mathbb{Z}[x,y]\). An excursion of \(\mathcal{V}(f)\) is a component of \(\mathcal{V}(f)\cap B\) whose two endpoints are on the same edge of the subdivision, see Figure 1(a)._

We note that excursions do not appear in the piecewise-linear approximation \(\mathcal{A}(f)\) as they are deformed into neighboring boxes. In [10, 11], the authors use several topological lemmas to show that the predicates \(C_{0}\) and \(C_{1}\) exert control over the behavior of \(\mathcal{V}(f)\) within a box.

**Lemma 3** ([10, 11]).: _Suppose that \(B\) is a box of the subdivision, and suppose that there are two segments \(s_{1}\) and \(s_{2}\) in \(B\) such that_

1. _the lines formed from extending_ \(s_{1}\) _and_ \(s_{2}\) _are perpendicular,_
2. _the value of_ \(f\) _on both endpoints of_ \(s_{1}\) _is the same, and_
3. _the value of_ \(f\) _on both endpoints of_ \(s_{2}\) _is the same._

_Then, \(C_{1}(B)=\textsc{False}\)._

This lemma follows from applying the intermediate value theorem on segments \(s_{1}\) and \(s_{2}\) to show that each segment contains a point such that the gradient at that point is perpendicular to the segment. This lemma leads to several corollaries, three of which we list here:

**Corollary 4** ([10, 11]).: _Suppose that \(B\) is a box of a subdivision such that \(C_{1}(B)=\textsc{True}\). In addition, let \(\gamma_{f}\) be a component of \(\mathcal{V}(f)\cap B\) which is an excursion on edge \(e\) of \(B\)._
_Then, \(\gamma_{f}\) is entirely contained within the semicircle in \(B\) whose diameter is \(e\)._

**Corollary 5** ([10, 11]).: _Suppose that \(B\) is a box of a subdivision. If \(\mathcal{V}(f)\) intersects two adjacent sides of \(B\) twice on each of these sides, then \(C_{1}(B)=\textsc{False}\)._

**Corollary 6** ([10, 11]).: _Suppose that \(B\) is a box of a subdivision such that \(C_{1}(B)=\textsc{True}\). There is at most one component of \(\mathcal{V}(f)\cap B\) that extends from the northern to southern edges of \(B\)._

Finally, we briefly describe how the ambient isotopy deforms the variety \(\mathcal{V}(f)\) to \(\mathcal{A}(f)\) as a two-step procedure. The first step of the ambient isotopy is to remove all excursions by deforming space so that the excursions are moved into neighboring boxes. Briefly, we let \(\widetilde{\mathcal{V}}(f)\) be the result of applying the first step of the ambient isotopy to \(\mathcal{V}(f)\). For every box \(B\) of the subdivision, \(\widetilde{\mathcal{V}}(f)\cap B\) and \(\mathcal{A}(f)\cap B\) are ambient isotopic _within_ \(B\). In particular, this means that they have the same number of components within \(B\). The second step of the ambient isotopy simultaneously deforms \(\widetilde{\mathcal{V}}(f)\cap B\) to \(\mathcal{A}(f)\cap B\) within each box \(B\) by sliding the points on the boundary of \(B\) to their appropriate places and straightening the curves within each box \(B\). By carefully considering these steps, we observe that the ambient isotopies derived from the Plantinga and Vegter algorithm move points at most one box away, as the only points that move between boxes are those near excursions, but by Corollary 4, these points are never further than one box away.

**Definition 7**.: _Let \(S\) be a union of boxes from a subdivision and \(\gamma_{f}:[0,1]\to\mathcal{V}(f)\cap S\) a curve in the variety of \(f\). The extension of \(\gamma_{f}\) without excursions is denoted \(\overline{\gamma}_{f}\) and is the component of \(\mathcal{V}(f)\cap S\) containing \(\gamma_{f}\). The extension of \(\gamma_{f}\) with excursions is the curve \(\widetilde{\gamma}_{f}\) which is formed by following \(\gamma_{f}\) forward and backward until either (1) the curve becomes a closed loop or (2) the curve reaches the first and last intersections of the curve with the boundary of \(S\) before passing through a box not in \(S\)._

The extension can be constructed by iteratively adding excursions and curve components at the ends of the path until the curve leaves \(S\) and passes through other boxes of the subdivision.

**Lemma 8**.: _Let \(S\) be a union of boxes from a subdivision and \(\gamma_{f}:[0,1]\to\mathcal{V}(f)\cap S\) a curve in the variety of \(f\). Suppose that \(\gamma_{f}\) deforms to \(\alpha_{f}\subseteq\mathcal{A}(f)\) within \(S\). Let \(\widetilde{\gamma}_{f}\) be the extension of this path with excursions. Let \(\overline{\alpha}_{f}\) be the component of \(\mathcal{A}(f)\cap S\) containing \(\alpha_{f}\). Either \(\widetilde{\gamma}_{f}\) and \(\overline{\alpha}_{f}\) are topological circles within \(S\) or the endpoints of \(\widetilde{\gamma}_{f}\) and \(\overline{\alpha}_{f}\) are on the same edges of the subdivision._

## 3. Subdivision step

Given \(f,g\in\mathbb{Z}[x,y]\), a first attempt to solve the main question may be to simultaneously run the standard Plantinga and Vegter algorithm on \(f\) and \(g\), but to use a common refinement of the region \(R\).
This approach leads to three different types of potential errors in the approximations, see Figures 1, 2, and 3, respectively:

1. Missing intersections: intersections of \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\) which do not correspond to intersections of \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\).
2. Extra intersections: intersections of \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\) which do not correspond to intersections of \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\).
3. Shared edges: some of the edges of the approximations \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\) may be shared between the two approximations.

Our main tool to avoid all three of these errors is a new predicate, called \(C_{1}^{\times}\). On a square \(B\), if \(C_{1}^{\times}(B)=\textsc{True}\), then there does not exist any pair of points \((x_{1},y_{1}),(x_{2},y_{2})\in B\) such that \(\nabla f(x_{1},y_{1})\) and \(\nabla g(x_{2},y_{2})\) are parallel. In the plane, we may implement this test using the cross product, i.e., \(C_{1}^{\times}(B)=\textsc{True}\) if and only if \(0\not\in\square(\nabla f\times\nabla g)(B\times B)\), where each of the factors of \(B\times B\) is an argument to one of the gradients. As an initial illustration of the utility of this new \(C_{1}^{\times}\) predicate, we provide the following motivating result:

**Lemma 9**.: _Let \(B\) be a rectangle and \(s\) a line segment in \(B\). Suppose that \(f\) attains the same value twice on \(s\) and \(g\) also attains the same value twice on \(s\). Then \(C_{1}^{\times}(B)=\textsc{False}\)._

Proof.: Suppose that \((x_{1},y_{1}),(x_{2},y_{2})\in s\) are such that \(f(x_{1},y_{1})=f(x_{2},y_{2})\). By applying Rolle's theorem to \(f(tx_{1}+(1-t)x_{2},ty_{1}+(1-t)y_{2})\), there is some \(t_{f}\in(0,1)\) such that

\[\frac{d}{dt}f(tx_{1}+(1-t)x_{2},ty_{1}+(1-t)y_{2})\Big{|}_{t=t_{f}}=0.\]

This, however, can be rewritten as the following dot product:

\[\nabla f(t_{f}x_{1}+(1-t_{f})x_{2},t_{f}y_{1}+(1-t_{f})y_{2})\cdot(x_{2}-x_{1},y_{2}-y_{1})=0.\]

In other words, there is some point in \(B\) where \(\nabla f\) is perpendicular to \(s\). Repeating the argument for \(g\) also gives that there is some point in \(B\) where \(\nabla g\) is perpendicular to \(s\). Therefore, the gradients of \(f\) and \(g\) are parallel for some pair of points in the box \(B\) and \(C_{1}^{\times}(B)=\textsc{False}\).

This leads to the following special case:

**Corollary 10**.: _Let \(B\) be a rectangle and assume that \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\) intersect more than once in \(B\). Then \(C_{1}^{\times}(B)=\textsc{False}\)._
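Continuing the interval sketch from Section 2.3, the \(C_{1}^{\times}\) test can be phrased as a single interval evaluation of the cross product of the two gradients. The following toy example, with \(f=x^{2}+y^{2}-1\) and \(g=x-y\) chosen purely for illustration, reuses the helpers `imul`, `isub`, and `contains_zero` from the earlier sketch; the paper's predicates are instead evaluated exactly on dyadic points.

```python
# C1x(B) holds when 0 is not in the interval evaluation of
# fx(p) * gy(q) - fy(p) * gx(q) for independent points p, q in B.
# Here grad f = (2x, 2y) and grad g = (1, -1).
def c1_cross(X: Interval, Y: Interval) -> bool:
    fx, fy = imul((2.0, 2.0), X), imul((2.0, 2.0), Y)  # range of grad f over B
    gx, gy = (1.0, 1.0), (-1.0, -1.0)                  # grad g is constant
    cross = isub(imul(fx, gy), imul(fy, gx))
    return not contains_zero(cross)

print(c1_cross((0.5, 1.0), (0.5, 1.0)))  # True: the gradients are never parallel here
```

On a box containing the origin the test would fail, since \(\nabla f\) vanishes there and the zero vector is parallel to every vector.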
We now present the algorithm for the subdivision step of the pairwise curve approximation algorithm. For simplicity, we focus on the case where the input region \(R\) is a square and leave the details for the general rectangular case to Section 4. Since there are two polynomials \(f,g\in\mathbb{Z}[x,y]\), we write \(C_{0}^{f}\) and \(C_{1}^{f}\) for the standard tests from the Plantinga and Vegter algorithm for the function \(f\). Similarly, we define \(C_{0}^{g}\) and \(C_{1}^{g}\) for \(g\). In addition, we need the notion of a neighborhood of a box:

**Definition 11**.: _Let \(R=[a,b]\times[c,d]\) be a square region and \(\mathcal{S}\) a partition of \(R\) into squares. For any square \(B\in\mathcal{S}\), the neighborhood of \(B\) in \(\mathcal{S}\) is denoted by \(\mathcal{N}(B)\) and consists of \(B\) along with all of the other squares in \(\mathcal{S}\) that have a positive-length intersection with \(B\), i.e., squares that only meet \(B\) at its corners are not in \(\mathcal{N}(B)\). More generally, we define \(\mathcal{N}_{1}(B):=\mathcal{N}(B)\) and \(\mathcal{N}_{i}(B)\) to be the union of all the neighborhoods of boxes in \(\mathcal{N}_{i-1}(B)\)._

In other words, \(\mathcal{N}_{i}(B)\) consists of all the boxes which are at most \(i\) boxes away from \(B\). We write \(C_{1}^{\times}(\mathcal{N}_{i}(B))=\textsc{True}\) to denote that \(C_{1}^{\times}\) holds in the smallest rectangle containing \(\mathcal{N}_{i}(B)\).

Figure 2. Approximations from the standard Plantinga and Vegter algorithm which intersect at a point even though the curves do not.

Figure 3. The approximations from the standard Plantinga and Vegter algorithm which intersect tangentially even though the curves do not.

Suppose that after this new subdivision step, the balancing and approximation steps of the standard Plantinga and Vegter algorithm are performed, resulting in approximations \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\). We now consider the crossing properties of these approximations.

### Transversal crossing of approximations

For the approximations \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\), we call a crossing _transversal_ if the approximations cross within the interior of a box. This only happens when one approximation has an edge from the north side of a box to the south side, while the other approximation extends from the east side of the box to the west side. We show that every transversal crossing of \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\) corresponds to a unique crossing of the varieties \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\) as follows:

**Proposition 12**.: _Suppose that \(B\) is a box such that \(C_{1}^{f}(B)=C_{1}^{g}(B)=C_{1}^{\times}(\mathcal{N}(B))=\textsc{True}\). If \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\) intersect transversely in \(B\), then \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\) intersect exactly once and transversely in \(\mathcal{N}(B)\)._

Proof.: By Corollary 10, the number of intersections of \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\) is at most one in \(\mathcal{N}(B)\). Moreover, any such intersection must be transversal, since otherwise the gradients are parallel at the intersection and \(C_{1}^{\times}(\mathcal{N}(B))\) would be False. Without loss of generality, we assume that \(\mathcal{A}(f)\) extends from the northern to the southern edges of \(B\) and \(\mathcal{A}(g)\) extends from the eastern to the western edges of \(B\). Moreover, since the Plantinga and Vegter algorithm implies the existence of two ambient isotopies which do not deform space further than one box away, there are unique components, which we call _crossing components_, of \(\mathcal{V}(f)\cap\mathcal{N}(B)\) and \(\mathcal{V}(g)\cap\mathcal{N}(B)\) which cross \(B\) from its north side to its south side and from its east side to its west side. Let \(\gamma_{f}\) and \(\gamma_{g}\) denote these two crossing components of \(\mathcal{V}(f)\cap\mathcal{N}(B)\) and \(\mathcal{V}(g)\cap\mathcal{N}(B)\). We note that these components are not required to stay within \(B\) as there may be excursions to the neighboring boxes. See Figure 4 for additional details.

Figure 4. The neighborhood of the box \(B\) with two crossing components.
While there is considerable flexibility in the local shapes of \(\gamma_{f}\) and \(\gamma_{g}\), their larger structures are well-constrained. We restrict our attention to \(\gamma_{f}\) since the behavior of \(\gamma_{g}\) is analogous. The endpoints of \(\gamma_{f}\) must be on the boundary of \(\mathcal{N}(B)\) since, otherwise, \(\mathcal{N}(B)\) would contain a closed loop. This is not possible because inside any closed loop, there is an extremal point where \(\nabla f\) vanishes, but this point would force \(C_{1}^{\times}\left(\mathcal{N}(B)\right)\) to fail. Moreover, we let \(B_{N}\), \(B_{E}\), \(B_{S}\), and \(B_{W}\) denote the unions of the boxes of \(\mathcal{N}(B)\) lying to the north, east, south, and west of \(B\), respectively. We note that there may be at most two boxes in any cardinal direction due to the balancing step in the Plantinga and Vegter algorithm. In addition, we show that the endpoints of \(\gamma_{f}\) are on the _external boundaries_ of \(B_{N}\) and \(B_{S}\), i.e., the boundaries of \(B_{N}\) and \(B_{S}\) that are also boundaries of \(\mathcal{N}(B)\). In particular, suppose that there is no endpoint of \(\gamma_{f}\) in the external boundary of \(B_{N}\). By construction, \(\gamma_{f}\) crosses from the northern edge of \(B\) to its southern edge, so it intersects the northern and southern edges of \(B\) at least once. Moreover, since \(\gamma_{f}\) does not have an endpoint on the external boundary of \(B_{N}\), it must cross the northern edge of \(B\) a second time. Therefore, \(\gamma_{f}\) must cross at least one of the eastern, southern, or western edges of \(B\) an additional time. First, we show that \(\gamma_{f}\) cannot cross the eastern or western edges of \(B\). Without loss of generality, we assume that \(\gamma_{f}\) crosses the eastern edge of \(B\). Since there is no vertex of \(\mathcal{A}(f)\) placed on the eastern edge of \(B\), it must be that there are an even number of crossings of the eastern edge of \(B\). Since we have assumed that there is at least one crossing, there must be at least two crossings. Then Corollary 5 shows that this configuration would violate \(C_{1}^{f}(B)=\textsc{True}\). Therefore, the only remaining possibility is for \(\gamma_{f}\) to have both of its endpoints in the boundary of \(B_{S}\). This, however, is also impossible, as it would imply the existence of two crossing components of \(\mathcal{V}(f)\) from the north to the south of \(B\), which Corollary 6 rules out since it would violate \(C_{1}^{f}(B)=\textsc{True}\). Finally, since \(\gamma_{f}\) extends from the external boundary of \(B_{N}\) to the external boundary of \(B_{S}\), it separates the external boundary of \(B_{E}\) from the external boundary of \(B_{W}\). On the other hand, since \(\gamma_{g}\) extends from the external boundary of \(B_{E}\) to the external boundary of \(B_{W}\) and does not intersect the boundary of \(\mathcal{N}(B)\) except at its endpoints, \(\gamma_{g}\) must intersect \(\gamma_{f}\), which implies the desired existence of a crossing in \(\mathcal{N}(B)\).

By investigating the proof of Proposition 12 in more detail, we find that each intersection of \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\) corresponds to an intersection between \(\gamma_{f}\) and \(\gamma_{g}\) within \(\mathcal{N}(B)\).
Therefore, no two crossings of \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\) can correspond to the same intersection of \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\) because at least one of \(\gamma_{f}\) and \(\gamma_{g}\) changes when considering a different crossing. Therefore, there is an injective map between transversal intersections of \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\) and transversal intersections of \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\).

### Missing intersections

We begin by noting that excursions are the only reason that the approximations \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\) can miss an intersection of \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\). In particular, we have the following result:

**Lemma 13**.: _Suppose that \(B\) is a box such that \(C_{1}^{f}(B)=C_{1}^{g}(B)=C_{1}^{\times}(B)=\textsc{True}\). Suppose, in addition, that there are no excursions either entering or exiting \(B\). If the approximations \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\) do not intersect in \(B\), including on the boundary of \(B\), then \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\) do not intersect in \(B\)._

Proof.: By the properties of the first step of the Plantinga and Vegter algorithm, see Section 2.3, since there are no excursions, in each box \(B\), the approximations \(\mathcal{A}(f)\cap B\) and \(\mathcal{A}(g)\cap B\) are each ambient isotopic to \(\mathcal{V}(f)\cap B\) and \(\mathcal{V}(g)\cap B\) _within the box_ \(B\), respectively. We recall, however, that these isotopies are not necessarily the same. In particular, this implies that the number of components of \(\mathcal{A}(f)\) and \(\mathcal{V}(f)\) agree within \(B\), and similarly for \(\mathcal{A}(g)\) and \(\mathcal{V}(g)\). Suppose that \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\) intersect in \(B\). Let \(\gamma_{f}\) be a component of \(\mathcal{V}(f)\cap B\) that intersects \(\mathcal{V}(g)\cap B\). Let \(\alpha_{f}\) be the corresponding component of \(\mathcal{A}(f)\) to which \(\gamma_{f}\) deforms under the ambient isotopy. Let \(e_{1}\) and \(e_{2}\) be the edges of the subdivision that contain the endpoints of \(\alpha_{f}\). By Lemma 8, \(\gamma_{f}\) must also begin and end on these edges. On the other hand, since \(\mathcal{A}(g)\) does not have vertices on these edges, it follows that on each edge \(e_{i}\), the sign of \(g\) is the same at both endpoints of \(e_{i}\). Since there are no excursions, it follows that \(\mathcal{V}(g)\) does not intersect this edge, as \(\mathcal{V}(g)\) would need to intersect this edge twice to maintain the sign properties of the endpoints and this would be an excursion. Since \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\) do not intersect, the signs of \(g\) at both endpoints of \(\alpha_{f}\) must be the same. Since we showed that the sign of \(g\) is constant on the edges containing the endpoints of \(\gamma_{f}\), the signs of \(g\) at both endpoints of \(\gamma_{f}\) must be the same. This implies \(\mathcal{V}(g)\) must intersect \(\gamma_{f}\) an even number of times, as each intersection changes the sign of the restriction \(g|_{\gamma_{f}}\). Since \(\mathcal{V}(g)\) and \(\gamma_{f}\) intersect at least once, they must intersect at least twice, but this is impossible by Corollary 10.

Lemma 13 implies that missing intersections must involve at least one excursion. Our plan is to show that any missing intersection must induce a pair of intersections in the neighborhood \(\mathcal{N}_{2}(B)\), which is not possible since \(C_{1}^{\times}(\mathcal{N}_{2}(B))=\textsc{True}\).
**Proposition 14**.: _Suppose that \(B\) is a box such that \(C_{1}^{f}(B)=C_{1}^{g}(B)=C_{1}^{\times}(\mathcal{N}_{2}(B))=\textsc{True}\). Suppose that \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\) do not intersect in \(\mathcal{N}(B)\), including on the boundary of \(\mathcal{N}(B)\); then \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\) do not intersect in \(B\)._

Proof.: Suppose that \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\) intersect in \(B\). Let \(\gamma_{f}\) be the component of \(\mathcal{V}(f)\cap B\) that includes this intersection. Let \(\widetilde{\gamma}_{f}\) be the extension of \(\gamma_{f}\) into \(\mathcal{N}(B)\) including all excursions into \(\mathcal{N}_{2}(B)\). Let \(\alpha_{f}\) be the component of \(\mathcal{A}(f)\cap\mathcal{N}(B)\) to which \(\gamma_{f}\) is deformed under the ambient isotopy. We note that \(\alpha_{f}\) is in \(\mathcal{N}(B)\) since the Plantinga and Vegter algorithm does not deform the curve further than one box away. Let \(e_{1}\) and \(e_{2}\) be the edges of the subdivision that contain the endpoints of \(\alpha_{f}\). By Lemma 8, the endpoints of \(\widetilde{\gamma}_{f}\) are also on the edges \(e_{1}\) and \(e_{2}\). Let \(p_{1}\) be the endpoint of \(\widetilde{\gamma}_{f}\) on \(e_{1}\) and \(p_{2}\) be the endpoint of \(\widetilde{\gamma}_{f}\) on \(e_{2}\). Consider the restriction \(g|_{\widetilde{\gamma}_{f}}\). The signs of this function at the two endpoints must be opposite, because each intersection between \(\mathcal{V}(g)\) and \(\widetilde{\gamma}_{f}\) changes the sign of \(g|_{\widetilde{\gamma}_{f}}\): having the same sign at both endpoints would require a second intersection, but two intersections are impossible by Corollary 10 and the fact that \(C_{1}^{\times}(\mathcal{N}_{2}(B))=\textsc{True}\).

Now, we prove that the sign of \(g\) at \(p_{1}\) agrees with its signs at the endpoints of \(e_{1}\). Since \(\mathcal{A}(g)\) does not intersect \(\mathcal{A}(f)\) and \(\mathcal{A}(f)\) has a vertex on \(e_{1}\), this implies that the signs of \(g\) on the endpoints of \(e_{1}\) are the same. Hence, any intersection of \(\mathcal{V}(g)\) with \(e_{1}\) must be an excursion. Suppose, for contradiction, that the sign of \(g\) at \(p_{1}\) does not match the sign of \(g\) at the endpoints of \(e_{1}\). Then, there are an odd number of intersections from \(\mathcal{V}(g)\cap e_{1}\) on either side of \(p_{1}\). Therefore, there is at least one pair of points of \(\mathcal{V}(g)\cap e_{1}\) on either side of \(p_{1}\) which are connected by an excursion. We show that this excursion implies that \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\) intersect once more in \(\mathcal{N}_{2}(B)\), but such an intersection is not possible by Corollary 10, since \(C_{1}^{\times}(\mathcal{N}_{2}(B))=\textsc{True}\). See Figure 5 for reference in this argument. Suppose first that this excursion is internal to \(\mathcal{N}(B)\). By Lemma 9, since \(\mathcal{V}(g)\) has an excursion on \(e_{1}\), \(\mathcal{V}(f)\) does not have an excursion on this edge. By Corollary 4, this excursion is contained within a semicircle along the edge \(e_{1}\). The curve \(\widetilde{\gamma}_{f}\), however, cannot stay within this semicircle. More precisely, let \(\widetilde{\gamma}_{g}\) be an excursion of \(g\) in \(\mathcal{N}(B)\) with endpoints on either side of \(p_{1}\) on \(e_{1}\).
Then, \(\widetilde{\gamma}_{f}\) is within the region bounded by \(\widetilde{\gamma}_{g}\) and \(e_{1}\), but it must leave this region to reach the intersection in \(B\) and it can only intersect \(e_{1}\) once. Therefore, \(\widetilde{\gamma}_{f}\) and \(\widetilde{\gamma}_{g}\) must intersect in a box other than \(B\). The argument in the case where the excursion is external to \(\mathcal{N}(B)\) is similar. In this case, \(\widetilde{\gamma}_{g}\) is the excursion of \(g\) with endpoints on either side of \(p_{1}\), but lying outside \(\mathcal{N}(B)\). Then, since \(\widetilde{\gamma}_{f}\) must pass through a box in \(\mathcal{N}_{2}(B)\setminus\mathcal{N}(B)\), it must leave the region bounded by the excursion \(\widetilde{\gamma}_{g}\) and \(e_{1}\), and it can only intersect \(e_{1}\) once. Therefore, \(\widetilde{\gamma}_{f}\) and \(\widetilde{\gamma}_{g}\) must intersect in a box other than \(B\). Now, we have shown that the sign of \(g\) at \(p_{1}\) agrees with the sign of \(g\) at the endpoints of \(e_{1}\). Similarly, the sign of \(g\) at \(p_{2}\) agrees with the sign of \(g\) at the endpoints of \(e_{2}\). Moreover, since the signs of \(g\) at \(p_{1}\) and \(p_{2}\) differ, the sign of \(g\) at the endpoints of \(e_{1}\) differs from the sign of \(g\) at the endpoints of \(e_{2}\). We also observe that removing edges \(e_{1}\) and \(e_{2}\) from the boundary \(\partial\mathcal{N}(B)\) results in two components. Since the signs of \(g\) are different at the endpoints of the two components, there must be an odd number of vertices of \(\mathcal{A}(g)\) on each of these two components. However, since \(\mathcal{A}(g)\) forms a perfect matching on its vertices in \(\mathcal{N}(B)\), one of the edges of \(\mathcal{A}(g)\) must intersect \(\alpha_{f}\), but this is not possible.

Therefore, by Proposition 12, every transversal intersection of the approximations \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\) corresponds to a unique intersection of the varieties \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\). On the other hand, Proposition 14 implies that every intersection of the varieties \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\) corresponds to an intersection of the approximations \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\). However, this intersection does not need to be transversal, see Figure 6, which illustrates the one remaining case.

### Shared edges in approximations

For the approximations \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\), we call an intersection _nontransversal_ if the approximations meet on the boundary of a box or the approximations coincide along shared segments.

**Definition 15**.: _A contiguous sequence of boxes with shared segments of \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\) is called a snake, see Figure 7. The boxes where the approximations separate are called the heads of the snake. A neighborhood \(\mathcal{N}(S)\) of a snake is the union of all the neighborhoods of boxes in the snake along with the neighborhoods of the heads of the snake. The neighborhood \(\mathcal{N}_{i}(S)\) is defined similarly._

Figure 5. Accompanying diagram for the proof of Proposition 14. It illustrates all the impossible features whose nonexistence forces a crossing of the approximations.

Figure 6. Approximations from the standard Plantinga and Vegter algorithm which share a segment while the curves intersect in two places.

Figure 7. Example of a snake where the approximations share segments.
**Proposition 16**.: _Suppose that \(S\) is a snake and for every box \(B\in S\), \(C_{1}^{\times}(\mathcal{N}(B))\) holds. There is at most one crossing in \(\mathcal{N}(S)\) corresponding to the snake._

Proof.: Let \(\alpha_{f}\) and \(\alpha_{g}\) be the two components of the approximation in \(S\) which share vertices or edges. Let \(\gamma_{f}\) and \(\gamma_{g}\) be the components of \(\mathcal{V}(f)\cap\mathcal{N}(S)\) and \(\mathcal{V}(g)\cap\mathcal{N}(S)\), respectively, that \(\alpha_{f}\) and \(\alpha_{g}\) deform to, respectively, under the ambient isotopies. Let \(\overline{\gamma}_{f}\) and \(\overline{\gamma}_{g}\) be the extensions of \(\gamma_{f}\) and \(\gamma_{g}\), respectively, in \(\mathcal{N}_{2}(S)\), but without including excursions. We consider \(\overline{\gamma}_{f}\) as a path and we choose an orientation of this path so that \(f\) is positive in a tubular neighborhood to the left of the path and negative in a tubular neighborhood to the right of the path. Since gradients point in the direction of greatest increase, \(\nabla f(\overline{\gamma}_{f})\) points to the left of \(\overline{\gamma}_{f}^{\prime}\). Suppose that \(\overline{\gamma}_{f}\) and \(\overline{\gamma}_{g}\) intersect multiple times in \(\mathcal{N}(S)\). We show that this violates \(C_{1}^{\times}(\mathcal{N}(B))\) for some \(B\in S\). See Figure 8 for reference in this argument. Let \(r_{1}\) and \(r_{2}\) be two intersection points of \(\overline{\gamma}_{f}\) and \(\overline{\gamma}_{g}\) in \(\mathcal{N}(S)\). In addition, we assume that \(r_{1}\) occurs before \(r_{2}\) along the path \(\overline{\gamma}_{f}\). We restrict our attention to the portion of \(\overline{\gamma}_{g}\) between \(r_{1}\) and \(r_{2}\). We choose the orientation on \(\overline{\gamma}_{g}\) so that \(r_{1}\) comes before \(r_{2}\) along this path. As above, the gradient \(\nabla g(\overline{\gamma}_{g})\) is perpendicular to \(\overline{\gamma}_{g}^{\prime}\), but, since the orientation of \(\overline{\gamma}_{g}\) is fixed, it points to one side of \(\overline{\gamma}_{g}^{\prime}\), i.e., the gradient points to either the right or the left of the tangent vector. Since \(\overline{\gamma}_{f}\) divides \(\mathcal{N}(S)\) into two pieces, every time \(\overline{\gamma}_{g}\) crosses \(\overline{\gamma}_{f}\), it crosses from the negative side to the positive side or from the positive side to the negative side. Moreover, since \(\overline{\gamma}_{g}\) stays within \(\mathcal{N}(S)\), these types of crossings alternate. Since \(\overline{\gamma}_{f}\) and \(\overline{\gamma}_{g}\) intersect at least twice, there is at least one of each type of crossing. Since the positive side of \(f\) is to the left of \(\overline{\gamma}_{f}^{\prime}\), a crossing from the positive side to the negative side of \(\overline{\gamma}_{f}\) corresponds to a clockwise turn from \(\overline{\gamma}_{f}^{\prime}\) to \(\overline{\gamma}_{g}^{\prime}\). Similarly, a crossing from the negative side to the positive side of \(\overline{\gamma}_{f}\) corresponds to a counter-clockwise turn from \(\overline{\gamma}_{f}^{\prime}\) to \(\overline{\gamma}_{g}^{\prime}\). By redefining \(r_{2}\), if necessary, we may assume that \(\nabla f(r_{1})\times\nabla g(r_{1})\) and \(\nabla f(r_{2})\times\nabla g(r_{2})\) correspond to two different turn directions, so they have different signs. Finally, we apply the function \(\nabla f\times\nabla g\) to the path \(\overline{\gamma}_{g}\).
In other words, we look at \(\nabla f(\overline{\gamma}_{g})\times\nabla g(\overline{\gamma}_{g})\). We know that the value of this continuous function has different signs at \(r_{1}\) and \(r_{2}\). Hence, by the intermediate value theorem, there is some point on this curve where the gradients of \(f\) and \(g\) are parallel, but this is not possible as it would contradict \(C_{1}^{\times}(\mathcal{N}(B))=\textsc{True}\) for all \(B\in S\).

Figure 8. Accompanying diagram for the proof of Proposition 16. It illustrates how, at two consecutive intersections, the turn directions of the gradients differ.

We have shown that every snake corresponds to at most one intersection of \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\). It remains to discuss how to decide if a snake corresponds to an intersection. We begin by defining the orientation of points with respect to the snake. Let \(S\) be a snake and let \(B\) be a head of the snake. Suppose that \(p\) and \(q\) are two points on the external boundary of \(\mathcal{N}(B)\), i.e., boundaries of \(\mathcal{N}(B)\) that are also boundaries of \(\mathcal{N}(S)\). The external boundary of \(\mathcal{N}(B)\) is, therefore, a piecewise-linear path around \(\mathcal{N}(B)\). We say that \(q\) is _clockwise_ from \(p\) with respect to the snake if walking around \(\partial\mathcal{N}(B)\setminus S\) in a clockwise direction starting at the snake reaches \(p\) before \(q\). In this case, \(p\) is _counterclockwise_ from \(q\) with respect to the snake, see Figures 7 and 8.

**Lemma 17**.: _Let \(S\) be a snake and let \(\alpha_{f}\) and \(\alpha_{g}\) be the two components of \(\mathcal{A}(f)\cap\mathcal{N}(S)\) and \(\mathcal{A}(g)\cap\mathcal{N}(S)\), respectively, that include the shared edges of the snake. Let \(B_{1}\) and \(B_{2}\) be the two heads of \(S\). Let \(p_{1}\) and \(q_{1}\) be the ends of \(\alpha_{f}\) and \(\alpha_{g}\), respectively, in \(B_{1}\), and define \(p_{2}\) and \(q_{2}\) similarly. The snake corresponds to an intersection if and only if the orientations from \(p_{1}\) to \(q_{1}\) and from \(p_{2}\) to \(q_{2}\) are the same._

Proof.: Let \(\gamma_{f}\) and \(\gamma_{g}\) be the two components of \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\), respectively, in \(\mathcal{N}(S)\) to which \(\alpha_{f}\) and \(\alpha_{g}\) deform. Let \(\overline{\gamma}_{f}\) and \(\overline{\gamma}_{g}\) be the extensions _without excursions_ of \(\gamma_{f}\) and \(\gamma_{g}\), respectively. Similarly, let \(\widetilde{\gamma}_{f}\) and \(\widetilde{\gamma}_{g}\) be the extensions _with excursions_ of \(\gamma_{f}\) and \(\gamma_{g}\), respectively. By the definition of the heads of a snake, the paths \(\widetilde{\gamma}_{f}\) and \(\widetilde{\gamma}_{g}\) pass into different neighbors of \(B_{1}\), since, otherwise, the snake would be longer. Moreover, the endpoints of \(\widetilde{\gamma}_{f}\) and \(\overline{\gamma}_{f}\) must be in the same neighbors of \(B_{1}\), even though the endpoints might be on different sides of those boxes. This is because differences between the endpoints of \(\widetilde{\gamma}_{f}\) and \(\overline{\gamma}_{f}\) are due to excursions which, with Corollaries 5 and 6, prevent \(\widetilde{\gamma}_{f}\) from re-entering \(B\). Corresponding statements hold for \(g\) as well as for \(B_{2}\).
Thus, the endpoints of \(\alpha_{f}\) and \(\alpha_{g}\) have the same clockwise or counterclockwise relationship as the endpoints of \(\widetilde{\gamma}_{f}\) and \(\widetilde{\gamma}_{g}\) as well as the endpoints of \(\overline{\gamma}_{f}\) and \(\overline{\gamma}_{g}\). Therefore, we study the relationship between the endpoints of \(\overline{\gamma}_{f}\) and \(\overline{\gamma}_{g}\). Topologically, the boundary \(\partial\mathcal{N}(S)\) is a circle, and \(\overline{\gamma}_{f}\) and \(\overline{\gamma}_{g}\) form chords of this circle. The paths \(\overline{\gamma}_{f}\) and \(\overline{\gamma}_{g}\) intersect if and only if their endpoints interweave along the boundary of the circle. Interweaving is equivalent to having the same clockwise or counterclockwise order around the circle.

## 4. Simultaneous approximation

We now prove Theorem 1 and provide the main algorithm of the paper. Suppose that \(f,g\in\mathbb{Z}[x,y]\) and \(R=[a,b]\times[c,d]\subseteq\mathbb{R}^{2}\) is a rectangle such that \(a,b,c,d\in\mathbb{Z}\). Suppose also that \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\) are nonsingular within \(R\), do not have a common intersection on the boundary of \(R\), and intersect transversely within \(R\). First, through a rescaling, we reduce to the case where \(R\) is a square. Second, we use the techniques for unbounded curves from [1] for the boundary boxes, as long as in each boundary box either \(C_{0}^{f}(B)=\textsc{True}\) or \(C_{0}^{g}(B)=\textsc{True}\). Therefore, we focus on the topological correctness. By Proposition 14, every intersection between \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\) corresponds to either a transversal crossing of \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\) or a snake. By investigating the proofs of Propositions 12 and 16, we see that these two features correspond to different types of crossings and cannot identify the same crossing twice. Thus, by Lemma 17, we can identify exactly when a crossing occurs. This gives a bijection between crossings in the approximation and crossings in the varieties. Once the intersections are identified, the remaining isotopy steps of the Plantinga and Vegter algorithm, such as removing excursions, can be applied to both \(\mathcal{V}(f)\) and \(\mathcal{V}(g)\) simultaneously, by Lemma 13, giving topologically correct approximations.

```
Require: polynomials \(f,g\in\mathbb{Z}[x,y]\) and a square region \(R\) with integral corners
Ensure: approximations \(\mathcal{A}(f)\) and \(\mathcal{A}(g)\) such that \(\mathcal{A}(f)\cap\mathcal{A}(g)\) approximates \(\mathcal{V}(f)\cap\mathcal{V}(g)\)
1: Subdivide \(R\) using Algorithm 1.
2: Further subdivide boxes until the side lengths of neighboring boxes differ by at most a factor of two.
3: Compute the Plantinga and Vegter curve approximation.
4: For any snakes, apply Lemma 17.
5: if there is no crossing then
6:   slightly separate the edges of the snake so that the common edges do not overlap.
7: else
8:   slightly separate the ends of the snake so that the approximations do not overlap at the ends of the snake, then add an explicit crossing in the middle of the snake.
9: end if
10: return the approximations
```

**Algorithm 2** Simultaneous approximation algorithm

We note that a small Hausdorff distance can also be achieved by making sure that the boxes containing the approximations are sufficiently small and that any snakes are also small. We end with corrected examples of images that appeared earlier in the text, see Figure 9.
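Finally, for intuition, here is a simplified sketch of how the subdivision in Step 1 might be organized, assuming the five predicates are available as interval tests like those sketched in Section 2.3. It ignores the neighborhood conditions \(C_{1}^{\times}(\mathcal{N}(B))\) and \(C_{1}^{\times}(\mathcal{N}_{2}(B))\) as well as the balancing of Step 2, so it is an illustration rather than a faithful transcription of Algorithm 1.

```python
def resolved(box, P):
    """P bundles the interval predicates c0f, c0g, c1f, c1g, c1x."""
    f_ok = P.c0f(box) or P.c1f(box)      # V(f) excluded or its gradient controlled
    g_ok = P.c0g(box) or P.c1g(box)
    pair_ok = P.c0f(box) or P.c0g(box) or P.c1x(box)  # both curves present -> need C1x
    return f_ok and g_ok and pair_ok

def subdivide(box, P, depth=0, max_depth=12):
    """Split a square (x0, y0, x1, y1) into quadrants until every leaf passes."""
    if resolved(box, P):
        return [box]
    if depth >= max_depth:
        raise RuntimeError("subdivision budget exceeded; input may be singular")
    x0, y0, x1, y1 = box
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    quadrants = [(x0, y0, xm, ym), (xm, y0, x1, ym),
                 (x0, ym, xm, y1), (xm, ym, x1, y1)]
    return [leaf for q in quadrants
            for leaf in subdivide(q, P, depth + 1, max_depth)]
```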
## Acknowledgements Burr was supported by NSF grant DMS-1913119 and Simons Foundation collaboration grant #964285.
2305.09253
Online Continual Learning Without the Storage Constraint
Traditional online continual learning (OCL) research has primarily focused on mitigating catastrophic forgetting with fixed and limited storage allocation throughout an agent's lifetime. However, a broad range of real-world applications are primarily constrained by computational costs rather than storage limitations. In this paper, we target such applications, investigating the online continual learning problem under relaxed storage constraints and limited computational budgets. We contribute a simple algorithm, which updates a kNN classifier continually along with a fixed, pretrained feature extractor. We selected this algorithm due to its exceptional suitability for online continual learning. It can adapt to rapidly changing streams, has zero stability gap, operates within tiny computational budgets, has low storage requirements by only storing features, and has a consistency property: It never forgets previously seen data. These attributes yield significant improvements, allowing our proposed algorithm to outperform existing methods by over 20% in accuracy on two large-scale OCL datasets: Continual LOCalization (CLOC) with 39M images and 712 classes and Continual Google Landmarks V2 (CGLM) with 580K images and 10,788 classes, even when existing methods retain all previously seen images. Furthermore, we achieve this superior performance with considerably reduced computational and storage expenses. We provide code to reproduce our results at github.com/drimpossible/ACM.
Ameya Prabhu, Zhipeng Cai, Puneet Dokania, Philip Torr, Vladlen Koltun, Ozan Sener
2023-05-16T08:03:07Z
http://arxiv.org/abs/2305.09253v2
# Online Continual Learning Without the Storage Constraint ###### Abstract Online continual learning (OCL) research has primarily focused on mitigating catastrophic forgetting with fixed and limited storage allocation throughout the agent's lifetime. However, the growing affordability of data storage highlights a broad range of applications that do not adhere to these assumptions. In these cases, the primary concern lies in managing computational expenditures rather than storage. In this paper, we target such settings, investigating the online continual learning problem by relaxing storage constraints and emphasizing fixed, limited economical budget. We provide a simple algorithm that can compactly store and utilize the entirety of the incoming data stream under tiny computational budgets using a kNN classifier and universal pre-trained feature extractors. Our algorithm provides a consistency property attractive to continual learning: It will never forget past seen data. We set a new state of the art on two large-scale OCL datasets: Continual LOCAL-ization (CLOC), which has 39M images over 712 classes, and Continual Google Landmarks V2 (CGLM), which has 580K images over 10,788 classes - beating methods under far higher computational budgets than ours in terms of both reducing catastrophic forgetting of past data and quickly adapting to rapidly changing data streams. We provide code to reproduce our results at [https://github.com/drimpossible/ACM](https://github.com/drimpossible/ACM). ## 1 Introduction In online continual learning, a learner processes a continuous stream of data originating from a non-stationary distribution. The learner is required to solve a number of problems: it needs to successfully learn the main task (accuracy), adapt to changes in the distribution (rapid adaptation), and retain information from the past (backward transfer). A key motif in recent work on online continual learning is the search for algorithms that control the trade-off between these possibly competing objectives under resource constraints. To establish the resource constraints for typical commercial settings, we assess what is required of continual learners. A continual learner must deliver accurate predictions, scale to large datasets encountered during its operational lifetime, and operate within the system's total cost budget (in dollars). The economics of data storage have been studied since 1987 (Gray & Putzolu, 1987; Gray & Graefe, 1997; Graefe, 2009; Appuswamy et al., 2017). Table 1 summarizes the trends, show a rapid decline in storage costs over time (\(\sim\$100\) to store CLOC, the largest dataset for OCL (Cai et al., 2021), in 2017). In contrast, running ER (Cai et al., 2021), the state-of-the-art OCL method on the subset of YFCC-100M currently costs over $2000 on a GCP server. Consequently, computational costs are the primary budgetary concern, with storage costs being relatively insignificant. Therefore, as long as computational costs are controlled, economically storing the entire incoming data stream is feasible. However, online continual learning has primarily been studied under limited storage constraints (Lopez-Paz and Ranzato, 2017; Chaudhry et al., 2019; Aljundi et al., 2019), with learners only allowed to store a subset of incoming data. This constraint has led to many algorithms focusing on identifying a representative data subset (Aljundi et al., 2019; Yoon et al., 2022; Chrysakis and Moens, 2020; Bang et al., 2021; Sun et al., 2022; Koh et al., 2022). 
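To ground the storage-cost claim, here is a back-of-the-envelope sketch in Python. The 2017 price per MB is taken from Table 1 below; the average compressed image size is our illustrative assumption, not a figure from the paper:

```python
# Illustrative storage-cost arithmetic for the CLOC stream.
PRICE_PER_MB = 0.00002        # $/MB, 2017 value from Table 1
N_IMAGES = 39_000_000         # images in the CLOC stream
KB_PER_IMAGE = 100            # assumed average compressed image size

total_mb = N_IMAGES * KB_PER_IMAGE / 1024          # ~3.7e6 MB (~3.7 TB)
print(f"storing the raw stream: ~${total_mb * PRICE_PER_MB:.0f}")  # ~$76

# Storing 256-dimensional float32 features instead is far smaller:
feat_gb = N_IMAGES * 256 * 4 / 1e9                 # ~40 GB
print(f"feature storage: ~{feat_gb:.0f} GB")
```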
Although limited storage is a practical constraint for biological learning agents and offline embodied artificial agents, deep learning-based systems are predominantly compute-constrained, with a high throughput requirement: they must process incoming data points faster than the stream delivers them in order to keep up with it. Cai et al. (2021) shows that even with unlimited storage, the online continual learning problem is hard, as the constraint of limited computation implicitly restricts the effective samples used for training. In this paper, we argue that storing the entirety of the data stream while meeting these requirements is possible. We propose a system based on approximate k-nearest neighbor (kNN) algorithms (Malkov and Yashunin, 2018), which are well-known for their scalability and inherent incremental nature (using only insert and lookup operations). The computational cost of this system has a graceful logarithmic scaling with data size even though it stores the entirety of past data. The further rationales for kNN are threefold. i) With the right representation, the nearest neighbour rule is an effective predictor at scale. ii) It does not forget past data. In other words, if a data point from history is queried again, the query yields the same label. We refer to this as the consistency property. iii) All past data can be compactly stored in low-dimensional feature representations. A critical aspect of the aforementioned system is obtaining an effective feature representation. While feature learning is the standard approach, it is not viable in our setting due to the need to recompute the representation of stored data, implying a quadratic computational cost. Instead, we propose to use existing pretrained features that are based on extensive and diverse datasets and yield robust representations. Remarkably, we find that even pretrained representations trained on the rather small-scale ImageNet1K (Caron et al., 2021) can provide effective features on datasets like Continual YFCC-100M (CLOC) (Cai et al., 2021), which are comparatively more complex and far larger in size. Additionally, our approach overcomes a significant limitation of existing gradient-descent-based methods: the inability to learn from a single example. While using one gradient per example (i.e., batch size 1) is computationally infeasible for deep networks on large-scale datasets, our method efficiently stores a single feature extracted from the pretrained model in memory. The kNN mechanism immediately utilizes this data point, enabling rapid adaptation. We argue that the capacity to adapt to a single example is essential for truly online operation, allowing our simple method to outperform existing continual learning baselines. **Problem formulation.** We formally define the online continual learning (OCL) problem following Cai et al. (2021). In classification settings, we aim to continually learn a function \(f\colon\mathcal{X}\to\mathcal{Y}\), parameterized by \(\theta_{t}\) at time \(t\). OCL is an iterative process where each step consists of a learner receiving information and updating its model. Specifically, at each step \(t\) of the interaction, 1. _One_ data point \(x_{t}\sim\pi_{t}\) sampled from a non-stationary distribution \(\pi_{t}\) is revealed. 2. The learner makes the prediction \(\hat{y}_{t}=f(x_{t};\theta_{t})\) using a compute budget, \(B_{t}^{pred}\). 
\begin{table} \begin{tabular}{l r r r r} \hline \hline & 1987 & 1997 & 2007 & 2017 \\ \hline Storage Cost & & & & \\ Unit price (\$) & 30K & 2K & 80 & 49 \\ Unit capacity & 180MB & 9GB & 250GB & 2TB \\ \$/MB & 83.33 & 0.22 & 0.0003 & 0.00002 \\ Cost of storing YFCC (\$) & 350M & 920K & 1250 & 83 \\ \hline Compute Cost & & & & \\ Training ER on YFCC & & & & \\ \hline \hline \end{tabular} \end{table} Table 1: The cost of storing data has decreased rapidly, allowing the storage of a large dataset for a negligible cost compared to the cost of computation. 3. Learner receives the true label \(y_{t}\). 4. Learner updates the model \(\theta_{t+1}\) using a compute budget, \(B^{learn}_{t}\). We evaluate the performance of the algorithm using the metrics of forward transfer (adaptability) and backward transfer (information retention), as given in Cai et al. (2021). A critical aspect of OCL is the budget in the second and fourth steps, which limits the computation that the learner can expend. A common choice in past work is to impose a fixed limit on storage and computation per operation (Cai et al., 2021). We remove the storage constraint and argue that storing the entirety of the data is cost-effective as long as its impact on computation is controlled. We relax the fixed computation constraint to a logarithmic constraint. In other words, we require that the computation time per operation fit within \(B^{pred}_{t},B^{learn}_{t}\sim\mathcal{O}(\log t)\). This construction results in the total cost scaling as \(\mathcal{O}(n\log n)\) with the amount of data.1 Footnote 1: Although we believe \(\mathcal{O}(n\log n)\) complexity is not prohibitive for practical applications, a further reduction (i.e., \(\mathcal{O}(n\log\log n)\)) can be obtained by carefully introducing additional levels of hierarchy for astronomically large \(n\). ## 2 Related Work **Formulations.** Parisi et al. (2019) and De Lange et al. (2020) have argued for improving the realism of online continual learning benchmarks. The earliest formulations (Lopez-Paz and Ranzato, 2017) worked in a task-incremental setup, assuming access to which subset of classes a test sample is from. The subsequent mainstream formulation (Aljundi et al., 2019) required models to predict across all seen classes at test time, with progress focused on the train-time sample ordering (Bang et al., 2021; Koh et al., 2022). However, Prabhu et al. (2020) highlighted the limitations of these formulations by achieving good performance despite not using any unstored training data. The latest works (Hu et al., 2022; Cai et al., 2021; Lin et al., 2021) overcome this limitation by testing the capability for rapid adaptation to the next incoming sample and eliminate data-ordering requirements by simply using timestamps of real-world data streams. Our work builds on the latest generation of formulations (Cai et al., 2021). Unlike Cai et al. (2021), we perform one-sample learning; in other words, we entirely remove the concept of a task by processing the incoming stream one sample at a time, in a truly online manner. Some parallel works stress computation as the key limited resource (Harun et al., 2023). However, we further remove the storage constraint, which is the key to eliminating degenerate solutions like GDumb (Prabhu et al., 2020). 
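To make the one-sample protocol and the logarithmic budget above concrete, here is a minimal sketch of the kind of learner studied in this paper: a frozen feature extractor feeding an incrementally updated approximate-kNN index. The sketch assumes the hnswlib package (the released code uses NMSlib and FAISS); `extract_features` is a placeholder for any fixed pretrained backbone.

```python
import numpy as np
import hnswlib

DIM = 256
index = hnswlib.Index(space='l2', dim=DIM)
index.init_index(max_elements=1_000_000, ef_construction=200, M=25)
index.set_ef(200)          # search-time accuracy/speed trade-off
labels = []                # label of the i-th inserted feature

def ocl_step(x_t, y_t, extract_features):
    """One interaction of the OCL protocol: predict, then learn."""
    z_t = extract_features(x_t).astype(np.float32).reshape(1, -1)
    if labels:                                   # step 2: predict
        ids, _ = index.knn_query(z_t, k=1)       # O(log n) lookup
        y_hat = labels[ids[0][0]]
    else:
        y_hat = None                             # cold start
    # step 3: y_t is revealed; step 4: update is one O(log n) insert
    if len(labels) >= index.get_max_elements():
        index.resize_index(index.get_max_elements() * 2)  # amortized growth
    index.add_items(z_t, np.array([len(labels)]))
    labels.append(y_t)
    return y_hat
```

Both the lookup and the insert are logarithmic in the number of stored points, so the per-step budgets \(B^{pred}_{t},B^{learn}_{t}\sim\mathcal{O}(\log t)\) are met by construction.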
**Methods.** Traditional methods for adapting to concept drift (Gama et al., 2014) include a variety of approaches based on SVMs (Laskov et al., 2006; Zheng et al., 2013), random forests (Gomes et al., 2017; Ristin et al., 2015; Mourtada et al., 2019), and other models (Oza and Russell, 2001; Mensink et al., 2013). They are the most similar to our proposed method and offer natural incremental addition and querying properties, but they have not been leveraged in the deep learning-based continual learning literature (Ostapenko et al., 2022). Note that this direction is explored in works like Streaming LDA (Hayes and Kanan, 2020) and ExStream (Hayes et al., 2019). However, these approaches perform worse than partial training of a deep network (Hayes et al., 2020; Ostapenko et al., 2022). In contrast, we outperform fully finetuned deep networks with unrestricted access to past samples and far larger computational budgets. The (online) continual learning methods designed for deep networks are typically based on experience replay (Chaudhry et al., 2019) and change a subset of the three aspects summarized in Table 2: (i) the loss function used for learning, (ii) the algorithm to sample points into the replay buffer, and (iii) the algorithm to sample a batch from the replay buffer. Methods to sample points into the replay buffer include GSS (Aljundi et al., 2019), RingBuffer (Chaudhry et al., 2019), class-balanced reservoir (Chrysakis and Moens, 2020), greedy balancing (Prabhu et al., 2020), rainbow memory (Bang et al., 2021), herding (Rebuffi et al., 2017), coreset selection (Yoon et al., 2022), information-theoretic reservoir (Sun et al., 2022), and samplewise importance (Koh et al., 2022). These approaches do not apply to our setting because we simply remove the storage constraint. Approaches to sampling batches from the replay buffer include MIR (Aljundi et al., 2019), ASER (Shim et al., 2021), and AML (Caccia et al., 2022). These require mining hard negatives or performing additional updates for importance sampling over the stored data, which simply do not scale to large-scale storage as in our work. In our experiments, we compare with several of these approaches, including ER as proposed in Cai et al. (2021), scaled appropriately to our setting. Additionally, our kNN-based method offers attractive properties for OCL, like never forgetting previously seen samples, which are not possible in most previous parametric approaches, including the deep OCL methods presented above. **Pretrained representations.** Pretrained representations (Yuan et al., 2021; Caron et al., 2021; Chen et al., 2021; Ali et al., 2021) have been utilized as initializations for continual learning, but in settings with harsh constraints on memory (Wu et al., 2022; Ostapenko et al., 2022). Inspired by Ostapenko et al. (2022), we additionally explore the effects of different pretrained representations, along with a comparison among traditional classifiers like logistic regression and online SVMs, and discuss interesting findings. Another emerging direction for using pretrained models in continual learning has been prompt-tuning, as it produces accurate classifiers while being computationally efficient (Wang et al., 2022; Chen et al., 2023). However, Janson et al. (2022) show that simple traditional classification models outperform these complex prompt tuning strategies by significant margins. 
Lastly, the direction most similar to ours consists of methods that use kNN classifiers alongside deep networks for classification (Nakata et al., 2022; Iscen et al., 2022). We operate in a very different setting with no storage constraints and online learning, illustrate the effectiveness of weaker pretrained models trained on ImageNet1K when testing on large-scale datasets, and show that approximate kNN can achieve a high accuracy-performance tradeoff at scale. Additionally, Nakata et al. (2022) imposes restrictions on the stored samples of the compared methods but uses all past data itself, allowing comparatively higher performance using kNN. ## 3 Approach We utilize pre-trained features as representations and the k-nearest neighbor rule as a learning algorithm. Hence, our algorithm is rather simple. The key to operationalizing our algorithm is utilizing an efficient memory structure that satisfies the cost constraints. We refer to our algorithm as Adaptive Continual Memory (ACM) and refer to the kNN index as Memory. At each time step, our learner performs the following steps: 1. _One_ data point \(x_{t}\sim\pi_{t}\) sampled from a non-stationary distribution \(\pi_{t}\) is revealed. 2. Learner extracts features \(z_{t}=f(x_{t};\theta_{t})\). 3. Learner retrieves nearest neighbors \(\mathcal{N}_{t}=\texttt{Memory.Retrieve}(z_{t},k)\). 4. Learner makes the prediction \(\hat{y}_{t}=\texttt{majority-vote}(\mathcal{N}_{t})\).2 Footnote 2: We choose \(k=1\) but a larger \(k\) can be chosen. 5. Learner receives the true label \(y_{t}\). 6. Learner inserts new data: Memory.Insert\((z_{t},y_{t})\). We summarize this approach in Figure 1. Before presenting further implementation details, we discuss two advantages of this method. \begin{table} \begin{tabular}{l c c c c} \hline \hline Works & MemSamp & BatchSamp & Loss & Other Cont. \\ \hline ER (Base) & Random & Random & CEnt & - \\ GSS (Aljundi et al., 2019) & GSS & Random & CEnt & - \\ MIR (Aljundi et al., 2019) & Reservoir & MIR & CEnt & - \\ ER-Ring (Chaudhry et al., 2019) & RingBuf & Random & CEnt & - \\ GDumb (Prabhu et al., 2020) & GreedyBal & Random & CEnt & MR \\ HAL (Chaudhry et al., 2021) & RingBuf & Random & CEnt & HAL \\ CBRS (Chrysakis and Moens, 2020) & CBRS & Weighting & CEnt & - \\ CLIB (Koh et al., 2022) & ImpSamp & Random & CEnt & MR, AdO \\ CoPE (De Lange and Tuytelaars, 2021) & CBRS & Random & PPPL loss & - \\ CLOC (Cai et al., 2021) & FIFO & Random & CEnt & AdO \\ InfoRS (Sun et al., 2022) & InfoRS & Random & CEnt & - \\ OCS (Yoon et al., 2022) & OCS & Random & CEnt & - \\ AML (Caccia et al., 2022) & Reservoir & PosNeg & AML/ACE & - \\ \hline \hline \end{tabular} \end{table} Table 2: Recent online continual learning approaches, with key contributions in red. Most methods focus on better techniques for sampling into storage, while in our framework there is no storage constraint. **Fast adaptation.** Suppose the learner makes a mistake in a given time step. If the same data point is received in the next time step, the learner will produce the correct answer. By leveraging nearest neighbors, we enable the system to incorporate new data immediately and locally modify its answers in response to as little as a single datapoint. Such fast adaptation, a core desideratum in online continual learning, is difficult with pure gradient descent and is not presently a characteristic of deep continual learning systems. **Consistency.** Consider a hypothetical scenario in which a data point is queried at multiple time instances. 
Our learner will never forget the correct label for this data point and will consistently produce it when queried, even after long time spans. While learning and memory are much more general than rote memorization, producing the correct answer on previously seen data is an informative sanity check. For comparison, existing continual learning systems forget a large fraction of previously seen datapoints even with a minimal delay (Toneva et al., 2019). ### Efficient Memory Implementation In the algorithm presented above, feature extraction (step 2) and prediction (step 4) have a fixed overhead cost. However, nearest-neighbour retrieval (step 3) and inserting new data (step 6) can have high computational costs if done naively. Fortunately, the literature on approximate k-nearest neighbours (Shakhnarovich et al., 2006) has demonstrated high performance while scaling down the computational complexity from linear \(\mathcal{O}(n)\) to logarithmic \(\mathcal{O}(\log n)\), where \(n\) is the number of data points in memory. We leverage approximate kNN to allow our proposed approach to operate on the entirety of the data received so far under the constraint of logarithmic computational complexity, with approximate guarantees on preserving the two properties discussed above. We use the HNSW algorithm, which gives high accuracy in minimal time according to the benchmarking results of Aumuller et al. (2020). Furthermore, Figure 1 presents the wall-clock time of the overhead cost imposed by ACM on datasets of 40 million samples, to practically ground the logarithmic computational complexity. We observe that the computational overhead while using ACM scales logarithmically, with a maximum time of \(\sim\)5ms. In comparison, the time required for the classification of one sample by deep models like ResNet50 is \(\sim\)10ms on an Intel 16-core CPU. Note that the total inference cost of ACM would be \(\sim\)15ms, counting the constant cost of feature extraction, when using 40 million samples in storage. 

Figure 1: (a) Adaptive Continual Memory (ACM) performs Memory.Retrieve and Memory.Insert operations on features of new incoming samples, extracted by a static, pretrained deep network. (b) Wall clock time overhead of ACM Memory after feature extraction (x-axis is log-scaled) on a 16-core i7 CPU server. The longest observed overhead time using 256 dim embeddings is 5ms on 40 million samples in memory. 

## 4 Experiments **Datasets.** We benchmark using subsets of the Google Landmarks V2 and YFCC-100M datasets. Both are ordered by timestamps of upload date-time, with the task being online image classification: predicting the label of the incoming image. _i) Continual YFCC100M (CLOC)_: The subset of YFCC100M which has date and time annotations (Cai et al., 2021). We follow their dataset splits. We order the images by timestep and iterate over 39 million online timesteps, one image at a time, with evaluation on the next image in the stream. In contrast, CLOC uses a more restricted protocol assuming 256 images per timestep and evaluates on images uploaded by a different user in the next batch of samples. _ii) Continual Google Landmarks V2 (CGLM)_: We use a subset of the Google Landmarks V2 dataset (Weyand et al., 2020) as our second benchmark. We use the train-clean subset, filtering it further based on the availability of upload timestamps on Flickr. We filter out the classes that have fewer than 25 samples. 
We uniformly at random sample 10% of the data for testing and then use the first 20% of the remaining data as a hyperparameter tuning set, similar to CLOC. We get \(430K\) images for continual learning with 10,788 classes. We use the same hyperparameters as obtained on CLOC, as continual learning algorithms should work with different data distributions. **Metrics.** We follow Cai et al. (2021), using their average online accuracy until the current timestep \(k\) (\(a_{k}\)) as a metric for measuring rapid adaptation (forward transfer), given by \(a_{k}=\nicefrac{{1}}{{k}}\sum_{t=1}^{k}\mathds{1}_{y_{t}=\hat{y}_{t}}\), where \(\mathds{1}_{(\cdot)}\) is the indicator function. Similarly, we measure information retention (preventing catastrophic forgetting) after finishing training by computing the average accuracy historically. Formally, information retention for \(i\) timesteps (\(IR_{i}\)) at time \(T\) is defined as \(IR_{i}=\nicefrac{{1}}{{i}}\sum_{t=T-i}^{T}\mathds{1}_{y_{t}=\hat{y}_{t}}\). **Approaches.** We compare with a diverse range of methods, all of which are computationally capped at the budget of one training pass over the CLOC dataset, termed the _fast stream_ in the parallel work of Ghunaim et al. (2023). We take their top two performing methods and compare their performance on the CGLM dataset. We use the same hyperparameters as Ghunaim et al. (2023) for all methods unless specified otherwise; please refer to it for details about the hyperparameters. Note that, unlike the Ghunaim et al. (2023) setup, methods can use the full set of stored samples with no storage restrictions. Due to computational restrictions, we batch incoming samples with a size of 64 for CGLM and 128 for CLOC. The training batch size is double the size of the incoming samples, with the other half of the batch uniformly selected from all past stored data. Each resultant model is used for predicting the next 64/128 samples in the CGLM/CLOC datasets, respectively. We describe each method below: _i) ER_ (Cai et al., 2021): ER performs online continual learning using a learning rate of 0.0005. We use the vanilla version, as the reduction in batch size is responsible for nearly all of the performance gain amongst the components tested (PoLRS and ADRep). _ii) MIR_ (Aljundi et al., 2019): This adds MIR as the mechanism for selecting which samples to train on, instead of uniform selection, for training the base ER model. However, it is used in a task-free fashion, as there are no task boundaries in the tested datasets. _iii) ACE_ (Caccia et al., 2022): This replaces the loss function in the base ER model from cross-entropy to the ACE loss, reducing the interference of classes not present in the current batch. It is done in a task-free fashion, as there are no task boundaries in the tested datasets. _iv) RWalk (Chaudhry et al., 2018)_: This adds a regularization term based on a combination of the Fisher information matrix and optimization-path-based importance scores. We consider each incoming batch a new task, as there are no specified task boundaries. Alongside these existing methods, we evaluate two approaches that are computationally far cheaper, as they involve no training. _v) Blind Clf-k_ (Cai et al., 2021): This is a baseline classifier with no access to the images, predicting the label of the current datapoint as the mode of the recent \(k\) datapoints, with a memory requirement of \(k\) (\(k\)=1 for CGLM and \(k\)=25 for CLOC). 
_vi) ACM (Ours)_: For fairness, ACM uses an XCIT DINO model pre-trained on ImageNet1K (Caron et al., 2021) with performance on ImageNet similar to the ResNet50-V2 model used in the above methods. We replace the FC layer with a two-layer MLP, whose first layer projects the features to 256 dimensions and whose second layer performs classification. We train this two-layer MLP on the hyperparameter tuning set for a few epochs. We extract the features from the 256-dimensional embedding space to avoid the curse of dimensionality in kNN. We choose HNSW-kNN based on the benchmarking results of Aumuller et al. (2020). We use NMSlib (Malkov and Yashunin, 2018), with default hyperparameters of k=1 (nearest neighbour), ef=200 and m=25, for the rapid adaptation evaluation, and use FAISS-based kNN for the backward transfer evaluation. ### Main Result: Evaluating ACM **Online adaptation.** We compare the average online accuracy over time of ACM to current state-of-the-art approaches on CGLM and CLOC in Figure 2. We observe that ACM significantly outperforms past approaches by enabling efficient learning with a memory-based approach. Moreover, the pre-trained features are universal enough to enable high performance. Note that ACM has nearly no training compute cost compared to other methods, resulting in not only better accuracy but also significant cost effectiveness. **Information retention.** We compare the cumulative average performance of ACM to current state-of-the-art approaches on CGLM and CLOC in Figure 2. We observe that ACM outperforms existing methods on both datasets. Interestingly, ACM shows a nearly flat cumulative accuracy even on the CLOC dataset with 39 million samples, illustrating the benefit for backward transfer of utilizing past samples directly instead of encoding them in DNN parameters. We notice that, compared to Ghunaim et al. (2023), removing the memory restriction of 40,000 samples did not significantly change the performance of the compared methods, indicating that online continual learning with limited computation is hard even without storage constraints. **Take-away messages.** We achieve a significantly better tradeoff between rapid adaptation and information retention, illustrating the overwhelming benefit of storing information across time on top of a good initialization, instead of trying to modify weights across time, which causes catastrophic forgetting. Notably, it is surprising that ImageNet1K-pretrained representations scale well to data streams like subsets of YFCC100M, which are significantly bigger and more challenging than ImageNet1K (Goyal et al., 2019), while enabling rapid adaptation (in terms of sample complexity) to the distribution shifts over time. ### Studying the ACM Model **Ablating the contribution of features from kNN.** The main contribution of ACM is the adaptive memory formed using the kNN classifier. Here, we try to answer the question: _"Is the rapid adaptation property due to the proposed memory or due to the quality of the feature representations?"_ In order to test this, we use the classifier in the MLP as an alternative way to classify images. 
This eliminates kNN from the proposed ACM system with minimal changes. If kNN is the primary reason for the high performance, then we should observe a significant decrease in online accuracy. We test this across three pretrained models with similar accuracy on the ImageNet1K dataset: DINO (Caron et al., 2021), ResNet50 (V2) (Vryniotis et al., 2020) trained on ImageNet1K, and a ResNet50 (I1B) trained on Instagram1B and finetuned on ImageNet1K (Mahajan et al., 2018), all evaluated on the CGLM dataset. We present our results in Figure 3 (left), which shows that using a linear classifier instead of a kNN for classification results in far lower performance. Interestingly, the linear classifier shows a downward drift, losing upwards of 10% accuracy, attributable to distribution shift in the dataset. On the contrary, the kNN performance improves over time consistently across various architectures. We see that XCIT performs significantly better than the ResNet50 (I1B) models when used with a kNN, despite being significantly worse than ResNet50 (I1B) both when using a linear classifier on the CGLM dataset and in ImageNet1K classification performance. _Conclusion._ kNN is the primary reason for the rapid adaptation to distribution shifts and is primarily responsible for the online learning performance, consistently across architectures. Simply having a good feature representation is not enough to tackle online continual learning. Lastly, the ResNet50 architecture is a poor fit for ACM. **Choice of kNN vis-a-vis other online classifiers.** Now that we know that the online classifier is important for rapid adaptation, we study the choice of the online classifier. The motivation behind using kNN is that it avoids the failure modes in optimization that exacerbate catastrophic forgetting, such as the lack of the consistency property. On the other hand, there are parametric alternatives. In this study, we ask the question: _how effective is kNN compared to parametric online classifiers like logistic regression or SVM?_ We compare the performance and time efficiency of online learning on the CGLM dataset using XCIT DINO features, comparing kNN with widely used traditional models like a logistic classifier and an SVM, implemented using the efficient online learning library VowpalWabbit. We present results in Figure 3 (right). We see that kNN achieves significantly superior classification performance compared to popular online learning algorithms using the same features, while being nearly two orders of magnitude faster, enabling its use on large datasets like CLOC. _Conclusion._ kNN is a powerful non-linear classifier which can act like a knowledge base, learning from relevant memories ranging from fresh mistakes to those indexed long ago. Moreover, it is incredibly fast, possibly due to efficient implementations. 

Figure 3: **Left:** contribution of kNN beyond the information encoded in feature representations, measured by predicting using the linear classifier in the MLP instead of the kNN. The MLP classifier is frozen and only used for prediction. **Right:** comparison of kNN with other online classifiers in performance and speed. Online classifiers learn weights given the 256-dimensional features. 

**Ablating the contribution of the 2-layer MLP.** Finally, we ablate the contribution of the 2-layer MLP that was trained on the hyperparameter tuning set. We compare using the features before the FC layer of the network as-is, instead of using the 256-dimensional embedding layer of the trained MLP. 
We present the results across the same three models in Figure 4 (left). Comparing the performance with and without the MLP, we observe that the performance of the XCIT DINO model has a small drop of 5%, illustrating that tuning on the hyperparameter set causes minor improvements in performance. However, both ResNet50 models face large drops of over 30% in online accuracy. We show that this is attributable to the curse of dimensionality: the large feature dimension of 2048 in the ResNet50 architecture, compared to 512 dimensions in XCIT, causes a major drop in performance. Comparing models with the MLP, we surprisingly observe that ResNet50-I1B performs worse than the XCIT-DINO model, despite ResNet50-I1B arguably having robust and generalizable features. We conclude that the ResNet50 architecture is a poor fit for the ACM method. _Conclusion._ Models with high-dimensional embeddings perform much more poorly in combination with a kNN, despite better representational power, due to the curse of dimensionality. We use the XCIT-DINO model instead of ResNet50-V2, despite its poorer performance on ImageNet1K, as ResNet50 models seem to be a poorer fit with kNN despite measures to equalize performance. **Ablating the effect of the embedding size.** Lastly, since the embedding size is critical, we explore to what degree we can decrease the embedding dimension without impacting performance significantly. Since XCIT features are 512-dimensional, we explore embedding sizes of 512, 256 and 128 and benchmark using the above three models for robust conclusions. We present the results in Figure 4 (right). First, we observe that decreasing the embedding dimension to 256 results in a minimal drop in accuracy across all three models but reduces the computational costs by half, as shown in Figure 1(b). Further reduction in the embedding dimension leads to a significant loss of performance; hence 256 dimensions achieves the best tradeoff. ## 5 Discussion We would like to start our discussion with the limitations of our approach. It is important to state that our method is not applicable to many interesting application scenarios where pretrained features are not available or storage is truly limited, such as end-devices and embodied agents. Although we believe this is a strong limitation, we do not think it invalidates the importance and impact of the settings our method operates in. Our setting, along with its constraints, is applicable in a broad set of conditions, including but not limited to cloud-based systems using natural language and images, and we believe that studying online continual learning in these settings is important. 

Figure 4: **Left:** ablating the effect of using representations from a learned MLP compared with off-the-shelf features. **Right:** comparing across small embedding dimensions to gain higher efficiency without losing accuracy. 

While the memory restriction can also be motivated by privacy considerations (Farquhar and Gal, 2018), recent progress in machine unlearning suggests that simply preventing access to old data is inadequate to fulfill any reasonable privacy requirements (Cao and Yang, 2015). We believe that privacy-constrained continual learning should be studied with a dedicated problem definition using appropriate application domains and benchmarks. Finally, we believe it is important to ground our system and its theoretical cost in a practical setting, to guide practitioners and illustrate the applicability of our proposed system. Consider a real-time data stream over time. 
Given that our representation size is 256, the type is float32, and CLOC has 39M datapoints, the total storage would be roughly 40GB (256 dimensions \(\times\) 4 bytes \(\times\) 39M samples), which would cost about $10 per year on GCP cloud storage at a rate of $0.02 per GB per month. Moreover, the total computation time (inference and training cost) would support real-time operation at 30 frames per second, without any additional optimization, for 71 years, extrapolating the logarithmic scaling of Figure 1(b) up to 20ms. ## 6 Conclusion This work considers online continual learning with no restrictions on storage. Our reformulation follows from a first-principles analysis of modern computing systems' economic and computational characteristics. We proposed an adaptive continual memory that stores the entirety of the data, performs per-sample adaptation at every timestep, and retains computational efficiency. When evaluated on large-scale OCL benchmarks, our system yields significant improvements over existing methods. Our approach is computationally cheap and scales gracefully to large-scale datasets. ## 7 Acknowledgements This work is supported in part by a UKRI grant: Turing AI Fellowship EP/W002981/1 and an EPSRC/MURI grant: EP/N019474/1. We would also like to thank the Royal Academy of Engineering and FiveAI. Ameya produced this work as part of his internship at Intel Labs. A special thanks to Hasan Abed Al Kader Hammoud for their help in experiments.
2306.10252
Thermodynamics of $f(R)$ Gravity: The Double Well Potential Case
In this work we further extend the analysis of $f(R)$ theories of gravity in the metric formalism under the approach of a Thermodynamics analogy, proposed in arXiv:1911.04830v3. Here we assume a double-well inflationary potential in the Einstein frame and obtain a parametric form of $f(R)$ in the corresponding Jordan frame. The whole Thermodynamics picture then follows: an equation of state, binodal and spinodal curves, phase transition, critical quantities (pressure, volume and temperature), entropy jumps, specific-heat divergence (and the corresponding critical exponent) and a butterfly catastrophe.
C. D. Peralta, S. E. Jorás
2023-06-17T04:03:31Z
http://arxiv.org/abs/2306.10252v1
# Thermodynamics of \(f(R)\) Gravity: The Double Well Potential Case ###### Abstract In this work we further extend the analysis of \(f(R)\) theories of gravity in the metric formalism under the approach of a Thermodynamics analogy, proposed in [1]. Here we assume a double-well inflationary potential in the Einstein frame and obtain a parametric form of \(f(R)\) in the corresponding Jordan frame. The whole Thermodynamics picture then follows: an equation of state, binodal and spinodal curves, phase transition, critical quantities (pressure, volume and temperature), entropy jumps, specific-heat divergence (and the corresponding critical exponent) and a butterfly catastrophe. ###### Contents * 1 Introduction * 2 Conformal Transformation and the Inverse Problem * 3 Numerical Analysis for the DW potential * 3.1 Einstein Frame (EF) * 3.2 Jordan Frame (JF) * 4 Thermodynamics of \(f(R)\) * 5 Conclusions ## 1 Introduction In this paper we focus on \(f(R)\) theories [2; 3; 4] -- nonlinear functions of the Ricci scalar \(R\) defined, as usual, in the Jordan Frame (JF). We follow the metric formalism, which features an extra degree of freedom (d.o.f.), as we will briefly review. It is well known that, upon a suitable conformal transformation (as we will also recall below), the modified gravitational Lagrangian assumes the usual Einstein-Hilbert form and the extra d.o.f. is materialized as a scalar field -- for obvious reasons, this is the so-called Einstein Frame (EF). See, for instance, Ref. [5] for a discussion on how to determine the "true physical frame". Here, we will follow the same path, but in the opposite direction: we start from a double well potential (DW) \(V_{E}(\phi)=(\phi^{2}-a^{2})^{2}+\Lambda\) with an _ad-hoc_ Cosmological Constant \(\Lambda\) in the EF (with also standard slow-roll initial conditions) and investigate the corresponding \(f(R)\) in the JF. The introduction of \(\Lambda\) will lead us to a full thermodynamical approach to \(f(R)\) theories, shedding some light on the evolution of the system in both frames -- interesting results are still obtained even for the plain \(\Lambda=0\) case. We will now briefly review the aforementioned conformal transformation and the mapping from the quantities defined in one frame to their corresponding _Doppelgangers_ in the other frame. ## 2 Conformal Transformation and the Inverse Problem From now on, the super(sub)scripts "\(E\)", "\(J\)" indicate the frame (Einstein and Jordan, respectively) where the quantity is defined. We drop the subscript in \(R_{J}\equiv R\) (and in \(\phi_{E}\equiv\phi\) -- see below) to avoid excessive cluttering of the equations. We write the modified gravitational Lagrangian in the JF (in the vacuum, i.e., no matter/radiation fields) as \[L_{J}=\sqrt{-g^{J}}f(R), \tag{1}\] where \(g^{J}\equiv\det(g^{J}_{\mu\nu})\) is the determinant of the metric in the JF. General Relativity (GR) with a cosmological constant \(\Lambda\) would correspond to a linear \(f(R)=R-2\Lambda\), and vice-versa. The standard variational procedure in the metric formalism yields fourth-order equations for the metric [6] \[R_{\mu\nu}f^{\prime}-\frac{1}{2}g^{J}_{\mu\nu}f+g^{J}_{\mu\nu}\,\Box f^{ \prime}-\nabla_{\mu}\nabla_{\nu}f^{\prime}=0, \tag{2}\] where \(f^{\prime}\equiv{\rm d}f/{\rm d}R\). 
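As a quick consistency check of Eq. (2), worth making explicit here, consider the linear choice \(f(R)=R-2\Lambda\) mentioned above. Then \(f^{\prime}=1\), so \(\Box f^{\prime}=\nabla_{\mu}\nabla_{\nu}f^{\prime}=0\), and Eq. (2) collapses to \[R_{\mu\nu}-\frac{1}{2}g^{J}_{\mu\nu}\left(R-2\Lambda\right)=R_{\mu\nu}-\frac{1}{2}g^{J}_{\mu\nu}R+\Lambda\,g^{J}_{\mu\nu}=0,\] i.e., the vacuum Einstein equations with a cosmological constant. The fourth-order character of the theory thus stems entirely from the derivative terms acting on \(f^{\prime}\), which survive only for a nonlinear \(f\).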
One then introduces the new pair of variables \(\{g^{E}_{\mu\nu},p\}\), related to \(g^{J}_{\mu\nu}\) (and to its derivatives) by a conformal transformation from the JF to the EF [7; 8; 9]: \[g^{E}_{\mu\nu}\equiv\Omega^{2}(x^{\alpha})\,g^{J}_{\mu\nu}\,,\quad\mbox{where} \quad\Omega^{2}\equiv p\equiv f^{\prime}(R). \tag{3}\] We now define \(R(p)\) as a solution of the equation \(f^{\prime}[R(p)]-p=0\). This procedure corresponds to a standard Legendre Transformation. As such, the expression \(R(p)\) is uniquely defined as long as \(f^{\prime\prime}\equiv d^{2}f/dR^{2}\) has a definite sign. Nevertheless, it is possible to write a unique expression for \(R(\phi)\) -- see Eq. (7) below -- which holds across the branches where \(f^{\prime\prime}(R)\) has different signs, and yields smooth functions \(R(t)\) and \(\phi(t)\) across the branches. A scalar field \(\phi_{E}\equiv\phi\) (dropping the subscript) is traditionally defined in the EF by \(p\equiv\exp{(\beta\,\phi)}\), with \(\beta\equiv\sqrt{2/3}\). The Lagrangian (1) can then be recast in a more familiar form: \[L_{E}=\sqrt{-g^{E}}\Bigg{[}R_{E}-g^{\mu\nu}_{E}\phi_{,\mu}\phi_{,\nu}-2V_{E}( \phi)\Bigg{]}, \tag{4}\] where \(R_{E}\) is the Ricci scalar obtained from \(g^{E}_{\mu\nu}\). In other words, in the EF, the gravitational dynamics is set by a GR-like term (\(R_{E}\)) and the field \(\phi\) is an ordinary minimally-coupled massive scalar field subject to the potential [8] \[V_{E}(\phi)\equiv\frac{1}{2p^{2}}\Big{\{}pR[p(\phi)]-f[R(p(\phi))]\Big{\}} \tag{5}\] which is completely determined by the particular \(f(R)\) chosen. In the present work we start by examining the inverse problem: from a scalar field \(\phi\) and its potential \(V_{E}(\phi)\), we map \(L_{E}\) in Eq. (4) onto the corresponding \(L_{J}\) in Eq. (1). Following a previously established procedure [8], one arrives at the following parametric expressions: \[f(\phi) ={\rm e}^{2\beta\phi}\left[2V_{E}(\phi)+2\beta^{-1}\frac{{\rm d} V_{E}(\phi)}{d\phi}\right]\quad\mbox{and} \tag{6}\] \[R(\phi) ={\rm e}^{\beta\phi}\left[4V_{E}(\phi)+2\beta^{-1}\frac{{\rm d} V_{E}(\phi)}{d\phi}\right]. \tag{7}\] We will apply the above equations to the DW potential for a scalar field, to which we also add an _ad hoc_ Cosmological Constant \(\Lambda\): \[V_{E}(\phi)\equiv\frac{m_{\phi}^{2}}{8a^{2}}\,(\phi^{2}-a^{2})^{2}+\Lambda, \tag{8}\] where \(a\) is the vacuum expectation value, which rescales the effective cosmological constant in the JF (see discussion below). One might argue that the insertion of \(\Lambda\) goes completely against the reasoning of modifying GR but, for now, \(\Lambda\) is written just for the sake of completeness. As we will see later on, it will turn out to be a key ingredient for the thermodynamic interpretation. Besides, here we focus on the primordial universe, where the accelerated expansion is _not_ generated by a \(\Lambda\)-like term. Still, even the standard case \(\Lambda=0\) yields very interesting results. Eqs. (6) and (7) then yield the corresponding parametric form of \(f(R)\): \[f(\phi) =e^{2\beta\phi}\left[2\left(\frac{m_{\phi}^{2}\left(\phi^{2}-a^{2} \right)^{2}}{8a^{2}}+\Lambda\right)+\frac{m_{\phi}^{2}\phi\left(\phi^{2}-a^{2} \right)}{a^{2}\beta}\right] \tag{9}\] \[R(\phi) =e^{\beta\phi}\left[4\left(\frac{m_{\phi}^{2}\left(\phi^{2}-a^{2} \right)^{2}}{8a^{2}}+\Lambda\right)+\frac{m_{\phi}^{2}\phi\left(\phi^{2}-a^{2} \right)}{a^{2}\beta}\right], \tag{10}\] which we plot in Figs. 
1 and 2 for different values of the free parameters \(a\) and \(\Lambda\). In all panels, \(f^{\prime}>0\,\forall R\). The field \(\phi\) and the parameter \(a\) are given in Planck-Mass (\(M_{\rm pl}\)) units; \(R\) and \(\Lambda\) are given in \(M_{\rm pl}^{4}\). We used \(m_{\phi}=1M_{\rm pl}\). Since the height of the central potential barrier \(V_{E}(\phi=0)=m_{\phi}^{2}a^{2}/8\) (obviously) depends on \(a\), there is a critical value \(a_{c}\approx 0.81\) below which the initial mechanical energy of the scalar field \(\phi\) (determined by requiring slow-roll initial conditions) is high enough to allow it to go above the central barrier, and the field will end up oscillating in the second well (lower panels in Fig. 1). We remind the reader that it is not necessary to have a large \(R\) when the density is large (e.g., at early times), as in GR, because there is no algebraic relation between \(R\) and \(T\). Instead, here we have a differential equation, where \(\rho\) is just the source for the evolution of \(R\). Actually, we have no \(\rho\) (so, no source term for \(R\)) and, indeed, \(R\approx 0\) at early times, but it increases during inflation and eventually it oscillates around \(R\approx 0\). In the absence of matter/radiation, \(R\) would identically vanish in GR. The usual constraint on the second derivative of \(f(R)\) -- \(d^{2}f/dR^{2}>0\) -- is necessary so that empty background solutions are stable [2]. In the present work (as well as in Ref. [1]), the system does feature such instabilities, but they are only temporary. 

Figure 1: Parametric plots of \(f(R)\) given by Eqs. (9, 10), showing the effect of the parameter \(a\) with \(\Lambda=0\) fixed. **In the top panels**, the potential barrier is too high and the evolution of the system reproduces the result for a single well [1]. The red-dashed lines show the path that _would_ be followed if the field could go over such barrier. **In the lower panels**, the potential barrier is low enough for the field to reach the second well, and \(f(R)\) then presents this new, second behaviour. 

Figure 2: Parametric plots of \(f(R)\) given by Eqs. (9, 10) for increasing values of the parameter \(\Lambda\), with fixed \(a=0.7\) (for which the field ends at the bottom of the second well). In all plots, there is a branch close to the horizontal axis that can be clearly seen only in the final panel. There are three critical values of \(\Lambda\): **Top panels:** at \(\Lambda\approx-1.05\), a new unstable branch appears with positive concavity, yielding a five-branch structure. **Center panels:** at \(\Lambda\approx 0.4\) the five-branch structure ends and the system has three branches again. **Bottom panels:** at \(\Lambda\approx 21.04\) the three-branch structure turns into a single-branch one. 

## 3 Numerical Analysis for the DW potential ### Einstein Frame (EF) From now on, we will investigate the potential given in Eq. (8) as an inflationary potential in the EF -- initially, we will keep \(a=0.7\) and \(\Lambda=0\), except when necessary for a cleaner picture and noted so. First of all, we have to determine the time evolution of \(R(t)\) and \(\phi(t)\). We recall that throughout this paper there is no matter nor radiation, since the \(\phi\) field is actually a gravitational d.o.f., expressed as a scalar field in the EF. In GR, that would imply \(R=0\,\forall\,t\). In \(f(R)\) theories, on the other hand, \(R\) has a dynamical behavior of its own. Here, it suffices to use \(R[\phi(t)]\) (defined in the JF) from Eq. (10) and \(\phi(t)\) (in the EF) from the standard equation of motion for a scalar field in an expanding homogeneous spacetime: \[\ddot{\phi}(t)+3H(t)\dot{\phi}(t)+V^{\prime}_{E}[\phi(t)]=0, \tag{11}\] where \(V^{\prime}_{E}\equiv dV_{E}/d\phi\) and \(H^{2}(t)=\{\dot{\phi}(t)^{2}/2+V_{E}[\phi(t)]\}/3\). The initial conditions for the numerical solution of Eq. (11) are the standard ones in the slow-roll approximation [10]: \(\phi(0)\approx-100\) and \(\dot{\phi}(0)\approx 116.61\), which correspond to \(R(0)\approx 3.45\times 10^{-28}\) and \(\dot{R}(0)\approx 2.67\times 10^{-28}\).1 We point out that the slow roll is an attractor in the double-well inflation [11], so that the initial conditions do not need to be fine-tuned. Footnote 1: Here \(\phi\) is given in Planck-Mass (\(M_{\rm pl}\)) units, \(R\) is given in \(M_{\rm pl}^{4}\), and \(N=60\) is the number of efolds. In Fig. 3 we plot the time evolution of the \(\phi(t)\) field from Eq. (11). From that piece of information and from Eq. (10), we are able to plot the numerical evolution of the \(R(t)\) field in Fig. 4 (left panel); note the correspondence between its extrema and the cusps in the parametric plot of \(f(R)\) (right panel). For times \(t>t_{4}\), \(\phi\) oscillates around \(\phi=a\), which corresponds to \(R=4\Lambda\exp(\beta a)\) -- we have chosen \(\Lambda=0\) in Fig. 4. 

Figure 3: **(Left Panel)** Numerical solution for \(\phi(t)\times t\) given by Eq. (11), with \(N=60\) efolds, using the potential defined in Eq. (8), with \(m_{\phi}=1\), \(\Lambda=0\) and \(a=0.7\). **(Center and Right Panels)**\(V[\phi(t)]\) is shown in different ranges -- notice the change of scale in both axes -- and the corresponding values of \(\phi\) at the times \(t_{1\to 4}\) mentioned in Fig. 4. As before, the black point \(t_{\rm end}\) marks the end of inflation and the red point marks the center of the last oscillatory phase in the second well (namely, \(\phi=a>0\)). 

Figure 4: **(Left Panel)** Numerical solution for \(R(t)\times t\), obtained from Eqs. (10) and (11), for \(\Lambda=0\) and \(a=0.7\); the first four extremes (two maxima and two minima) are marked with the blue, orange, green and purple points at times \(t_{1}=2.42\), \(t_{2}=3.47\), \(t_{3}=4.97\) and \(t_{4}=8.16\), respectively. **(Right Panel)** Parametric plot of the function \(f[R(t)]\times R(t)\). Notice the first branch, which spreads between the origin \(\{0,0\}\) and the blue point \(t_{1}\), slightly above the horizontal axis. The first four extremes of the function \(R(t)\) correspond to the four peaks of the five-branch system. The red point marks the center of the last oscillatory phase along the fifth branch. In both panels, the moment \(t_{\rm end}\) indicates the end of inflation. 

We plot in Fig. 5, along each of the aforementioned stages, the corresponding equation-of-state parameter for the \(\phi\) field (defined in the EF): \[w_{\phi}(t)\equiv\frac{p_{\phi}(t)}{\rho_{\phi}(t)}\equiv\frac{\frac{1}{2} \dot{\phi}^{2}-V_{E}[\phi(t)]}{\frac{1}{2}\dot{\phi}^{2}+V_{E}[\phi(t)]}, \tag{12}\] and its average over one period \(\mathcal{T}\) (defined in the final oscillatory phase, where its calculation makes sense). There are clearly two distinct phases: the early inflationary period, characterized by \(w_{\phi}\approx-1\), and the dust-like phase, when \(w_{\phi}\) oscillates between \(\pm 1\) and \(\bar{w}_{\phi}=0\), as for the traditional inflaton field in the JF.2 
Footnote 2: At some point, the inflaton field should couple to matter (which is absent in our model from the beginning) to start (p)reheating -- the study of such a phase is beyond the scope of the present paper. ### Jordan Frame (JF) One can describe the evolution of the system along the branches in Figs. 1 and 2, for the case with \(\Lambda=0\) and \(a=0.7\), as follows. The system starts close to the origin and slowly moves along the first branch (close to the horizontal axis), generating an initial inflationary phase (since \(w_{\phi}<-1/3\)). The best fit to \(f(R)\) along this first branch is \(f(R)\approx R^{4.22}\), with **no** GR-like term (\(\propto R\)). The system then quickly sweeps through the second branch (where \(f^{\prime\prime}<0\)) and reaches the third branch (where \(f^{\prime\prime}>0\) once more). The system continues to a fourth branch (where again \(f^{\prime\prime}<0\)) and then oscillates around the origin along the almost-linear fifth branch. On the other hand, from the extra terms in the Einstein equations, one can define a conserved "curvature fluid" whose energy density and pressure are, respectively: \[8\pi G\rho_{c} \equiv\left(f^{\prime}R-f\right)/2-3H\dot{f}^{\prime}+3H^{2}(1-f^ {\prime}) \tag{3.3}\] \[8\pi Gp_{c} \equiv\ddot{f}^{\prime}+2H\dot{f}^{\prime}-(2\dot{H}+3H^{2})(1-f^ {\prime})+(f-f^{\prime}R)/2. \tag{3.4}\] In Fig. 6 we plot the corresponding equation-of-state parameter \(\omega_{c}\equiv p_{c}/\rho_{c}\) (left panel), and \(\rho_{c}(t)\), \(p_{c}(t)\) (right panel), all of them defined in the JF, for \(\Lambda=0\) and \(a=0.7\). In the inflationary phase, the curvature fluid behaves as a cosmological constant (\(\omega_{c}\approx-1\)), as expected, since it is responsible for the accelerated quasi-de Sitter expansion. In the oscillatory phase, on the other hand, \(\omega_{c}\) diverges simply because \(\rho_{c}\) vanishes periodically (when \(\phi(t)=a\), at the bottom of its potential \(V_{E}(\phi)\) -- see Fig. 6, right-hand panel). Nevertheless, there are no divergences of _physical_ quantities. If \(\Lambda\neq 0\), then \(\omega_{c}=\omega_{\phi}=-1\) also in the final stages, as expected. ## 4 Thermodynamics of \(f(R)\) For now, let us associate the Cosmological Constant \(\Lambda\) with an effective temperature \(T\equiv\Lambda\). It is well known [12, 13] that, in a de Sitter-like spacetime, a cosmological constant corresponds to an effective temperature due to the presence of the horizon, just like for a black hole. Figure 5: Equation-of-state parameter (\(w_{\phi}\) and its time average \(\bar{w}_{\phi}\)) for the \(\phi\) field, defined in the EF, as functions of time, for \(\Lambda=0\) and \(a=0.7\). Note that \(\bar{w}_{\phi}\) can only be correctly interpreted in the oscillatory phase, where the period \(\mathcal{T}\) can be defined. 
The effective volume \(V\) is the variable "canonically conjugated" to the effective pressure \(P\), i.e, since \[dG(P,T)=V\cdot dP-S\cdot dT, \tag{12}\] one can define an effective volume as \[V\equiv\left.\frac{\partial G}{\partial P}\right|_{T}=e^{-\beta\phi}, \tag{13}\] which can be inverted, yielding \[\phi=-\frac{1}{\beta}\ln V. \tag{14}\] Equations (10) and (13) allow us to write the equation of state for our non-linear gas, i.e, an expression that relates \(P\), \(V\) and \(T\): \[P=\frac{1}{4a^{2}\beta^{4}V^{2}}\left[m_{\phi}^{2}\left(\ln^{2}V-a^{2}\beta^{2 }\right)^{2}-\left(\ln^{2}V-a^{2}\beta^{2}\right)4m_{\phi}^{2}\ln V+8a^{2} \beta^{4}T\right]. \tag{15}\] The behaviour of \(P(V)\) is shown in Fig. 7 for a couple of values of \(T\), which bears some resemblance to a vdW gas, which does have a stronger similarity to the single-well problem we have studied before [1]3. In spite of such similar curves \(P(V)\), as we will see in a moment, there is a plethora of new phenomena in our gas, such as three critical temperatures. Figure 6: **Left panel:** Equation-of-state parameter \(\omega_{c}\) for the “curvature fluid” in the JF as a function of time. The divergences, all of them non-physical, correspond to \(\rho_{c}=0\), which happens periodically while the field \(\phi\) oscillates around the minimum of its potential \(V_{E}(\phi)\). **Right panel:** Corresponding pressure \(p_{c}\) (red solid curve) and density \(\rho_{c}\) (blue dashed line) for the “curvature fluid”, as a function of time. In **both panels**, \(\Lambda=0\) and \(a=0.7\). The magnitude of the qualitative differences from a standard vdW gas can be easily seen when we plot the spinodal and the binodal (or coexistence) curves, which indicate, respectively, the regions of instability and metastability of the system -- see Fig. 8. The former curve (spinodal) is obtained either from the lateral extrema ("wings") of the Gibbs function (see Fig. 4), i.e, the first four turning points (the global and first three local extrema) of \(R(t)\) or from the extrema of the \(P\times V\) plot, where \(dP/dV=0\). The latter curve (binodal) can also be obtained using two equivalent calculations: from the self-intersecting points of the Gibbs function (plotted as a function of the pressure, for fixed temperature) and from the Maxwell construction, supporting the results from each other. The three _critical points_\(\{P_{c},T_{c},V_{c}\}\) are defined at the crossing of each pair of those curves. For the case of \(\Lambda=0\) and \(a=0.7\), the system features three critical temperatures, namely \(T_{c1}\), \(T_{c2}\) and \(T_{c3}\) shown in Fig. 8. See also Fig. 2 to follow the evolution of the Gibbs function as the temperature (\(\Lambda\)) increases. If we take \(T=0\) (solid orange curve in Fig. 7), the system starts at \(V\to\infty\) in a metastable phase (the binodal region) -- the initial inflationary solution is indeed momentary. Either way, the effective fluid quickly crosses the spinodal curve (the unstable region) and then oscillates around \(P=2T\exp(2\beta a)\) and \(V=\exp(-\beta a)\), indicated by a gray circle. At Figure 8: Phase diagram for the effective model in the \(\{P,T\}\)-plane for \(a=0.7\). The binodal curve (solid blue with gaps due to numerical errors), ending at the three critical points \(T_{c_{1}}\), \(T_{c_{2}}\), \(T_{c_{3}}\) in red, green, and blue dots respectively. The spinodal curves are shown as solid red lines. 
The right-hand panel is a zoom into the rightmost part of the phase diagram (not shown in the former panel); note the different axis range. Figure 7: \(P\times V\) curves of the effective model in the \(\{P,V\}\)-plane, with \(a=0.7\) and seven temperatures, including the three critical temperatures. The dashed black line represents the spinodal curve, the dotted red lines are the binodal curves. It is worth noting that the two panels have different scales. this temperature, the system ends exactly on the binodal curve. For higher temperatures, though, the system settles down above the binodal line, i.e., in a stable configuration. One can also calculate the Helmholtz energy \[F(T,\phi)\equiv G-P\cdot V=\frac{e^{\beta\phi}}{4a^{2}}\left(a^{4}m_{\phi}^{2}+a^ {2}\left(8T-2m_{\phi}^{2}\phi^{2}\right)+m_{\phi}^{2}\phi^{4}\right)-2Te^{a \beta}, \tag{21}\] from which one can define the entropy as \[S(T,\phi)\equiv-\left.\frac{\partial F}{\partial T}\right|_{\phi}=\frac{1}{2 }\left(4e^{a\beta}-4e^{\beta\phi}\right). \tag{22}\] One can then realize that the specific heat at constant volume vanishes, since \(C_{V}\equiv T\cdot\partial S/\partial T|_{V}=0\,\forall T\). Such a feature is not unusual: it has already been found in studies of the thermodynamics and phase transitions of black holes [14]. Another important feature is the sudden change in entropy, from \(S(\phi\rightarrow-\infty)=2\exp\left(\beta a\right)\) to \(S(\phi=a)=0\), marking the release of latent heat, just as expected in an ordinary first-order phase transition (as will be confirmed below by the \(C_{P}\) behaviour). Such a spontaneous decrease in entropy correctly indicates that the gravitational sector described in this paper is incomplete. We remind the reader that we have completely neglected the matter sector from the beginning, i.e., there is no energy-momentum tensor for the matter sector. The transfer of energy between those sectors is the well-known (p)reheating mechanism, which will be the subject of future work. The internal energy \(U(T,\phi)\) is given by its standard definition: \[U\equiv G-P\cdot V+T\cdot S=\frac{m_{\phi}^{2}}{4a^{2}}\left(a^{2}-\phi^{2} \right)^{2}e^{\beta\phi} \tag{23}\] for which \(\phi=a\) (or, accordingly, \(V=\exp(-\beta a)\)) is always a minimum. It turns out that \(U\), too, is a function of the volume \(V\) only, and _not_ of the temperature \(T\). We note (see Fig. 9) the existence of two "extra" equilibrium points -- a stable asymptotic one (a minimum at \(\phi\rightarrow-\infty\)) and a local maximum (at \(\phi=(-2\pm\sqrt{a^{2}\beta^{2}+4})/\beta\)) -- besides the expected ones at \(\phi=\pm a\). The entropy as a function of pressure and temperature provides another very important piece of information. \(S(P,T)\) is depicted in Fig. 10, which also shows the spinodal and binodal curves. Figure 9: Plot of the internal energy \(U\) as a function of \(\phi\) (left panel) and of the volume \(V\) with \(a=0.7\) (right panel). The system starts on a stable (asymptotic) solution \(\phi\rightarrow-\infty\) (\(V\rightarrow+\infty\)), but the slow roll drives the field towards the origin. Eventually, it settles down at the minimum \(\phi=a\) (\(V=\exp(-\beta a)\)). The region where the entropy is multi-valued is known in Catastrophe Theory [15] as a cusp and indicates the existence of a first-order phase transition and unstable configurations. From \(S(P,T)\) we can get the specific heat at constant pressure, \(C_{P}\equiv T\cdot\partial S/\partial T|_{P}\), shown in Fig. 11. 
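The thermodynamic relations above are simple enough to be checked symbolically. The following sketch, assuming only the definitions (21)-(23) of this section, verifies the entropy expression, the vanishing of \(C_{V}\), the closed form of \(U\), and its stationary points:

```python
import sympy as sp

# Symbolic check of Eqs. (21)-(23): entropy from the Helmholtz energy,
# vanishing C_V, and the stationary points of the internal energy U(phi).
T, phi = sp.symbols('T phi', real=True)
a, beta, m = sp.symbols('a beta m_phi', positive=True)

F = (sp.exp(beta*phi)/(4*a**2) * (a**4*m**2 + a**2*(8*T - 2*m**2*phi**2)
     + m**2*phi**4) - 2*T*sp.exp(a*beta))                        # Eq. (21)

S = -sp.diff(F, T)                                               # Eq. (22)
print(sp.simplify(S - (2*sp.exp(a*beta) - 2*sp.exp(beta*phi))))  # -> 0

C_V = sp.simplify(T*sp.diff(S, T))   # phi fixed <=> V fixed, as V = exp(-beta*phi)
print(C_V)                           # -> 0 for all T

U = sp.simplify(F + T*S)                                         # Eq. (23)
print(sp.simplify(U - m**2/(4*a**2)*(a**2 - phi**2)**2*sp.exp(beta*phi)))  # -> 0

# Stationary points: phi = +/- a and phi = (-2 +/- sqrt(a^2 beta^2 + 4))/beta
print(sp.solve(sp.diff(U, phi), phi))
```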
We obtain the expected behavior for temperatures around the coexistence curve, for pressures both below (finite jump) and above (smooth behavior) the critical value \(P_{c}\). We also obtain the usual divergence at each critical point \(\{T_{c},P_{c}\}\) (solid black lines in Fig. 11), as given by \(C_{P}|_{P_{c_{1}}}\sim[(T-T_{c_{1}})/T_{c_{1}}]^{\alpha}\), with \(\alpha\approx 1.36\); \(C_{P}|_{P_{c_{2}}}\sim[(T-T_{c_{2}})/T_{c_{2}}]^{\alpha}\), with \(\alpha\approx 1.39\); and \(C_{P}|_{P_{c_{3}}}\sim[(T-T_{c_{3}})/T_{c_{3}}]^{\alpha}\), with \(\alpha\approx 18.92\). The sound speed squared, defined as \(c_{\rm s}^{2}\equiv\dot{P}/\dot{\rho}=-(V^{2}/\kappa)\dot{P}/\dot{V}\) (where we define \(\kappa>0\) by \(\rho=:\kappa/V\)), is plotted in Fig. 12. We can see that \(c_{\rm s}^{2}<0\)_only_ between the first two (and between the third and fourth) extrema of \(R(t)\), i.e., in the second and the fourth branches (see Fig. 4), where \(f^{\prime\prime}<0\), as expected from the usual _perturbative_ argument on the stability of \(f(R)\) theories [6]. With an imaginary sound speed, fluctuations grow exponentially fast, but, during the spinodal decomposition process, only a given range of wavelengths does so [16]. This is similar to a feature that has already been proposed in the preheating scenario [17]. Further details will be the subject of future work. ## 5 Conclusions In this paper, we have investigated \(f(R)\) theories of gravity, where \(f(R)\) is a nonlinear function of the Ricci scalar \(R\) in the Jordan Frame, using the metric formalism, which features Figure 11: Behavior of the specific heat at constant pressure \(C_{P}\) as a function of the temperature \(T\) close to its transition values: \(T_{c1}=0.4016\) if \(P=P_{c1}=0.9447\) (left panel), \(T_{c2}=-1.0507\) if \(P=P_{c2}=-0.6332\) (center panel), and \(T_{c3}=21.0430\) if \(P=P_{c3}=1.2252\times 10^{-2}\) (right panel), for different values of pressure: \(0.9P_{c}\) (dotted red), \(P_{c}\) (solid black), \(1.1P_{c}\) (dashed blue) and \(1.2P_{c}\) (dot-dashed green). In all curves, \(a=0.7\). Figure 10: Entropy surface given by \(S(P,T)\) for \(a=0.7\) and \(\Lambda=0\) (all panels), shown for different pressure scales. In this case the surface presents a triple fold. The spinodal curve is indicated by a rainbow colour map. an extra degree of freedom. We have focused on the inverse problem, mapping the Einstein Frame Lagrangian onto the corresponding Lagrangian in the Jordan Frame for a double-well potential with an ad-hoc Cosmological Constant \(\Lambda\). We have found that the evolution of the system in the latter frame occurs along various branches of the \(f(R)\) function, according to the configuration of the initial conditions of the scalar field and the values of the free parameters \(\Lambda\) and \(a\). We have explored the thermodynamic interpretation of this case, where the cosmological constant is associated with an effective temperature, the Gibbs free energy with the Ricci scalar \(R\), and the pressure with the Lagrangian in the Jordan frame, and we have derived the effective volume as the variable conjugate to the effective pressure. We have also derived an equation of state for our non-linear gas that relates pressure, volume, and temperature. We have shown that the pressure-volume curve bears some resemblance to that of a van der Waals gas, but exhibits a variety of new phenomena, such as three critical points. 
The three critical temperatures and their related pairs of spinodal and binodal lines correspond to three first-order phase transitions. Indeed, the Gibbs Potential does present the expected coalescence of extrema when plotted as a function of \(V\) (or \(\phi\), which features a nicer scale range) at each \(T_{ci}\), as shown in Fig. 13. For each \(T_{ci}\), there is one corresponding value \(P_{ci}\); all of them were already indicated in Fig. 8. For temperatures lower than \(T_{c1}\) and \(T_{c3}\) (or higher than \(T_{c2}\)), there is a line (binodal) in the phase diagram where two phases can coexist. The crossing of the binodal lines (at about \(P_{*}\approx-0.36\), \(T_{*}\approx-0.20\)) indicates a "triple" point, where **all** phases coexist. In addition to calculating standard thermodynamic quantities like the Helmholtz energy, internal energy, entropy, specific heats at constant volume and constant pressure, and sound speed squared, our comprehensive approach also sheds light on the evolution of the system in the Jordan frame. By exploring the behavior of \(f(R)\) theories in relation to thermodynamics, we offer a unique perspective on the thermodynamics of spacetime. ## Acknowledgements CDP thanks Yeinzon Rodriguez for his support during the development of this research and acknowledges financial support from MinCiencias, Colombia, under the program "estancias posdoctorales convocatoria 891-2020", grant number: 80740-687-2021, and Centro de Investigaciones en Ciencias Basicas y Aplicadas at Universidad Antonio Narino. SEJ thanks Figure 12: Plot of \(\kappa\cdot c_{\rm s}^{2}\equiv\kappa\cdot\dot{P}/\dot{\rho}\) for the non-linear gas for \(a=0.7\), with \(T=0\) (red) and \(T=T_{c_{1}}\) (dotted green), as functions of time. The dots indicate when \(\dot{R}(t)=0\), i.e., at the sideways peaks in Fig. 4, between which \(f^{\prime\prime}(R)<0\). Eduardo Fraga for insightful discussions on the subject and FAPERJ for the financial support.
2307.07407
Retrieval of phonemes and Kohonen algorithm
A phoneme-retrieval technique is proposed, which owes its properties to the particular way in which the network is constructed. An initial set of neurons is given. The number of these neurons is approximately equal to the number of typical structures of the data. For example, if the network is built for voice retrieval, then the number of neurons must be equal to the number of characteristic phonemes of the alphabet of the language spoken by the social group to which the particular person belongs. Usually this task is very complicated, and the network can depend critically on the samples used for the learning. If the network is built for image retrieval, then it works only if the data to be retrieved belong to a particular set of images. If the network is built for voice recognition, it works only for some particular set of words. A typical example is the set of words used for the flight of airplanes. For example, a command like "the airplane should make a turn of 120 degrees towards the east" can be easily recognized by the network if a suitable learning procedure is used.
Brunello Tirozzi, Orchidea Maria Lecian
2023-07-10T17:25:07Z
http://arxiv.org/abs/2307.07407v1
# Retrieval of phonemes and Kohonen algorithm ## Abstract A phoneme-retrieval technique is proposed, which owes its properties to the particular way in which the network is constructed. An initial set of neurons is given. The number of these neurons is approximately equal to the number of typical structures of the data. For example, if the network is built for voice retrieval, then the number of neurons must be equal to the number of characteristic phonemes of the alphabet of the language spoken by the social group to which the particular person belongs. Usually this task is very complicated, and the network can depend critically on the samples used for the learning. If the network is built for image retrieval, then it works only if the data to be retrieved belong to a particular set of images. If the network is built for voice recognition, it works only for some particular set of words. A typical example is the set of words used for the flight of airplanes. For example, a command like "the airplane should make a turn of 120 degrees towards the east" can be easily recognized by the network if a suitable learning procedure is used. ## 1 Introduction The phonemes are the fundamental elements of a spoken language. Vowels and consonants are two particular classes of phonemes, and they are produced by different mechanisms. A vowel is generated by the use of the vocal cords, which give rise to a periodic acoustic signal characterized by precise spectral components. The differences among vowels are due to the articulation and to the opening of the lips and of the jaw: the corresponding signal exhibits neither random components nor disturbances. On the contrary, the production of consonants does not involve the vocal cords, since it is due to a constriction of the mouth, which, in its turn, induces a turbulence in the air blowing out of the lungs. The turbulence creates a random component in the vocal signal, or a combination of noise and periodic signal. There are also consonants which are due to both the mouth and the nose. The spectrum of the vowels exhibits resonances which are multiples of the fundamental frequency, which coincides with the frequency of the oscillations of the vocal cords; the pertinent power spectrum exhibits maxima at the multiples of the fundamental frequency. ## 2 The power spectrum The power spectrum is the quantity by which vowels and consonants are parameterised. **Definition 1**. Let \(x(t)\) be a stochastic process defined on a probability space \((\Omega,P,\Sigma)\); the power spectrum \(S(\lambda)\) is defined by the relation \[S(\lambda)=E\mid\int e^{i\lambda t}x(t)dt\mid^{2} \tag{1}\] where \(E\) is the expectation value with respect to the probability \(P\). Vowels are characterised by a spectrum concentrated on the lower and middle parts of the frequency range, i.e. for frequencies smaller than 3 kHz. The fundamental frequencies of the vowels can be easily identified, as the signal is periodic and the noise component is small. The vowels differ from one another in the position of the fundamental frequencies; in particular, it is enough to consider the first one and the second one. The form of the spectrum of a phoneme depends also on the word which contains it and on the pronunciation of the speaker. The vector that represents a phoneme is built starting from the power spectrum \(S(\lambda)\). 
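In practice the expectation in Definition 1 is estimated by averaging squared FFT magnitudes over data blocks, anticipating the block-averaging described next. A minimal numpy sketch on a synthetic vowel-like signal (a fundamental with harmonics plus weak noise, an assumption made purely for illustration):

```python
import numpy as np

# Estimate of the power spectrum of Definition 1: the expectation E|.|^2
# is approximated by averaging squared FFT magnitudes over blocks.
# The synthetic "vowel-like" signal below is an illustrative assumption,
# not data from the paper.

fs = 10_000                       # sampling frequency [Hz]
t = np.arange(0, 2.0, 1.0/fs)
f0 = 220.0                        # fundamental of the "vocal cords"
x = sum(1.0/k * np.sin(2*np.pi*k*f0*t) for k in range(1, 6))
x += 0.05 * np.random.default_rng(0).standard_normal(t.size)

block = 512
nblocks = x.size // block
spectra = [np.abs(np.fft.rfft(x[i*block:(i+1)*block]))**2 for i in range(nblocks)]
S = np.mean(spectra, axis=0)      # averaged periodogram, an estimate of S(lambda)
freqs = np.fft.rfftfreq(block, d=1.0/fs)

# The strongest bins should sit near the multiples of f0, as stated for vowels.
print(freqs[np.argsort(S)[-5:]])
```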
Since the samples of the spoken language are obtained by measurements at discrete times, it is necessary to perform a Fourier transform of a sequence \(x(t_{k})\). It is therefore apt to use the fast Fourier transform (FFT) [1]. The algorithm consists in dividing the data \(x(k)\equiv x(t_{k})\), \(k=1,...,2K\), into two subsets, i.e. one consisting of the data of even index, and the other of the data of odd index; a set of complex data \(z(k)\) with index \(k=1,...,K\) is introduced thereafter, such that the real part \(Re[z(k)]\) equals the first subset and the imaginary part \(Im[z(k)]\) the second. It is easy to find the relation which links the Fourier transform of the real part of \(z(k)\) with that of the imaginary part of \(z(k)\), such that the problem is reduced to the calculation of the Fourier transform for \(K\) data only, instead of the \(2K\) data which were originally given. If \(K=2^{N}\), it is straightforward to verify that the number of operations \(K^{2}\) reduces to \(K\log_{2}K=2^{N}N\) after the iteration of the procedure. The interval of frequencies in which the power spectrum is defined is divided into a certain number of intervals (or 'bands'). These intervals correspond to the way the human ear responds to the variation of the frequencies. The pattern \(x\) consists of a vector that has as many components as the number of the bands, and the value of the \(i\)-th component equals the mean value of the power spectrum over the \(i\)-th band. The sampling of the signal has to be accomplished at a proper frequency which avoids the distortion of the signal, according to the sampling theorem [2]. It is the purpose of the present paper to analyse the case of vowels, as it is easier to isolate the stationary part of the acoustic signal which corresponds to the phoneme; consonants give rise to a signal which exhibits the presence of a strong noise. The data \(x(t)\) which measure the acoustic signal are grouped in blocks of 512, and the FFT is applied to every block. The blocks start from the first datum, then from the second one, and so on. The FFT is averaged among the blocks which correspond to the same part of the signal; in other words, the Fourier transforms of the blocks which are part of the same phoneme are averaged. The average is performed because the division zone between one phoneme and another is not so easily outlined. As a further problem to be solved, it is worth mentioning that the Fourier transform computed only on a block containing 512 data is not the true Fourier transform, as the integral defining it is an integral over \(\mathbb{R}\). There exists a theory [2] which allows one to correct this error, according to which it is necessary to multiply the sequence of the data which is to be considered in the summation (or in the integral) by a certain function which depends on the shape of the data block (which is called a 'time window'). There exist several such functions: the Welch function, the Parzen function, the Hanning function, and so on; it is customary to verify how the pattern vector depends on the choice of these functions. Of course, this correction has to be accomplished before the calculation of the power spectrum. In [3] a verification is presented, for the vowels \(a,o,e,i,u\) extracted from a certain succession of words, that the application of a rectangular window, i.e. 
the multiplication by the characteristic function of the block constituted of 512 data, and the multiplication by the Hanning function, do not lead to very different results as far as frequencies less than 3 kHz are concerned, which is the interval in which the power spectrum of the spoken language is concentrated. To summarise, the patterns are extracted from the vocal signal after the following operations: * 1) for each group of 512 data, the FFT is calculated with the above-mentioned corrections, and the power spectrum is determined; * 2) the frequency interval 0-5000 Hz is divided into 15 intervals (or 'channels'): from 200 Hz to 3000 Hz, 12 intervals are considered, of breadth 233 Hz, while from 3000 Hz to 5000 Hz only 3 intervals are considered, of breadth 667 Hz; * 3) in each channel, the average of the power spectrum is calculated; * 4) the vector \(\vec{x}\) of 15 components constructed this way is normalised to 1 in the Euclidean norm, so that a special convergence theorem of the weights can be applied, in a particular Kohonen network [4]. The described construction can now be applied. Given a phoneme, which is represented by a vector built in the manner just described, the different positions of the phoneme in different words and the different pronunciations due to the inflection of the voice allow for the existence of a set of vectors \(A(\vec{x})\) which corresponds to that phoneme. It is obvious that the phonemes generate a Voronoi partition, and the Kohonen partition algorithm should allow one to construct such a partition, together with the vectors \(\vec{x}_{i}\) which define the partition. The dynamics of the winning neuron is applied to prove the theorem of convergence of the weights. Nevertheless, the theorem to be proven here is based on a non-linear dynamics of the weights of the network. In [3], a validation of this version of the theorem was provided; nevertheless, recognition of vowels was achieved only in 51% of cases. The reason for the inefficient performance is that the phonemes cannot be restricted to vowels and consonants only: the transitions between phonemes have to be introduced as well, as in [5]. In [5], more sensitive parameters were introduced too, as far as the power spectrum is concerned, which lead the pattern vector to consist of 500 components. ## 3 A particular Kohonen algorithm The present section is aimed at discussing a network consisting of \(n\) input neurons and of \(N\) output neurons. At each node of the first type the same pattern \(\vec{x}\in\mathbb{R}^{n}\) is presented, which represents a particular phoneme. A weight vector \(m_{i}\in\mathbb{R}^{n}\) is associated with the \(i\)-th output neuron, \(i=1,...,N\). Each input neuron is connected with all the output neurons. The weights \(m_{i}(t)\) satisfy the \(n\)-dimensional Riccati equation \[\dot{m}_{i}=\alpha x(t)-\beta m_{i}\sum_{j=1}^{j=n}m_{ij}x_{j}(t),\ \ \alpha,\beta>0 \tag{2}\] This dynamics was introduced by Kohonen to take into account the non-linear response of the circuits which may eventually realise this algorithm on some computer. As soon as the dynamics is suitably discretised, it is applied to the winning neuron only, i.e. to the neuron \(i\) whose weight is closest to the input vector \(x\) in the Euclidean distance. The evolution equation of this dynamics is non-linear. The vector which corresponds to a pattern is constructed as described in Section 2. 
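A minimal sketch of the pattern-vector construction of steps 1)-4) above, assuming a 10 kHz sampling frequency so that 5 kHz is the Nyquist limit (the paper does not state the sampling rate):

```python
import numpy as np

# Sketch of the pattern-vector construction of Section 2: 512-sample blocks,
# Hanning time window, power spectrum, averaging over the 15 frequency
# channels, and normalisation to unit Euclidean norm.

def pattern_vector(x, fs=10_000, block=512):
    # 12 channels of ~233 Hz between 200 and 3000 Hz, 3 of ~667 Hz up to 5000 Hz
    edges = np.concatenate([np.linspace(200, 3000, 13),
                            np.linspace(3000, 5000, 4)[1:]])
    window = np.hanning(block)
    nblocks = x.size // block
    spec = np.zeros(block // 2 + 1)
    for i in range(nblocks):              # average the FFTs over the blocks
        spec += np.abs(np.fft.rfft(window * x[i*block:(i+1)*block]))**2
    spec /= max(nblocks, 1)
    freqs = np.fft.rfftfreq(block, d=1.0/fs)
    v = np.array([spec[(freqs >= lo) & (freqs < hi)].mean()
                  for lo, hi in zip(edges[:-1], edges[1:])])   # 15 channels
    return v / np.linalg.norm(v)          # normalise, as required in step 4

rng = np.random.default_rng(1)
x = np.sin(2*np.pi*300*np.arange(4096)/10_000) + 0.1*rng.standard_normal(4096)
print(pattern_vector(x).round(3))         # a 15-component unit vector
```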
It is possible to state that, if an input time sequence \(x(t)\), issued from the spoken language of a chosen person, is presented to the network, a structure of vectors \(\hat{x}_{k}\) should be obtained, where the latter define the Voronoi partition associated with the set of phonemes generated by the chosen person. It is expected that two different persons give rise to two different partitions. For this to be accomplished, a convergence and stability theorem is necessary. As the dynamics described by Eq. (2) is non-linear, the convergence theorem in probability is substituted by a theorem which states the stability of the weights \(m(t)\) in the asymptotic limit when the vector \(x(t)\) varies within a neighbourhood which is small enough. Voice recognition by this kind of network has not been applied successfully yet, and, up to now, there are programs which are able to recognise only a limited number of words, if applied to one person only, after a sufficiently long instruction time, within a certain error. There exist also other algorithms for voice recognition [6], [7]. The vector \(\hat{x}_{k}\) which generates the atom \(A(\hat{x}_{k})\) of the Voronoi partition is the vector with the least distance from all the other vectors of \(A(\hat{x}_{k})\), and is named the central vector. The characteristic radius \(r_{k}\) of the atom \(A(\hat{x}_{k})\) is the maximum distance between the central vector and all the other vectors belonging to \(A(\hat{x}_{k})\). The distance \(\delta_{kl}\) between the central vectors of the atoms \(A(\hat{x}_{k})\) and \(A(\hat{x}_{l})\) is considered, and the minimum distance between the atoms is defined: \[\delta=min_{k,l}\delta_{kl}. \tag{3}\] **Definition 2:**_If an unknown pattern \(x\) is presented to the network, the neuron \(i\) is found such that its weight vector has the minimum distance \(\rho\) from the input \(x\). Let \(A(\hat{x}_{k})\) be the atom of the partition to which the weights of the neuron \(i\) belong: if \(\rho\leq r_{k}\), then the pattern \(x\) is recognised as the \(k\)-th phoneme; otherwise, it is not recognised._ For a learning process to give rise to weights similar to the central vectors of each phoneme, or atom of the Voronoi partition, it is necessary to prove the stability of the Riccati equation with respect to the variation of the input function \(x(t)\). More precisely, two theorems need to be demonstrated [8]. **Theorem 1**: If in Eq. (2) a constant function \(\hat{x}_{k}\) is introduced, then the limit of the solution is proportional to \(\hat{x}_{k}\), and the vector \(m(t)\) approaches this value with exponential velocity. Since during the instruction procedure of the network one has, on the contrary, \(x(t)=\hat{x}_{k}+y(t)\), the following holds. **Theorem 2**: If the norm of the perturbation \(y(t)\) is small, the norm of the variation of the solution of the evolution equation of the weights is bounded above by a constant multiplied by this small norm. In other words, the Riccati equation is stable with respect to variations of the input vector. This property allows one to construct the Voronoi partition if the learning algorithm chosen here is used. This result remains valid also if the perturbation \(y(t)\) is a stochastic process with continuous trajectories; almost-everywhere convergence is then no longer obtained from the learning dynamics: only stability is achieved. 
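Theorems 1 and 2 can be checked numerically by integrating Eq. (2) with an explicit Euler scheme; the parameter values below are illustrative assumptions.

```python
import numpy as np

# Numerical check of Theorems 1 and 2: integrate the Riccati dynamics (2)
# for a single neuron with (i) a constant unit-norm input x_hat and
# (ii) a small continuous perturbation y(t).

alpha, beta, dt, steps = 1.0, 4.0, 1e-3, 20_000
rng = np.random.default_rng(2)

x_hat = np.abs(rng.standard_normal(15))
x_hat /= np.linalg.norm(x_hat)           # components > 0 and ||x_hat|| = 1

def integrate(inp):                      # explicit Euler for Eq. (2)
    m = np.full(15, 0.1)                 # m_i(0) > 0, as required later
    for k in range(steps):
        x = inp(k * dt)
        m = m + dt * (alpha * x - beta * m * np.dot(m, x))
    return m

m_const = integrate(lambda t: x_hat)
target = np.sqrt(alpha / beta) * x_hat   # Theorem 1: limit sqrt(alpha/beta) x_hat
print(np.linalg.norm(m_const - target))  # -> small, reached exponentially fast

delta = 0.01 * np.min(x_hat)             # small compared with gamma = min_i x_hat_i
m_pert = integrate(lambda t: x_hat + delta * np.sin(50 * t) * np.ones(15))
print(np.linalg.norm(m_pert - m_const))  # Theorem 2: deviation of order delta
```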
It is important to remark that stability holds only if the components of the input vector are all strictly greater than or equal to a fixed positive number \(\gamma\); this behaviour was noticed also by Kohonen [9], but no satisfying explanation was given thereafter. This hypothesis also motivates the choice of the power spectrum as a representation of the voice. **Theorem 3**: If \(\mid y(t)\mid<\delta\), \(\forall t\), then there is stability if \[\delta<\gamma/8; \tag{4}\] it is important that this criterion be satisfied when \(\delta\) coincides with the value given by Eq. (3). In [3] work has been carried out to verify Theorem 3. As a sample, 44 words pronounced by a woman were analysed, and the patterns pertinent to the vowels \(a,o,e,i,u\) were extracted: 33 samples of \(a\), 33 samples of \(o\), 20 samples of \(e\), and 13 samples of \(i\) were obtained. Because only 3 samples of \(u\) were obtained, the latter vowel was excluded from the study. The constant \(\gamma\) is of order \(0.9\cdot 10^{-4}\). The convergence of the Riccati equation was obtained after a numerical integration of 150 steps and, in the case of a constant input, the value differed from the theoretical prediction by a quantity smaller than \(3\cdot 10^{-7}\). Furthermore, the numerical solution confirms that the approach to the limiting value is an exponential one. The stability condition \(\delta<\gamma/8\) was not matched by the calculated value of \(\delta\); nevertheless, after the numerical integration of Eq. (2), the stability property was verified by means of the data available. ## 4 Discussion It is now appropriate to state the following Definitions and Theorems. **Definition 3**: let \(x(t)\) be the input of the network, the same for each of the \(N\) neurons which constitute it; a weight vector \(m_{i}(t)\in\mathbb{R}^{n}\) is associated with each neuron \(i\); the output of a generic neuron is given by \[\eta=(m,x) \tag{5}\] where \((,)\) is the Euclidean \(n\)-dimensional scalar product. The output of the network given in this definition corresponds to what is learnt during the recognition; indeed, since the weight vectors asymptotically have equal norms, the condition according to which the Euclidean distance between the weight of the winning neuron and the input pattern is minimal corresponds to the fact that \(\eta\) is maximal: it is therefore appropriate to state that, during the recognition, only the neuron with the maximum output, according to the previous definition, is active. **Definition 4**: The phonemes form a set of \(n\)-dimensional constant vectors \(\hat{x}^{1},...,\hat{x}^{M}\), \(M\leq N\), such that \[||\ \hat{x}^{i}\ ||=1,\ \ ||\ \hat{x}^{i}-\hat{x}^{j}\ ||>\delta \tag{6}\] for \(i\neq j\), \(i,j=1,...,M\), where \(\delta\) is the parameter previously defined, and \(||\ \cdot\ ||\) is the Euclidean norm of the space \(\mathbb{R}^{n}\). This definition amounts to the statement that the central vectors of the Voronoi partition are normalised to 1, and that there exists a minimum distance between them, which is an important parameter within the construction. **Definition 5**: The 'perturbed' set of the phonemes is a vector function \[x^{i}(t)=\hat{x}^{i}+y^{i}(t) \tag{7}\] with \(||\ y^{i}(t)\ ||^{2}\leq\delta^{2}\) for \(i=1,...,M\) and \(y^{i}(t)\) continuous. 
In this definition, the fact is established that the set of phonemes by which the instruction of the network is done is given by the vector which really represents the phoneme, to which a quantity is added; this quantity, which may also be random as long as it is continuous, represents all the random fluctuations due to the accent of a person, to the position of the phoneme in the word, and so on. It is remarked that the construction is meaningful only if this perturbation is smaller than \(\delta\); otherwise, there is a superposition between the data which instruct the network. **Definition 6**: The evolution equation of the weight vector of the generic neuron \(i\) is as follows: \[\dot{m}_{i}=\alpha x^{k}(t)-\beta m_{i}\sum_{j=1}^{j=n}m_{ij}x^{k}_{j}(t), \tag{8}\] where \(x^{k}(t)\) is the \(k\)-th perturbed phoneme, and \(\alpha\) and \(\beta\) are fixed positive constants which depend on the characteristics of the particular circuit which realises the neural network. It is interesting to remark that, if one takes \(x(t)=\hat{x}^{k}\), with \(k\) fixed, then the vector \[m^{*}=\sqrt{\frac{\alpha}{\beta}}\hat{x}^{k} \tag{9}\] is a fixed point of Eq. (8). Let \(m^{0}(t)\) be the solution of Eq. (8) for \(x(t)=\hat{x}^{k}\). There holds the following **Theorem 4**: For each initial condition \(m(0)\), one has \[m^{0}(t)=\hat{x}^{k}\sqrt{\frac{\alpha}{\beta}}+O(e^{-\sqrt{\alpha\beta}\,t}). \tag{10}\] Let \(m(t)\) be the solution of the evolution equation for the weights with perturbed input \(x^{k}(t)\), and let \(v\), \(v=m-m^{0}\), be the variation of the solution with respect to the solution \(m^{0}\). The following Definition 7 and Theorem 5 make the previous discussion precise: the instruction of the neural network is meaningful only if the fluctuations which are present in the instruction set of the network do not let the central vectors of the Voronoi partition, which the instruction process builds, vary much. **Definition 7**: The network formed by the \(N\) neurons and their weights is stable with respect to the variation of the input \(\hat{x}^{k}\) if it is possible to find \(\delta\) such that, for each \(y(t)\) continuous with \(||\;y(t)\;||\leq\delta\), there exists \(C\) such that \[||\;v(t)\;||\leq C\delta, \tag{11}\] where \(v\) is the variation of the previously-defined solution. It is now possible to state the theorem which establishes the stability of the system in the sense of Definition 7. The theorem is valid also if the perturbation is a random function, as the only property which is requested is continuity, which is verified for many stochastic processes present in nature. **Theorem 5**: Let \(x^{k}(t)=\hat{x}^{k}+y^{k}(t)\) be a continuous vector function, and let \(\gamma>0\) be such that \[\hat{x}^{k}_{i}\geq\gamma,\;\;\forall i=1,...,n \tag{12}\] and \(m_{i}(0)>0\): then, if \[\delta<\frac{\gamma}{8}, \tag{13}\] it is possible to determine \(T=T(y^{k})\) such that \[|\;v_{i}(t)\;|\leq C\delta,\;\;for\;\;\;t\geq T(y^{k}),\ i=1,...,n \tag{14}\] with \[C=\frac{16}{\gamma}\sqrt{\frac{\alpha}{\beta}}. \tag{15}\] ## 5 Outlook Self-organising maps have been tested for the recognition of word boundaries in [10]. Coding strategies between layers are discussed in [11]. Self-organising neural networks are analysed in [12] as far as the validity of the technique to span the speech space is concerned. Time-dependent self-organising maps can be used to determine the time-dependent features of the input speech signal [13]. 
The consequences of modifications of the input signal are studied in [14], [15]. The analysis of consonants has been scrutinised with different techniques; as a main result, the analysis of consonants is dependent on the chosen language [16], [17], [18]. The dynamical stability of neural networks has been investigated in [19]. The Kohonen dynamics in a dynamically expanding context has been considered in [20]. An example of a winner-take-all neural network is given in [21].
2307.11265
Unique common fixed point results for four weakly commuting maps in $G$-metric spaces
Using the setting of $G$-metric spaces, common fixed point theorems for four maps satisfying the weakly commuting conditions are obtained for various generalized contractive conditions. Several examples are also presented to show the validity of main results.
Talat Nazir, Sergei Silvestrov
2023-07-20T23:03:33Z
http://arxiv.org/abs/2307.11265v1
# Unique common fixed point results for four weakly commuting maps in \(G\)-metric spaces ###### Abstract Using the setting of \(G\)-metric spaces, common fixed point theorems for four maps satisfying the weakly commuting conditions are obtained for various generalized contractive conditions. Several examples are also presented to show the validity of main results. Keywords: weakly commuting maps, common fixed point, generalized contraction, \(G\)-metric space MSC: 54H25, 47H10, 54E50 ## 1 Introduction The study of unique common fixed points of mappings satisfying certain contractive conditions has been at the center of vigorous research activity. Mustafa and Sims [10] generalized the concept of a metric space. Based on the notion of generalized metric spaces, Mustafa _et al._[11; 12; 13; 14; 15] obtained some fixed point theorems for mappings satisfying different contractive conditions. The study of common fixed point theorems in generalized metric spaces was initiated by Abbas and Rhoades [1]. Also, Abbas _et al._[2] obtained some periodic point results in generalized metric spaces. Saadati _et al._[16] stud
2306.06430
Approximations of Time-Dependent Nonlinear Partial Differential Equations using Galerkin Optimal Auxiliary Function Method
The purpose of this research work is to employ the Optimal Auxiliary Function Method (OAFM) for obtaining numerical approximations of time-dependent nonlinear partial differential equations (PDEs) that arise in many disciplines of science and engineering. The initial and first approximations of parabolic nonlinear PDEs associated with initial conditions have been generated by utilizing this method. Then the Galerkin method is applied to estimate the coefficients that remain unknown. Finally, the values of the coefficients generated by the Galerkin method have been inserted into the first approximation. In each example, all numerical computations and corresponding absolute errors are provided in schematic and tabular representations. The rate of convergence attained by the proposed method is depicted in tabular form.
Nilormy Gupta Trisha, Md. Shafiqul Islam
2023-06-10T12:52:55Z
http://arxiv.org/abs/2306.06430v1
**Approximations of Time-Dependent Nonlinear Partial Differential Equations using Galerkin Optimal Auxiliary Function Method** ## Abstract The purpose of this research work is to employ the Optimal Auxiliary Function Method (OAFM) for obtaining numerical approximations of time-dependent nonlinear partial differential equations (PDEs) that arise in many disciplines of science and engineering. The initial and first approximations of parabolic nonlinear PDEs associated with initial conditions have been generated by utilizing this method. Then the Galerkin method is applied to estimate the coefficients that remain unknown. Finally, the values of the coefficients generated by the Galerkin method have been inserted into the first approximation. In each example, all numerical computations and corresponding absolute errors are provided in schematic and tabular representations. The rate of convergence attained by the proposed method is depicted in tabular form. **Keywords :** Parabolic PDE, Optimal Auxiliary Function Method, Nonlinear PDE, Galerkin Method ## 1 Introduction Nonlinear parabolic partial differential equations (PDEs) are used in simulating a wide variety of physical phenomena in the technological and scientific realms, from turbulence to particle dispersion, as well as in establishing the valuation of a wide range of derivative financial instruments. They are utilized in the endeavor of providing an explanation for a wide range of occurrences, including liquid filtration, sound, heat, diffusion, chemical reactions, fluid dynamics, environmental contamination, and many more. Nonlinear parabolic PDEs have been numerically studied using the well-known Adomian Decomposition Method [1]. The adaptive grid Haar wavelet collocation method [2] has been used to get quantitative solutions to these types of equations. Lang [3] has authored a book on the applicability of multilevel solutions to the parabolic PDE system. For fully nonlinear PDEs, Arash Fahim et al. [4] developed a method that combines Monte Carlo with a finite difference scheme. JW Wang et al. [5] have published an article utilizing a fuzzy control approach to address the complexities of nonlinear parabolic PDE systems. Several authors have put together a comprehensive reference work on these nonlinear PDEs. This book [6] covers a wide range of approaches and various aspects of these equations. Various eminent authors have published numerous books on these nonlinear PDEs [7, 8, 9, 10]. They have thoroughly examined these types of equations and analyzed their relevance to real-world issues. In order to address some particularly challenging instances of parabolic PDEs, the field of wavelet analysis has recently emerged as a significant mathematical tool [11]. Ekren has provided the viscosity solutions for fully nonlinear parabolic spatial PDEs [12]. In order to deal with the nonlinearity of parabolic PDEs, Mironchenko et al. [13] have utilized the monotony-control system. To estimate solutions to a set of nonlinear parabolic PDEs, Izadi et al. [14] have proposed an innovative hybrid spectral collocation methodology. In the research study cited in [15], an expansion of the Taylor series has been presented as a method for numerically solving parabolic PDEs. Many renowned authors [16, 17, 18, 19] have approximated parabolic PDEs associated with different boundary conditions with the help of the Galerkin Finite Element Method. Kamrujjaman et al. [20] have applied the finite difference method to renowned nonlinear PDEs. Alam et al. 
[21] have provided the approximate solutions of different parabolic PDEs--heat and wave equations. Each of the above approaches offers benefits and drawbacks. Likewise, we generalize a recently established methodology called the optimal auxiliary function method (OAFM) to the context of PDEs. The pioneers of this approach were Marinca & Marinca [22]. For obtaining an analytical solution of the fluid's thin layer in the cylinder's vertical orientation, they used this methodology. This method offers a means of controlling the convergence of numerical solutions with the assistance of convergence-control parameters in order to achieve the desired level of accuracy. Asserting such a command allows for controlled execution. It is a straightforward, convergent, moderate, and explicit method for obtaining nonlinear approximations. A further application of this approach was made by Marinca et al. [23] to resolve the nonlinear Blasius issue. In addition to this, it has been used to analyze the nonlinear vibrations of a pendulum that has been folded around two cylinders [24]. This method has been offered as an approximate analytical solution for the nonlinear boundary issue of viscous flow that is created by a stretched surface with partial slippage [25]. The thin layer of a third-grade fluid has been modeled using the optimal auxiliary function method so that it can be simulated on a moving belt [26]. It has also been demonstrated that the technique is superior to other approaches in terms of its effectiveness in addressing challenges brought on by misalignment [27]. Recently, Laiq Zada has utilized this method in order to create approximate-analytical solutions to partial differential equations and generalized modified b-equations, as referenced in [28] and [29]. In reference [30], the OAFM is generalized to the realm of partial differential equations and is employed to approximately solve KdV equations. Ullah et al. [31] have just recently utilized this strategy in order to evaluate fractional KdV equations. In reference [32], the proposed method has been employed for the steady nanofluids. The method has recently been applied in biological modeling [33]. In the research studies cited in [34] and [35], a thorough study into the electronic and physical modifications of a low-power PMS generator has been carried out with the assistance of the OAFM. The application of the Galerkin Weighted Residual Method (GWRM) dates back centuries, even before the advent of computers. The method is acknowledged to be one of the best and most widely applied methods. In the book cited in [36], Lewis and Ward have provided a detailed description of the procedure. Hossan et al. [37] have successfully implemented this strategy on the well-known Black-Scholes model. In their analysis of the Fredholm equations, Shirin et al. [38] have used the Galerkin method in conjunction with other specialized polynomials. The method has been applied to the boundary value problems in the research cited in [39]. It has also been employed to numerically calculate the eigenvalues of the Sturm-Liouville problem [40]. The technique has had extensive application in issues involving metal beams and polygonal ducts with rounded corners. [41, 42]. Inspired by all previous research, our proposed methodology has been deployed to solve some renowned nonlinear parabolic PDEs that are associated with corresponding initial conditions. 
So far, the method has not required any complicated computations, despite the variety of boundary conditions considered. As far as we know, this approach to solving these kinds of PDEs is not available in the literature. All approximate results are analyzed in light of the exact solutions of the parabolic PDEs that have been provided. This article is split into four distinct sections. Section 2 provides a formal derivation of our suggested approach. In the third section, the approach's implications are shown while analyzing four problems characterized by high nonlinearity. Graphs of errors and numerical findings are included here as well. The fourth section contains some concluding remarks and a general discussion. ## 2 Mathematical Formulation Let us start with a general PDE of the form [28]: \[\Lambda[M(x,t)]+\Upsilon[M(x,t)]+g(x,t)=0,\ \ \ \ \ x\in\mathcal{D} \tag{1}\] subject to the boundary/initial conditions \(\Omega\left[M,\dfrac{\partial M}{\partial t}\right]=0\). Here \(\Lambda\) is considered the linear operator, and \(\Upsilon\) is regarded as the nonlinear operator. Again, we consider \(g(x,t)\) and \(M(x,t)\) as the known and the unknown function, respectively. The domain of interest is \(\mathcal{D}\). Let the form of the approximate solution of Equation (1) be \[\widetilde{M}(x,t)=M_{0}(x,t)+M_{1}(x,t,C_{i}),\ i=1,2,3...,p \tag{2}\] where the \(C_{i}\)'s are \(p\) currently unknown parameters. Here \(M_{0}(x,t)\) is the initial approximation and \(M_{1}(x,t)\) is the first approximation. In this case, \(p\) is an arbitrarily chosen positive integer. Substituting (2) in Equation (1) we get, \[\Lambda[\widetilde{M}(x,t)]+\Upsilon[\widetilde{M}(x,t)]+g(x,t)=0\] \[\text{or,}\ \Lambda[M_{0}(x,t)+M_{1}(x,t,C_{i})]+\Upsilon[M_{0}(x,t)+M_{1}(x,t,C_{i})]+g(x,t)=0\] \[\text{or,}\ \Lambda[M_{0}(x,t)]+\Lambda[M_{1}(x,t,C_{i})]+\Upsilon[ M_{0}(x,t)]+\Upsilon[M_{1}(x,t,C_{i})]+g(x,t)=0 \tag{3}\] Since \(\Lambda\) is the linear operator, we first use the following linear equation to determine the initial approximation \(M_{0}(x,t)\): \[\Lambda[M_{0}(x,t)]+g(x,t)=0,\ \ \ \ \ \ \ \ \ \Omega\left[M_{0},\dfrac{\partial M_{0}}{ \partial t}\right]=0 \tag{4}\] To estimate the first approximation in (2), we consider a second differential equation of the following form, which comprises the nonlinear operator \(\Upsilon\), \[\Lambda[M_{1}(x,t,C_{i})]+\Upsilon[M_{0}(x,t)]+\Upsilon[M_{1}(x,t,C_{i})]=0, \ \ \ \ \ \ \ \ \Omega\left[M_{1},\dfrac{\partial M_{1}}{\partial t}\right]=0 \tag{5}\] The nonlinear term in Equation (5) is expanded in the form \[\Upsilon[M_{0}(x,t)]+\Upsilon[M_{1}(x,t)]=\Upsilon[M_{0}(x,t)]+\sum_{\kappa \geq 1}\dfrac{M_{1}^{\kappa}(x,t,C_{i})}{\kappa!}\Upsilon^{(\kappa)}(M_{0}(x,t)) \tag{6}\] where \(\kappa!=1\cdot 2\cdot 3\cdots\kappa\) and \(\Upsilon^{(\kappa)}\) stands for the \(\kappa\)-th order derivative of the nonlinear operator \(\Upsilon\). Instead of solving Equation (5) directly, we want to circumvent the difficulties of resolving the nonlinear differential equation in (5) and hasten the swift convergence of the first approximation and hence of the solution \(\widetilde{M}(x,t)\). 
Therefore, Equation (5) is replaced by the alternative expression \[\Lambda[M_{1}(x,t,C_{i})]+B_{1}(M_{0}(x,t),C_{i})\Upsilon(M_{0}(x,t))+B_{2}(M_{0 }(x,t),C_{j})=0 \tag{7}\] with \[\Omega\left[M_{1}(x,t,C_{i}),\frac{\partial M_{1}(x,t,C_{i})}{ \partial t}\right]=0 \tag{8}\] Here \(B_{1}\) and \(B_{2}\) are arbitrary auxiliary functions that rely on the initial approximation \(M_{0}(x,t)\) and on the unknown parameters \(C_{i}\) and \(C_{j}\), where \(i=1,2,...,s\) and \(j=s+1,s+2,...,p\), respectively. The positive integer \(p\) and the auxiliary functions can be chosen in a wide variety of ways, providing us with a lot of flexibility. The auxiliary functions \(B_{1}(M_{0}(x,t),C_{i})\) and \(B_{2}(M_{0}(x,t),C_{j})\) are not unique; they rely on the initial approximation \(M_{0}(x,t)\) or on combinations of \(M_{0}(x,t)\) and \(\Upsilon[M_{0}(x,t)]\). The values of the unknown parameters \(C_{i}\) and \(C_{j}\) were obtained by applying the collocation method in the research article cited in [28]. In this research article, however, we employ the Galerkin method, which proceeds as follows. First of all, the approximate solution \(\widetilde{M}(x,t)\) can be redefined as: \[\widetilde{M}(x,t) =M_{0}(x,t)+M_{1}(x,t) \tag{9}\] \[=M_{0}(x,t)+\sum_{j=1}^{n}tC_{j}\phi_{j}(x) \tag{10}\] where the functions \(\phi_{j}(x)\) are called the coordinate functions. The **residual function** of Equation (1) can be written as \[R(x,t)=\Lambda[\widetilde{M}(x,t)]+\Upsilon[\widetilde{M}(x,t)] +g(x,t) \tag{11}\] Then we weight the residual function \(R(x,t)\) by the coordinate functions and set the residual equation as \[\int_{\mathcal{D}}R(x,t)\phi_{i}(x)dx=0\] \[\text{or}\ \int_{\mathcal{D}}\Big{[}\Lambda[\widetilde{M}(x,t)]+ \Upsilon[\widetilde{M}(x,t)]+g(x,t)\Big{]}\phi_{i}(x)dx=0 \tag{12}\] Then we substitute the redefined solution (10) in the residual equation (12). It results in the following equation, \[\int_{\mathcal{D}}\Bigg{[}\Lambda\Big{[}M_{0}(x,t)+\sum_{j=1}^{n }tC_{j}\phi_{j}(x)\Big{]}+\Upsilon\Big{[}M_{0}(x,t)+\sum_{j=1}^{n}tC_{j}\phi_{ j}(x)\Big{]}+g(x,t)\Bigg{]}\phi_{i}(x)dx=0\] \[\text{or}\quad\sum_{j=1}^{n}C_{j}K_{ij}=F_{i} \tag{13}\] where \[K_{ij}= \int_{\mathcal{D}}\Big{[}\Lambda\big{[}t\phi_{j}(x)\big{]}+ \Upsilon\big{[}t\phi_{j}(x)\big{]}\Big{]}\phi_{i}(x)dx \tag{14}\] \[F_{i}= -\int_{\mathcal{D}}\Big{[}\Lambda[M_{0}(x,t)]+\Upsilon[M_{0}(x,t)] +g(x,t)\Big{]}\phi_{i}(x)dx \tag{15}\] The linear part of Equation (13) is first solved for an initial estimate of the coefficients; these initial values are then used for determining the coefficients \(C_{i}\) and \(C_{j}\). The method is elaborated for the particular test problems in the following section. With these known parameters, the approximated solution \(\widetilde{M}(x,t)\) is completely defined. Other methods for approximating nonlinear analytical solutions rely on pre-existing solutions, but we build the solution from scratch using only a minimal number of convergence-control parameters \(C_{i}\) (\(i=1,2,...,p\)) that are elements of the so-called optimal auxiliary functions. The optimal auxiliary function method is a sequential approach that swiftly converges to the exact solution after the first iteration of the process. This method is based on the establishment and modification of auxiliary functions, as well as a simple mechanism for regulating the convergence of the solutions. 
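The sketch below illustrates one concrete reading of Eqs. (12)-(13): the residual of the trial solution is made orthogonal to each coordinate function over the spatial domain, with \(t\) treated as a parameter fixed at a representative value (an assumption, since the \(C\)-dependence of \(K_{ij}\) makes (13) mildly nonlinear). The toy problem \(M_t = M_{xx} - M^2\) with \(M(x,0)=\sin(\pi x)\) is not one of the paper's test problems; it is used only to keep the example short.

```python
import numpy as np
from scipy.optimize import fsolve

# Minimal sketch of the Galerkin step (12)-(13): project the residual of the
# trial solution M0(x) + t * sum_j C_j phi_j(x) onto each phi_i and solve for C.

xs = np.linspace(0.0, 1.0, 401)
dx = xs[1] - xs[0]
t_star = 0.01                                     # representative fixed time
phis = [np.sin((j + 1) * np.pi * xs) for j in range(4)]       # coordinate funcs
phis_xx = [-((j + 1) * np.pi) ** 2 * p for j, p in enumerate(phis)]
M0 = np.sin(np.pi * xs)                           # initial approximation
M0_xx = -np.pi**2 * M0

def residual_projections(C):
    M = M0 + t_star * sum(c * p for c, p in zip(C, phis))     # trial solution
    M_t = sum(c * p for c, p in zip(C, phis))                 # its d/dt
    M_xx = M0_xx + t_star * sum(c * p for c, p in zip(C, phis_xx))
    R = M_t - M_xx + M**2                         # residual of M_t = M_xx - M^2
    return [np.sum(R * p) * dx for p in phis]     # Eq. (12), rectangle rule

C = fsolve(residual_projections, np.zeros(4))     # solves the system (13)
print(C.round(4))
```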
## 3 Numerical Examples and Applications In this part of the article, we will look at the numerical solutions to several well-known nonlinear parabolic equations (the Benjamin-Bona-Mahony equation, Fisher's equation, the shock problem, and the Burgers-Fisher equation) associated with initial conditions. In addition to the exact solution, graphical and numerical representations of all the numerical results and the absolute errors are provided here. The absolute error refers to the discrepancy between the observed or estimated magnitude of a quantity and its real magnitude. The absolute error is determined by the following expression, \[\text{Absolute Error (AE)}=|M(x,t)-\widetilde{M}(x,t)| \tag{16}\] where \(M(x,t)\) is the exact value and \(\widetilde{M}(x,t)\) is the approximate value. The absolute error of the measurement shows how large the error actually is. In addition, the **Rate of Convergence**[43] can be defined as follows: \[\mathcal{CR}=\frac{\log\frac{\epsilon_{1}}{\epsilon_{2}}}{\log\frac{t_{1}}{t_{2}}} \tag{17}\] where \(\epsilon_{1}\), \(\epsilon_{2}\) are the absolute errors (AE) defined in Equation (16) for time steps \(t_{1}\) and \(t_{2}\), respectively. _Test Problem 1 :_ The Benjamin-Bona-Mahony equation models long waves in a nonlinear dispersive system. The equation is also called the **Regularized Long-Wave Equation (RLWE)**. It is considered an improvement of the KdV equation. The equation covers the areas of surface waves of long wavelengths in liquids, acoustic-gravity waves in compressible fluids, hydromagnetic waves in a cold plasma, and acoustic waves in anharmonic crystals. Let us consider the Benjamin-Bona-Mahony equation of the following type [28], \[\left.\begin{aligned} &\frac{\partial M(x,t)}{\partial t}-\frac{ \partial^{3}M(x,t)}{\partial x^{2}\partial t}+\frac{\partial M(x,t)}{\partial x }+M(x,t)\frac{\partial M(x,t)}{\partial x}=0\\ & M(x,0)=\text{sech}^{2}\left(\frac{x}{4}\right)\end{aligned}\right\} \tag{18}\] where \(x\in[0,0.07]\) and \(t>0\). The following provides an exact solution to the problem mentioned in (18): \[M(x,t)=\text{sech}^{2}\left(\frac{x}{4}-\frac{t}{3}\right)\] The first-order approximate solution to the corresponding problem is given by, \[\widetilde{M}(x,t)=\operatorname{sech}^{2}\Big(\frac{x}{4}\Big)+ t\Bigg[C_{1}\Big(\frac{1}{2}\operatorname{sech}^{4}\Big(\frac{x}{4} \Big)\tanh\Big(\frac{x}{4}\Big)+\frac{1}{2}\operatorname{sech}^{6}\Big(\frac{x}{4}\Big) \tanh\Big(\frac{x}{4}\Big)\Big)+C_{2}\Big(\frac{1}{2}\operatorname{ sech}^{6}\Big(\frac{x}{4}\Big)\tanh\Big(\frac{x}{4}\Big)+\frac{1}{2}\operatorname{sech}^{8}\Big(\frac{x}{4}\Big) \tanh\Big(\frac{x}{4}\Big)\Big)-C_{3}\operatorname{sech}^{6}\Big( \frac{x}{4}\Big)-C_{4}\operatorname{sech}^{8}\Big(\frac{x}{4}\Big)\Bigg] \tag{19}\] The solution (19) can be written in the form \[\widetilde{M}(x,t)=M_{0}(x,t)+\sum_{j=1}^{n}tC_{j}\phi_{j}(x),\hskip 28.452756ptn =4 \tag{20}\] where \(\phi_{1}(x)=\frac{1}{2}\operatorname{sech}^{4}\Big(\frac{x}{4}\Big)\tanh \Big(\frac{x}{4}\Big)+\frac{1}{2}\operatorname{sech}^{6}\Big(\frac{x}{4 }\Big)\tanh\Big(\frac{x}{4}\Big)\), \(\phi_{2}(x)=\frac{1}{2}\operatorname{sech}^{6}\Big(\frac{x}{4}\Big)\tanh \Big(\frac{x}{4}\Big)+\frac{1}{2}\operatorname{sech}^{8}\Big(\frac{x}{4 }\Big)\tanh\Big(\frac{x}{4}\Big)\), \(\phi_{3}(x)=-\operatorname{sech}^{6}\Big(\frac{x}{4}\Big)\), \(\phi_{4}(x)=-\operatorname{sech}^{8}\Big(\frac{x}{4}\Big)\). 
The corresponding residual function can be represented as, \[R(x,t)=\frac{\partial}{\partial t}\big{(}\widetilde{M}(x,t)\big{)}-\frac{ \partial^{3}}{\partial x^{2}\partial t}\big{(}\widetilde{M}(x,t)\big{)}+ \frac{\partial}{\partial x}\big{(}\widetilde{M}(x,t)\big{)}+\big{(}\widetilde {M}(x,t)\big{)}\frac{\partial}{\partial x}(\widetilde{M}(x,t)\big{)} \tag{21}\] To obtain the values of auxiliary parameters' we set the residual equation as, \[\int_{0}^{0.07}R(x,t)\phi_{i}(x)dx=0\] \[\text{or}\hskip 14.226378pt\sum_{j=1}^{n}C_{j}\int_{0}^{0.07} \Bigg{[}\phi_{j}-\frac{\partial^{2}\phi_{j}}{\partial x^{2}}+t\frac{\partial \phi_{j}}{\partial x}+t\phi_{j}\Bigg{(}\frac{\partial M_{0}}{\partial x}+\sum_ {k=1}^{n}tC_{k}\frac{\partial\phi_{k}}{\partial x}\Bigg{)}+M_{0}t\frac{ \partial\phi_{j}}{\partial x}\Bigg{]}\phi_{i}(x)dx\] \[=\int_{0}^{0.07}\Bigg{[}-\frac{\partial M_{0}}{\partial t}+\frac {\partial^{3}M_{0}}{\partial x^{2}\partial t}-\frac{\partial M_{0}}{\partial x }-M_{0}\frac{\partial M_{0}}{\partial x}\Bigg{]}\phi_{i}(x)dx\] \[\text{or}\hskip 14.226378pt\sum_{j=1}^{n}C_{j}K_{ij}=F_{i}\] where, \[K_{ij}=\int_{0}^{0.07}\Bigg{[}\phi_{j}-\frac{\partial^{2}\phi_{ j}}{\partial x^{2}}+t\frac{\partial\phi_{j}}{\partial x}+t\phi_{j}\Bigg{(} \frac{\partial M_{0}}{\partial x}+\sum_{k=1}^{n}tC_{k}\frac{\partial\phi_{k}}{ \partial x}\Bigg{)}+M_{0}t\frac{\partial\phi_{j}}{\partial x}\Bigg{]}\phi_{i}( x)dx\] \[F_{i}=\int_{0}^{0.07}\Bigg{[}-\frac{\partial M_{0}}{\partial t} +\frac{\partial^{3}M_{0}}{\partial x^{2}\partial t}-\frac{\partial M_{0}}{ \partial x}-M_{0}\frac{\partial M_{0}}{\partial x}\Bigg{]}\phi_{i}(x)dx\] The values of these coefficients are therefore inserted in the solution (19), yielding the final outcome. In the following table, the absolute errors obtained by different methods and our proposed method have been shown. Table (1) has been used to represent the absolute errors at different time steps for values of \(x=0.03\) and \(x=0.04\). The collocation method was used to generate the values of the coefficients of solution (19) in reference [28]. In light of the information shown in Table (1), we are able to draw the conclusion that the solutions are pretty similar to one another. The test problem has been addressed using the optimal homotopy asymptotic methodology, as shown in reference [44]. The table shows that our findings are significantly better than those achieved by the preceding method. _Test Problem 2 :_ Fisher's equation [45] is widely considered to be one of the most prominent examples of a nonlinear reaction-diffusion equation. The equation has been used in different aspects like flame propagation, chemical reactions, etc. Let us now consider Fisher's equation with the initial condition, \[\left.\begin{aligned} &\frac{\partial M(x,t)}{\partial t}=\frac{ \partial^{2}M(x,t)}{\partial x^{2}}+6M(x,t)(1-M(x,t))\\ & M(x,0)=(1+e^{x})^{-2}\end{aligned}\right\} \tag{22}\] where \(x\in[0,1]\) and \(t>0\). 
The following equation provides an exact solution to the aforementioned problem (22), \[M(x,t)=(1+e^{x-5t})^{-2}.\] The first-order approximate solution to the corresponding problem is given by, \[\widetilde{M}(x,t)=(1+e^{x})^{-2}-t\Big{[}C_{1}(1+e^{x})^{-2}+C_ {2}\Big{(} (1+e^{x})^{-2}\Big{)}^{2}\] \[+C_{3}\Big{(}(1+e^{x})^{-2}\Big{)}^{3}+C_{4}\Big{(}(1+e^{x})^{-2} \Big{)}^{4}\Big{]} \tag{23}\] where \(\phi_{1}(x)=-(1+e^{x})^{-2}\), \(\phi_{2}(x)=-\Big{(}(1+e^{x})^{-2}\Big{)}^{2}\), \(\phi_{3}(x)=-\Big{(}(1+e^{x})^{-2}\Big{)}^{3}\), \(\phi_{4}(x)=-\Big{(}(1+e^{x})^{-2}\Big{)}^{4}\). The residual function \(R(x,t)\) can be defined as \[R(x,t)=\frac{\partial\widetilde{M}(x,t)}{\partial t}-\frac{ \partial^{2}\widetilde{M}(x,t)}{\partial x^{2}}-6\widetilde{M}(x,t)(1- \widetilde{M}(x,t)) \tag{24}\] \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{\(t\)} & \multicolumn{3}{c|}{\(x=0.03\)} & \multicolumn{3}{c|}{\(x=0.04\)} \\ \cline{2-7} & **Absolute Error** & **Absolute Error [28]** & **Absolute Error [44]** & **Absolute Error** & **Absolute Error [28]** & **Absolute Error [44]** \\ \hline 0.01 & \(1.411273\times 10^{-46}\) & \(1.4104\times 10^{-66}\) & \(2.2664\times 10^{-04}\) & \(1.584927\times 10^{-08}\) & \(1.1584\times 10^{-06}\) & \(2.7703\times 10^{-04}\) \\ \hline 0.02 & \(6.004954\times 10^{-06}\) & \(5.9887\times 10^{-66}\) & \(6.03325\times 10^{-04}\) & \(9.480540\times 10^{-06}\) & \(9.4689\times 10^{-06}\) & \(7.04304\times 10^{-04}\) \\ \hline 0.03 & \(2.432482\times 10^{-08}\) & \(2.4349\times 10^{-06}\) & \(1.13601\times 10^{-08}\) & \(1.1910947\times 10^{-08}\) & \(1.9126\times 10^{-05}\) & \(1.28165\times 10^{-03}\) \\ \hline 0.04 & \(7.697610\times 10^{-05}\) & \(7.6908\times 10^{-06}\) & \(1.80786\times 10^{-03}\) & \(6.992144\times 10^{-08}\) & \(6.9944\times 10^{-05}\) & \(2.0096\times 10^{-03}\) \\ \hline 0.05 & \(1.516464\times 10^{-04}\) & \(1.5168\times 10^{-04}\) & \(2.63254\times 10^{-03}\) & \(1.429545\times 10^{-04}\) & \(1.4298\times 10^{-04}\) & \(2.88653\times 10^{-03}\) \\ \hline \end{tabular} \end{table} Table 1: Tabular Representations of the absolute error of (18) at different values of \(x\) and the residual equation can be written as \[\int_{0}^{1}R(x,t)\phi_{i}(x)dx=0\] \[\text{or}\sum_{j=1}^{n}C_{j}\int_{0}^{1}\Bigg{[}\phi_{j}-t\frac{ \partial^{2}\phi_{j}}{\partial x^{2}}+6tM_{0}\phi_{j}-6t\phi_{j}+6tM_{0}\phi_{j }+\Big{(}\sum_{k=1}^{n}6t^{2}C_{k}\phi_{k}\Big{)}\phi_{j}\Bigg{]}\phi_{i}(x)dx\] \[=\int_{0}^{1}\Bigg{[}-\frac{\partial M_{0}}{\partial t}+\frac{ \partial^{2}M_{0}}{\partial x^{2}}+6M_{0}-6M_{0}^{2}\Bigg{]}\phi_{i}(x)dx\] \[\text{or}\hskip 14.226378pt\sum_{j=1}^{n}C_{j}K_{ij}=F_{i}\] where \[K_{ij}=\int_{0}^{1}\Bigg{[}\phi_{j}-t\frac{\partial^{2}\phi_{j} }{\partial x^{2}}+6tM_{0}\phi_{j}-6t\phi_{j}+6tM_{0}\phi_{j}+\Big{(}\sum_{k=1} ^{n}6t^{2}C_{k}\phi_{k}\Big{)}\phi_{j}\Bigg{]}\phi_{i}(x)dx\] \[F_{i}=\int_{0}^{1}\Bigg{[}-\frac{\partial M_{0}}{\partial t}+ \frac{\partial^{2}M_{0}}{\partial x^{2}}+6M_{0}-6M_{0}^{2}\Bigg{]}\phi_{i}(x)dx\] We have evaluated the values of these coefficients by solving the system of equations. The values are therefore inserted in the solution (23), yielding the final outcome. Table (2) has been yielded to represent the approximate solutions and absolute errors of (22) at different time steps. The table has also included the reference absolute errors from the previously published literature. 
Table (2) shows that the suggested methodology provides a high level of precision across the domain and at various time levels, and the approximate solution closely matches the exact solution. The following table shows the convergence rate of the test problem using our proposed method for different time steps; a short script reproducing these rates from the tabulated values is given below.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{\(\mathbf{x}\)} & \multicolumn{3}{c|}{\(t=0.001\)} & \multicolumn{3}{c|}{\(t=0.01\)} \\ \cline{2-7} & **Approximate Solution** & **Absolute Error** & **Absolute Error** [45] & **Approximate Solution** & **Absolute Error** & **Absolute Error** [45] \\ \hline 0.0 & 0.25125226 & \(7.00945\times 10^{-06}\) & \(5.0\times 10^{-04}\) & 0.26274282 & \(8.92336\times 10^{-05}\) & \(4.7\times 10^{-03}\) \\ \hline 0.1 & 0.22683291 & \(1.84484\times 10^{-06}\) & \(1.0\times 10^{-06}\) & 0.23780423 & \(1.4542\times 10^{-04}\) & \(7.0\times 10^{-04}\) \\ \hline 0.2 & 0.20376738 & \(1.90543\times 10^{-06}\) & \(2.0\times 10^{-04}\) & 0.21414333 & \(1.72178\times 10^{-04}\) & \(2.3\times 10^{-03}\) \\ \hline 0.3 & 0.18214311 & \(1.75098\times 10^{-06}\) & \(2.0\times 10^{-04}\) & 0.19187472 & \(1.85299\times 10^{-04}\) & \(2.2\times 10^{-03}\) \\ \hline 0.4 & 0.16201943 & \(1.72015\times 10^{-06}\) & \(2.0\times 10^{-06}\) & 0.17107753 & \(1.92511\times 10^{-04}\) & \(2.4\times 10^{-03}\) \\ \hline 0.5 & 0.14342796 & \(1.84292\times 10^{-06}\) & \(2.0\times 10^{-04}\) & 0.15179831 & \(1.96502\times 10^{-04}\) & \(2.5\times 10^{-03}\) \\ \hline 0.6 & 0.12657404 & \(2.00938\times 10^{-06}\) & \(3.0\times 10^{-04}\) & 0.13405416 & \(1.97392\times 10^{-04}\) & \(2.5\times 10^{-03}\) \\ \hline 0.7 & 0.11083896 & \(2.08225\times 10^{-06}\) & \(2.0\times 10^{-04}\) & 0.11783625 & \(1.94426\times 10^{-04}\) & \(2.5\times 10^{-03}\) \\ \hline 0.8 & 0.09678273 & \(1.95933\times 10^{-06}\) & \(2.0\times 10^{-04}\) & 0.10813127 & \(1.8695\times 10^{-04}\) & \(2.2\times 10^{-03}\) \\ \hline 0.9 & 0.08414747 & \(1.59713\times 10^{-06}\) & \(2.0\times 10^{-04}\) & 0.08983494 & \(1.74906\times 10^{-04}\) & \(2.6\times 10^{-03}\) \\ \hline 1.0 & 0.07286085 & \(1.00891\times 10^{-06}\) & \(1.0\times 10^{-04}\) & 0.07793554 & \(1.58800\times 10^{-04}\) & \(6.0\times 10^{-03}\) \\ \hline \end{tabular}
\end{table} Table 2: Tabular representation of approximate solutions and absolute errors of (22) at different time steps.

\begin{table}
\begin{tabular}{|c|c|c|} \hline \(\mathbf{t}\) & **MAE** & **Convergence Rate** \\ \hline 0.001 & \(2.08\times 10^{-06}\) & \\ 0.002 & \(7.890\times 10^{-06}\) & 1.9234 \\ 0.003 & \(1.758\times 10^{-05}\) & 1.9759 \\ 0.004 & \(3.122\times 10^{-05}\) & 1.9963 \\ 0.005 & \(4.882\times 10^{-05}\) & 2.0035 \\ \hline \end{tabular}
\end{table} Table 3: Convergence Rate (\(\mathcal{CR}\)) using the present approach for Test Problem 2

Figure (1) illustrates the results for equation (22) obtained with the developed algorithm of the present method at different time steps. The approximate and exact solutions are so similar that they are difficult to distinguish in these diagrams. Figure (2) displays error graphs showing the absolute discrepancy between the numerical and exact solutions of (22). The absolute error maps confirm that the OAFM yields numerical results with an acceptable level of error.
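The convergence rates in Table (3) follow from the tabulated MAE values alone: between successive time levels, \(\mathcal{CR}=\log(\mathrm{MAE}_{i}/\mathrm{MAE}_{i-1})/\log(t_{i}/t_{i-1})\). A minimal check, with the values copied from the table:

```python
import numpy as np

# Reproducing the convergence rates in Table (3) from its MAE column.
t   = np.array([0.001, 0.002, 0.003, 0.004, 0.005])
mae = np.array([2.08e-06, 7.890e-06, 1.758e-05, 3.122e-05, 4.882e-05])

cr = np.log(mae[1:] / mae[:-1]) / np.log(t[1:] / t[:-1])
print(cr.round(4))   # -> [1.9234 1.9759 1.997 2.0036], matching Table (3)
                     #    to within the rounding of the tabulated MAE values
```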
_Test Problem 3:_ The shock wave equation is a prominent equation in science and technology. Shock waves have interesting applications in a variety of areas, such as medicine, the biological sciences, materials processing, manufacturing, and the microelectronics industry. Let us now take into consideration the uniformly propagating shock problem [46],

\[\left.\begin{aligned} &\frac{\partial M(x,t)}{\partial t}=\frac{1}{Re}\frac{\partial^{2}M(x,t)}{\partial x^{2}}-M(x,t)\frac{\partial M(x,t)}{\partial x}\\ & M(x,0)=\frac{x-4}{x-2}\end{aligned}\right\} \tag{25}\]

where \(x\in[-1,1]\), \(t>0\) and \(Re\) is known as the Reynolds number. Here \(Re=1\). An exact solution to problem (25) is,

\[M(x,t)=1-\frac{2}{x-t-2}\]

Figure 1: Exact (left) and approximate (right) solutions of (22) at different time steps

Figure 2: Schematic 3D representation of the absolute error of (22) for different time steps

The first-order approximate solution of (25) is obtained as,

\[\widetilde{M}(x,t)=\frac{x-4}{x-2}-t\Bigg[C_{1}\frac{x-4}{x-2}+C_{2}\Big(\frac{x-4}{x-2}\Big)^{2}+C_{3}\Big(\frac{x-4}{x-2}\Big)^{3}+C_{4}\Big(\frac{x-4}{x-2}\Big)^{4}\Bigg] \tag{26}\]

where \(\phi_{1}(x)=-\Big(\frac{x-4}{x-2}\Big)\), \(\phi_{2}(x)=-\Big(\frac{x-4}{x-2}\Big)^{2}\), \(\phi_{3}(x)=-\Big(\frac{x-4}{x-2}\Big)^{3}\), \(\phi_{4}(x)=-\Big(\frac{x-4}{x-2}\Big)^{4}\). The residual function \(R(x,t)\) can be defined as

\[R(x,t)=\frac{\partial\widetilde{M}(x,t)}{\partial t}-\frac{1}{Re}\frac{\partial^{2}\widetilde{M}(x,t)}{\partial x^{2}}+\widetilde{M}(x,t)\frac{\partial\widetilde{M}(x,t)}{\partial x} \tag{27}\]

and the residual equation can be written as

\[\int_{-1}^{1}R(x,t)\phi_{i}(x)dx=0\]
\[\text{or}\quad\sum_{j=1}^{n}C_{j}\int_{-1}^{1}\Bigg[\phi_{j}-\frac{t}{Re}\frac{\partial^{2}\phi_{j}}{\partial x^{2}}+tM_{0}\frac{\partial\phi_{j}}{\partial x}+t\phi_{j}\frac{\partial M_{0}}{\partial x}+\Big(\sum_{k=1}^{n}t^{2}C_{k}\frac{\partial\phi_{k}}{\partial x}\Big)\phi_{j}\Bigg]\phi_{i}(x)dx=\int_{-1}^{1}\Bigg[-\frac{\partial M_{0}}{\partial t}+\frac{1}{Re}\frac{\partial^{2}M_{0}}{\partial x^{2}}-M_{0}\frac{\partial M_{0}}{\partial x}\Bigg]\phi_{i}(x)dx\]
\[\text{or}\quad\sum_{j=1}^{n}C_{j}K_{ij}=F_{i}\]

where

\[K_{ij}=\int_{-1}^{1}\Bigg[\phi_{j}-\frac{t}{Re}\frac{\partial^{2}\phi_{j}}{\partial x^{2}}+tM_{0}\frac{\partial\phi_{j}}{\partial x}+t\phi_{j}\frac{\partial M_{0}}{\partial x}+\Big(\sum_{k=1}^{n}t^{2}C_{k}\frac{\partial\phi_{k}}{\partial x}\Big)\phi_{j}\Bigg]\phi_{i}(x)dx\]
\[F_{i}=\int_{-1}^{1}\Bigg[-\frac{\partial M_{0}}{\partial t}+\frac{1}{Re}\frac{\partial^{2}M_{0}}{\partial x^{2}}-M_{0}\frac{\partial M_{0}}{\partial x}\Bigg]\phi_{i}(x)dx\]

To assess the efficacy and reliability of this process, Table (4) compares the approximate numerical solution with the exact solution. Table (4) shows that the method achieves high precision across a variety of time levels. Figure (3) provides 3D visual representations of the exact and approximate results of (25) for different time steps. As can be seen in Figure (3), the exact and approximate solutions are difficult to differentiate, which supports the acceptance of the numerical results obtained with the optimal auxiliary function method. Figure (4) exhibits a visual representation of the absolute errors for different time steps and for different values of \(x\).
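As an independent sanity check on Test Problem 3, the stated exact solution can be verified symbolically. A minimal SymPy sketch (our own check, not part of the paper's procedure):

```python
import sympy as sp

# The stated exact solution M = 1 - 2/(x - t - 2) should satisfy
# M_t = (1/Re) M_xx - M M_x with Re = 1, i.e. problem (25).
x, t = sp.symbols('x t')
Re = 1
M = 1 - 2 / (x - t - 2)

pde_residual = sp.diff(M, t) - sp.diff(M, x, 2) / Re + M * sp.diff(M, x)
print(sp.simplify(pde_residual))   # -> 0, so (25) is satisfied exactly
```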
The error maps in Figure (4) show that the absolute errors are negligible, further verifying the reliability of the recommended approach.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{\(\mathbf{x}\)} & \multicolumn{2}{c|}{\(t=0.01\)} & \multicolumn{2}{c|}{\(t=0.02\)} & \multicolumn{2}{c|}{\(t=0.03\)} \\ \cline{2-7} & **Approximate Solution** & **Absolute Error** & **Approximate Solution** & **Absolute Error** & **Approximate Solution** & **Absolute Error** \\ \hline -1.0 & 1.66446467 & \(1.28377\times 10^{-05}\) & 1.66228964 & \(7.00945\times 10^{-05}\) & 1.66013535 & \(6.93521\times 10^{-05}\) \\ \hline -0.8 & 1.71175467 & \(1.09049\times 10^{-05}\) & 1.70925886 & \(1.84484\times 10^{-05}\) & 1.70679439 & \(8.06126\times 10^{-05}\) \\ \hline -0.6 & 1.76629410 & \(1.05769\times 10^{-05}\) & 1.76340273 & \(1.90543\times 10^{-05}\) & 1.76055458 & \(9.83131\times 10^{-05}\) \\ \hline -0.4 & 1.82988779 & \(1.22779\times 10^{-05}\) & 1.82650026 & \(1.75098\times 10^{-05}\) & 1.82316979 & \(1.24524\times 10^{-04}\) \\ \hline -0.2 & 1.90499387 & \(1.64933\times 10^{-05}\) & 1.90097158 & \(1.72015\times 10^{-05}\) & 1.89702342 & \(1.62435\times 10^{-04}\) \\ \hline 0.0 & 1.99504863 & \(2.37565\times 10^{-05}\) & 1.99019534 & \(1.84292\times 10^{-05}\) & 1.98543925 & \(2.17583\times 10^{-04}\) \\ \hline 0.2 & 2.10500705 & \(3.46761\times 10^{-05}\) & 2.09903586 & \(2.00938\times 10^{-04}\) & 2.09319728 & \(3.01108\times 10^{-04}\) \\ \hline 0.4 & 2.24228626 & \(5.02384\times 10^{-05}\) & 2.23476196 & \(2.08225\times 10^{-04}\) & 2.22743305 & \(4.39187\times 10^{-04}\) \\ \hline 0.6 & 2.41851329 & \(7.35720\times 10^{-05}\) & 2.40874659 & \(1.95933\times 10^{-04}\) & 2.39930454 & \(7.03143\times 10^{-04}\) \\ \hline 0.8 & 2.65301155 & \(1.18993\times 10^{-04}\) & 2.63985645 & \(1.59713\times 10^{-04}\) & 2.62733328 & \(1.31702\times 10^{-03}\) \\ \hline 1.0 & 2.98045785 & \(2.59828\times 10^{-04}\) & 2.96191353 & \(1.00891\times 10^{-03}\) & 2.94484991 & \(3.10234\times 10^{-03}\) \\ \hline \end{tabular}
\end{table} Table 4: Tabular representation of approximate solutions and absolute errors of (25) at different time steps

Figure 3: Exact (left) and approximate (right) solutions of (25) at different time steps

_Test Problem 4:_ The Burgers–Fisher equation is a standard model for the interplay between reaction processes, convection effects, and diffusive mobility. The equation is important to the study of a wide variety of subfields of mathematics and physics, including finance, gas dynamics, and traffic flow. Let us consider the Burgers–Fisher equation [47] with the initial condition,

\[\left.\begin{aligned} &\frac{\partial M(x,t)}{\partial t}+\alpha M(x,t)^{\omega}\frac{\partial M(x,t)}{\partial x}-\frac{\partial^{2}M(x,t)}{\partial x^{2}}=\beta M(x,t)(1-M(x,t)^{\omega})\\ & M(x,0)=\Big\{\frac{1}{2}+\frac{1}{2}\tanh\Big(\frac{-\alpha\omega}{2(\omega+1)}x\Big)\Big\}^{\frac{1}{\omega}}\end{aligned}\right\} \tag{28}\]

where \(x\in[0,1]\) and \(t>0\). We choose \(\alpha=1\), \(\beta=1\), and \(\omega=1\) to obtain our approximate solution.
The following provides an exact solution to problem (28),

\[M(x,t)=\Bigg(\frac{1}{2}+\frac{1}{2}\tanh\Big[\frac{-\alpha\omega}{2(\omega+1)}\Big(x-\Big(\frac{\alpha}{\omega+1}+\frac{\beta(\omega+1)}{\alpha}\Big)t\Big)\Big]\Bigg)^{\frac{1}{\omega}}.\]

The first-order approximate solution of (28) is obtained as,

\[\widetilde{M}(x,t)=\Big(\frac{1}{2}+\frac{1}{2}\tanh\Big(\frac{-x}{4}\Big)\Big)-t\Bigg[C_{1}\Big(\frac{1}{2}+\frac{1}{2}\tanh\Big(\frac{-x}{4}\Big)\Big)+C_{2}\Big(\frac{1}{2}+\frac{1}{2}\tanh\Big(\frac{-x}{4}\Big)\Big)^{2}+C_{3}\Big(\frac{1}{2}+\frac{1}{2}\tanh\Big(\frac{-x}{4}\Big)\Big)^{3}+C_{4}\Big(\frac{1}{2}+\frac{1}{2}\tanh\Big(\frac{-x}{4}\Big)\Big)^{4}\Bigg] \tag{29}\]

The solution (29) can be written in the form

\[\widetilde{M}(x,t)=M_{0}(x,t)+\sum_{j=1}^{n}tC_{j}\phi_{j}(x),\qquad n=1,2,3,4 \tag{30}\]

where \(\phi_{1}(x)=-\Big(\frac{1}{2}+\frac{1}{2}\tanh\Big(\frac{-x}{4}\Big)\Big)\), \(\phi_{2}(x)=-\Big(\frac{1}{2}+\frac{1}{2}\tanh\Big(\frac{-x}{4}\Big)\Big)^{2}\), \(\phi_{3}(x)=-\Big(\frac{1}{2}+\frac{1}{2}\tanh\Big(\frac{-x}{4}\Big)\Big)^{3}\), \(\phi_{4}(x)=-\Big(\frac{1}{2}+\frac{1}{2}\tanh\Big(\frac{-x}{4}\Big)\Big)^{4}\).

Figure 4: Schematic representation of the absolute error of (25) for various time steps

The corresponding residual function can be represented as,

\[R(x,t)=\frac{\partial\widetilde{M}}{\partial t}+\widetilde{M}\frac{\partial\widetilde{M}}{\partial x}-\frac{\partial^{2}\widetilde{M}}{\partial x^{2}}-\widetilde{M}(1-\widetilde{M}) \tag{31}\]

To obtain the values of the auxiliary parameters, we set the residual equation as,

\[\int_{0}^{1}R(x,t)\phi_{i}(x)dx=0\]
\[\text{or}\quad\sum_{j=1}^{n}C_{j}\int_{0}^{1}\Bigg[\phi_{j}+t\phi_{j}\frac{\partial M_{0}}{\partial x}+M_{0}t\frac{\partial\phi_{j}}{\partial x}+\Big(\sum_{k=1}^{n}t^{2}C_{k}\frac{\partial\phi_{k}}{\partial x}\Big)\phi_{j}-t\frac{\partial^{2}\phi_{j}}{\partial x^{2}}-t\phi_{j}(1-M_{0})+M_{0}t\phi_{j}+t^{2}\phi_{j}\Big(\sum_{k=1}^{n}C_{k}\phi_{k}\Big)\Bigg]\phi_{i}(x)dx=\int_{0}^{1}\Bigg[-\frac{\partial M_{0}}{\partial t}-M_{0}\frac{\partial M_{0}}{\partial x}+\frac{\partial^{2}M_{0}}{\partial x^{2}}+M_{0}(1-M_{0})\Bigg]\phi_{i}(x)dx\]
\[\text{or}\quad\sum_{j=1}^{n}C_{j}K_{ij}=F_{i}\]

where,

\[K_{ij}=\int_{0}^{1}\Bigg[\phi_{j}+t\phi_{j}\frac{\partial M_{0}}{\partial x}+M_{0}t\frac{\partial\phi_{j}}{\partial x}+\Big(\sum_{k=1}^{n}t^{2}C_{k}\frac{\partial\phi_{k}}{\partial x}\Big)\phi_{j}-t\frac{\partial^{2}\phi_{j}}{\partial x^{2}}-t\phi_{j}(1-M_{0})+M_{0}t\phi_{j}+t^{2}\phi_{j}\Big(\sum_{k=1}^{n}C_{k}\phi_{k}\Big)\Bigg]\phi_{i}(x)dx\]
\[F_{i}=\int_{0}^{1}\Bigg[-\frac{\partial M_{0}}{\partial t}-M_{0}\frac{\partial M_{0}}{\partial x}+\frac{\partial^{2}M_{0}}{\partial x^{2}}+M_{0}(1-M_{0})\Bigg]\phi_{i}(x)dx\]

The values of these coefficients are obtained by solving the resulting system of equations and are then inserted into the solution (29), yielding the final outcome. Table (6) shows the convergence rate of this test problem using the proposed method for different time steps.
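Note that in Test Problems 2–4 the auxiliary functions follow the same pattern, \(\phi_{i}(x)=-\big(M_{0}(x)\big)^{i}\), built from the initial condition \(M_{0}\). A small helper capturing this pattern (our own generalization; `make_basis` and the variable names are hypothetical):

```python
import numpy as np

def make_basis(m0, n=4):
    """Return [phi_1, ..., phi_n] with phi_i(x) = -m0(x)**i."""
    return [lambda x, p=p: -m0(x) ** p for p in range(1, n + 1)]

# For the Burgers-Fisher problem (28) with alpha = beta = omega = 1:
m0_bf = lambda x: 0.5 + 0.5 * np.tanh(-x / 4.0)
phis_bf = make_basis(m0_bf)
```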
Figure (5) provides pictorial representations of the exact and approximate results obtained by the present methodology at different time steps; the figure shows good agreement between the exact and approximate data. Figure (6) exhibits a visual representation of the absolute errors at various nodes for different time steps, confirming that the algorithm is reliable.

\begin{table}
\begin{tabular}{|c|c|c|} \hline \(\mathbf{t}\) & **MAE** & **Convergence Rate** \\ \hline 0.01 & \(7.8876\times 10^{-06}\) & \\ 0.02 & \(3.1840\times 10^{-05}\) & 2.0132 \\ 0.03 & \(7.2265\times 10^{-05}\) & 2.0214 \\ 0.04 & \(1.2954\times 10^{-04}\) & 2.0288 \\ 0.05 & \(2.0402\times 10^{-04}\) & 2.0356 \\ \hline \end{tabular}
\end{table} Table 6: Convergence Rate (\(\mathcal{CR}\)) using the present approach for Test Problem 4

Figure 5: Exact (left) and approximate (right) solutions of (28) at different time steps

Figure 6: Schematic representation of the absolute error of (28) for different time steps

## Conclusion

In this research, we have obtained numerical approximations of nonlinear parabolic PDEs, subject to the given initial conditions, through the application of the optimal auxiliary function method. The procedure determines the initial and first approximations of the parabolic PDEs; by adding these parts, we derive the first-order approximate solutions. We have also employed the well-known Galerkin approach to calculate the coefficients of the approximate solutions. The technique has then been applied to numerically solve a number of well-known parabolic PDEs, and the approximate results have been compared with the exact ones. All of the approximate results, together with their graphical and tabular representations and their absolute errors, are provided. These results show that the suggested approach is efficient, reliable, and easy to implement. In the future, the proposed method can be applied to 2D and 3D nonlinear parabolic PDEs.

## Acknowledgement

The first author is grateful to the National Science & Technology (NST), Ministry of Science & Technology, Govt. of the People's Republic of Bangladesh, for partial support through the 'NST Fellowship' during this research.
2308.10926
Symfind: Addressing the Fragility of Subhalo Finders and Revealing the Durability of Subhalos
A major question in $\Lambda$CDM is what this theory actually predicts for the properties of subhalo populations. Subhalos are difficult to simulate and to find within simulations, and this propagates into uncertainty in theoretical predictions for satellite galaxies. We present Symfind, a new particle-tracking-based subhalo finder, and demonstrate that it can track subhalos to orders-of-magnitude lower masses than commonly used halo-finding tools, with a focus on Rockstar and consistent-trees. These longer survival times mean that at a fixed peak subhalo mass, we find $\approx 15\%{-}40\%$ more subhalos within the virial radius, $R_\textrm{vir}$, and $\approx 35\%-120\%$ more subhalos within $R_\textrm{vir}/4$ in the Symphony dark-matter-only simulation suite. More subhalos are found as resolution is increased. We perform extensive numerical testing. In agreement with idealized simulations, we show that the $v_{\rm max}$ of subhalos is only resolved at high resolutions ($n_\textrm{peak}\gtrsim3\times 10^4$), but that mass loss itself can be resolved at much more modest particle counts ($n_\textrm{peak}\gtrsim4\times 10^3$). We show that Rockstar converges to false solutions for the mass function, radial distribution, and disruption masses of subhalos. We argue that our new method can trace resolved subhalos until the point of typical galaxy disruption without invoking ``orphan'' modeling. We outline a concrete set of steps for determining whether other subhalo finders meet the same criteria. We publicly release Symfind catalogs and particle data for the Symphony simulation suite at \url{http://web.stanford.edu/group/gfc/symphony}.
Philip Mansfield, Elise Darragh-Ford, Yunchong Wang, Ethan O. Nadler, Risa H. Wechsler
2023-08-21T18:00:00Z
http://arxiv.org/abs/2308.10926v1
# Symfind: Addressing the Fragility of Subhalo Finders and Revealing the Durability of Subhalos

###### Abstract

A major question in \(\Lambda\)CDM is what this theory _actually_ predicts for the properties of subhalo populations. Subhalos are difficult to simulate and to find within simulations, and this propagates into uncertainty in theoretical predictions for satellite galaxies. We present Symfind, a new particle-tracking-based subhalo finder, and demonstrate that it can track subhalos to orders-of-magnitude lower masses than commonly used halo-finding tools, with a focus on Rockstar and consistent-trees. These longer survival times mean that at a fixed peak subhalo mass, we find \(\approx 15\%\)-\(40\%\) more subhalos within the virial radius, \(R_{\rm vir}\), and \(\approx 35\%\)-\(120\%\) more subhalos within \(R_{\rm vir}/4\) in the Symphony dark-matter-only simulation suite. More subhalos are found as resolution is increased. We perform extensive numerical testing. In agreement with idealized simulations, we show that the \(v_{\rm max}\) of subhalos is only resolved at high resolutions (\(n_{\rm peak}\gtrsim 3\times 10^{4}\)), but that mass loss itself can be resolved at much more modest particle counts (\(n_{\rm peak}\gtrsim 4\times 10^{3}\)). We show that Rockstar converges to false solutions for the mass function, radial distribution, and disruption masses of subhalos. We argue that our new method can trace resolved subhalos until the point of typical galaxy disruption without invoking "orphan" modeling. We outline a concrete set of steps for determining whether other subhalo finders meet the same criteria. We publicly release Symfind catalogs and particle data for the Symphony simulation suite at [http://web.stanford.edu/group/gfc/symphony](http://web.stanford.edu/group/gfc/symphony).

Galaxy dark matter halos -- Computational methods -- Galaxy evolution

## 1 Introduction

Many open questions in cosmology are about the state of the universe: questions like "what is the nature of dark matter?" or "how has the clustering of matter evolved over time?" However, many are also questions about the predictions of a specific model. This second class of questions is interesting because such questions hamper our ability to answer the first kind. At present, \(\Lambda\)CDM -- i.e., a popular class of cosmological models that contain both "cold" dark matter and a cosmological constant -- suffers from an open question of this second kind: there is substantial uncertainty over how subhalos behave and disrupt in \(\Lambda\)CDM. This uncertainty is a major systematic in numerous cutting-edge cosmological probes.

In \(\Lambda\)CDM, all galaxies form within massive dark matter structures known as _halos_ (White & Rees, 1978; Wechsler & Tinker, 2018). Large galaxies are surrounded by swarms of small _satellite_ galaxies that inhabit their own dark matter _subhalos_. The lives of satellite galaxies are dramatic: originally isolated galaxies in their own right, they are accreted onto hosts and pulled into chaotic orbits that can vary from leisurely transits across the host halo's outskirts to rapid, disruptive encounters with the host's center.
Throughout its orbit, mass in a subhalo's outskirts is pulled away by the gravitational field of the host, and this decrease in mass causes the region of the subhalo protected from the host's gravity -- the region inside the _tidal radius_(see review in van den Bosch et al., 2018) -- to decrease. Simultaneously, tidal shocks heat the interior of subhalos at each pericentric passage, causing the interior mass to expand outwards towards the encroaching tidal radius (e.g., Hayashi et al., 2003; also see historical review in Moore, 2000). This leads to runaway, exponential mass loss (Tormen et al., 1998; Klypin et al., 1999a). Although the satellite galaxy is initially insulated from this mass loss, eventually, its tidal radius decreases so much that even the galaxy is torn apart (Penarrubia et al., 2008; Smith et al., 2016). Modifying the nature of dark matter can change the abundance, properties, and durability of subhalos (e.g., Lovell et al., 2014; Nadler et al., 2021), which makes observations of satellite galaxies a rich cosmological probe. Tantalizingly, there is no shortage of apparent conflicts between observed satellite galaxies and the predictions of \(\Lambda\)CDM (see reviews in Bullock & Boylan-Kolchin, 2017; Bechtol et al., 2022). Relative to observations, simulated satellite groups _appear_ to be too diffusely concentrated around their hosts (e.g. Carlsten et al., 2022, see Section 5.2 for further review), to be too isotropically distributed (e.g. Pawlowski, 2018), and to potentially have incorrect dark matter distributions (e.g., Oman et al., 2015; Hayashi et al., 2020). Historically, a large body of literature has been written on the potential tension between the observed abundance of satellite galaxies and the simulated abundance of subhalos (e.g., Moore et al., 1999; Klypin et al., 1999; Boylan-Kolchin et al., 2011), but both the original formulation of this problem (the "missing satellites problem") and a more challenging reformulation ("too big to fail") seem to be resolved by a combination of improved observational programs and better modeling of selection effects (e.g., Newton et al., 2018; Kim et al., 2018; Drlica-Wagner et al., 2020; Nadler et al., 2020), more realistic treatments of galaxy formation physics (e.g., Benson et al., 2002; Somerville, 2002; Kravtsov et al., 2004; Wetzel et al., 2016; Lovell et al., 2017), and accounting for the impact of the central galaxy's potential on satellite disruption (e.g., Brooks et al., 2013; Garrison-Kimmel et al., 2017). Satellites also impact our ability to infer cosmological parameters from large-scale clustering statistics (e.g., the probability of pairs of galaxies being separated by a given distance). Altering the properties of dark energy and other cosmological parameters changes the rate that structure forms and can lead to large changes in these statistics, particularly at "small" scales (\(r\lesssim 10\) Mpc; e.g., Wechsler and Tinker, 2018). However, the durability of satellites directly impacts these clustering statistics. This is true even at scales well beyond the radius of an individual halo due to satellites in neighboring halos adding weight to the correlation function (the so-called "two-halo term," e.g., Cooray and Sheth, 2002). As a result, the impact of cosmology on some clustering statistics can be canceled out by corresponding changes in the model used for satellites (e.g., Wechsler and Tinker, 2018). 
Some popular models that attempt to match these clustering statistics by populating simulated subhalos with galaxies cannot match observations self-consistently unless one assumes that galaxies and their subhalos far outsurvive their simulated counterparts (Campbell et al., 2018, Appendix B in Behroozi et al., 2019). There is controversy over whether similar assumptions are needed to match the radial distribution of satellites (see overview in Section 5.2). Does this mean that satellite galaxies are a swan song for \(\Lambda\)CDM? Has such a far-reaching theory been undone by its smallest and most humble predictions? Not necessarily. The behavior of satellite galaxies and subhalos is a non-linear process that is primarily understood through numerical simulations. To make predictions for satellite populations from simulations, one must be able to configure those simulations in a way that allows for subhalo evolution to be properly resolved and must be able to reliably extract subhalo information from simulation outputs. Both steps are non-trivial. The first step, running numerically reliable simulations, relies on _convergence testing_. Convergence testing is a form of correctness testing that compares simulation behavior at varying resolutions (e.g., Ludlow et al., 2019). The true behavior of a system cannot depend on any purely numerical parameter, so simulation results are not correct in any region of numerical parameter space where small changes in numerical parameters lead to meaningful changes in those results. _Convergence_ -- agreement between resolution levels -- is a necessary but insufficient condition for correctness. Convergence without correctness is called _false convergence_, and false convergence has been observed even in some of the largest cosmological simulations ever run (Mansfield and Avestruz, 2021). Currently, one of the most pressing issues in this form of testing is whether subhalo populations converge at the resolution levels typically analyzed in cosmological simulations. Subhalos in cosmological simulations tend to disrupt quickly, even at high resolutions (van den Bosch, 2017; Jiang and van den Bosch, 2017; Han et al., 2016; Behroozi et al., 2019; Diemer et al., 2023), but this does not seem to be the correct behavior of subhalos in \(\Lambda\)CDM. Rapid disruption is certainly the correct behavior for very large subhalos (\(m_{\rm peak}/M_{\rm vir}\gtrsim 0.1\); Darragh-Ford et al., in prep). Dynamical friction quickly saps orbital energy from these subhalos, causing them to sink to the centers of their hosts within a few orbits (e.g., Vasiliev et al., 2022) and to melt into their hosts' smooth matter distributions. However, dynamical friction is far weaker for low-mass subhalos (e.g., van den Bosch et al., 2016; see also Section 5.2 for extended discussion) and generally does not cause these subhalos to sink to the host's center on observationally relevant timescales. These low-mass subhalos can still experience true disruption under certain alternative cosmologies and baryonic physics formulations that cause low-density central cores in subhalos, as the lowered density makes them less resilient to tidal fields (e.g., Penarrubia et al., 2010; Errani et al., 2023). 
Historically, there has been some debate over whether the same can occur in the "cuspy" high-density centers of pure-\(\Lambda\)CDM subhalos (see e.g., Errani and Navarro, 2021 for review), but modern high-resolution, idealized simulations strongly predict that this is not the case: low-mass subhalos can survive as shrinking bound remnants for essentially arbitrarily long periods of time (Penarrubia et al., 2010; van den Bosch et al., 2018; Errani and Penarrubia, 2020; Errani and Navarro, 2021). Even tidal shocks from disc potentials are not able to fully disrupt these subhalos (Green et al., 2022). Because changes in cosmology and galaxy formation physics can decrease the durability of subhalos, simulation techniques that erroneously lead to rapid disruption are particularly pernicious, allowing pure-\(\Lambda\)CDM simulations to falsely emulate these effects and thus hampering analysis that compares these types of models.

To make matters worse, convergence testing between cosmological simulations suggests that modest and easily achieved resolution levels are enough to ensure convergence in subhalo abundances (e.g., Mansfield and Avestruz, 2021; Nadler et al., 2023, \(\gtrsim 10^{2}\) to \(10^{3}\) particles depending on the statistic). But idealized simulations suggest that orders of magnitude more particles are needed to prevent numerical effects from causing subhalos to lose mass too quickly, especially for old subhalos (e.g., van den Bosch et al., 2018, \(\approx 10^{5}\) particles needed, see also Sections 4.3, 4.4, and 4.5 for extended discussion). This tension leads to an uncomfortable question: is the apparent reliability of subhalo analysis in cosmological simulations merely false convergence?

Assuming that subhalos can be simulated reliably, they must also be identified within simulation outputs. _Halo finders_ are software packages that attempt to identify halos and their subhalos within simulations. Finding isolated halos is mostly a solved problem (Knebe et al., 2011) -- except for ambiguities about halo boundaries (e.g., More et al., 2011, 2015; Diemer, 2021) and about the complexity of mergers between equal-mass halos (e.g., Behroozi et al., 2014) -- so one of the most important properties of a halo finder is how effective it is at identifying and measuring the properties of subhalos. Most halo finders work primarily within a single snapshot, requiring a second tool, a _merger-tree_ code that connects halos and subhalos across time. The split between the halo finder and merger tree is not always clear: some halo finders use information from previous timesteps or may even explicitly track a halo/subhalo's particles over time (see Section 7 for more details).

The wide variety of subhalo finders is at least partly caused by the inherent difficulty of finding subhalos. Subhalos are enveloped in dense streams of their own lost matter; they must be identified against the complex background of the host's density field and can be confused with non-subhalo structure within the host halo, such as fluctuations and the sloshing of dark matter as the host settles into equilibrium after a major merger.
Testing halo finders is also difficult: beyond convergence testing (see above), one option is to test whether finders can recover idealized subhalos manually placed into a host halo (e.g., Knebe et al., 2011), and a second is to compare the performance of different halo finders across realistic halos (e.g., Knebe et al., 2011; Onions et al., 2012, 2013; Srisawat et al., 2013; Avila et al., 2014; Behroozi et al., 2014; Elahi et al., 2019). The former method suffers from the fact that much of the difficulty in subhalo finding comes from the complex interplay between host and subhalo, or subhalo and subhalo remnant, meaning that idealized tests will overestimate a tool's reliability. The latter method suffers from the fact that the researcher usually does not know the correct answer ahead of time. If two packages disagree, how does one know if one is under-predicting, the other is over-predicting, or both are wrong?

In this paper, we aim to make significant progress on these questions. After outlining the data, tools, and definitions used in this paper in Section 2, we present Symfind, a new subhalo-finding method based on "particle-tracking", in Section 3. In Section 4, we perform extensive testing on the reliability of this method and on the convergence properties of subhalos in general. In Section 5, we investigate the impact of our method on subhalo populations. In Section 6, we argue that our subhalo finder (and any subhalo finder with similar performance) will no longer be the limiting factor for analyzing the abundance of satellite galaxy populations. In Section 7, we compare with other methods, and in Section 8, we provide our conclusions.

Throughout this paper, we use lower-case letters to label the properties of subhalos (e.g., \(m\), \(n\), \(v_{\rm max}\), \(r\)), and upper-case letters to label the properties of central/host halos (e.g., \(M_{\rm vir}\), \(R_{\rm vir}\)). These central halos are sometimes referred to as "main subhalos" in the literature.

## 2 Simulations, Codes, and Definitions

The analysis in this paper makes extensive use of five of the Symphony simulation suites: SymphonyLMC, SymphonyMilkyWay, SymphonyMilkyWayHR, SymphonyGroup, and SymphonyL-Cluster (Mao et al., 2015; Bhattacharyya et al., 2022; Nadler et al., 2023). The full details of these simulations can be found in Nadler et al. (2023), and we list the most important parameters of these simulations in Table 1. Our scientific results focus primarily on characterizing the average subhalo populations in SymphonyMilkyWay. In some cases where numerical behavior does not depend on central halo mass or particle mass, we stack all the suites together to improve number statistics. Some analysis requires isolating the impact of resolution from subhalo mass, in which case we compare the high-resolution resimulations in SymphonyMilkyWayHR with a subset of SymphonyMilkyWay.

SymphonyMilkyWayHR consists of the four central halos in SymphonyMilkyWay with the smallest Lagrangian regions. These halos were resimulated with particle masses that were eight times smaller and force-softening scales that were two times smaller than the fiducial suite. This selection process allowed for lower cost resimulations, but also means that these four objects are not representative of the entire Milky Way-mass sample. Most notably, our testing found that this high-resolution subsample has fewer high-mass subhalos and a different distribution of subhalo mass loss rates than the full suite.
This means that for some resolution tests, this suite cannot be directly compared against the full SymphonyMilkyWay suite and needs to be compared against only their fiducial-resolution re-simulations. The original SymphonyMilkyWayHR suite contained five hosts, but we remove the fourth SymphonyMilkyWayHR host, Halo530, from both the fiducial and high-resolution simulation sets when performing matched analysis. Several high-mass subhalos and their sub-subhalos that were accreted in the high-resolution run were never accreted in the fiducial resolution run, as determined by manual inspection and position-based cross-matching. This difference leads to very different subhalo populations. No similar mismatches were found in any other host pairs. We make substantial use of the Rockstar subhalo finder (Behroozi et al., 2013) and consistent-trees merger tree code (Behroozi et al., 2013). Both tools are widely used, and a common reading of the testing literature is that they perform at least as well as most other subhalo finders and merger tree codes, respectively (see discussion and caveats in Section 7). To simplify language, we refer to the combined Rockstar+consistent-trees pipeline as "RCT" and both steps simply as Rockstar in figures, as is commonly done in the literature. We also make use of the Subfind halo finder (Springel et al., 2001). ### Halo Property Definitions We define a halo as becoming a _subhalo_ at its snapshot of first infall and as a _central halo_ before this point. A _host halo_ is a central halo that a subhalo has fallen into at some point in the past. _First infall_ is defined as the first snapshot at which a subhalo is within the _virial radius_ (\(R_{\rm vir}\), see below) of a more massive halo. In practice, this definition becomes complicated in the presence of halo finder errors, but we correct these issues using the methods described in Appendix A.1. A consequence of this definition is that it includes _splashback_ subhalos (subhalos whose orbits have temporarily taken them outside the virial radius of their host halo, e.g., Diemer, 2021 and references therein) and flyby subhalos (former subhalos who have truly been ejected from their host, often due to three-body interactions; e.g., Ludlow et al., 2009). True "flyby" subhalos are rare: only 1-2% of all subhalos that have left their host's virial radius are outside the extended splashback surface (Mansfield and Kravtsov, 2020), so classifying both objects as subhalos is a reasonable approximation. We take two definitions of halo mass. Central halos are characterized by their virial mass, \(M_{\rm vir}\), the total bound mass within the virial radius, \(R_{\rm vir}\), defined relative to a characteristic density, \(\rho_{\rm vir}\), such that \[M_{\rm vir}=\frac{4\pi}{3}\rho_{\rm vir}R_{\rm vir}^{3}. \tag{1}\] We adopt the Bryan and Norman (1998) definition of \(\rho_{\rm vir}\). For the cosmology used by SymphonyMilkyWay, \(\rho_{\rm vir}=99.2\rho_{c}\) at \(z=0\). For subhalos, our definition of subhalo mass is dependent on the subhalo finder. RCT subhalo masses are virial masses computed using only the bound particles within that subhalo's local phase-space overdensity. Symfind subhalo masses are the sum of the masses of all bound particles within that subhalo's tracked particle set. We label both masses as \(m\). There are meaningful differences between these two mass definitions (see Appendix D). 
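For concreteness, equation (1) can be inverted to compute \(R_{\rm vir}\) from \(M_{\rm vir}\) once \(\rho_{\rm vir}\) is fixed. A minimal sketch (ours, not code from the Symfind pipeline; the example numbers assume the \(\rho_{\rm vir}=99.2\rho_{c}\) value quoted above together with \(h=0.7\)):

```python
import numpy as np

# Inverting equation (1): R_vir = (3 M_vir / (4 pi rho_vir))^(1/3).
# Units are up to the caller (e.g., Msun and Msun/kpc^3 give kpc).

def r_vir(m_vir, rho_vir):
    return (3.0 * m_vir / (4.0 * np.pi * rho_vir)) ** (1.0 / 3.0)

# Assuming rho_c ~ 136 Msun/kpc^3 for h = 0.7, so rho_vir = 99.2 rho_c:
print(r_vir(1e12, 99.2 * 136.0))   # -> ~261 kpc for a 10^12 Msun halo
```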
As for the differences between the two subhalo mass definitions: RCT only uses a single "unbinding pass" to compute boundedness, increasing masses by \(\approx 5\%\) relative to Symfind's full unbinding, and RCT also tends to include some host particles within the subhalo, further increasing masses by \(\approx 5\%\). The difference between the total bound mass and the total bound mass within the virial radius is small for subhalos: we find an \(\approx 2\%\) effect in Symfind.

In some places, we also characterize subhalo masses via \(v_{\rm max}\), the maximum value of the rotational velocity \(v_{\rm rot}(r)=\sqrt{G\,m(<r)/r}\) for \(r>\epsilon\). \(v_{\rm max}\) is closely related to \(m\) in central halos, but decreases more slowly than \(m\) in disrupting subhalos (see Section 4.4).

There are numerous ways to characterize a subhalo's mass prior to infall. In this paper, we use \(m_{\rm peak}\) and \(v_{\rm peak}\), the maximum values of \(m\) and \(v_{\rm max}\), respectively, _prior to the snapshot when the subhalo first became a subhalo_. This latter condition is non-standard and has been introduced to avoid certain halo finder errors (see Fig. 2, Section 6.2, and Appendix A.1). In some places in this paper, we compare against studies that used alternative definitions of a halo's pre-infall mass such as \(v_{\rm infall}\) (the value of \(v_{\rm max}\) at the snapshot of first infall), \(m_{\rm infall}\) (the value of \(m\) at the snapshot of first infall), and \(v_{\rm Mpeak}\) (the value of \(v_{\rm max}\) at the snapshot when \(m\) reaches its maximum pre-infall value).

\begin{table}
\begin{tabular}{l|c|c|c|c} \hline Simulation & \(N_{\rm host}\) & \(M_{\rm vir}\) & \(m_{p}\) & \(\epsilon\) \\ & & \((M_{\odot})\) & \((M_{\odot})\) & (kpc) \\ \hline \hline SymphonyLMC & 39 & \(10^{11.02}\) & \(5.0\times 10^{4}\) & 0.08 \\ SymphonyMilkyWay & 45 & \(10^{12.09}\) & \(4.0\times 10^{5}\) & 0.17 \\ SymphonyMilkyWayHR & 4 & \(10^{12.07}\) & \(5.0\times 10^{4}\) & 0.08 \\ SymphonyGroup & 49 & \(10^{13.12}\) & \(3.3\times 10^{6}\) & 0.36 \\ SymphonyL-Cluster & 33 & \(10^{14.62}\) & \(2.2\times 10^{8}\) & 1.2 \\ \hline \end{tabular}
\end{table} Table 1: The most important parameters of the simulation suites used in this paper. The first column gives the name of the simulation, the second gives the number of unique hosts in the suite, the third gives the median host mass, and the final two columns give the particle mass and comoving, Plummer-equivalent force softening scale, respectively.

The distinction between these different definitions is at the few-to-ten percent level and matters for certain classes of empirical models (e.g., Reddick et al., 2013), so we switch to the appropriate definition when necessary.

### Merger Tree Terminology

The evolution of halos over time is represented by a structure called a _merger tree_. The structure is tree-shaped with respect to time because halos can merge together over time but generally do not split apart unless a serious error has occurred in the halo finder. We briefly define the most important terminology here.

A _halo_ is a structure that is found within a single snapshot. Every halo is matched with at most one halo in a subsequent snapshot: the former halo is called a _progenitor_, and the latter is called a _descendant_. A halo with no progenitors is called a _leaf_ halo, and one with no descendants is called a _root_ halo. Some unbroken paths which start at leaves and progressively pass from descendant to descendant are called _branches_ (see below).
A halo can have multiple progenitors, an event called a _tree-merger_. A common source of confusion is that there are three similar but distinct events commonly referred to as "mergers" in the literature. The first is when a subhalo first falls into a host halo. The second is after this subhalo has lost so much mass that the halo finder cannot track it anymore. This second event can occur many Gyr later and is highly dependent on the halo finder and merger tree code used. The third is when the galaxy hosted by a subhalo merges with its host galaxy's stellar halo. We refer to the first type of event as "mergers" and the second type as "tree-mergers." We do not analyze galaxy mergers directly in this paper. Still, their existence is quite important to evaluating the quality of merger tree codes, as we discuss in Section 6.

All merger trees have a method for choosing which of a halo's progenitors are disrupting subhalos and which progenitor is the same halo at a previous time. This latter progenitor is called its _main progenitor_ and is usually the more massive of the two halos. A branch consisting of only main progenitors is called a _main branch_. Qualitatively, a main branch represents the evolution of a single halo over time. Different authors use different terminology when defining which linkages can be considered part of the same branch and how large branches are (for example, if A is the main progenitor of B, but a separate halo, C, merges with A to form B, is C part of the same branch as B? What about B's descendants?). In this paper, we take the convention that a halo can only be a member of one branch and that linkages coming from tree mergers are not part of any branch, even though they are part of the connectivity of the tree. A consequence of this definition is that the term branch can _only_ refer to paths along main branches, allowing us to use the terms "branch" and "main branch" interchangeably.

## 3 Methods

### Subhalo Finding Overview

At a high level of abstraction, our subhalo-finding method, Symfind, has three steps. In the first step, we use an existing halo catalog to identify and track all the particles associated with a subhalo prior to infall. In the second step, we use an existing subhalo finder to re-identify the subhalo _using only its tracked particles_ after infall, rather than trying to find it within the background of host particles. Finally, the position and velocity identified by that halo finder are used to calculate subhalo properties with our own methods using only the tracked particles.

This approach falls within a larger family of similar techniques which are generally called "particle-tracking" subhalo finders. These use a subhalo's pre-infall particles to find that subhalo after infall (e.g., Tormen et al., 1998; Kravtsov et al., 2004; Han et al., 2012, 2018; Springel et al., 2021; Diemer et al., 2023, see also Section 7.2). They stand in contrast to "single-epoch" subhalo finders, which identify subhalos in a single snapshot and then attempt to connect objects over time afterward (see Section 7.1). Particle-tracking is generally expected to be an effective subhalo-finding method because focusing only on previously accreted particles removes all host particles from consideration, vastly simplifying subhalo finding and reducing the chance of errors.

Although our general framework is simple, there are many questions that need to be addressed. Could numerical artifacts in the input halo catalogs hamper our ability to associate particles with halos?
What exactly does it mean for a particle to be "associated" with a halo? If there are multiple density peaks in the tracked particles, how do we decide which one belongs to the subhalo? Is it possible for us to find a density peak that isn't actually a subhalo? There are a large number of design decisions in Symfind which are devoted to addressing these concerns, and many of these decisions are quite different from existing particle-tracking methods. In the list below, we outline the general structure of these decisions and point the reader to the relevant portions of Appendix A, where each point is discussed in detail. Fig. 1 illustrates the major steps of our algorithm.

1. (_Appendix A.1; Fig. 1, Panel I_) We re-analyze input RCT merger trees to identify and correct for various errors: spurious phase space overdensities that are misidentified as subhalos (see Appendix B), errors during subhalo disruption that can cause subhalos to spike in mass, and errors during major mergers that can cause central halos to appear to become subhalos too quickly due to physical switching of mass between the primary and secondary halo during the merger.
2. (_Appendix A.2; Fig. 1, Panel II_) We track all the particles that were ever accreted by each subhalo, according to \(R_{\rm vir}\). Particles that were previously accreted by larger halos are not tracked for subsequent smaller halos that they are accreted by. We break particles into "_smoothly_" and "_non-smoothly_" accreted sets. Smoothly accreted particles are ones that have never been accreted by another halo and non-smoothly accreted particles are ones that have.
3. (_Appendix A.3; Fig. 1, Panel III_) Once a central halo becomes a subhalo, we stop using RCT catalogs and calculate subhalo properties from the particles associated with that subhalo branch. Non-smoothly accreted particles can be associated with multiple branches, meaning our method can analyze nested substructures. At the snapshot of infall, we identify the \(N_{\rm core}\) most gravitationally bound smoothly accreted particles ("_core_" particles). These particles will be used in later snapshots to confirm the location of the subhalo.
4. (_Appendix A.3; Fig. 1, Panel IV_) We use an existing subhalo finder to identify density peaks within the tracked particles for each subhalo. Currently, we use Subfind (Springel et al., 2001a), with a density kernel over the \(k\) nearest neighbors to estimate densities. Subfind is used chiefly due to implementation simplicity, and we expect to replace this with Rockstar in the future.
5. (_Appendix A.4; Fig. 1, Panel V_) We find which density peak each of the original \(N_{\rm core}\) core particles is contained within. We take the subhalo's true position and velocity as the position and velocity of the peak containing the most core particles. The core particles are only used to select the peak and could, hypothetically, be located in the peak's outskirts.
6. (_Appendix A.5_) Using this peak's position and velocity, we calculate subhalo properties for all tracked particles that remain gravitationally bound after iterative unbinding. Both smoothly and non-smoothly accreted particles are used to calculate subhalo properties and binding energies.
7. (_Appendix A.6_) We count a subhalo as disrupted/merged if it contains no bound core particles within its half-mass radius or if its half-mass radius intersects with its host's center.
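To make the two disruption tests in step 7 concrete, the following minimal sketch implements both checks (the variable names are hypothetical and the released pipeline differs in detail; this is an illustration, not the Symfind source):

```python
import numpy as np

def is_disrupted(r_core, core_bound, r_half, d_host):
    """Sketch of the step-7 disruption tests.

    r_core:     radii of the core particles from the subhalo center.
    core_bound: boolean mask of which core particles are still bound.
    r_half:     half-mass radius of the subhalo's bound particles.
    d_host:     distance from the subhalo center to the host's center."""
    # Test 1: no bound core particles remain within the half-mass radius.
    no_bound_core = not np.any(core_bound & (r_core < r_half))
    # Test 2: the half-mass radius intersects the host's center,
    # i.e. the subhalo has sunk to the center of its host.
    sank_to_center = d_host < r_half
    return no_bound_core or sank_to_center
```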
The first of these disruption conditions generally means either (i) that the core particles have completely dispersed and the "peak" is some random fluctuation in the extended tidal tail, (ii) that the subhalo has lost so much mass that its core particles have become unbound, or (iii) that the peak's velocity is poorly determined. The second disruption condition generally means that the subhalo has sunk to its host's center through dynamical friction. There are some other very rare conditions that can lead to subhalo disruption, as well. After disruption, we continue trying to re-find the subhalo for the rest of the simulation and interpolate its properties during snapshots when it was erroneously marked as disrupted.

Figure 1: Cartoon illustrating the major steps in our subhalo-finding method, Symfind. _Panel I:_ First, we annotate an input merger tree (for this paper, input catalogs are generated by Rockstar), identifying and correcting various errors (Appendix A.1). Here, the red X's indicate portions of the input merger tree that would be removed or corrected. _Panel II:_ Next, using this annotated tree, we find the first halo branch that every particle ever "smoothly" accreted onto. We also track the "non-smooth" accretion of particles from lower mass halos to higher mass halos, but not the inverse (Appendix A.2). Here, solid lines show particles that are tracked for this subhalo and dashed show particles that are untracked. _Panel III:_ Once a halo becomes a subhalo, we stop using the input merger tree and compute properties from the tracked particles. At the snapshot of the first infall, we flag a subset of highly bound particles as "core" particles (Appendix A.3). Here, core particles are in red. _Panel IV:_ Using an existing halo finder (currently Subfind), we identify density peaks within the tracked, smoothly accreted particles (Appendix A.3). _Panel V:_ The density peak with the largest number of core particles is taken to be the true density peak (Appendix A.4). Properties of the subhalo are then calculated relative to this true density peak using all gravitationally bound, tracked particles (Appendix A.5), and several tests are run to assess whether the subhalo has disrupted/merged with its host (Appendix A.6).

### Fiducial Values, Application to Symphony, and Data Release

We have applied this subhalo-finding method to the SymphonyLMC, SymphonyMilkyWay, SymphonyGroup, and SymphonyL-Cluster zoom-in suites (Nadler et al., 2023) with \(k=16\) and \(N_{\rm core}=32\). These values were chosen by searching a wide range of parameter values, as described in Appendix C. Only subhalos with \(n_{\rm peak}>300\) are included. We have made the resulting halo catalogs publicly available at [http://web.stanford.edu/group/gfc/symphony/](http://web.stanford.edu/group/gfc/symphony/). This website also contains Rockstar+consistent-trees catalogs processed with the steps described in Appendix A.1, partial particle snapshots containing the tracking and halo association information described in Appendix A.2, and extensive documentation and tutorials on using these data. The pipeline for generating these catalogs will be made available upon request. We are not making a general public code release at this time, but researchers interested in assigning a name to this specific subhalo-finding method can refer to it as Symfind, the Symphony halo finder.
We are not making a public code release because Symfind currently does not have runtime performance that would allow it to be run on moderate-size cosmological simulations. This is not a fundamental limitation in the algorithm and will be addressed in future work. Furthermore, there are some algorithmic changes to Symfind that may make it well-suited to being run efficiently on very large cosmological simulations, as we discuss briefly in Appendix D. Finally, we remind readers that these catalogs make heavy use of output from Rockstar (Behroozi et al., 2013), consistent-trees (Behroozi et al., 2013), and some algorithms from Subfind (Springel et al., 2001).

## 4 The Reliability of Tracked Subhalos

As we discussed in Section 1 and as we will further discuss in Section 7.1, there are substantial holes in our current ability to quantify the reliability of subhalo finders. Existing tests typically look at the performance of subhalo finders on statistics where one does not already know the expected result, such as a subhalo mass function. Because the values of these statistics are unknown _a priori_, these tests generally take the form of either checking for internal consistency/convergence in subhalo finder results as resolution is increased (e.g., Nadler et al., 2023) or comparing the results of different subhalo finders (e.g., Onions et al., 2012). But convergence is not the same as correctness, and noting that two subhalo finders arrive at the same (or different) results cannot prove that either is correct.

In this Section, we lay out a series of systematic tests which do not fall victim to these issues and apply those tests to Symfind and RCT. These tests are a combination of qualitative inspection (Section 4.1 and Appendix E), characterization of the conditions that cause a subhalo finder to lose track of subhalos (Section 4.2), and quantification of when subhalos deviate from the predictions of high-resolution idealized simulations (Sections 4.3, 4.4, and 4.5). By combining these tests, we are able to identify subhalo populations that we can guarantee a subhalo finder will be able to locate, and can also guarantee that certain properties of these subhalos will be correctly recovered. We argue that the same level of guarantees cannot be made with traditional convergence testing. With these tests, we demonstrate that Symfind does not falsely converge and is capable of following subhalos to orders-of-magnitude smaller masses than RCT can. RCT, regrettably, does falsely converge.

### Evolution of subhalo properties: a case study

Before performing statistically rigorous tests on large populations of subhalos, we first consider the qualitative behavior of a representative subhalo drawn from a pool of thousands of subhalos that we have visually inspected. Visual inspection is not a novel test, but it is an important one because it can show that a given subhalo finder is not misidentifying random phase-space detritus as true subhalos. It also qualitatively demonstrates many of the issues that we will quantify in later testing.

Fig. 2 shows the evolution of a typical subhalo resolved with \(\approx 2\times 10^{4}\) particles at its peak mass. The top panel shows the distance between the subhalo and its host over time. The dashed black line shows the virial radius of the host, and the colored lines show the positions of the subhalo tracked by RCT and Symfind. There are several snapshots where the RCT catalog continues to have entries for this halo.
Still, manual inspection and core-particle-based tests described in Appendix C show that these "halos" are unassociated with the subhalo's particles. During the period where RCT has reliable estimates of the subhalo's position, it agrees exactly with the position of the Symfind subhalo. As the subhalo approaches its third pericenter, RCT experiences an error that causes a sudden jump in the subhalo's apparent position. This is soon followed by the apparent disruption of the subhalo. However, Symfind continues to follow the halo for many more orbits.

The bottom panel of Fig. 2 shows the same subhalo with the same color scheme, except that it shows the subhalo's mass. During the period where RCT reliably tracks the subhalo, both it and Symfind find that the subhalo mass is decreasing approximately exponentially. Symfind masses are slightly noisier and lower than RCT masses (see Appendix D). RCT's third-pericenter error corresponds to a sharp increase in mass. Symfind continues to follow the mass loss unabated past the point where this error occurs and the subhalo continues to lose mass at the same exponential rate through the following orbits.

Fig. 3 shows an image of this subhalo several snapshots after the RCT branch disrupts. The projected density field generated by all particles that fell into the host halo with this subhalo is shown in pink. This density field is estimated through a standard 2D SPH density kernel applied to the 128 nearest particles (e.g., Springel, 2010). The radius of the host halo is shown in white, and the half-mass radius of the subhalo's bound particles is shown in black. The inset shows a zoomed-in view of the subhalo, except that only bound particles are shown, and the SPH kernel is now applied to the 32 nearest particles. The 32 most-bound particles during the infall snapshot are shown as black dots. The inset shows a self-bound, roughly spherical structure that contains the same particles that have been at the center of the subhalo since infall. In short, particle-tracking is following a real subhalo, and it is the target subhalo.

Figure 3: Projected dark matter density of the subhalo shown in Fig. 2 immediately after it disrupts within the Rockstar catalog. In the main image, the white circle is the virial radius of the host, the black circle is the half-mass radius of the subhalo according to Symfind, and the color map shows the logarithm of the projected density of all particles that fell in with the subhalo, estimated by an SPH kernel. The majority of this subhalo's particles have been lost into tidal tails. The inset panel shows the region around the subhalo in more detail. The black dots show the 32 most-bound particles identified during the snapshot when the subhalo was first accreted. These "core" particles are still well-clustered and sit in the center of a fully bound, five-thousand-particle structure; thus the object identified by Symfind is a real subhalo.

Figure 2: The evolution of a representative subhalo over time, as measured by both Rockstar (red) and Symfind (blue). In the top panel, the Rockstar curve is dashed during the period when it overlaps with the Symfind curve. Snapshots during which Rockstar has identified an incorrect subhalo center are shown in orange (see Appendix C), and the virial mass/virial radius of the host halo is shown in black. When followed with Rockstar, the halo survives two orbits before disrupting.
Rockstar incorrectly associates this subhalo's branch with an unrelated density peak for its last few snapshots, leading to an unphysical change in mass and position. Symfind agrees with Rockstar while Rockstar reliably tracks the subhalo and continues to follow the halo for many more orbits and a further factor of ten in mass loss. Note that without post-processing checks, the Rockstar branch reaches \(m_{\text{peak}}\) during its final, erroneous snapshot. More example halo trajectories can be found in Appendix E.

To summarize, Symfind is capable of following this subhalo far longer than RCT, and the long-term evolution of its inferred properties is reasonable. Manual inspection of the particle distribution shows that the object being followed really is the original subhalo. Taken together, this means that the difference between RCT and Symfind is not simply an unresolvable difference in definitions: this subhalo _does_ outsurvive its RCT branch, and particle-tracking correctly follows this subhalo. An extensive manual review of individual subhalos shows that the qualitative behavior of this subhalo is typical. In Appendix E, we show eight randomly selected subhalo trajectories that are all qualitatively similar to Fig. 2. Beyond this, we have manually inspected several thousand subhalo trajectories, two hundred images of heavily disrupted subhalos, and several dozen movies. The longer survival time of this subhalo and the fact that it tracks a well-defined, bound remnant containing the most-bound particles identified at infall are representative of other subhalos with similar peak resolution levels. The same is generally true at other resolution levels, although the relative advantage of Symfind decreases as particle counts decrease. Qualitative assessment has limitations, so we also perform quantitative analysis on the minimum masses reached prior to disruption by subhalos followed by RCT and Symfind in Section 4.2. A typical subhalo at this resolution level survives to masses \(\approx 30\) to 100 times smaller than subhalos tracked by RCT, meaning that the difference in final masses seen in Fig. 2 is close to the typical difference in final masses that one would see in a subhalo dataset that was not right-censored by the end of the simulation. As we discuss in Section 6.2, at the resolution level of the subhalo shown in Fig. 2, roughly a third of all RCT subhalos experience a similar error during their final snapshot. So while such an error is common, it does not always occur during RCT disruption.

### Subhalo survival thresholds

Idealized, high-resolution simulations show that, in the absence of strong dynamical friction, some portion of low-mass subhalos should survive as bound remnants for essentially arbitrarily long periods of time (Errani et al., 2023, and references therein). Dynamical friction is quite weak for low-mass subhalos (e.g., van den Bosch et al., 2016), giving us our first opportunity to perform a quantitative correctness test where we already know what the correct behavior of the finder should be. In this Section, we determine how long RCT and Symfind can follow subhalos before losing track of them. When this analysis is restricted to a low-mass subhalo population, all such losses will be artificial in origin, caused by some combination of the subhalo finder algorithm and simulation numerics. This means that this analysis can put limits on which subhalo populations are safe to treat as complete.
We estimate "survival curves" for subhalos followed by both RCT and particle-tracking. The survival curve is the probability that a subhalo will disrupt below some mass ratio
\[\mu\equiv m/m_{\rm peak} \tag{2}\]
and can be computed via the Kaplan-Meier estimator (Kaplan and Meier, 1958). As discussed in Section 2.1, \(m_{\rm peak}\) is defined using only masses prior to first infall, meaning that it does not suffer from the sort of RCT mass fluctuation error shown in Fig. 2. This gives the expected distribution of _disruption mass ratios_, \(\mu_{\rm disrupt}\), the smallest mass fractions subhalos achieve before dropping out of the catalog. Survival curves are a standard analysis tool in the medical sciences and in engineering analysis, where they are used, for example, to estimate the distribution of lifespans of a set of patients or the distribution of times-until-failure for a set of machines.

The biggest problem that one encounters when constructing a survival curve is statistical censoring. In \(\Lambda\)CDM simulations, many subhalos survive past the last snapshot, meaning that simply building a histogram of the minimum mass reached by every subhalo will overestimate the typical disruption mass ratio, because it mixes true disruption masses with the final-snapshot \(\mu\) distribution of survivors. Restricting the analysis to subhalos that disrupt instead selects for a sample with shorter survival times than average. To address this problem, the Kaplan-Meier estimator breaks the range of minimum masses that subhalos were observed to achieve into intervals, estimates the instantaneous probability of failure within each interval, and multiplies those instantaneous probabilities together to get a cumulative probability. The estimator is more accurate with smaller intervals, so one typically sorts the data and inserts one interval between each consecutive pair of measurements. This estimator can be written as
\[\widehat{\rm Pr}(\mu_{\rm disrupt}<\mu_{f,i})=\prod_{0\leq j\leq i}\left(1-\frac{d_{j}}{N(\leq\mu_{f,j})}\right). \tag{3}\]
Here, \(\widehat{\rm Pr}(\mu_{\rm disrupt}<\mu_{f,i})\) is the probability that a subhalo will have a disruption mass ratio, \(\mu_{\rm disrupt}\), less than some value \(\mu_{f,i}\), the final mass ratio of one of the subhalos in the dataset. To estimate this, one iterates over the final mass ratios of all subhalos with \(\mu_{f,j}\geq\mu_{f,i}\), indexed by \(j\) in order of decreasing \(\mu_{f,j}\). \(d_{j}\) is an indicator variable that is 1 if the final mass of subhalo \(j\) is set by disruption and 0 if it is set by the end of the simulation, and \(N(\leq\mu_{f,j})\) is the number of subhalos where \(\mu_{f}\leq\mu_{f,j}\). We then interpolate \(\widehat{\rm Pr}(<\mu_{f,i})\) to get a function that is continuous in \(\mu\), giving us \({\rm Pr}(\mu_{\rm disrupt}<\mu)\). We estimate the standard error on survival probabilities with Greenwood's formula (Greenwood, 1928)
\[\widehat{\mathrm{Var}}[\widehat{\mathrm{Pr}}(<\mu_{f,i})]=\widehat{\mathrm{Pr}}(<\mu_{f,i})^{2}\times\sum_{0\leq j\leq i}\frac{d_{j}}{N(\leq\mu_{f,j})\,(N(\leq\mu_{f,j})-d_{j})}. \tag{4}\]
\(\widehat{\mathrm{Var}}[\widehat{\mathrm{Pr}}(<\mu_{f,i})]\) is similarly interpolated to get a function that is continuous in \(\mu\).
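For concreteness, a minimal Python sketch of this estimator is given below. It is our own illustration rather than part of the Symfind pipeline; the function and variable names are hypothetical, and ties between final mass ratios are handled only loosely.

```python
import numpy as np

def survival_curve(mu_f, disrupted):
    """Kaplan-Meier estimate of Pr(mu_disrupt < mu_f_i) (Eq. 3) with
    Greenwood variances (Eq. 4).

    mu_f      -- final mass ratio mu = m/m_peak of each subhalo branch
    disrupted -- True if the branch ended by disruption, False if it was
                 censored (the subhalo survived to the last snapshot)
    """
    order = np.argsort(mu_f)[::-1]               # decreasing mu_f
    mu_f = np.asarray(mu_f, dtype=float)[order]
    d = np.asarray(disrupted)[order].astype(float)
    # N(<= mu_f_j): number of subhalos whose final mass ratio is <= mu_f_j
    n_at_risk = np.arange(len(mu_f), 0, -1, dtype=float)
    prob = np.cumprod(1.0 - d / n_at_risk)       # Eq. 3
    # Eq. 4; the maximum() guard avoids division by zero at the last event
    var = prob**2 * np.cumsum(d / (n_at_risk * np.maximum(n_at_risk - d, 1.0)))
    return mu_f, prob, var
```

The returned arrays would then be interpolated onto a continuous grid in \(\mu\), as described above.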
The left panel of Fig. 4 shows survival curves for RCT subhalos and Symfind subhalos at high resolutions (\(10^{4.5}<n_{\mathrm{peak}}<10^{5}\)). Here, we have combined all the Symphony simulation suites for improved number statistics because testing shows that the shape of survival curves depends only on \(n_{\mathrm{peak}}\) and not on \(m_{\mathrm{peak}}\) (see Appendix F). Both methods have wide distributions of \(\mu_{\mathrm{disrupt}}\), spanning about two decades in \(\mu\). However, RCT subhalos disrupt at much higher masses than Symfind subhalos, and even with this width, almost all Symfind subhalos outsurvive even the longest-lasting RCT subhalos. The distribution of disruption masses is about 30 to 100 times lower when using Symfind than when using RCT.

To characterize these survival curves compactly, we compute the 10%, 50%, and 90% quantiles of the \(\mu_{\mathrm{disrupt}}\) distribution as a function of \(n_{\mathrm{peak}}\) for both RCT and Symfind. We once again combine the four Symphony simulation suites. For each value of \(n_{\mathrm{peak}}\), we select the 2,000 subhalos with \(n_{\mathrm{peak}}\) closest to each target value. For each method and quantile, we fit a low-order log-space polynomial to \(\mu_{\mathrm{disrupt}}\) as a function of \(n_{\mathrm{peak}}\):
\[\mu_{\mathrm{disrupt}}(n_{\mathrm{peak}};\,q)=10^{a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}}. \tag{5}\]
Here, \(q\) is the quantile in the distribution, \(x\equiv\log_{10}(n_{\mathrm{peak}})\), and \(a_{0}\) through \(a_{3}\) are fit parameters. The fits were performed through least squares minimization in \(\log_{10}(\mu)\)-space via the Levenberg-Marquardt algorithm. Higher-order terms in the fit were manually set to zero in cases where reasonable qualitative agreement could be achieved with lower-order fits. We show the best-fitting values for each method and target quantile in Table 2. These \(n_{\mathrm{peak}}\)-dependent quantiles of the \(\mu_{\mathrm{disrupt}}\) distribution and their fits are shown in the right panel of Fig. 4.

RCT disruption thresholds are essentially independent of \(n_{\mathrm{peak}}\), with the median subhalo being lost from the catalog at roughly a tenth of its peak mass and a sizable sub-population of subhalos being lost after losing only a third of its peak mass. This independence from \(n_{\mathrm{peak}}\) will result in false convergence: resimulating a subhalo with more particles will, on average, not change the mass at which it drops out of the RCT catalog.

Figure 4: The amount of mass subhalos lose before subhalo finders lose track of them, measured across the entire Symphony suite. _Left_: The probability that subhalos with \(10^{4.5}<n_{\mathrm{peak}}<10^{5}\) disrupt before reaching a given mass loss ratio, \(\mu\equiv m/m_{\mathrm{peak}}\), when subhalos are followed by Rockstar (red) and by Symfind (blue). Survivor bias is accounted for using the Kaplan–Meier estimator, and 1-\(\sigma\) confidence intervals (shaded bands) are calculated through Greenwood's formula. At this resolution level, Symfind can follow subhalos to masses about 30-100 times smaller than Rockstar, depending on the quantile of the distribution. _Right_: The distribution of \(\mu_{\mathrm{disrupt}}\) as a function of \(n_{\mathrm{peak}}\) for Rockstar and Symfind. Different quantiles in the \(\mu_{\mathrm{disrupt}}\) distribution are shown as shades of red and blue. Fits according to Eq. 5 and Table 2 are shown as dashed lines. At a fixed \(n_{\mathrm{peak}}\) and quantile in the \(\mu_{\mathrm{disrupt}}\) distribution, Symfind can follow subhalos to factors of three-to-one-hundred times lower in mass than Rockstar.
Rockstar disruption masses do not decrease as resolution increases, which can lead to the false impression of convergence in many types of numerical tests. Only minor mergers with \(m_{\mathrm{peak}}/M_{\mathrm{vir}}<0.1\) are shown; major mergers experience substantial dynamical friction and physical disruption on short time scales and require separate, dedicated analysis.

Because of this, some subhalo statistics (e.g., Section 5.2, Appendix G) may appear not to change with increasing resolution, giving the impression of numerical reliability when one is actually only seeing resolution-independent limitations in the subhalo finder. Symfind can follow subhalos substantially longer, reaching masses that are \(\approx 3\)-to-6 times smaller than RCT for \(n_{\rm peak}=300\) subhalos and roughly a hundred times smaller for \(n_{\rm peak}=10^{5}\) subhalos. Symfind tracks subhalos to smaller masses as resolution increases and thus does not suffer from this form of false convergence.

In Section 6, we discuss the mass scales to which one would want a subhalo finder to follow subhalos if one wishes to study the population statistics of satellite galaxies and avoid the use of "orphan" modeling. We note that in this case, one is not interested in the median \(\mu_{\rm disrupt}\), but in a higher quantile of the \(\mu_{\rm disrupt}\) distribution, such as 90%. If, for example, the median subhalo is being lost from the catalog at the same time one would expect its satellite galaxy to disrupt, that still means that half of all subhalos are being lost too quickly and would require orphan modeling to account for them. This does not mean that the lower quantiles in the \(\mu_{\rm disrupt}\) distribution are useless: for studies that do not need to track individual subhalos, one could weight the contribution of a given subhalo to the statistic of choice by the inverse of \(\Pr(\mu_{\rm disrupt}<\mu)\). But doing so would require confirming that the \(\mu_{\rm disrupt}\) distribution does not depend on any quantities of interest for this statistic (e.g., Appendix F). One would also need to be careful of numerical (as opposed to purely halo-finding-based) effects in the deep mass-loss regime.

Although the analysis in this Section has shown the minimum masses that can be resolved with our method and with RCT, merely being able to resolve a subhalo with a subhalo finder does not mean that the subhalo is a numerically reliable analysis target. In Sections 4.3, 4.4, and 4.5, we establish the regimes where subhalo masses, abundances, and \(v_{\rm max}\) values are well-resolved.

### Idealized Numerical Reliability Limits

Having established how long Symfind and RCT can follow a subhalo, we move on to testing when the properties of these subhalos can no longer be properly measured, either due to failures in the simulation or failures in the subhalo finder. Before performing any empirical testing, we first review the main causes of subhalo non-convergence, combine several estimators of these effects based on idealized simulations and first-principles arguments, and summarize the resulting limits with Eq. 10. These combined limits will be used as a component of the empirical testing in Sections 4.4 and 4.5, although we establish that these limits are too conservative for some subhalo properties.
For a simulation with well-calibrated time stepping (see Section 6.1 in Mansfield & Avestruz, 2021, for discussion), three major issues impact the disruption of subhalos. The first is excessive force softening. Force softening suppresses rotation curves at scales many times larger than \(\epsilon\) (e.g., Appendix B in Mansfield & Avestruz, 2021), and this suppression means that subhalos have smaller enclosed densities at fixed radii, leading to smaller tidal radii and more rapid mass loss. Using idealized simulations, van den Bosch et al. (2018) find that subhalos above the limit
\[\frac{m}{m_{\rm infall}}>\frac{1.79}{1.284}\left(\frac{\epsilon\,r_{1/2}}{f(c_{\rm infall})\,r_{s,\rm infall}^{2}}\right) \tag{6}\]
are largely unaffected by this process. Here, \(\epsilon\) and \(r_{1/2}\) are the instantaneous Plummer-equivalent force-softening scale and the half-mass radius of the subhalo, respectively, while \(c_{\rm infall}\) and \(r_{s,\rm infall}\) are the subhalo's NFW concentration and NFW scale radius at infall, respectively, and \(f(x)=\ln{(1+x)}-x/(1+x)\). A corrective factor of 1.284 has been applied to account for the conversion between the Plummer force kernels used in van den Bosch et al. (2018) and the Gadget spline-based force kernels used by Symphony. Gadget force kernels are already expressed in "Plummer-equivalent" units, but the traditional conversion factor is based on matching the depth of particles' potentials at small radii (Springel et al., 2001) and does not do a good job of describing the large-radius impact of force softening (Mansfield & Avestruz, 2021).

The second source of numerical biases is discreteness noise. Once a subhalo has sufficiently few particles, Poisson fluctuations cause the mass loss rate to experience excessive noise. Fluctuations that temporarily increase the mass loss rate cause the subhalo to expand in response to the excess mass loss, which leads to smaller tidal radii and larger future Poisson fluctuations. Meanwhile, fluctuations that decrease the mass loss rate leave the subhalo relatively unchanged and thus have little impact on its future evolution. The asymmetric impact of fluctuations on the future evolution of the subhalo leads to an instability that can cause runaway mass loss at low resolutions. Using idealized simulations, van den Bosch et al. (2018) find that subhalos above the limit
\[\frac{m}{m_{\rm infall}}>0.32\left(\frac{n_{\rm infall}}{10^{3}}\right)^{-0.8} \tag{7}\]
are largely unaffected by this process. Here, \(n_{\rm infall}\) is the number of particles the subhalo had at infall.

\begin{table}
\begin{tabular}{c|c|c|c|c|c}
\hline Method & \(q\) & \(a_{3}\) & \(a_{2}\) & \(a_{1}\) & \(a_{0}\) \\
\hline \hline Rockstar & 0.9 & — & 0.0532 & -0.3415 & 0.0301 \\
 & 0.5 & — & — & 0.0860 & -1.4505 \\
 & 0.1 & -0.1969 & 2.4327 & -9.7238 & 10.7231 \\
\hline Symfind & 0.9 & — & — & -0.3756 & -0.3473 \\
 & 0.5 & — & — & -0.5034 & -0.5054 \\
 & 0.1 & — & — & -0.8121 & 0.0526 \\
\hline
\end{tabular}
\end{table}
Table 2: Best-fitting values for different quantiles, \(q\), of the \(n_{\rm peak}\)-dependent \(\mu_{\rm disrupt}\) distribution for both Rockstar and Symfind according to Eq. 5. Fits are only calibrated to the range \(300\leq n_{\rm peak}\leq 2\times 10^{5}\) and it is unlikely that the fits extrapolate straightforwardly outside this range.
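The two limits above are straightforward to evaluate directly. The sketch below is our own illustration (hypothetical function names, with \(f(x)\) written out explicitly); it returns the minimum converged \(m/m_{\rm infall}\) for each effect:

```python
import numpy as np

def f_nfw(x):
    """NFW mass-profile factor, f(x) = ln(1 + x) - x/(1 + x)."""
    return np.log(1.0 + x) - x / (1.0 + x)

def softening_limit(eps, r_half, c_infall, rs_infall):
    """Eq. 6: smallest converged m/m_infall given force softening
    (van den Bosch et al. 2018, with the 1.284 spline-kernel correction)."""
    return (1.79 / 1.284) * eps * r_half / (f_nfw(c_infall) * rs_infall**2)

def discreteness_limit(n_infall):
    """Eq. 7: smallest converged m/m_infall given discreteness noise."""
    return 0.32 * (n_infall / 1e3)**-0.8
```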
The third source of numerical biases is numerical relaxation. Dark-matter-only simulations are designed to approximate a perfectly collisionless fluid, but their discretization into particles means that simulated particles can scatter off one another, allowing flows of energy and mass across a halo over the relaxation timescale \(t_{\rm relax}(r)\). Generally, regions of the halo where \(t_{\rm relax}(r)\) is less than the age of the system are considered unresolved (e.g., Power et al., 2003). Excessively small force-softening scales can lead to catastrophic "large-angle" scattering (e.g., Fig. 6 in Knebe et al., 2000) and can cause simulations to fail to conserve energy. However, for simulations like the Symphony suite, where the force softening scale has been set large enough to suppress this effect, \(t_{\rm relax}\) is set by the superposition of many large-distance "small-angle" scatterings (e.g., Ludlow et al., 2019), and the primary effect of force softening becomes a weak, logarithmic suppression of the Coulomb logarithm as \(\epsilon\) becomes larger. Following Ludlow et al. (2019), this suppression leads to relaxation times that scale with the local orbital time as
\[\frac{t_{\rm relax}(r)}{t_{\rm orbit}(r)}=\frac{N(<r)}{4}\left(\ln\left(\frac{r^{2}}{\epsilon^{2}}+1\right)+\frac{\epsilon^{2}-2r^{2}}{3(\epsilon^{2}+r^{2})}-\ln\left(\frac{3}{2}\right)\right)^{-1}. \tag{8}\]
Here, \(t_{\rm orbit}\) is the circular orbit time, \(t_{\rm orbit}\equiv 2\pi r^{3/2}/\sqrt{GM(<r)}\), and \(N(<r)\) is the number of particles with radii smaller than \(r\). For each particle in the halo, we calculate \(t_{\rm relax}(r)\) using that particle's position at the snapshot of the subhalo's infall. From this, we can calculate a relaxation limit
\[m(t_{0})>\sum_{i\in{\rm bound}}m_{p}\cdot H(t_{0}-t_{\rm infall,i}-t_{\rm relax,i}). \tag{9}\]
Here, the sum goes over all particles that were bound to the subhalo at the subhalo's snapshot of first infall, \(H(x)\) is a Heaviside step function that is \(0\) for \(x\leq 0\) and \(1\) otherwise, \(t_{0}\) is the current age of the universe, and \(t_{\rm infall,i}\) is the time at which particle \(i\) was accreted onto the subhalo (not the snapshot at which the subhalo was accreted onto the host). In other words, when the total relaxed mass within the subhalo equals or exceeds its current mass, we consider its mass loss to be unconverged. The use of particle-tracking allows for more accurate estimates of how long a particle has been orbiting a halo, but a similar methodology that approximates the orbiting time of particles has been widely used to model the unconverged inner regions of high-resolution, isolated halos (e.g., Power et al., 2003; Springel et al., 2008; Ludlow et al., 2019).

Figure 5: The convergence limits of subhalos in Symfind. _Left:_ Comparison between several idealized convergence limits and the mass-loss history of a fairly well-resolved \(n_{\rm peak}\approx 10^{4}\) subhalo. Lines compare the force-softening limit (Eq. 6; orange), the discreteness limit (Eq. 7; blue), and the two-body relaxation limit (Eq. 9; red). Each limit predicts that the subhalo is only converged when the black curve is above the respective colored curve. _Right:_ The median limiting particle count, \(n_{\rm lim,ideal}\), for subhalos as a function of \(n_{\rm peak}\). All Symphony suites are combined in this Figure. The colored curves show the limiting particle counts for each method shown in the left panel and the black curve shows the maximum limit across the three on a halo-by-halo basis. The 68% spread around the black curve is shown as a gray-shaded region. The purple curves show fits of Eq. 10 to the median (dashed purple line) and 68% spread (dotted purple lines).
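As an illustration of Eqs. 8 and 9, the sketch below (our own, with hypothetical names) flags whether a subhalo's current mass still exceeds its summed relaxed mass; it assumes per-particle infall times, radii, and orbit times are already available:

```python
import numpy as np

def t_relax(r, eps, n_enclosed, t_orbit):
    """Eq. 8: local two-body relaxation time, given the circular orbit time
    t_orbit = 2*pi*r**1.5 / sqrt(G*M(<r)) evaluated at the same radius."""
    coulomb = (np.log(r**2 / eps**2 + 1.0)
               + (eps**2 - 2.0*r**2) / (3.0*(eps**2 + r**2))
               - np.log(1.5))
    return t_orbit * n_enclosed / (4.0 * coulomb)

def mass_is_converged(m_now, m_p, t0, t_infall_i, t_relax_i):
    """Eq. 9: True while the current subhalo mass exceeds the summed mass
    of particles whose relaxation times have already elapsed."""
    relaxed_mass = m_p * np.sum((t0 - t_infall_i) > t_relax_i)
    return m_now > relaxed_mass
```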
Because these studies (Power et al., 2003; Springel et al., 2008; Ludlow et al., 2019) do not track individual particles, they generally assume that particles have been orbiting their host halo for a timescale comparable to the age of the universe and correct for this assumption with an empirical multiplicative factor. (This empirical factor also effectively corrects for any order-unity inaccuracies in the Coulomb logarithm, so in principle, one would want to re-calibrate this corrective factor using the true orbital times of particles, although we do not attempt to do so here.) It is particularly important to account for the individual orbiting times of particles for subhalos because an old subhalo with a small \(m/m_{\rm peak}\) will have more relaxed mass than a recently accreted subhalo with large \(m/m_{\rm peak}\), even if the two have identical instantaneous properties. Therefore, failing to account for individual orbiting times would lead to incorrect convergence limit trends with \(n_{\rm peak}\).

Each of these limits is different on a halo-by-halo basis. To account for noise in \(m(t)\), we count a subhalo as having passed a particular convergence limit if there are no subsequent snapshots where \(m(t)\) is above that limit. We show a typical high-resolution (\(n_{\rm peak}>10^{4}\)) subhalo in Fig. 5. Time is normalized by the crossing time at infall, \(t_{\rm cross}\equiv 2~{}R_{\rm vir}(t_{\rm infall})/V_{\rm vir}(t_{\rm infall})\). For each subhalo in all the Symphony hosts considered in this paper, we calculate all three limits, find the most restrictive of the three, and show the results in the right panel of Fig. 5. We then use the Kaplan-Meier method (see Section 4.2) to estimate the median values and 68% scatter of the particle counts at which subhalos pass these limits as a function of \(n_{\rm peak}\). We show the median number of particles in a subhalo at the time that it crosses under each limit, as well as the median and distribution of the most restrictive of the three limits. We have constructed these curves for each individual Symphony suite and found them to be in good agreement with one another, justifying the choice to combine all four suites together.

Comparing the three individual limits, we see that Symphony's \(\epsilon\) values were reasonably well-calibrated for subhalo studies. Increasing \(\epsilon\) increases the amplitude of Eq. 6 while decreasing that of Eq. 9. Both are less restrictive than Eq. 7 across essentially our entire resolution range. This means that increasing or decreasing \(\epsilon\) would likely only worsen the convergence properties of our subhalos. As we discuss in Sections 4.4 and 4.5, there are some subhalo properties whose reliability is well described by these limits, but there are other properties that appear to be reliable below these limits. We fit the quantiles of the combined idealized limit distribution against the form
\[n_{\rm lim,ideal}(n_{\star};\ q)=10^{b_{2}x^{2}+b_{1}x+b_{0}}. \tag{10}\]
Here, \(q\) is the target quantile, \(x\equiv\log_{10}(n_{\star})\), \(n_{\star}\) is either \(n_{\rm peak}\) or \(n_{\rm infall}\), and \(b_{2}\), \(b_{1}\), and \(b_{0}\) are fit parameters. The best-fitting parameters for \(q=0.16\), \(0.50\), and \(0.84\) are shown in Table 3. We caution readers before using these fits: the relative importance of different numerical limits depends strongly on force softening, so this fit may not be applicable to all simulations.

\begin{table}
\begin{tabular}{c|c|c|c|c}
\hline \(n_{\star}\) & \(q\) & \(b_{2}\) & \(b_{1}\) & \(b_{0}\) \\
\hline \hline \(n_{\rm peak}\) & 0.16 & 0.00013 & 0.2109 & 1.8675 \\
 & 0.50 & -0.01853 & 0.3861 & 1.6597 \\
 & 0.84 & -0.07829 & 0.9338 & 0.7743 \\
\hline \(n_{\rm infall}\) & 0.16 & -0.01294 & 0.2938 & 1.7376 \\
 & 0.50 & -0.01853 & 0.3861 & 1.6597 \\
 & 0.84 & -0.07829 & 0.9338 & 0.7743 \\
\hline
\end{tabular}
\end{table}
Table 3: Best-fitting values for several quantiles of the resolution-dependent \(n_{\rm lim,ideal}\) distribution (Eq. 10). \(n_{\star}\) is the definition of subhalo resolution used for the fit, \(q\) is the quantile of the \(n_{\rm lim,ideal}\) distribution, and \(b_{2}\), \(b_{1}\), and \(b_{0}\) are parameters in a log-quadratic fit (Eq. 10). We caution readers that these parameters may change for simulations where \(\epsilon\) is set too large or too small or simulations where time-stepping is too coarse.
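For convenience, the Table 3 fits for \(n_{\star}=n_{\rm peak}\) can be evaluated as in the sketch below (our own illustration; the dictionary keys are the quantiles \(q\)):

```python
import numpy as np

# (b2, b1, b0) from Table 3 for n_star = n_peak
TABLE_3_NPEAK = {
    0.16: (0.00013, 0.2109, 1.8675),
    0.50: (-0.01853, 0.3861, 1.6597),
    0.84: (-0.07829, 0.9338, 0.7743),
}

def n_lim_ideal(n_peak, q=0.50):
    """Eq. 10: particle count below which a subhalo with a given n_peak is
    expected to be unconverged (calibrated for 300 <= n_peak <= 2e5)."""
    b2, b1, b0 = TABLE_3_NPEAK[q]
    x = np.log10(n_peak)
    return 10.0**(b2*x**2 + b1*x + b0)
```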
That said, it is likely that this fit is correct for other simulations that are also primarily limited by discreteness noise. Whether or not these predicted limits are consistent with the picture painted by RCT convergence tests depends on how the convergence tests are performed. Convergence tests performed on the _instantaneous_ SHMF show that convergence is achieved at \(\approx 300\)-1000 particles (e.g., Nadler et al., 2023), which approximately matches the range spanned by \(n_{\rm lim,ideal}\). The picture painted by \(m_{\rm peak}\) SHMFs is more complicated because RCT \(m_{\rm peak}\) SHMFs appear to very roughly converge at modest particle counts, while \(n_{\rm lim,ideal}\) should impact convergence behavior at all \(n_{\rm peak}\) values. We discuss this in detail in Appendix G, but put briefly: the fact that RCT rapidly loses track of subhalos prevents non-convergence from being relevant at high \(n_{\rm peak}\) values. This Appendix should be read in concert with Section 5.1. In the following Sections, we test how well \(n_{\rm lim,ideal}\) describes the convergence behavior of subhalos in practice.

### The numerical reliability of \(v_{\rm max}\) in disrupting subhalos

In this Section, we show that Eq. 10 does a good job of describing when the \(v_{\rm max}\) values of subhalos are properly recovered by Symfind and show that above this limit, our halo catalogs are in good agreement with the predictions of idealized simulations. As subhalos orbit their hosts, they lose high-energy particles close to their tidal radii before losing small-radius, low-energy particles (e.g., Penarrubia et al., 2008; Green & van den Bosch, 2019; Errani & Navarro, 2021). To characterize the relative mass loss at small and large radii, authors often consider the functional form
\[\frac{v_{\rm max}}{v_{\rm max,infall}}=\frac{2^{\xi}\,(m/m_{\rm infall})^{\nu}}{(1+m/m_{\rm infall})^{\xi}}, \tag{11}\]
following Penarrubia et al. (2008, 2010). Initially, studies of this relation relied on relatively small samples of idealized simulations of orbiting subhalos that were run at very high resolutions. These studies convincingly showed that the relationship between \(v_{\rm max}\) and \(m\) remains unchanged at a fixed subhalo mass loss fraction under changes in host properties and subhalo mass loss rate (Penarrubia et al., 2010), in agreement with full dark matter zoom-in simulations (Kravtsov et al., 2004).
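In code, the tidal track of Eq. 11 is a one-liner. In the sketch below (our own illustration), \(\xi\) and \(\nu\) are taken as inputs, to be supplied by a calibration such as that of Green & van den Bosch (2019) for a given infall concentration and mass-loss fraction:

```python
def vmax_ratio(m_over_minfall, xi, nu):
    """Eq. 11: fractional decrease in v_max as a function of the bound mass
    fraction m/m_infall (Penarrubia et al. 2010 functional form)."""
    x = m_over_minfall
    return 2.0**xi * x**nu / (1.0 + x)**xi
```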
However, the relation depends on subhalo concentration and remained poorly understood due to the limited sizes of these simulation suites and the limited ranges of infall concentrations employed by them. This was rectified with the running of DASH (Ogiya et al., 2019), a collection of thousands of idealized simulations that span a wide range of initial subhalo parameters. Using DASH, Green & van den Bosch (2019) developed an accurate model for \(\xi(c_{\rm infall},\,m/m_{\rm infall})\) and \(\nu(c_{\rm infall},\,m/m_{\rm infall})\), where \(c_{\rm infall}\) is the concentration of the subhalo at infall. Note that in Green & van den Bosch (2019), the variable we have labeled \(\xi\) was labeled as \(\mu\). Given the massive size of the simulation suite used to calibrate this model, the range of subhalo parameters explored, and the detailed tests of this model, we consider it to be the most accurate representation of the "true" predictions of idealized simulations.

In Fig. 6, we compare the evolution of \(v_{\rm max}\) predicted by these models and what we find in our Symfind catalogs. We break our subhalos into groups according to the instantaneous number of particles, \(n=m/m_{p}\). For each bin, we evaluate Eq. 11 using the model presented in Green & van den Bosch (2019) for the distribution of infall concentrations in each resolution bin. For simplicity, we only show this model for the highest resolution bin in the top panel, but each curve is compared against its own concentration distribution in the bottom panel. We have constructed this plot separately for each Symphony simulation suite, and the ratio between \(v_{\rm max}/v_{\rm max,infall}\) in our simulated subhalos and in the idealized models does not depend on subhalo mass, only resolution, so we stack all the suites together to improve number statistics.

Above the resolution limits predicted by Eq. 10, \(v_{\rm max}\) evolution is consistent with the predictions of idealized simulations. Below these limits, \(v_{\rm max}\) skews low relative to these predictions, and the bias increases as the resolution is decreased. In other words, our simulations converge towards the behavior of high-resolution idealized simulations and reach agreement at the resolution limits where these idealized simulations predict they should. The idealized simulations described above also make strong predictions for the evolution of \(r_{\rm max}\), the radius at which the circular velocity profile peaks. We find that this quantity is very noisy in our catalogs, even for isolated halos and subhalos experiencing slow mass loss, regardless of whether we use Symfind or RCT. Thus we do not perform tests on it here.

In summary, the tests performed in this Section support restricting analysis that relies on the \(v_{\rm max}\) values of subhalos to the regime suggested by the numerical limits discussed in Section 4.3:
\[n_{\rm lim,vmax}(n_{\rm peak})=n_{\rm lim,ideal}(n_{\rm peak}). \tag{12}\]
We discuss how this translates to constraints on a subhalo population selected by pre-infall masses in Section 6. We argue that using current models of galaxy disruption, only subhalos with \(n_{\rm peak}>3\times 10^{4}\) will have \(n>n_{\rm lim,vmax}\) (and thus resolved \(v_{\rm max}\)) until the point of likely galaxy disruption.

Figure 6: The median relationship between \(v_{\rm max}/v_{\rm max,infall}\) and \(m/m_{\rm infall}\) for subhalos, stacked across the entire Symphony suite, as measured by Symfind.
The red curve shows the predictions of high-resolution idealized simulations (Green & van den Bosch, 2019), performed over a range of orbital parameters and matched to the infall concentration distribution of our subhalo sample. The blue curves show the relation when calculated above different particle-count cutoffs. The ratios between these two curves are shown in the bottom panel. For simplicity, we only plot the Green & van den Bosch (2019) model for the \(n>10^{4}\) bin in the top panel, but in the bottom panel, each bin is compared against models with a matching concentration distribution. Curves transition from thick to thin at the \(m/m_{\rm infall}\) values where Eq. 10 predicts that the median halo is no longer converged. Our simulations are in good agreement with idealized simulations for converged subhalos; \(v_{\rm max}\) is biased low when numerical models predict resolution effects should be important.

### Mass loss rates

The previous Section established limits where the \(v_{\rm max}\) of subhalos can be properly resolved, and in this Section, we construct similar limits for subhalo mass loss rates. Unfortunately, subhalo mass loss rates are more complicated than the connection between mass loss and the decrease in \(v_{\rm max}\), so there is no idealized equivalent to a tidal track that we can simply compare against, as we did in Section 4.4. Instead, we compare against a matched sample of higher-resolution subhalos _which are predicted to be converged according to Eq. 10_. As we demonstrated in Section 4.4, the evolution of \(v_{\rm max}\) (a more numerically challenging problem than mass loss rates) is converged and in agreement with idealized simulations above this threshold, so it is highly unlikely that this comparison could suffer from false convergence.

Subhalo mass loss rates are particularly susceptible to survivor bias. Survivor bias is an effect one finds in samples with statistical censoring (see Section 4.2) where censoring is connected with the statistic being measured in some significant way. To review, in the case of Section 4.2, the statistic we wanted was the distribution of disruption masses, but some long-lived subhalos survived past the end of the simulation, meaning that long-lived subhalos were more likely to be censored, biasing the distribution of fully disrupted subhalos to higher disruption masses than the full sample. The solution to this was to use the Kaplan-Meier estimator (Eq. 3). Subhalo mass loss rates face a similar, but more subtle, version of survivor bias, as noted in Han et al. (2016).1 This is because long-lived subhalos will tend to have slower mass loss rates, thus correlating survival times and \(m(t)\). To illustrate this, we construct a toy model for subhalo mass loss, where all subhalos have an exponential mass loss rate \(\bar{m}(\bar{t})=\exp\left(-\alpha\bar{t}\right)\), where \(\bar{m}(\bar{t})\) is the unitless toy mass relative to the infall mass, \(\bar{t}\) is the unitless toy time since accretion, and \(\alpha\) is a free parameter controlling the mass loss rate. We allow for random scatter in \(\alpha\) and cause subhalos to be censored at random \(\bar{m}(\bar{t})\) values. The specifics of the distributions used do not change any conclusions that we draw from this toy model, so we pick them for visual clarity.
In this case, we generate \(\alpha\) uniformly at random between 0.25 and 1, generate the censoring masses from a log-normal distribution with median \(10^{-1.5}\) and \(\sigma=\ln\left(10\right)\), and do not allow subhalos to have disruption masses greater than 1. This leads to a median disruption mass of approximately \(10^{-1.5}\) and a median disruption time of approximately 2.5. We draw \(10^{3}\) random realizations from this model.

Footnote 1: Survivor bias can take many forms and is often very subtle. For example, during World War II, the statistician Abraham Wald wrote a series of memos designed to assess what portions of Allied bomber planes needed to be reinforced with more armored plating (Mangel and Samaniego, 1984). He developed a sophisticated statistical system for this, with the conclusion that it was most important to reinforce parts that were rarely shot on returning bombers. The planes that _did_ get shot in those places never returned.

Fig. 7 shows a subset of 30 realizations, along with what the median of the \(\bar{m}(\bar{t})\) distribution would be without censoring. We also show two simple methods for stacking \(\bar{m}(\bar{t})\) to estimate the median: (1) taking the median of all uncensored subhalos, and (2) taking the median of all subhalos regardless of censoring, but setting \(\bar{m}(\bar{t})=0\) after censoring. The former method biases high, and the latter method biases low. The same effect can be seen in the radial distribution of \(m/m_{\rm infall}\) for subhalos in Fig. 9 of Han et al. (2016). These biases relative to the true median of \(\bar{m}(\bar{t})\) start early, well before the median disruption mass. This is problematic if one wants to characterize, e.g., the sharp downturn seen in Fig. 5 for a statistical sample of subhalos. When present, this sharp downturn is always terminated by the subhalo finder losing track of the object, meaning that the location of the feature is very close to the median disruption mass. This is also problematic for resolution tests: increasing resolution will change the median disruption mass, and even far above the typical disruption mass this could lead to a slope change that could be interpreted as non-convergence. To account for this bias, we once again use the Kaplan-Meier estimator (Eq. 3) and Greenwood's formula (Eq. 4).

Figure 7: A toy model for the evolution of a toy mass, \(\bar{m}\), as a function of toy time, \(\bar{t}\), which illustrates the impact of survivor bias. A population of \(10^{3}\) mock subhalos was generated with random exponential timescales and random mass scales at which they are lost by the subhalo finder. 30 random subhalos are shown in black. The true median of the population is shown in red. In orange and blue, we show two flawed methods for estimating the population median. Orange shows the median mass of all surviving subhalos, and blue shows the median assuming that \(\bar{m}(\bar{t})=0\) after censoring (disruption). Both methods are heavily biased, and this bias begins well before the median disruption mass and disruption time for the sample (\(\bar{m}=10^{-1.5}\) and \(\bar{t}=2.5\), respectively). The purple curve shows the Kaplan–Meier-based estimator described in Section 4.5 and the purple shaded contour shows the 68% confidence interval on this median. Kaplan–Meier remains an unbiased estimator to far lower masses, and we thus use it in Fig. 8.
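The toy model of Fig. 7, including the two flawed stacking estimators, is straightforward to reproduce. A minimal sketch (our own illustration, using the distributions described above; all names are hypothetical) is:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub = 1000
t = np.linspace(0.0, 8.0, 400)                       # toy time grid

alpha = rng.uniform(0.25, 1.0, n_sub)                # mass-loss rates
# log-normal censoring masses: median 10^-1.5, sigma = ln(10), capped at 1
m_censor = np.minimum(
    np.exp(rng.normal(np.log(10**-1.5), np.log(10.0), n_sub)), 1.0)

m = np.exp(-alpha[:, None] * t[None, :])             # m_bar(t_bar) per subhalo
alive = m > m_censor[:, None]                        # False once "disrupted"

# the two flawed estimators shown in Fig. 7
median_survivors = np.nanmedian(np.where(alive, m, np.nan), axis=0)  # biased high
median_zeroed = np.median(np.where(alive, m, 0.0), axis=0)           # biased low
true_median = np.exp(-np.median(alpha) * t)          # uncensored truth
```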
We aim to measure the distribution of
\[m\big((t-t_{\rm infall})/t_{\rm cross}\big)/m_{\rm peak}\equiv\mu\big((t-t_{\rm infall})/t_{\rm cross}\big) \tag{13}\]
for a range of fixed values of \(t/t_{\rm cross}\), where \(t_{\rm cross}\equiv 2\,R_{\rm vir}(t_{\rm infall})/V_{\rm vir}(t_{\rm infall})\) is the crossing time at infall. In this case, the measurement is considered uncensored if the subhalo survives until \((t-t_{\rm infall})/t_{\rm cross}\), and is otherwise censored at the final \(\mu\) value achieved prior to the simulation ending or the halo finder losing the subhalo. We estimate a 68% confidence interval by constructing two bounding probability distributions, \(\widehat{\rm Pr}(<\mu)\pm\sqrt{\widehat{\rm Var}[\widehat{\rm Pr}(<\mu)]}\), both bounded between 0 and 1. We then estimate the confidence interval of \(\widehat{\rm Pr}(<\mu)\) as the medians of these two distributions. We show this estimator in purple in Fig. 7. It is able to recover the median for our toy model to well below the median disruption mass.

Using this Kaplan-Meier-based method, we find the median \(m/m_{\rm peak}((t-t_{\rm infall})/t_{\rm cross})\) evolution in three resolution bins across the range \(10^{2.5}<n_{\rm peak}<10^{4}\). We measure these distributions for both our high-resolution SymphonyMilkyWayHR halos and their paired fiducial-resolution resimulations and show the results in Fig. 8. For all three mass bins, we find that mass loss rates are converged below the mass scales predicted by Eq. 10 and are consistent with being reliable to much lower resolutions: until \(\approx n_{\rm lim,ideal}(8\,n_{\rm peak})\). We do not attempt to develop a model that explains this fortunate, but unexpected, convergence in this paper. These limits are a combination of three independent effects that all select similar mass scales, and these limits were effective at predicting when internal subhalo densities are biased low. Thus, it is surprising to not see a similar bias in mass loss rates given the dependence of tidal forces on enclosed density. However, we note that:

1. This is unlikely to be an example of false convergence (i.e., a situation where the high- and low-resolution simulations are both incorrect but happen to agree). This is because the high-resolution curves are above their own formal resolution requirements in the region of agreement, and those formal limits were shown to be good predictors of \(v_{\rm max}\) convergence behavior in Section 4.4.
2. Formal convergence and practical convergence are not the same as one another. Slight differences between the fiducial and high-resolution curves that are smaller than our \(1\,\sigma\) error contours can be seen as \(t\) increases, particularly in the \(10^{3}<n_{\rm peak}<10^{3.5}\) bin. It is possible that non-convergence in internal structure translates into a weak non-convergence in mass-loss rates that is consistent with our error bars.
3. The majority of the simulations used to calibrate Eq. 6 and Eq. 7 in van den Bosch et al. (2018) had \(n_{\rm peak}>10^{4}\), so it is possible these limits scale more gently with resolution in the low particle-count regime.

Because of this third point, we strongly urge readers not to extrapolate these results to higher resolutions without further testing.

Figure 8: Subhalo mass loss rates compared against idealized numerical limits.
Each panel shows a set of Symfind subhalos from the SymphonyMilkyWayHR suite (red) and their mass-matched subhalos from the fiducial-resolution SymphonyMilkyWay (blue), grouped by \(n_{\rm peak}\) of the fiducial-resolution subhalos. The Kaplan-Meier estimator has been used to correct for survivor bias and to estimate \(1\,\sigma\) uncertainties (shaded bands). The red and blue arrows show the times at which mass loss rates in the corresponding simulations are predicted to become unconverged according to Eq. 10, a fit against idealized numerical limits. These limits underestimate how long mass loss rates are converged. Instead, our fiducial simulations stay converged until the predicted limit for the _high-resolution_ simulations, motivating the less conservative empirical limit given in Eq. 14.

In summary, the tests performed in this Section support restricting analysis that relies on the instantaneous masses of moderate-resolution subhalos to objects where \(n>n_{\rm lim,mass}\), defined as
\[n_{\rm lim,mass}(n_{\rm peak})=n_{\rm lim,ideal}(8\,n_{\rm peak}). \tag{14}\]
We discuss how this translates to constraints on a subhalo population selected by pre-infall masses in Section 6. We argue that using current models of galaxy disruption, only subhalos with \(n_{\rm peak}>4\times 10^{3}\) will have \(n>n_{\rm lim,mass}\) (and thus resolved \(m(t)\) and abundances) until the point of likely galaxy disruption. This is roughly an order of magnitude less demanding in particle count than the requirement for converged \(v_{\rm max}\) across the same mass loss range.

## 5 Subhalo population statistics

As discussed in the Introduction, there are good reasons to wonder whether some current subhalo finders may have falsely converged and are missing an appreciable number of real subhalos. In Section 4, we showed that Symfind is able to track subhalos to much smaller masses than one current cutting-edge subhalo finder, RCT, and that RCT exhibits signs of false convergence. In this Section, we show how these two findings propagate into well-worn subhalo statistics: the subhalo mass function and the subhalo radial distribution. We show that, as one would expect from the previous Section, at a fixed \(m_{\rm peak}\), Symfind recovers substantially more subhalos than RCT, particularly at small radii and high resolutions, even though RCT appears to be converged.

### The subhalo mass function

First, we consider the subhalo mass function (SHMF), a measure of the abundance of subhalos as a function of subhalo mass. The RCT SHMF is generally thought to be converged (e.g., Nadler et al., 2023), and we explicitly demonstrate that there appears to be qualitative convergence in the RCT SHMF in Appendix G (although, as we discuss in that Appendix, the issue is subtle). The left panel of Fig. 9 compares SHMFs for the SymphonyMilkyWay suite, measured by RCT and by Symfind as a function of \(m_{\rm peak}\). The SHMF is measured within three different radii: \(R_{\rm vir}\), \(R_{\rm vir}/2\), and \(R_{\rm vir}/4\). We also plot the ratio between Symfind and RCT SHMFs, terminating the curve when the RCT sample has fewer than 10 subhalos in it to prevent excess shot noise. The difference between RCT and Symfind significantly changes three of the most important aspects of the SHMF: its amplitude, slope, and radial dependence. Symfind recovers more subhalos than RCT at all masses. The effect is stronger at smaller radii and for higher mass subhalos.
Within \(R_{\rm vir}\), 15% to 40% more subhalos are recovered, depending on subhalo mass. The logarithmic slope of the mass function at low masses, \(\alpha\), is shallower, with \(\alpha_{\rm sym}-\alpha_{\rm RCT}=-0.04\). Within \(R_{\rm vir}/4\), 35% to 120% more subhalos are recovered, and \(\alpha_{\rm sym}-\alpha_{\rm RCT}=-0.11\). These trends occur because of Symfind's ability to follow subhalos to smaller \(m/m_{\rm peak}\) ratios than RCT. Because the radii of low-mass subhalo orbits usually do not evolve much with time, subhalos at small orbits tend to be older, and because they are closer to the center of the host halo, tidal fields are more intense. This means that small-radius subhalos are more likely to have lost a large amount of mass than large-radius subhalos (e.g., van den Bosch et al., 2016; Han et al., 2016). In contrast, high-mass subhalos are typically more recent accretions than low-mass subhalos but are also much more highly resolved. As we discussed in Section 4.2, improving subhalo resolution does not reduce the minimum \(m/m_{\rm peak}\) values that RCT can reach before disruption, while increasing resolution _does_ reduce this threshold for particle-tracking (Fig. 4).

Increasing resolution increases the number of recovered subhalos and makes the slope of the SHMF shallower, as we show in the right panel of Fig. 9. This panel shows subhalo mass functions for the SymphonyMilkyWayHR suite, which has particle masses eight times smaller than SymphonyMilkyWay. As discussed in Section 2, these hosts were originally selected with criteria that biased against the presence of high-mass subhalos, as can be seen in Fig. 9. Compared to RCT, Symfind finds between 25% and 130% more subhalos within \(R_{\rm vir}\) and between 60% and upwards of 250% more subhalos within \(R_{\rm vir}/4\). The corresponding \(\alpha_{\rm sym}-\alpha_{\rm RCT}\) values for these two enclosing radii are -0.1 and -0.21, respectively. At these resolutions, the impact of the choice of subhalo finder on subhalo abundances is comparable to the impact of including an embedded high-mass disk potential in the center of the halo (e.g., Garrison-Kimmel et al., 2017).

Where does this strong resolution dependence come from? Decreasing particle mass decreases the minimum \(m/m_{\rm peak}\) that low-mass subhalos can be tracked to with our particle-tracking method (Section 4.2). As this limit decreases, the low-mass end of the SHMF will converge towards the "unevolved" mass function, i.e., the \(m_{\rm peak}\) mass function of all objects that have ever been accreted onto the host and its subhalos, regardless of disruption status. In contrast, because RCT's threshold for subhalo disruption slowly increases with increasing resolution, its \(m_{\rm peak}\) functions decrease slightly as resolution improves. See Appendix G for extended discussion.

One might be concerned by this resolution dependence: does this mean that \(m_{\rm peak}\) functions are unconverged at all masses, even at high resolution levels? Yes. Unless a simulation has such a heroic level of resolution that it and its subhalo finder are able to resolve the _entire_ unevolved mass function, raw \(m_{\rm peak}\) mass functions will always be formally unconverged, as there will always be some subhalos which have passed below the resolution floor that could be recovered if the simulation had more particles. But this is not as dire a circumstance as it might seem.
The \(m_{\text{peak}}\) function is not an observable quantity: one of the primary reasons why simulated predictions for the \(m_{\text{peak}}\) function are interesting is that they are often used as a crucial component when modeling the stellar mass or luminosity of dark matter subhalos. Using a model where stellar masses are assigned to subhalos based purely on their \(m_{\text{peak}}\) values implicitly assumes that galaxies never disrupt and never lose mass as long as some portion of their original subhalo survives. But galaxies _do_ eventually disrupt and lose mass themselves (see Section 6.1.2). Thus, the process of making reliable predictions for satellite galaxy abundances based on their \(m_{\text{peak}}\) values depends on simultaneously accounting for subhalo finder limitations, numerical limits, and one's explicit choice of a galaxy disruption model. We discuss the intersection between these three factors in Section 6. Given the unresolved modeling caveats discussed in this later Section, we defer quantitative predictions for the converged stellar mass function to future work.

### The subhalo radial distribution

A great deal of literature has been written on whether the radial distribution of subhalos matches the observed distribution of satellite galaxies. At a fixed _present-day_ subhalo mass, subhalo number densities are much less centrally concentrated than both satellite number densities and the total dark matter density profile of the host (e.g., Nagai & Kravtsov, 2005; Springel et al., 2008). However, selecting subhalos at a fixed mass has little relevance to most observations of satellite radial distributions: satellites stay intact until their subhalos have lost large amounts of mass, meaning that any cut on a fixed present-day mass will necessarily underestimate the number densities of satellites at a fixed stellar mass at small radii, where the ratio between stellar mass and subhalo mass will be abnormally high. It is well-known that number density profiles selected by infall masses, which are more likely to trace galaxy stellar masses, are more centrally concentrated than those selected by a present-day mass due to the larger amount of mass lost by small-radius subhalos (e.g., Nagai & Kravtsov, 2005; Han et al., 2016), and some authors claim that simply selecting subhalos by infall mass/velocity resolves the problem (e.g., Nagai & Kravtsov, 2005; Kuhlen et al., 2007). But others have found that agreement is only achieved for very high-mass subhalos with high resolutions that experience large amounts of dynamical friction (Ludlow et al., 2009; Bose et al., 2019). Some authors have argued that simulated and observed profiles cannot be brought to match without introducing "orphan" satellite galaxies in post-processing (i.e., model galaxies that outsurvive their subhalos; e.g., Gao et al., 2004; Newton et al., 2018; Carlsten et al., 2022; see Section 7).

Figure 9: The \(z=0\) subhalo \(m_{\text{peak}}\) functions of Milky Way-mass halos, normalized by host masses, \(M_{\text{vir}}\). _Left:_ Comparison between the subhalo mass functions measured by Rockstar (dashed) and Symfind (solid) for the fiducial-resolution SymphonyMilkyWay suite. Mass functions within different radii are shown as different colors, and the bottom panel shows the ratio between Symfind mass functions and Rockstar mass functions. Symfind recovers more subhalos than Rockstar, with the difference increasing at smaller radii and higher \(m_{\text{peak}}\). _Right:_ The same as the left panel, except that only mass functions for the four high-resolution SymphonyMilkyWayHR halos are shown. Note that the range of the \(y\)-axis of the lower panel has been expanded due to the larger differences between the two subhalo-finding methods. The same qualitative trends seen at fiducial resolutions are seen at higher resolutions, but significantly more subhalos are recovered by Symfind than in the fiducial case. See Section 5.1 for discussion.
It is possible that much of the confusion comes from a combination of subhalo finding and resolution: Green et al. (2021) showed that idealized simulations predict that artificial disruption and subhalo finder limitations should have a large impact on subhalo number density profiles, and Manwadkar & Kravtsov (2022) and Pham et al. (2023) showed that with a modified version of RCT (see Section 7.1.1), subhalo number density profiles converge towards the more highly concentrated dark matter density profiles of their host halos as resolution increases. In this Section, we come to a similar conclusion, following a procedure similar to that of Manwadkar & Kravtsov (2022) and Pham et al. (2023).

In Fig. 10 we show the cumulative distribution of satellite radii, selected at different peak resolution levels, \(n_{\rm peak}\), both with RCT and with Symfind. To aid in comparison against RCT, the bottom panels in Fig. 10 show the ratio of a given subhalo population's radial CDF against \(\rm CDF_{\rm RCT}(r)\). \(\rm CDF_{\rm RCT}(r)\) is a fit against the CDF of RCT subhalos with \(n_{\rm sub,peak}>10^{2.5}\) using the following expression:
\[{\rm CDF}_{\rm RCT}(r)=10^{c_{4}x^{4}+c_{3}x^{3}+c_{2}x^{2}+c_{1}x}. \tag{15}\]
Here, \(x=\log_{10}(r/R_{\rm vir})\), \(c_{4}=-0.1434\), \(c_{3}=-0.6104\), \(c_{2}=-1.6723\), and \(c_{1}=0.6528\). This fit is accurate to the few-per-cent level within the range \(10^{-1.5}<r/R_{\rm vir}<1\). We also show the average enclosed mass profile of the hosts in our sample, \(M(<r/R_{\rm vir})/M_{\rm vir}\).

RCT profiles show no dependence on resolution, falling along a profile much less centrally concentrated than the underlying dark matter distribution. In contrast, Symfind leads to subhalo profiles that become more concentrated as resolution increases. Our highest resolution bin, \(n_{\rm peak}>10^{4.5}\), approaches the same shape as the underlying mass distribution, but is still less concentrated. The lack of convergence with increasing resolution makes it possible that even higher resolutions could lead to subhalo number density profiles that trace the underlying mass profile, as predicted by, e.g., Han et al. (2016) and Green et al. (2021). The agreement between RCT profiles at varying resolution levels is _false_ convergence. As we showed in Section 4, increasing resolution does not allow RCT to track subhalos to lower \(m/m_{\rm peak}\) ratios, so multi-resolution tests do not result in different radial profiles, giving the incorrect impression that profiles are numerically reliable.

There are two effects that could lead to increasingly concentrated subhalo number density profiles with increasing resolution. The first is physical: dynamical friction (e.g., Chandrasekhar, 1943) causes subhalos, especially large subhalos, to lose energy over time, leading to contracted orbits. This means that higher-mass subhalos will have smaller orbits than lower-mass subhalos accreted at the same time.
The second is purely numerical: in the absence of dynamical friction, subhalos accreted at early times will have smaller orbits than those accreted at later times, which means that as resolution is increased and subhalos can be tracked for longer, number density profiles should become more concentrated. The fact that Rockstar's number density profiles are fixed across resolution is false convergence in either case, but if the first effect dominates, the Symfind trend shown in Fig. 10 is a physical change that is missed by the false convergence; if the latter effect dominates, the particle-tracking trend is numerical and the curves should be interpreted as lower limits on the true CDF.

We do not believe Fig. 10 is showing the effects of dynamical friction. First, the effects of dynamical friction on subhalos in the mass range considered here are quite weak (e.g., Fig. 3 and Fig. 7 in van den Bosch et al., 2016). Second, we have re-created Fig. 10 using the smaller particle masses of SymphonyMilkyWayHR and did not see a significant difference in radial CDFs at a fixed resolution as the subhalo's \(m_{\rm peak}/M_{\rm vir}\) changed. Third, Manwadkar & Kravtsov (2022) showed that a similar convergence towards the mass profile can be found at a fixed subhalo mass by increasing resolution in a modified version of RCT (see discussion in Section 7.1.1). In Appendix H, we compare this analysis to Subfind radial number density profiles from Han et al. (2016) and show that it is possible that Subfind does not suffer from this type of false convergence, but that it misses a large number of small-radius subhalos at moderate resolutions.

Figure 10: Radial distribution of subhalos in the SymphonyMilkyWay suite. We compare the radial distribution of dark matter particles (dotted) against Symfind (solid) and Rockstar (dashed). The bottom panel shows the ratios of these curves to a fit to the dashed blue curve, \(\rm CDF_{\rm RCT}\) (Eq. 15). Rockstar subhalos show no appreciable dependence on resolution. This is caused by the effect shown in the right panel of Fig. 4: Rockstar survival mass ratios do not depend on resolution, meaning that high-resolution subhalos are no more adept at surviving in the inner regions of the host than low-resolution subhalos are, giving the false impression of convergence. Symfind finds more centrally concentrated subhalo distributions, and increasing resolution leads to radial profiles that increasingly approach the distribution of dark matter particles.

Lastly, as discussed previously in Section 5.1, we note that simply making a cut at a fixed \(m_{\rm peak}\) is insufficient to model the spatial distribution of galaxies. If a subhalo finder is capable of tracking a subhalo past the point at which its galaxy would have disrupted, some subset of very low \(m/m_{\rm peak}\) subhalos would need to be removed from the sample (see Section 6.1.2). Given the connection between subhalo mass loss and subhalo position, this means that a flat \(m_{\rm peak}\) or \(v_{\rm peak}\) cutoff could overestimate the radial concentration of the satellite number density profile. We defer a complete treatment of this issue to future work.

## 6 When is a subhalo finder "good enough?"

New subhalo-finding techniques have been developed nearly continuously for the past forty years. Many popular methods are described in Knebe et al. (2011), and numerous sophisticated techniques have been developed in the subsequent years (e.g., Han et al., 2018; Elahi et al., 2019).
The development of these techniques even predates the time period when dark matter simulations were able to resolve substructure at all (White et al., 1987). Where does this end? Will we continue to make new halo finders for the next forty years? Does such a wide proliferation of methods mean that coming to some sort of consensus on the properties of the subhalos that host satellite galaxies is hopeless? Perhaps not. In this Section, we argue that Symfind _is able to track all numerically resolved subhalos until the point of likely galaxy disruption_, according to current galaxy-disruption models. This means that improvements in halo finder techniques beyond our method would be unlikely to change predicted galaxy abundances. We do not explicitly model the impact of halo finding on purely gravitational probes of substructure (e.g., gaps in stellar streams, substructure lensing statistics; see, e.g., Section 5 of Bechtol et al., 2022 for review), but argue that _a priori_, one would expect that finding substructure that is relevant for gravitational probes is a less demanding numerical task than finding the subhalos that host visible satellite galaxies at a comparable subhalo mass scale.

### Disruption masses

If idealized simulations are correct, low-mass subhalos should survive in some form down to essentially arbitrarily small masses (van den Bosch et al., 2018; van den Bosch & Ogiya, 2018; Errani & Navarro, 2021; Green et al., 2022). In this case, improvements in halo finders could likely always increase the abundance of subhalos one could find at a fixed \(m_{\rm peak}\), but galaxies lose mass along with their subhalos and at a certain point these subhalos no longer contain an observationally relevant galaxy for a given observation. When judging the utility of a halo finder, an important question is where this transition between relevant and irrelevant subhalos occurs for the target observable. We argue that for the purposes of making predictions for visible satellite galaxy populations, once a subhalo finder meets two criteria, improving subhalo finder performance no longer improves our ability to estimate the abundances of satellite galaxies.

Figure 11: Comparison between subhalo finder disruption limits and other limiting factors for subhalo/satellite galaxy analysis. The two colored curves show \(\mu_{90}\), the \(m/m_{\rm peak}\) ratio at which Symfind can still resolve 90% of the subhalo population (reproduced from Fig. 4). The orange and purple horizontal bands show the range of mass ratios at which galaxies are expected to disrupt from hydrodynamic simulations (Smith et al., 2016, orange) and from empirical modeling (Behroozi et al., 2019, purple). To track all subhalos expected to host galaxies without “orphan” modeling, a subhalo finder must have disruption thresholds below those bands. The solid black curve shows the minimum \(\mu\) at which \(v_{\rm max}\) is still resolved (Eq. 12; Section 4.4) and the dashed black curve shows the minimum \(\mu\) at which mass loss rates are still resolved (Eq. 14; Section 4.5). Satellite galaxy studies that only require resolved abundances and masses require \(n_{\rm peak}\gtrsim 4\times 10^{3}\) and studies that require resolved rotation curves require \(n_{\rm peak}\gtrsim 3\times 10^{4}\). Symfind is able to identify all resolved subhalos that are likely to host satellite galaxies.

The first criterion is that the subhalo finder should be able to track subhalos until they are no longer numerically reliable. The second is that the subhalo finder
should be able to track subhalos until the galaxies within them are likely to have disrupted.

#### 6.1.1 Numerical mass limits

To estimate the impact of numerical resolution, we compare the 90% \(m/m_{\rm peak}\) disruption thresholds, \(\mu_{90}\), from Fig. 4 for RCT and Symfind against two numerical limits in Fig. 11. These are (1) the minimum \(m/m_{\rm peak}\) at which \(v_{\rm max}\) becomes unconverged (\(n_{\rm lim,vmax}\); Eq. 12) and (2) the limit at which subhalo mass loss rates become unconverged (\(n_{\rm lim,mass}\); Eq. 14). Subhalo finders whose \(\mu_{90}\) curve is below a particular limit are able to find all subhalos for which the corresponding subhalo property is correctly resolved. We only plot \(n_{\rm lim,mass}\) out to \(n_{\rm peak}<10^{4}\) because we do not know whether the relation is reliable beyond this resolution level. As discussed in Sections 4.4 and 4.5, the limit at which \(v_{\rm max}\) can be reliably measured, \(n_{\rm lim,vmax}\), is higher at a fixed \(n_{\rm peak}\) than \(n_{\rm lim,mass}\). Both limits are shown in Fig. 11. RCT's \(\mu_{90}\) disruption threshold is above both limits for most of the resolution range tested here, meaning that there are a large number of resolved subhalos that it cannot find. Symfind has a \(\mu_{90}\) disruption threshold below the mass-loss numerical limit while \(n_{\rm peak}\lesssim 10^{4}\) and below the \(v_{\rm max}\) numerical limit across the entire resolution range we can reliably probe (although extrapolation suggests that the disruption threshold will only stay below this numerical limit while \(n_{\rm peak}\lesssim 3\times 10^{5}\)). This means that above these two \(n_{\rm peak}\) cutoffs, Symfind may start to miss some resolved subhalos with very low \(m/m_{\rm peak}\). However, these \(m/m_{\rm peak}\) values are low enough that the galaxies that would be hosted by these subhalos may have either disrupted or experienced substantial mass loss. Because of this, the key question about these numerical limits is whether our method fails to find resolved subhalos _that still host galaxies_. We address this question in Section 6.1.2.

#### 6.1.2 Galaxy disruption mass loss limits

To estimate the impact of galaxy disruption, we consider two estimates of when satellite galaxies disrupt. The first is calibrated off of hydrodynamic simulations, and the second is calibrated off of empirical models. There are several caveats associated with both estimates, which are discussed at the end of this subsection. Smith et al. (2016) performed a study of galaxy mass loss as a function of subhalo mass loss in hydrodynamic simulations and found that galaxy masses exponentially decrease once \(m/m_{\rm peak}\) crosses a threshold, \(f_{0}\). A reasonable approximation of this model is that galaxies disrupt when \(m/m_{\rm peak}<f_{0}\). Smith et al. (2016) finds that small satellite galaxies (\(r_{\rm eff}/r_{\rm vir}<0.025\), where \(r_{\rm eff}\) is the satellite's effective radius at infall and \(r_{\rm vir}\) is the subhalo's virial radius at infall) disrupt at relatively small masses, \(f_{0}=0.0418\), and that galaxies with large radii (\(r_{\rm eff}/r_{\rm vir}>0.04\)) disrupt quickly, \(f_{0}=0.116\). This range is shown as an orange-shaded band in Fig. 11. Behroozi et al.
(2019) fit a galaxy disruption model against observational data that causes satellite galaxies to disrupt once their \(v_{\rm max}\) values pass below some cutoff ratio \(T_{\rm merge}\equiv v_{\rm max}/v_{\rm mpeak}\), where \(v_{\rm mpeak}\) is the value of \(v_{\rm max}\) during the snapshot that the subhalo achieved its maximum mass. \(T_{\rm merge}\) was fit as: \[T_{\rm merge}=T_{\rm merge,300}+(T_{\rm merge,1000}-T_{\rm merge,300})\xi \tag{16}\] \[\xi=0.5+0.5\,{\rm erf}\left(\frac{\log_{10}\left(V_{\rm Mpeak,host}/(1\,{\rm km\,s^{-1}})\right)-2.75}{\sqrt{2}/4}\right). \tag{17}\] Here, \(V_{\rm Mpeak,host}\) is the \(V_{\rm max}\) of the subhalo's host at the snapshot where the host achieved \(M_{\rm peak}\). Behroozi et al. (2019) found best-fitting values of 0.544 and 0.466 for \(T_{\rm merge,300}\) and \(T_{\rm merge,1000}\), respectively. Note that these limits are calibrated with respect to \(V_{\rm peak}\) of the halo hosting the orphans, not the subhalos that sourced them. To convert these limits into the convention used by Fig. 11, we invert the \(v_{\rm max}/v_{\rm infall}\) to \(m/m_{\rm infall}\) relation used by UniverseMachine (Eq. 15 in Behroozi et al. 2019). The \(v_{\rm max}\) values used in the fit to Eq. 16 largely came from modeled "orphan" galaxies that were spawned after RCT lost track of their original dark matter subhalo and were followed forward according to an analytic prescription applied after the simulation finished running. From this point, subhalo masses were inferred from the orbit-averaged model in Jiang and van den Bosch (2016) and converted into \(v_{\rm max}\) values by differentiating equation 11, adopting values for \(\mu\) and \(\nu\) from Penarrubia et al. (2010). As such, there is no loss of information by recasting these disruption limits in terms of subhalo mass. The resulting range of \(m/m_{\rm peak}\) values at which this model predicts galaxies should disrupt is shown as a purple band in Fig. 11. RCT's disruption threshold is higher than both colored bands at all resolution levels, meaning that a substantial fraction of subhalos are lost before their satellite galaxies would have disrupted. This is unsurprising: galaxy models built on top of RCT catalogs generally need to generate post-disruption "orphan" galaxies to match large-scale clustering statistics (Pujol et al. 2017; Campbell et al. 2018; Behroozi et al. 2019). In contrast, the disruption threshold for our particle-tracking method is lower than both colored bands for all subhalos with \(n_{\rm peak}>10^{3}\), meaning that -- if one ignores numerical effects -- no post-hoc orphan modeling would be needed to follow galaxies to the point of disruption for subhalos with \(n_{\rm peak}>10^{3}\). However, numerical effects increase this lower limit on \(n_{\rm peak}\) for all subhalo finders, including ours. Of the galaxy disruption models tested here, the one that requires the most subhalo mass loss prior to galaxy disruption is UniverseMachine's high-galaxy-mass limit. \(n_{\rm lim,mass}\) (Eq. 14) passes below this limit at \(n_{\rm peak}>4\times 10^{3}\), meaning that for generic galaxy populations, one would want at least this many particles for resolved masses and abundances. \(n_{\rm lim,vmax}\) (Eq. 12) passes below this limit at \(n_{\rm peak}>3\times 10^{4}\), meaning that for generic galaxy populations, one would want at least this many particles for resolved rotation curves.
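For concreteness, Eqs. 16 and 17 can be evaluated as in the following minimal Python sketch (function and variable names are ours; velocities in km/s):

```python
import math

# Best-fitting values from Behroozi et al. (2019).
T_MERGE_300, T_MERGE_1000 = 0.544, 0.466

def t_merge(v_mpeak_host):
    """Disruption threshold T_merge = v_max / v_mpeak (Eqs. 16-17).

    v_mpeak_host: the host's V_max at the snapshot where the host
    reached M_peak, in km/s.
    """
    xi = 0.5 + 0.5 * math.erf(
        (math.log10(v_mpeak_host) - 2.75) / (math.sqrt(2.0) / 4.0))
    return T_MERGE_300 + (T_MERGE_1000 - T_MERGE_300) * xi

# Example: for a Milky Way-like host (V_Mpeak,host ~ 200 km/s), a satellite
# is counted as merged once v_max/v_mpeak falls below roughly 0.54.
print(t_merge(200.0))
```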
No improvements in subhalo finder techniques would reduce either limit because they come from a combination of the galaxy disruption model and numerics of the simulation. Note that, because this is the most conservative of the limits considered here, it may be possible that for _specific_ galaxy populations less resolution is required. This would need to be calibrated explicitly. Although the bands shown in Fig. 11 are likely qualitatively correct, more work needs to be done to understand their exact ranges. The best-fitting values in Behroozi et al. (2019) are derived relative to the \(v_{\rm max}\) values predicted by UniverseMachine's post-disruption orphan model and are not directly simulated. Any biases in the predictions of this orphan model could lead to biases in the best-fitting disruption thresholds. Smith et al. (2016) calibrated their fit on a set of Milky Way-mass subhalos, meaning that the corresponding band does not account for any mass dependence of disruption rates. Some level of mass dependence is expected because Milky Way-mass subhalos are more heavily star-dominated than more/less massive subhalos (e.g., Wechsler and Tinker, 2018). The Smith et al. (2016) fits were only performed on a single hydrodynamical model (Dubois et al., 2014) using the AdaptaHOP subhalo finder (Aubert et al., 2004) and did not account for survivor bias. Any systematic uncertainties associated with the hydrodynamic scheme or halo finder could propagate into biases in the fitted model. Additionally, both studies were calibrated primarily on high-mass satellite galaxies, and it is unclear how strongly these disruption thresholds depend on galaxy mass. Smith et al. (2016) focused on Milky Way-mass subhalos, and UniverseMachine only included orphan galaxies for which \(v_{\rm max}>80\,{\rm km\,s^{-1}}\) during the snapshot prior to disruption (Behroozi et al., 2019). _A priori_, one would expect that galaxies with more self-gravity from stars would require more subhalo mass loss prior to disruption. If true, this would mean that subhalo finders capable of tracking the subhalos near the peak of the \(m_{\star}/m\) relation would be even more capable of tracking the subhalos that host low-mass galaxies, all other galaxy properties being held equal. However, other galaxy properties -- such as stellar radius -- also impact the ease of disruption, so this requires additional study.

### Error rates

Beyond surviving to the masses needed to resolve galaxy disruption, a subhalo finder/merger tree must not attach a subhalo branch to incorrect density peaks and must continue to track the same subhalo through time. As shown in Fig. 2, some RCT subhalos experience an error during their final snapshot where incorrect positions and masses are assigned to the subhalo. To estimate the frequency of these errors, we follow Appendix C and compare the locations of the subhalos found with RCT and with Symfind against one another and with the locations of the 32 most-bound "core" particles from the snapshot of first infall. If the spheres formed by the half-mass radii of the two subhalos do not intersect _and_ the Symfind subhalo has more core particles within its half-mass radius than RCT does, we consider RCT to have encountered an error. All Symfind subhalos that fail the converse check are considered to be disrupted anyway, so a similar test cannot be performed in reverse.
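Schematically, this error criterion can be written as follows; this is a minimal sketch of the description above (not the released analysis code), assuming all positions share a coordinate frame and units:

```python
import numpy as np

def rct_snapshot_is_error(pos_rct, rhalf_rct, pos_sym, rhalf_sym, core_pos):
    """Flag an RCT subhalo snapshot as an error (Section 6.2 criterion).

    The RCT subhalo is flagged if the spheres defined by the two finders'
    half-mass radii do not intersect AND the Symfind subhalo encloses more
    of the 32 most-bound infall 'core' particles within its half-mass radius.

    pos_rct, pos_sym     : (3,) centers reported by each finder
    rhalf_rct, rhalf_sym : half-mass radii
    core_pos             : (32, 3) current positions of the infall core particles
    """
    disjoint = np.linalg.norm(pos_rct - pos_sym) > rhalf_rct + rhalf_sym
    n_rct = np.sum(np.linalg.norm(core_pos - pos_rct, axis=1) < rhalf_rct)
    n_sym = np.sum(np.linalg.norm(core_pos - pos_sym, axis=1) < rhalf_sym)
    return bool(disjoint and n_sym > n_rct)
```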
Regardless of this, RCT subhalos rarely outsurvive Symfind subhalos, making it difficult to obtain a statistical sample (see Section 4.2), and it is rare that Symfind subhalos are errors relative to RCT subhalos (see Appendix C). The fraction of surviving, \(z=0\) RCT subhalos that encounter errors as a function of distance from their hosts' centers is shown in the left panel of Fig. 12. The same is shown in the right panel for subhalos that have disrupted before the end of the simulation. In the right panel, the final distance between the subhalo and its host is shown on the \(x\)-axis instead of the current-day distance. Averaged across the entire subhalo sample, errors are rare at any specific snapshot, staying at or below the 5% level for all \(z=0\) subhalos within \(R_{\rm vir}\). The error rate becomes increasingly relevant for subhalos at \(r/R_{\rm vir}\lesssim 1/20\). The error rate for disrupted subhalos is much higher: \(\approx 25\%\)-\(30\%\) of all subhalos experience an error in their final snapshots. _Most_ subhalos that disrupt at small radii (\(r/R_{\rm vir}\lesssim 1/10\)) experience such an error, even at high resolutions. Counter-intuitively, as subhalo resolution increases, the RCT error rate also increases. There are two potential explanations for this. First, as seen in Fig. 4 and Fig. 11, RCT disruption thresholds slightly increase with increasing resolution, meaning that it is possible that RCT performs slightly better at lower resolutions than it does at higher resolutions. Second, with our methodology, errors in RCT can only be caught if Symfind subhalos outsurvive their RCT counterparts. As shown in Fig. 4, Symfind can track high-resolution subhalos to much lower masses than low-resolution subhalos, meaning that errors will be more easily caught in this regime. As seen in Fig. 2 and Fig. 16, \(m_{\rm peak}\) can be achieved during this class of error, meaning that a subhalo population selected at a fixed \(m_{\rm peak}\) will actually be a mixture of the target population and smaller subhalos that have erroneously scattered into the bin. This is not too large of an effect, though. As discussed in Section 2.1, throughout this paper we use \(m_{\rm peak}\) measured only prior to first infall. Including the entire RCT branch up to the point of disruption would only have increased the size of our sample by 5%. More importantly, the existence of these errors is highly problematic for orphan models. The vast majority of these models create orphans at the last moment that the input halo catalog can track the subhalo (e.g., Moster et al., 2013; Behroozi et al., 2013; see Pujol et al., 2017 for a wider review). This means that 20% to 30% of all orphan galaxies are initialized with incorrect positions, velocities, and masses, leading to similar errors in the inferred properties of galaxy populations. Nearly the entire low-radius orphan population will be generated with incorrect properties.

### Is Symfind "good enough?"

For the purposes of finding subhalos that are likely to still host satellite galaxies, yes. Fig. 11 shows that according to current galaxy disruption models, improving subhalo finding techniques would be unlikely to improve estimates of the abundance of resolved subhalos that are likely to still host satellite galaxies.
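As a rough sketch of such a selection (ours, for illustration only): the Smith et al. (2016) model is reduced to a hard threshold, and the form of Eq. 14, which is given earlier in the text, is represented by a user-supplied callable:

```python
def galaxy_survives(m_over_mpeak, f0=0.116):
    """Approximate the Smith et al. (2016) model by treating the galaxy as
    disrupted once m/m_peak < f0 (f0 = 0.0418 for compact satellites,
    0.116 for extended ones)."""
    return m_over_mpeak >= f0

def useful_abundance_target(m_over_mpeak, n_peak, mu_lim_mass):
    """A subhalo is a useful target for abundance studies if its galaxy
    plausibly survives and its mass loss rate is still resolved.
    `mu_lim_mass` is a placeholder for Eq. 14 and must be supplied."""
    return galaxy_survives(m_over_mpeak) and m_over_mpeak >= mu_lim_mass(n_peak)
```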
At low particle counts, there are some galaxy-hosting subhalos with low \(m/m_{\rm peak}\) that our method cannot find (12% and 18% of \(n_{\rm peak}=300\) subhalos would be lost from our catalogs before meeting the more conservative of the Smith et al. (2016) and Behroozi et al. (2019) lower limits, respectively). However, all such subhalos are so low-resolution that their mass loss rates and \(v_{\rm max}\) values are not resolved, meaning that they are not reliable analysis targets for abundance studies, regardless of whether they can be found. At high particle counts, some subhalos with very low \(m/m_{\rm peak}\) are resolved but cannot be found with our method. However, the \(m/m_{\rm peak}\) for these subhalos is so low that current models of galaxy disruption predict that the galaxies inside these subhalos should have either disrupted or lost a substantial amount of mass, meaning that they would either not contribute to observed satellite galaxy abundances or contribute very weakly (see point 2 below). In Fig. 9 and Fig. 10, we showed that our method finds increasingly large subhalo abundances at a fixed \(m_{\rm peak}/M_{\rm vir}\) and increasingly concentrated subhalo radial number density distributions as the resolution is increased, respectively. These increases occurred across all subhalo masses. There is no conflict between these Figures and our conclusion in this Section. These Figures include all subhalos regardless of \(m/m_{\rm peak}\), including subhalos that are unlikely to still host galaxies. Our testing shows that instituting cuts on \(m/m_{\rm peak}\) that mimic the galaxy disruption models shown in Fig. 11 results in population statistics that are converged above \(n_{\rm peak}\gtrsim 10^{3.5}\), but we defer quantitative analysis of this point until future work (see point 2 below).

Figure 12: The fraction of disrupted Rockstar subhalos that experienced an error where they were associated with an incorrect density peak. All the Symphony suites have been combined to improve number statistics. The left panel shows the error rate at \(z=0\) as a function of radius among _surviving_ subhalos, and the right panel shows the error rate among _disrupted_ subhalos as a function of the final radius of the subhalo. This type of error is shown in orange in Fig. 2. These errors can impact the \(m_{\rm peak}\) value of a subhalo and can frustrate attempts to create “orphan” subhalos using the subhalo’s last intact snapshot. A sizable fraction of all disrupted subhalos have experienced such an error. Errors are more common in disrupted subhalos than in surviving subhalos and become more likely at smaller radii. The majority of subhalos that disrupt at very small radii are errors.

There are several caveats to this conclusion that we outline below.

1. This analysis has been performed on \(\Lambda\)CDM subhalos and not on subhalos run in hydrodynamic simulations or in alternative cosmologies. As discussed in Section 1, both conditions can lead to decreased central densities that can in turn lead to true disruption for low-mass subhalos for certain models. In this case, the condition for being "good enough" for satellite galaxy observations transforms from being able to follow all resolved, galaxy-hosting subhalos to being able to follow all resolved, galaxy-hosting, and undisrupted subhalos.
This would likely relax the requirements for subhalo finders unless the changes in internal density also relaxed resolution requirements by, e.g., increasing the typical relaxation timescale. It is also possible that Symfind, specifically, could encounter trouble in cosmologies or hydrodynamics implementations that cause physical diffusion of highly bound particles to large radii. As such, domain-specific testing should be done before applying our method to non-\(\Lambda\)CDM simulations.

2. Although it is common to discuss galaxy "disruption" as a discrete event, galaxy mass loss is continuous (Smith et al., 2016), and it may be possible that some asymptotic stellar remnant exists even after substantial halo mass loss. This is why we have not made any predictions for the properties of satellite galaxy populations in this paper and defer such analysis to future work. That said, the disruption approximation is not a poor one and is sufficient for the purposes of this paper: galaxies generally maintain their mass until some critical point after which mass loss is rapid (Smith et al., 2016). Once this rapid mass loss begins, the disrupting galaxy is pushed into successively smaller stellar mass regimes. The steepness of the stellar mass function means that the fraction of galaxies at a fixed stellar mass that are heavily disrupted remnants of high-mass subhalos should rapidly decrease with decreasing \(m_{*}/m_{*,\rm infall}\).

3. Some observational probes of subhalos are sensitive to present-day subhalo masses and not the stellar masses of their satellite galaxies. _A priori_ arguments suggest that these probes should have more lenient requirements for subhalo finders: at a fixed subhalo mass, the steepness of the subhalo \(m_{\rm peak}\) function means that most subhalos will have large \(m/m_{\rm peak}\) ratios and will thus be more easily tracked. Resolution requirements should also be more lenient and more easily applied, requiring only that the target population of subhalos instantaneously exceeds Eq. 12 or Eq. 14. That said, gravitational probes of mass can be very sensitive to the mass definitions adopted by halo finders (e.g., Mao et al., 2018), a problem that we do not address in this paper. Dedicated analysis is required to resolve this issue quantitatively.

4. As discussed in Section 6.1.2, there are a number of limitations in the galaxy disruption models used here. In particular, these limits were derived for high-mass galaxies, which likely survive for longer than low-mass galaxies due to significant self-gravity and adiabatic contraction. Addressing these limitations could potentially raise or lower the typical masses at which rapid galaxy mass loss begins.

### Recommendations for subhalo finders and orphan models

We make no claim that our method is the only existing subhalo finder that is "good enough" to model satellite galaxy abundances. It is certainly possible that some existing methods are able to track subhalos for similarly long periods of time, and we identify several candidate methods that seem likely to be able to do this in Section 7. However, we believe it is important for this to be demonstrated explicitly and for subhalo finders to quantify their reliability limits. This will allow users to make strict, quantitative statements about how large of an impact halo finder biases have on their analysis.
We encourage interested authors to do all of the following:

* Using the methodology described in Section 4.2, construct disruption threshold curves and compare them against numerical limits and galaxy disruption limits, as in Fig. 11.
* Recreate Fig. 6 and confirm that the \(v_{\rm max}/v_{\rm max,infall}\) versus \(m/m_{\rm infall}\) relation converges correctly at high particle counts. A subhalo-finding method that incorrectly loses mass in the outskirts of subhalos could lead to overly optimistic disruption thresholds and would appear as curves that are higher than the expectations of idealized models in this plot. While unfortunate, such an error would not be fatal: one could alternatively characterize survival curves in terms of \(v_{\rm max}/v_{\rm peak}\).
* Recreate Fig. 8 and confirm that \(m/m_{\rm peak}\) is converged out to similar or larger mass ratios. A subhalo finder that experiences sudden drops in mass during its final snapshot could lead to overly optimistic disruption thresholds. Such a scenario would manifest as earlier non-convergence in \(m/m_{\rm peak}\). This would not be a fatal problem either: it would merely raise the amplitude of the dashed curve in Fig. 11 for such a finder.
* Include a technique that can identify and remove subhalos that were erroneously connected to a branch. We find that using the locations of particles that were highly bound at infall works well, although we also experimented with identifying sharp jumps in masses and positions, and this seems to also work fairly well as long as it is carefully calibrated against manually inspected property evolution tracks.

One major benefit of these tests compared to, say, comparing the subhalo mass functions measured by two separate subhalo finders is that the correct behavior of low-mass subhalos in \(\Lambda\)CDM is known for all four tests: small subhalos that have not yet sunk to the centers of their hosts should survive indefinitely (e.g., van den Bosch et al., 2018; Errani and Navarro, 2021), \(v_{\rm max}\) should evolve in a predictable manner with decreasing \(m\) (Green and van den Bosch, 2019), mass loss rates should agree with higher-resolution mass loss rates that are known to be converged, and highly bound particles should stay localized within the subhalo. Some of this suggested analysis is non-trivial, so code implementing the first three tests will be made available to readers upon request. The fourth test must be implemented directly within one's subhalo finder. Two simpler, but less conclusive, tests would be to construct resolution-dependent subhalo \(m_{\rm peak}\) functions (Section 5.1) and radial distributions\({}^{2}\) (Section 5.2) selected by \(m_{\rm peak}\) and not by present-day subhalo mass. Subhalo finders that do not falsely converge should find that the amplitude of the subhalo \(m_{\rm peak}\) function increases with increasing resolution and that radial distributions selected by \(m_{\rm peak}\) become more concentrated with increasing resolution.

Footnote 2: As discussed in Section 5.2, the mass-dependent radial distribution of subhalos can be a good but not perfect proxy for the resolution-dependent radial distribution of subhalos at low subhalo masses.

Our analysis also has some implications for the creators and users of "orphan" models.

* Some subhalo finder+merger tree combinations can return subhalo branches that outsurvive the regime where subhalos are numerically reliable.
In these cases, it is possible that a well-calibrated orphan model may result in more accurate predictions than a subhalo finder (although this would need to be explicitly demonstrated). One should consider generating orphan galaxies at this numerical limit rather than at disruption.
* A subhalo's final snapshot -- the time when many orphan models generate their mock subhalo tracers -- is the time when its properties are characterized the least accurately. Orphan methods would benefit from a methodology that can identify errors in the underlying subhalo catalogs and from generating orphan galaxies significantly before the final snapshot.

Finally, because of the existence of post-infall errors that can cause subhalo masses to spike, we recommend that all authors who are interested in the peak mass of subhalos -- not just authors of subhalo finders -- define that peak mass using only the mass accretion history of subhalos prior to their first infall onto a host, as we do in this paper. Doing so requires special post-processing of merger trees to identify times when central halos are erroneously and temporarily classified as subhalos (see Section A.1).

## 7 Comparison with other subhalo finding methods

### Single-epoch subhalo finders

The most common subhalo-finding strategy is to identify objects in each snapshot of a simulation individually and then run a separate merger tree code that connects subhalos across snapshots, usually relying heavily on the IDs of particles contained by halos in each snapshot. This is the general structure used by RCT and dozens of other widely used tools (e.g., most methods listed in Knebe et al., 2011; Srisawat et al., 2013). A detailed comparison of all these methods is beyond the scope of this paper, so we focus on a broader overview. A number of papers have run a range of subhalo finders/merger trees on the same simulations and compared summary statistics to make inferences about the performance of subhalo finders (Knebe et al., 2011; Onions et al., 2012, 2013; Srisawat et al., 2013; Avila et al., 2014; Behroozi et al., 2014; Elahi et al., 2019). All the papers listed above included comparisons with RCT (typically one of the two components of the pipeline, Rockstar or consistent-trees independently), and one might initially hope that this would allow for a sort of transitive comparison with our RCT tests in this paper. But we do not believe such an extension of our results is warranted. Knebe et al. (2011) found that most halo finders predict qualitatively similar mass/velocity functions, large-scale correlation functions, and bulk velocity PDFs for central halos, but did not compare subhalo statistics. Onions et al. (2012) found that most of their tested subhalo finders recovered modestly fewer subhalos at a fixed instantaneous \(m_{200\rm t}\) than Rockstar, and that Rockstar recovered slightly more subhalos at small radii than most other subhalo finders. Onions et al. (2013) focused on subhalo spin, a quantity that we do not consider in this paper. Srisawat et al. (2013) found that many statistical summaries of the properties of merger trees are similar between consistent-trees and many other popular merger tree codes (with the notable exception of HBT, see Section 7.2). They also identify several merger tree features that reduce the chances of errors when present. All these features are present in consistent-trees. Avila et al.
(2014) found that consistent-trees is able to correct for some errors that other merger trees miss, but generally concluded that subhalo finders had more influence on the evolution of subhalo properties than tree codes did and that the different tree codes broadly agreed with one another. Behroozi et al. (2014) focused on major mergers, finding that many subhalo finders struggle to correctly follow major mergers, and ultimately recommend methods based on particle tracking that are careful to remove objects that have truly sunk to the centers of their hosts (e.g., condition 2 in Appendix A.6). Elahi et al. (2019) showed that at a fixed instantaneous subhalo mass, Rockstar finds roughly the same number of subhalos as several other subhalo finders and finds roughly the same radial distribution of subhalos at a fixed instantaneous mass. This literature is often summarized as meaning that RCT performs at least as well as most other subhalo finders. If this reading is correct, it would in turn imply that the majority of subhalo finders suffer from problems similar to RCT. In particular, RCT's radial distribution of subhalos has converged to a false solution (Fig. 10) as has its SHMF at different radii (Fig. 18), so one might expect that agreement between RCT and other halo finders on quantities related to the radial profile would imply the same for these finders. While such a pessimistic scenario is certainly possible, the tests described above are actually not very sensitive to subhalo disruption issues. Almost all of these tests compare subhalo populations _at a fixed instantaneous mass_ (at least in part to try to isolate issues in merger trees from issues in subhalo finders) and such selections can be misleading. As discussed in Section 5.2, it is well known that subhalo profiles at fixed instantaneous masses converge quickly to a profile that is much less concentrated than the dark matter halo itself, while subhalos selected by \(m_{\rm peak}\) have more complicated convergence behavior and result in radial profiles more in line with satellite galaxy observations (e.g., Nagai and Kravtsov, 2005). Many authors have also focused on the radial distributions of all the subhalos output by their subhalo finders, meaning that the signal is dominated by the most poorly resolved and error-prone objects. To further underscore the insensitivity of these tests, many of the aforementioned papers compared Subfind and Rockstar, but as far as we can tell, the fact that these two subhalo finders have wildly different convergence behavior (Fig. 10 and Fig. 19) was not reported. In general, the fact that subhalos can lose such a large amount of mass prior to the disruption of their satellite galaxy means that the properties of instantaneous-mass samples are often not predictive of galaxy behavior. To further complicate matters, while some of the aforementioned studies performed idealized tests where the difference in mass definitions between subhalo finders could be compared, tests performed on full cosmological simulations generally did not match subhalos to one another across subhalo finders. This meant that a single shared mass definition could not be used across all the subhalo finders, leading to some additional ambiguity in interpreting many of these tests: when a sample of subhalos is created across multiple subhalo finders, is the same set of subhalos being selected? 
When considering two different SHMFs, does the difference arise because one finder is identifying more subhalos than the other, or does it arise because they are finding the same subhalos but assigning different masses to them? This issue is avoided in our study: particle-tracking and RCT subhalos are explicitly matched against one another, and \(m_{\rm peak}\) is defined using only the pre-infall mass accretion histories computed by RCT. The exact same samples of subhalos are compared in all of our tests and \(m_{\rm peak}\) is the same value regardless of the subhalo finder. We recommend that authors interested in investigating disruption-related issues in other subhalo finders follow the testing procedure that we lay out in Section 6.4, which focuses on survival thresholds and multi-resolution tests and is very sensitive to subhalo disruption issues. This procedure also has the benefit that one knows what the theoretical predictions of \(\Lambda\)CDM are prior to running the tests. One does not know _a priori_ how \(\Lambda\)CDM predicts that the subhalo mass function or radial distribution of subhalos should behave, meaning that even if one detects a difference between subhalo finders using these statistics, it is hard to interpret.

#### 7.1.1 The Caterpillar variant of Rockstar

As part of the Caterpillar zoom-in simulation suite, Griffen et al. (2016) created a variant of Rockstar after noticing that a large number of visually apparent, high-resolution, low-radius subhalos were missing from Rockstar catalogs. This variant changes the criteria used internally by Rockstar to be less aggressive about removing partially unbound phase-space overdensities. This change leads to the creation of a large number of spurious subhalos, so Griffen et al. (2016) also changed Rockstar's single unbinding pass to a full iterative unbinding and introduced a criterion that removed subhalos very close to the centers of their hosts. This change modestly increases the number of subhalos at a fixed instantaneous mass (Griffen et al., 2016) and leads to profile resolution dependence that is qualitatively similar to Fig. 10 (Manwadkar and Kravtsov, 2022). So it is very likely that this variant has better convergence behavior than standard Rockstar, and that Rockstar's false convergence could be related to its procedure for deciding which phase-space overdensities are truly subhalos. It would be interesting to quantify the behavior of this variant with the tests described in Section 6.4.

### Other particle-tracking methods

Tracking particles to find bound substructure at later times is an old concept and was used in a number of early subhalo studies (e.g., Tormen et al., 1998). Two recent, cutting-edge software packages within this tradition are HBT+ (Han et al., 2018), a successor to HBT (Han et al., 2012; see also Springel et al., 2021), and the particle-tracking model recently added to the post-processing framework for Sparta (Diemer, 2017; Diemer et al., 2023). All methods of this class have the same basic structure: some set of membership rules are used to associate particles with subhalos prior to infall, and then a second method is used to find the centers and masses of those subhalos later in time using all the tracked particles. HBT+ uses iterative unbinding to find subhalo masses and calculates subhalo properties from the bound remnant.
Excessively unbound particles are removed with each snapshot, but a buffer of modestly unbound particles is kept within the tracking set to prevent runaway stochastic mass loss and to allow for some mass flow into and out of the subhalo. A variant of HBT+'s algorithm has also been implemented in Gadget-4 as Subfind-HBT (Springel et al., 2021), although it is safest to treat this as a separate algorithm, as some design decisions are different from HBT and HBT+. Sparta uses a similar technique but does not explicitly calculate boundedness. Instead, it measures masses from raw overdensity radii and keeps a buffer of higher-radius particles. It is unlikely that an unbound subhalo would be able to maintain a well-defined overdensity radius, and the tests in Diemer et al. (2023) suggest that for the majority of subhalos, bound masses are fairly close to these overdensity masses. More work needs to be done to understand how these masses compare for different subhalo populations (e.g., highly stripped, low-radius subhalos with large tidal tails). If a comparison with the methodology outlined in Section 6.4 is to be done, it is also important to better understand the connection between these overdensity masses and bound masses close to the moment of disruption. Our method deviates a bit from this pattern. We continue tracking all particles regardless of previous status, identify substructure within the tracked particles using a traditional subhalo finder (in our case, Subfind), and use particles that were highly bound at infall to select the correct density peak to use as the subhalo's center. Once that center is located, halo properties are calculated independently of the intermediate subhalo finder. There are some small trade-offs associated with this choice (our method can recover from temporary errors easily, but also needs to be more careful about accidentally identifying matter that has been stripped and tidally mixed as the true subhalo center), but we don't believe there is any _a priori_ reason to expect that any one of these particle-tracking algorithms significantly outperforms the others based purely on the high-level description of these algorithm choices. We consider it plausible that these other particle-tracking-based subhalo finders would also pass the criteria outlined in Section 6, but this has not yet been explicitly demonstrated (see Section 6.4). We list some assorted notes on the performance of these methods below:

* The original HBT code has been compared against other subhalo finders in a number of previous subhalo finder comparison projects (see overview in Section 7.1). As with RCT, a tempting way to read these tests is that HBT performs comparably to or better than most other halo finders, but, as discussed in Section 7.1, the tests performed in many of these papers are not very sensitive to the types of disruption issues discussed in this paper and missed substantial differences between Subfind and Rockstar, meaning that it is difficult to draw strong conclusions about the disruption behavior of HBT from these tests alone. Interestingly, Srisawat et al. (2013) found that positions of HBT subhalos are substantially more consistent with their previous trajectories than essentially all other tested merger tree codes. It is hard to interpret this result as anything other than a success for HBT. The improved HBT+ was written after many of these tests were performed, but Han et al.
(2018) compared HBT+ against the Subfind halo finder and the DTree merger tree code (Jiang et al., 2014). They found that HBT+ recovered somewhat more (\(\approx 10\%\)) low-mass subhalos and 200% to 300% more high-mass (\(m/M_{\rm vir}\gtrsim 0.2\)) subhalos at a fixed _instantaneous_ mass than Subfind. HBT+ also leads to more concentrated subhalo profiles at fixed instantaneous mass. However, this work also finds that Subfind+DTree has a large _unevolved_ subhalo mass function (i.e., the \(m_{\rm peak}\) mass function combining both surviving and disrupting subhalos). Han et al. (2018) convincingly argue that these differences are due to a combination of Subfind missing mass in the outskirts of large subhalos (van den Bosch and Jiang, 2016) and the fragmentation of Subfind+DTree branches.
* Tests in Springel et al. (2021) seem to indicate that Subfind-HBT and Subfind subhalos tend to disrupt at qualitatively similar times (Fig. 36 in Springel et al., 2021), that the two methods lead to similar subhalo mass functions (Fig. 38 in Springel et al., 2021), and that the typical low-to-moderate resolution subhalo found by RCT has a longer main branch than a similar subhalo found by Subfind-HBT (Fig. 40 in Springel et al., 2021) while the opposite is true at higher resolutions. This may be evidence that Subfind-HBT performs similarly to traditional single-epoch subhalo finders, or it may indicate that this set of tests is not very sensitive to subhalo finder performance issues.
* Diemer et al. (2023) demonstrated that the subhalos found by Sparta substantially outsurvive RCT subhalos, and showed that the difference between RCT subhalo mass functions and Sparta subhalo mass functions is qualitatively similar to the difference between RCT and our method (Fig. 9).

One problem that all particle-tracking methods suffer from is detecting when a subhalo has sunk into the center of its host. Even after completing such a merger, the subhalo can remain fully self-bound until the end of the simulation. In our method, any subhalo that is within its own half-mass radius of the host's center and that never leaves the center of the host after this point is considered disrupted. Sparta counts objects that have been within 0.05 \(R_{\rm 200m,host}\) of the host's center for more than half a dynamical time as merged, with a correction factor to account for Poisson noise (Diemer et al., 2023). HBT+ removes subhalos that are too close to the center of the host in phase space, with the distance metric set by the position and velocity dispersion of the 20 most-bound host particles (Han et al., 2018). This metric was motivated by the fact that merged subhalos appear as a distinct population in this space. All these methods are effective at removing the vast majority of merged subhalos, although it is possible that studies focused on very low-radius subhalos or on tracking the end states of major mergers could benefit from revisiting them carefully and performing detailed comparisons.

### "Orphan" subhalo models

Even older than particle-tracking methods or single-epoch halo finders are "orphan" subhalo models (e.g., White et al., 1987). In these models, a subhalo is followed until the point of disruption (or accretion), and then one of several methods is used to estimate the location of the subhalo's center thereafter (see review in Pujol et al., 2017).
Most commonly, the most-bound particle is used (e.g., White et al., 1987; Wang et al., 2006), but some authors use many highly bound particles instead (e.g., Carlberg and Dubinski, 1991; Korytov et al., 2023). Some models eschew particles altogether and estimate the location of a test particle orbiting through an idealized potential (e.g., Somerville et al., 2008; Behroozi et al., 2019). There are two defining differences between a catalog of orphan subhalos and a catalog of subhalos found by a true subhalo finder. The first is that the properties of orphan subhalos are inferred from some post-processing model rather than being directly taken from a simulation. The positions and velocities of the orphan subhalo are the most obvious example of this, but the modeling can extend to other properties as well. For example, many authors have measured subhalo mass loss rates for some set of subhalos that a traditional halo finder is able to identify and then use those mass loss rates to compute \(m(t)\) (e.g., Jiang and van den Bosch, 2016; Behroozi et al., 2019; Sultan et al., 2021). The second is that orphan models must have a method for disrupting the subhalo, either by removing it once it gets too close to the center of its host (e.g., White et al., 1987), by monitoring the internal state of the tracked particles (Korytov et al., 2023), or by removing the orphan once the modeled \(m(t)\) crosses some limiting value (Somerville et al., 2008; Reddick et al., 2013; Behroozi et al., 2019). The introduction of an additional modeling layer to infer subhalo properties allows for additional free parameters and researcher degrees of freedom, introduces systematic uncertainty into predictions, and opens up the question of whether any incorrect predictions of the orphan model are a failure on the part of \(\Lambda\)CDM, the galaxy formation model used, or the decisions that went into tracking orphan subhalos. This is concerning because most orphan models predict that \(\approx 20\%\) to 30% of all low-mass satellites need to be represented by orphans (Pujol et al., 2017). There is some controversy over whether orphan modeling is needed to reproduce the radial distributions of satellite galaxies, as we discuss in Section 5.2. Given how sensitive radial distributions are to the disruption physics of satellites, the fact that it is unclear whether orphan modeling is even needed for this class of observation further underscores how much systematic uncertainty there is in this regime. One great strength of orphan models is that they are able to make predictions for subhalo populations that the simulation is not able to resolve. In contrast, a subhalo finder can only report the results of the simulation it is run on, resolution errors and all. Nonetheless, many common approaches used for orphan models are problematic. Many orphan models generate orphans based on the properties of the corresponding subhalo at the snapshot that subhalo was last found in the catalog, but as shown in Fig. 12, a large fraction of RCT subhalos are actually errors during their final snapshot, especially at small radii. This would lead to incorrect orphan subhalo properties. Many orphan models use the most-bound particle to model the location of the subhalo center, but as discussed in Appendix C, in a non-trivial fraction of subhalos the most-bound particle can numerically diffuse out of the center before the subhalo actually disrupts.
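To make the multi-particle alternative concrete, the sketch below selects the particles that were most bound at infall and summarizes their present-day positions. Note that we substitute a simple median for the density-peak estimate used by Korytov et al. (2023), which is described next, so this is an illustration rather than any published implementation:

```python
import numpy as np

def orphan_tracer_position(pos_now, infall_binding_energy, n_core=20):
    """Estimate an orphan tracer position from the n_core particles that
    were most bound at infall (n_core = 20 follows Korytov et al. 2023).
    The median is merely a robust stand-in for a density estimate.

    pos_now               : (N, 3) current positions of the subhalo's
                            infall particles
    infall_binding_energy : (N,) binding energies at infall
                            (more negative = more bound)
    """
    core = np.argsort(infall_binding_energy)[:n_core]
    return np.median(pos_now[core], axis=0)
```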
Population-averaged mass loss models do a good job of reproducing average \(m\) functions, but do a poor job of matching the individual properties of subhalos due to the dependence of subhalo mass loss rates on orbital parameters, the scatter in individual mass loss rates, and possible evolution in subhalo mass loss rates over time (see discussion in Jiang and van den Bosch, 2016 as well as Fig. 7 of Sultan et al., 2021). This becomes concerning when comparing sub-populations of halos with different orbital parameter distributions, as one does, e.g., when constructing a number density profile. In light of these limitations, the orphan modeling techniques described in Heitmann et al. (2021); Sultan et al. (2021); Korytov et al. (2023) are very promising. These techniques use the \(N\) most-bound "core" particles (20 in Korytov et al., 2023) at the moment of infall and a density estimate to identify a central particle, which is then taken as the orphan's position and velocity. Starting the orphan tracking at the moment of infall avoids the issue of subhalo finder errors during the final snapshot, and using many particles lessens the impact of numerical diffusion. Additionally, this method scales well to \(\gtrsim 10^{12}\) particle simulations, a feat that few sophisticated subhalo identification/modeling techniques can boast. However, the use of multiple particles encounters its own difficulties, particularly if properties of those particles are used to estimate disruption times (e.g., the effective radius of the core particles, as explored in Korytov et al. 2023). As illustrated in Fig. 5, the most-bound particles used for core-tracking are usually unconverged before infall even starts (the halo in this Figure has \(8\times 10^{4}\) particles at infall, and the 800 particles closest to the center of the subhalo have passed their individual relaxation times; this is typical for a subhalo at this resolution level). Thus, tying the evolution of subhalo properties to the evolution of core particles -- beyond just using them to estimate a location in phase space -- may not avoid the convergence issues that orphan models are intended to avoid. Our method also makes use of the same unconverged core particles, but only to select between density peaks; in this method, it is fine if they numerically diffuse outwards, as long as at least some of them stay within the subhalo.

## 8 Conclusions

One of the major outstanding questions in the theory of \(\Lambda\)CDM is how this model truly predicts subhalos should behave. These small orbiting objects are complex enough that effective modeling usually involves simulations, but are deceptively difficult to simulate and to identify once the simulation has been run. The uncertainty in how durable subhalos are places limitations on a number of cutting-edge cosmological probes, particularly those that depend on large-scale clustering of galaxies and the properties of satellite galaxies. In this paper, we present a new method for identifying simulated subhalos (Section 3), with a particular focus on robustness tests. This method follows a subhalo's particles prior to infall and then uses an existing subhalo finder (currently Subfind; Springel et al. 2001a) along with the subhalo's most bound particles at infall to identify the subhalo within this tracked set of particles.
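Schematically, the peak-selection step of this pipeline reduces to a vote among the infall core particles; the following is a minimal sketch of the description above (our own illustration, with array conventions of our choosing, not the released code):

```python
import numpy as np

def select_tracked_subhalo(group_ids, particle_ids, core_ids):
    """Pick which density peak in the tracked particle set is the subhalo.

    group_ids    : (N,) Subfind-style group index per tracked particle
                   (-1 for particles bound to no group)
    particle_ids : (N,) particle IDs aligned with group_ids
    core_ids     : IDs of the particles that were most bound at infall

    Returns the group index hosting the most core particles, or None if
    no core particle remains in any group (a disruption candidate).
    """
    in_core = np.isin(particle_ids, core_ids)
    candidates = group_ids[in_core & (group_ids >= 0)]
    if candidates.size == 0:
        return None
    values, counts = np.unique(candidates, return_counts=True)
    return int(values[np.argmax(counts)])
```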
* We perform extensive testing on the reliability limits of our method and on the popular Rockstar halo finder and find that our method substantially outperforms Rockstar, tracking subhalos to orders-of-magnitude lower masses (Section 4.2; Fig. 4).
* Our method recovers 15% to 45% more subhalos at a fixed \(m_{\rm peak}\) than Rockstar (Section 5.1; Fig. 9) in our fiducial simulations, particularly subhalos close to the centers of their hosts (35% to 120% more subhalos within \(R_{\rm vir}/4\)), and the number of recovered subhalos increases substantially with increasing resolution.
* Subhalos found with our method are so long-lived that -- when combined with reasonable galaxy-halo models -- they do not require the use of "orphan" subhalos to follow subhalos until the point of likely galaxy disruption once a relatively modest resolution limit of \(n_{\rm peak}>4\times 10^{3}\) is met (Section 6; Fig. 11). This limit is set by the resolution of the simulation, not by failures of the subhalo finder.
* We also outline a concrete set of steps that can be used to determine whether other subhalo finders meet the same criteria (Section 6.4). We discuss caveats associated with this conclusion in Section 6.3.

The longer survival times of our subhalos allow us to quantitatively test some predictions of idealized simulations into the deep mass loss regime (Sections 4.3, 4.4, and 4.5).

* We find that these idealized simulations and the numerical limits derived from them do a good job of describing the \(v_{\rm max}\) values of disrupting subhalos (Section 4.4; Fig. 6).
* These numerical criteria are quite restrictive and put significant limits on the ability of simulations to study the velocity structure of subhalos (\(n_{\rm peak}>3\times 10^{4}\) for \(v_{\rm max}\) to be resolved until the point of likely galaxy disruption; Fig. 11).
* Subhalo masses -- and therefore abundances -- are resolved to much lower resolutions (\(n_{\rm peak}>4\times 10^{3}\) for \(m(t)\) to be resolved until the point of likely galaxy disruption; Section 4.5; Figs. 8 and 11).
* As part of this testing, we demonstrate that simple techniques for estimating stacked mass loss rates suffer from survivor bias and that this bias is strong enough to qualitatively change the shapes of mass loss curves (Fig. 7; see also Han et al. 2016). We lay out statistical techniques that avoid this problem (Section 4.5).

We also demonstrate that the Rockstar halo finder falsely converges with increasing resolution (Sections 4.2 and 5.2; Figs. 4 and 10), giving the impression of numerical reliability to populations of subhalos that are not reliably tracked. We perform some limited analysis/discussion on the performance of other popular subhalo finding tools (Section 7 and Appendix H) but defer making strong statements to future work. We also identify several errors in Rockstar+consistent-trees merger trees and outline procedures for addressing them in Appendix A.1. The simplest of these corrections is that we advocate for measuring \(m_{\rm peak}\) and \(v_{\rm peak}\) for subhalos using only pre-infall masses because many subhalos experience large spikes in mass during disruption.

## Acknowledgments

We would like to thank Bryne Hadnott for contributions to the website and tutorials associated with this paper and Tara Dacunha, Althea Hudson, and Azana Queen for helping to debug our analysis libraries.
We also thank Benedikt Diemer, Keith Mansfield, Andrey Kravtsov, Peter Behroozi, Jiaxin Han, Frank van den Bosch, Jelle Aalbers, Tom Abel, Andrew Hearin, and Viraj Manwadkar for useful discussions and comments which helped improve the quality of this work.
2310.05807
Sharing Information Between Machine Tools to Improve Surface Finish Forecasting
At present, most surface-quality prediction methods can only perform single-task prediction which results in under-utilised datasets, repetitive work and increased experimental costs. To counter this, the authors propose a Bayesian hierarchical model to predict surface-roughness measurements for a turning machining process. The hierarchical model is compared to multiple independent Bayesian linear regression models to showcase the benefits of partial pooling in a machining setting with respect to prediction accuracy and uncertainty quantification.
Daniel R. Clarkson, Lawrence A. Bull, Tina A. Dardeno, Chandula T. Wickramarachchi, Elizabeth J. Cross, Timothy J. Rogers, Keith Worden, Nikolaos Dervilis, Aidan J. Hughes
2023-10-09T15:44:35Z
http://arxiv.org/abs/2310.05807v1
# Sharing Information Between Machine Tools to Improve Surface Finish Forecasting

###### Abstract

At present, most surface-quality prediction methods can only perform single-task prediction [1], which results in under-utilised datasets, repetitive work and increased experimental costs. To counter this, the authors propose a Bayesian hierarchical model to predict surface-roughness measurements for a turning machining process. The hierarchical model is compared to multiple independent Bayesian linear regression models to showcase the benefits of partial pooling in a machining setting with respect to prediction accuracy and uncertainty quantification.

**Keywords: Population-based Structural Health Monitoring; Bayesian Modelling; Hierarchical Bayes**

## 1 Introduction

One of the most important measures of workpiece quality in a machining process is the surface finish, and one of the most important factors in surface finish is surface roughness. Surface roughness is a widely used index of machined product quality [2] and a high-quality surface finish can significantly improve the fatigue strength, corrosion resistance and creep life of machined parts [3]. The surface finish is highly important for the functional properties of parts; it has a large contribution to surface friction and the susceptibility of the part to contact wear. Additionally, the literature suggests that surface roughness is a good indicator of tool-wear condition, which means accurate estimates of the surface roughness can help inform a tool condition-monitoring system [4, 5, 6]. Being able to predict surface roughness during the machining process is very valuable for manufacturers. These predictions can help inform tool replacement or inspection decision processes and reduce downtime and wasted material. The literature showcases a wide range of modelling systems for machining features. Hidden Markov Models (HMMs) have been a popular choice [1, 7, 8]; however, these models assume that observed values are statistically independent of the previous sequence, which may not be the case in machining. HMMs can lose the information between adjacent feature data, which can sometimes deteriorate the recognition accuracy [9]. Other researchers have used neural networks (NNs) to good effect [10, 11, 12]. However, NNs generally require large datasets for training, which can be expensive to collect. Support Vector Machines (SVMs) are also popular but not without problems [13]. SVMs require the selection of a kernel function and of several parameters by trial and error; this can be tricky and leave the user with sub-optimal parameters [14]. Many other models have also been used, such as fuzzy logic [15], artificial neural network-based fuzzy inference systems [16] and chain-conditional random-field models [9]. Because of the natural degradation of tools during the machining process, and its effect on surface finish, tools must be replaced regularly. While each tool may be produced to the same specification and use the same materials, there will be variation within populations of tools. The variation in the physical properties of the tools is associated with variation in the behaviour between the tools; this can be an issue for standard modelling techniques. However, this variation lends itself well to a hierarchical model, a class of models that can account for variations within a population while taking advantage of the statistical similarities between its members. An additional benefit of hierarchical models is their suitability to the online setting and to sparse datasets, which is particularly useful for tool condition monitoring, where researchers may need to make predictions as soon as the machining process has begun, with only a few data points from which to learn a model. Combining this with the usual benefits of Bayesian modelling (uncertainty quantification, prior information, etc.) gives rise to a potentially powerful monitoring system. Hierarchical models have seen limited use in machining. Bombinski et al. highlighted the usefulness of hierarchical models by implementing a hierarchical neural network-based monitoring system with signal-fusion methods [17]. Han et al. used a hierarchical structure to improve the implementation of HMMs for tool-wear estimation [18]. The hierarchical Dirichlet process-hidden Markov model showed greater accuracy when compared to conventional HMMs. Following the obvious advantages of hierarchical modelling for machining problems, the authors propose the use of a Bayesian hierarchical model. Specifically, a random intercepts and slopes model (also known as a mixed effects model) to predict the surface roughness during machining.

## 2 Contribution

Although Bayesian hierarchical models have seen success in other parts of engineering [19, 20, 21], the benefits of these models have not yet reached machining and tool health monitoring. In this paper, the authors propose a random intercepts and slopes model to show the modelling improvements of hierarchical models, specifically for sparse datasets in machining.

## 3 The Data

The dataset analysed in this paper is from the turning process shown in Figure 1. The workpiece is rotated around the _z-axis_ and the tool makes four passes along the workpiece. Each pass starts at point S and ends at point E. After four passes, the tool is inspected, and measurements are taken of the workpiece and tool. The four passes and measurements are repeated until tool failure. For full details of the experiment refer to Wickramarachchi [22].

Figure 1: Schematic showing the experimental set up used for data acquisition [22].

The data to be analysed in this paper consists of the workpiece surface roughness measurements from seven repeats of the experiment detailed above. After each experiment, the tool is replaced with a fresh tool. This data can be seen in Figure 2. The plots show arithmetic mean (\(R_{a}\)) surface roughness measurements against sliding distance. Sliding distance is how far the tool has travelled along the workpiece; it is effectively a measure of how long the tool has been machining for. \(R_{a}\) surface roughness measures the deviation of a surface from a theoretical centre line [23].

## 4 The Hierarchical Model

The explanation follows the description provided by Bull et al. [19]. Consider machining data, recorded from a population of \(K\) similar tools. The population data can be denoted, \[\left\{\mathbf{x}_{k},\mathbf{y}_{k}\right\}_{k=1}^{K}=\left\{\left\{x_{ik},y_{ik}\right\}_{i=1}^{N_{k}}\right\}_{k=1}^{K} \tag{1}\] where \(\mathbf{y}_{k}\) is a target response vector for inputs \(\mathbf{x}_{k}\) and \(\left\{x_{ik},y_{ik}\right\}\) are the \(i^{\text{th}}\) pair of observations in group \(k\). There are \(N_{k}\) observations in each group and thus \(\sum_{k=1}^{K}N_{k}\) observations in total. The aim is to learn a set of \(K\) predictors related to the regression task.
An additional benefit of hierarchical models is their suitability to the online setting and to sparse datasets, which is particularly useful for tool condition monitoring, where researchers may need to make predictions as soon as the machining process has begun, with only a few data points from which to learn a model. Combining this with the usual benefits of Bayesian modelling (uncertainty quantification, prior information, etc.) gives rise to a potentially powerful monitoring system. Hierarchical models have seen limited use in machining. Bombinski et al. highlighted the usefulness of hierarchical models by implementing a hierarchical neural network-based monitoring system with signal fusion methods [17]. Han et al. used a hierarchical structure to improve the implementation of HMMs for tool-wear estimation [18]. The hierarchical Dirichlet process-hidden Markov model showed greater accuracy when compared to conventional HMMs. Following the obvious advantages of hierarchical modelling for machining problems, the authors propose the use of a Bayesian hierarchical model; specifically, a random intercepts and slopes model (also known as a mixed-effects model) to predict the surface roughness during machining.

## 2 Contribution

Although Bayesian hierarchical models have seen success in other parts of engineering [19, 20, 21], the benefits of these models have not yet reached machining and tool health monitoring. In this paper, the authors propose a random intercepts and slopes model to show the modelling improvements of hierarchical models, specifically for sparse datasets in machining.

## 3 The Data

The dataset analysed in this paper is from the turning process shown in Figure 1. The workpiece is rotated around the _z-axis_ and the tool makes four passes along the workpiece. Each pass starts at point S and ends at point E. After four passes, the tool is inspected, and measurements are taken of the workpiece and tool. The four passes and measurements are repeated until tool failure. For full details of the experiment, refer to Wickramarachchi [22]. The data to be analysed in this paper consists of the workpiece surface roughness measurements from seven repeats of the experiment detailed above. After each experiment, the tool is replaced with a fresh tool. This data can be seen in Figure 2. The plots show arithmetic-mean (\(R_{a}\)) surface roughness measurements against sliding distance. Sliding distance is how far the tool has travelled along the workpiece; it is effectively a measure of how long the tool has been machining. \(R_{a}\) surface roughness measures the deviation of a surface from a theoretical centre line [23].

Figure 1: Schematic showing the experimental set-up used for data acquisition [22].

## 4 The Hierarchical Model

The explanation follows the description provided by Bull et al. [19]. Consider machining data, recorded from a population of \(K\) similar tools. The population data can be denoted,

\[\left\{\mathbf{x}_{k},\mathbf{y}_{k}\right\}_{k=1}^{K}=\left\{\left\{x_{ik},y_{ik}\right\}_{i=1}^{N_{k}}\right\}_{k=1}^{K} \tag{1}\]

where \(\mathbf{y}_{k}\) is a target response vector for inputs \(\mathbf{x}_{k}\) and \(\left\{x_{ik},y_{ik}\right\}\) are the \(i^{\text{th}}\) pair of observations in group \(k\). There are \(N_{k}\) observations in each group and thus \(\sum_{k=1}^{K}N_{k}\) observations in total. The aim is to learn a set of \(K\) predictors related to the regression task.
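As a concrete illustration of this grouped layout, the population data of Equation (1) might be stored as one array of inputs and one array of responses per tool. The sketch below uses placeholder values rather than the experimental measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Grouped data layout of Equation (1), sketched for K = 7 tools:
# one vector of sliding distances x_k and one of roughness readings y_k per tool.
K = 7
x = [np.sort(rng.uniform(0.0, 2000.0, size=20)) for _ in range(K)]
y = [0.0002 * xk + 0.3 + 0.02 * rng.standard_cauchy(len(xk)) for xk in x]

N_k = [len(xk) for xk in x]   # observations per group
N = sum(N_k)                  # total number of observations
```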
This paper focuses on regression, where the tasks satisfy,

\[\left\{y_{ik}=f_{k}\left(x_{ik}\right)\right\}_{k=1}^{K} \tag{2}\]

and the output \(y_{ik}\) is determined by evaluating one of \(K\) latent functions. For the case of linear regression, the mapping is denoted by,

\[f_{k}\left(x_{ik}\right)=m_{k}\,x_{ik}+c_{k}+\epsilon_{k} \tag{3}\]

where \(m_{k}\) is the tool-specific gradient of the roughness, \(c_{k}\) is the tool-specific intercept and \(\epsilon_{k}\) is the tool-specific noise, which is assumed to be distributed \(\epsilon_{k}\sim\text{Cauchy}\left(0,\gamma_{k}\right)\). Together they form the set of \(K\) predictors,

\[\left\{c_{k},m_{k},\epsilon_{k}\right\}_{k=1}^{K} \tag{4}\]

In this paper, comparisons will be made between a _hierarchical_ model, where the mapping \(f_{k}\) _is_ assumed to be correlated between tools, and an _independent_ model, where correlation is not assumed. For the independent model, the slope and intercept of each tool are learned independently. A graphical model depicting the independent model can be seen in Figure 3. Since the mapping \(f_{k}\) is assumed to be correlated between tools for the hierarchical model, the model should be improved by learning the parameters in a joint inference over the whole population. The hierarchical model learns a global distribution over the tools and assumes the gradient and intercept associated with each tool is a sample from this global distribution.

In practice, while a tool that has been in use for some extended amount of time may have rich historical data, newly-replaced tools will have limited training data. In this setting, learning separate, independent models for each group will lead to unreliable predictions. On the other hand, a single regression of all the data (complete pooling) will result in poor generalisation. Instead, hierarchical models can be used to learn separate models for each group while encouraging tool-specific parameters to be correlated (partial pooling).

Figure 2: Experimental surface roughness measurements.

The likelihood for the model is,

\[\left\{y_{ik}\right\}_{k=1}^{K}\sim\text{Cauchy}\left(m_{k}\,x_{ik}+c_{k},\,\gamma_{k}\right) \tag{5}\]

Following the Bayesian methodology, one can set prior distributions over the slope and intercept for the groups,

\[\left\{m_{k}\right\}_{k=1}^{K}\sim\text{Cauchy}\left(\mu_{m},\sigma_{m}\right) \tag{6}\]

\[\mu_{m}\sim\text{Cauchy}\left(\bar{\mu}_{m},s_{\mu_{m}}\right) \tag{7}\]

\[\sigma_{m}\sim\text{HalfCauchy}\left(0,s_{\sigma_{m}}\right) \tag{8}\]

where the slopes are Cauchy distributed, with location \(\mu_{m}\) and scale \(\sigma_{m}\). Equation (7) shows the prior expectation of the slopes is also Cauchy distributed, with location \(\bar{\mu}_{m}=0\) and scale \(s_{\mu_{m}}=1\). Equation (8) shows that the prior deviation of the slope is HalfCauchy distributed with scale parameter \(s_{\sigma_{m}}=1\). Similarly, for the intercepts,

\[\left\{c_{k}\right\}_{k=1}^{K}\sim\text{Cauchy}\left(\mu_{c},\sigma_{c}\right) \tag{9}\]

\[\mu_{c}\sim\text{Cauchy}\left(\bar{\mu}_{c},s_{\mu_{c}}\right) \tag{10}\]

\[\sigma_{c}\sim\text{HalfCauchy}\left(0,s_{\sigma_{c}}\right) \tag{11}\]

where the intercepts are Cauchy distributed, with location \(\mu_{c}\) and scale \(\sigma_{c}\). Equation (10) shows the prior expectation of the intercepts is also Cauchy distributed, with location \(\bar{\mu}_{c}=0\) and scale \(s_{\mu_{c}}=1\). Equation (11) shows that the prior deviation of the intercept is HalfCauchy distributed with scale parameter \(s_{\sigma_{c}}=1\).
\[\left\{\gamma_{k}\right\}_{k=1}^{K}\sim\text{HalfCauchy}\left(\gamma\right) \tag{12}\]

\[\gamma\sim\text{HalfCauchy}\left(0,s_{\gamma}\right) \tag{13}\]

Finally, the scale of the noise on \(y_{k}\), \(\gamma_{k}\), is HalfCauchy distributed. Equation (13) shows that the prior over \(\gamma_{k}\), namely \(\gamma\), is itself HalfCauchy distributed with scale \(s_{\gamma}\). In this paper, \(s_{\gamma}=1\). As recommended by Gelman et al. [24], Cauchy distributions are used. Their heavy tails bring a robustness against outliers to the model, as well as efficiency during the inference and sampling process. A graphical model depicting the hierarchical structure can be seen in Figure 4.

Figure 3: A graphical model representing the independent model.

## 5 Results

The hierarchical and independent models will now be compared. For Tools 1-5, every measurement is given to the models for training. However, for Tools 6 and 7, the training set is restricted to only the first 5 roughness measurements. This emulates a scenario in which Tools 1-5 are no longer in use, having completed the full tool life cycle, while Tools 6 and 7 are new tools with limited measurements. What one would expect to see is the independent model making accurate predictions for Tools 1-5, where there is sufficient data for the model to learn; but for Tools 6 and 7, the independent model is expected to struggle. Since this model computes independent regressions for each tool, for tools with a smaller number of measurements the model will be uncertain in its predictions due to the lack of data. In contrast, the hierarchical model will be able to use what it has learnt from the previous tools to reduce uncertainty in predictions. An additional benefit of the hierarchical model is its in-built robustness: because the model has seen Tools 1-5 before, and remembers this data via updates to the global distributions of the gradient and intercept, it is resistant to new, unrepresentative data. Another way to visualise this is that outliers are more diluted, since the previous observations count as extra data for this new tool.

The predictions of the independent model when trained on the roughness measurements can be seen in Figure 5. The red crosses are the data the model has been trained on, while the red circles are the measurements the model cannot see; the green line is the predicted mean and the grey area is two standard deviations from the mean. Under data-rich conditions, Tools 1-5, the independent model fits well to the data. The model seems to fit a good estimate of the mean roughness, but the standard deviations are large in some instances. For example, for Tool 4 it can be seen that two of the data points are far from the mean and cause large uncertainty in the model. Large uncertainties could cause problems in industry. For example, a simple tool condition-monitoring system may have some acceptable surface roughness, and when the roughness measurements surpass this value the tool must be replaced. In this scenario, having uncertainties this large could cause false triggers of tool replacements; this will waste time and money for manufacturers. For Tools 6 and 7, with so few data points, the standard deviation of the independent model suffers. The model over-estimates the variance relative to the available data and does a poor job of predicting the hidden measurements. There are such large levels of uncertainty that the mean predictions are effectively meaningless.
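For concreteness, the partial-pooling model of Equations (5)-(13) can be written in a few lines of probabilistic-programming code. The paper does not state which inference software was used, so the following is only a minimal sketch in PyMC, with placeholder data standing in for the seven-tool measurements and variable names chosen for illustration:

```python
import numpy as np
import pymc as pm

# Placeholder grouped data: per-tool sliding distances x and roughness readings y.
rng = np.random.default_rng(0)
K = 7
x = [np.sort(rng.uniform(0.0, 2000.0, size=20)) for _ in range(K)]
y = [0.0002 * xk + 0.3 + 0.02 * rng.standard_cauchy(len(xk)) for xk in x]

# Flatten into single arrays; `group` maps each observation to its tool index k.
x_all = np.concatenate(x)
y_all = np.concatenate(y)
group = np.concatenate([np.full(len(xk), k) for k, xk in enumerate(x)])

with pm.Model() as hierarchical:
    # Global (population-level) priors, Equations (7)-(8), (10)-(11) and (13)
    mu_m = pm.Cauchy("mu_m", alpha=0.0, beta=1.0)
    sigma_m = pm.HalfCauchy("sigma_m", beta=1.0)
    mu_c = pm.Cauchy("mu_c", alpha=0.0, beta=1.0)
    sigma_c = pm.HalfCauchy("sigma_c", beta=1.0)
    gamma = pm.HalfCauchy("gamma", beta=1.0)

    # Tool-specific slopes, intercepts and noise scales, Equations (6), (9), (12)
    m = pm.Cauchy("m", alpha=mu_m, beta=sigma_m, shape=K)
    c = pm.Cauchy("c", alpha=mu_c, beta=sigma_c, shape=K)
    gamma_k = pm.HalfCauchy("gamma_k", beta=gamma, shape=K)

    # Cauchy likelihood, Equation (5)
    pm.Cauchy("y_obs", alpha=m[group] * x_all + c[group],
              beta=gamma_k[group], observed=y_all)

    trace = pm.sample(2000, tune=2000)
```

Fitting the independent model amounts to dropping the shared global priors, so that each tool's slope, intercept and noise scale are given fixed, unpooled priors instead.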
Compare this situation to the hierarchical model, which can be seen in Figure 6. Again, for Tools 1-5 the model fits the data well and the predicted means look sensible. Where the models differ is in the standard deviations. The hierarchical model is more confident in its predictions, as can be seen by the smaller grey area. This model is less likely to trigger an unnecessary replacement of the tool, increasing the efficiency of the manufacturing process. The differences between the models are highlighted best in the data-poor scenarios, Tools 6-7. As expected, the hierarchical model performs better. The mean predictions do a good job of predicting the hidden measurements and the uncertainty in these predictions is smaller. The hierarchical model can draw on the statistical strength of the measurements from other tools, which means that it is less prone to over-estimating the variance in the data-sparse setting.

Figure 4: A graphical model representing the hierarchical model with partial pooling.

Figure 5: The output from the independent model. The y-axes of all figures in this paper have been limited between 0-1.4 \(\mu m\) for ease of comparison.

Figure 6: The output from the hierarchical model.

## 6 Conclusion

In this paper, a Bayesian hierarchical model was used to predict workpiece surface roughness as a function of sliding distance. A clear benefit of the model was shown by comparisons to a set of independent linear regressions. The improved predictions and uncertainty quantification are useful when making predictions for a new tool without a rich history of data. The use of Bayesian hierarchical models can help improve decision-making processes and reduce costs involved in machining. Looking forward, the hierarchical model will be used to compute risk in an active learning framework and inform a decision-making process for inspecting the machining tool.

## Acknowledgements

The authors would like to gratefully acknowledge the support of the UK Engineering and Physical Sciences Research Council (EPSRC) via grant reference EP/W005816/1. For the purposes of open access, the authors have applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising.
2306.16623
The Segment Anything Model (SAM) for Remote Sensing Applications: From Zero to One Shot
Segmentation is an essential step for remote sensing image processing. This study aims to advance the application of the Segment Anything Model (SAM), an innovative image segmentation model by Meta AI, in the field of remote sensing image analysis. SAM is known for its exceptional generalization capabilities and zero-shot learning, making it a promising approach to processing aerial and orbital images from diverse geographical contexts. Our exploration involved testing SAM across multi-scale datasets using various input prompts, such as bounding boxes, individual points, and text descriptors. To enhance the model's performance, we implemented a novel automated technique that combines a text-prompt-derived general example with one-shot training. This adjustment resulted in an improvement in accuracy, underscoring SAM's potential for deployment in remote sensing imagery and reducing the need for manual annotation. Despite the limitations encountered with lower spatial resolution images, SAM exhibits promising adaptability to remote sensing data analysis. We recommend future research to enhance the model's proficiency through integration with supplementary fine-tuning techniques and other networks. Furthermore, we provide the open-source code of our modifications on online repositories, encouraging further and broader adaptations of SAM to the remote sensing domain.
Lucas Prado Osco, Qiusheng Wu, Eduardo Lopes de Lemos, Wesley Nunes Gonçalves, Ana Paula Marques Ramos, Jonathan Li, José Marcato Junior
2023-06-29T01:49:33Z
http://arxiv.org/abs/2306.16623v2
# The Segment Anything Model (SAM) for Remote Sensing Applications: From Zero to One Shot

###### Abstract

Segmentation is an essential step for remote sensing image processing. This study aims to advance the application of the Segment Anything Model (SAM), an innovative image segmentation model by Meta AI, in the field of remote sensing image analysis. SAM is known for its exceptional generalization capabilities and zero-shot learning, making it a promising approach to processing aerial and orbital images from diverse geographical contexts. Our exploration involved testing SAM across multi-scale datasets using various input prompts, such as bounding boxes, individual points, and text descriptors. To enhance the model's performance, we implemented a novel automated technique that combines a text-prompt-derived general example with one-shot training. This adjustment resulted in an improvement in accuracy, underscoring SAM's potential for deployment in remote sensing imagery and reducing the need for manual annotation. Despite the limitations encountered with lower spatial resolution images, SAM exhibits promising adaptability to remote sensing data analysis. We recommend future research to enhance the model's proficiency through integration with supplementary fine-tuning techniques and other networks. Furthermore, we provide the open-source code of our modifications on online repositories, encouraging further and broader adaptations of SAM to the remote sensing domain.

## 1 Introduction

Remote sensing image analysis is an essential tool in various applications, including environmental monitoring, disaster management, urban planning, and many others [12, 57]. Accurately segmenting surface objects within these images is crucial for extracting valuable information, enhancing the efficiency of the processing task [20]. Despite advancements in segmentation techniques, including the advances of artificial intelligence (AI) with deep learning-based methods [4, 2], a key challenge remains: effective segmentation of images with minimal human input. The Segment Anything Model (SAM), developed by Meta AI, is a groundbreaking approach to image segmentation that has demonstrated exceptional generalization capabilities across a diverse range of image datasets, requiring no additional training for unfamiliar objects [19]. This "zero-shot" approach enables it to make accurate predictions with little to no training data. However, its potential may be limited when facing specific domain conditions. To overcome this limitation, SAM can be modified by a "one-shot" learning approach [61], a novel aspect that we aim to explore with remote sensing imagery in this paper. Zero-shot learning pertains to a model's capability to accurately process and act upon input data that it hasn't explicitly encountered during training [1, 48]. This ability is derived from gaining a generalized understanding of the data rather than specific instances. Zero-shot learning systems can recognize objects or understand tasks they have never seen before based on learning underlying concepts or relationships. In contrast, one-shot learning denotes a model's ability to interpret and make accurate inferences from just a single example of a new class of data [61]. By feeding SAM with a single example (or "shot") of this new class, we can potentially enhance its performance, as it has more specific information to work with. The most well-known one-shot methods for SAM are named PerSAM and PerSAM-F, both being training-free personalization approaches [61].
Given a single image with a reference mask, PerSAM localizes the target concept using a location prior, an initial estimate of where the object of interest is likely to be. The second method is PerSAM-F, a variant of PerSAM that uses one-shot fine-tuning to reduce mask ambiguity. In this case, the entire SAM is frozen (i.e., its parameters are not updated during the fine-tuning process), and two learnable weights are introduced for multi-scale masks. This one-shot fine-tuning variant requires training only 2 parameters and can be done in as little as 10 seconds to enhance performance [61]. Both are capable of leveraging SAM and improving it, making it a flexible model. Another important aspect relates to SAM's ability to perform segmentation with minimal input, requiring only a bounding box or a single point as a reference, or even a text prompt as guidance [19]. This capability has the potential to reduce human labor during the annotation process. Many existing techniques require intensive annotations for each new object of interest, resulting in significant computational overheads and potential delays in time-sensitive applications. SAM, on the other hand, presents an opportunity to alleviate this time-intensive task. Since SAM's release in April 2023, the geospatial community has shown strong interest in adapting SAM for remote sensing image segmentation. However, a more in-depth investigation is needed. In this context, we present a first-of-its-kind evaluation of SAM, focusing on both its zero- and one-shot learning performance in segmenting remote sensing imagery. We adapted SAM to our data structure, benchmarked it against multiple datasets, and assessed its potential to segment multi-scale images. We then extended SAM's zero-shot characteristic to the one-shot approach and demonstrated that, with only one example of a new class of data, SAM's segmentation performance can be significantly improved. Our proposal's innovation lies in the one-shot technique, which involves using a prompt-text-based segmentation as a training sample (instead of a human-labeled sample), making it a fully automated process for refining SAM on remote sensing imagery. In this study, we also discuss the implications, limitations, and potential future directions of our findings. Understanding the effectiveness of SAM in this domain is of paramount importance for novel development. In short, with its promise of zero-shot and one-shot learning, SAM has the potential to transform current practices by significantly reducing the time and resources needed for training and annotating data, thereby enabling a quicker, more efficient approach.

## 2 Remote Sensing Image Segmentation: A Brief Summary

The remote sensing field has experienced impressive advancements in recent years, largely driven by improvements in aerial and orbital platform technologies, sensor capabilities, and computational resources [50, 39]. One of the most critical tasks in remote sensing is image segmentation, which involves partitioning images into multiple segments or regions, each corresponding, ideally, to a specific object or class [20]. In this section, we focus on providing comprehensive information regarding segmentation processes, deep learning-based methods and techniques, and explain the overall importance of conducting zero-to-one-shot learning. Traditional image segmentation techniques in remote sensing often rely on pixel-based or object-based approaches.
Pixel-based methods, such as clustering and thresholding, involve grouping pixels with similar characteristics, while object-based techniques focus on segmenting images based on properties of larger regions or objects [15, 51]. However, these methods can be limited in their ability to handle the complexity, variability, and high spatial resolution of modern remote sensing imagery [20]. Segmentation involves various methods designed to separate or group portions of an image based on certain criteria. Each method has a unique approach and application. Interactive Segmentation, for example, is a process that relies on user input to enhance the accuracy of the segmentation [21, 54]. The user may guide the algorithm by identifying foreground and background markers. Super Pixelization is another method that groups pixels in an image into larger units, or "superpixels," based on shared characteristics such as color or texture [11]. This grouping can simplify the image data while preserving the structural essence of the objects. Object Proposal Generation goes a step further by suggesting potential object bounding boxes or regions within an image [15, 47]. These proposals serve as a guide for a more advanced model to identify and classify the actual objects' pixels. Foreground Segmentation, also known as background subtraction, is a technique primarily used to separate the main subjects or objects of interest (the foreground) from the backdrop (the background) in an image sequence [63, 31]. Semantic Segmentation is a more comprehensive approach where every pixel in an image is assigned to a specific class, effectively grouping regions of the image based on semantic interest [59, 17]. Instance Segmentation builds upon semantic segmentation by not only classifying each pixel but also identifying distinct objects of the same class and recognizing the individual objects as separate entities or instances [10, 30]. Panoptic Segmentation merges the concepts of semantic and instance segmentation, assigning every pixel in the image a class label and a unique instance identifier if it belongs to a specific class [16, 7]. This method aims to give a complete understanding of the image by identifying and classifying every detail. All these methods have been vastly studied, but one that surged in recent years, with the advancements of Natural Language Models (NLM), is known as "Promptable Segmentation," an approach that aims to create a versatile model capable of adapting to a variety of segmentation tasks [34, 62]. This is achieved through "prompt engineering," where prompts are carefully designed to guide the model toward generating the desired output [29, 48]. This concept is a departure from traditional multi-task systems where a single model is trained to perform a fixed set of tasks. The unique feature of a promptable segmentation model is its ability to take on new tasks at the time of inference, serving as a component in a larger system [48, 34]. For instance, to perform instance segmentation, a promptable segmentation model could be combined with an existing object detector. A state-of-the-art open-set object detector is Grounding DINO (GroundDINO) [26]. This system is an enhancement of the Transformer-based object detector called DINO [60], enriched with grounded pre-training to be able to identify a broader range of objects based on human inputs, such as category names or referring expressions. 
An open-set detector is meant to identify and classify objects that weren't part of the model's training data, as opposed to a closed-set detector that can only recognize objects it has been specifically trained on. The information from Grounding DINO can potentially be used to guide the segmentation process, providing class labels or object boundaries that the segmentation model could use. Most NLMs incorporate deep-learning-based networks and, with the rise of these methods, more advanced segmentation techniques have been developed for remote sensing applications. Convolutional Neural Networks (CNNs), which emerged as a popular choice due to their ability to capture local and hierarchical patterns in images [39, 33], have widely been used as the backbone for these tasks. CNNs consist of multiple convolutional layers that apply filters to learn increasingly complex features, making them well-suited for segmenting objects in many remote sensing images [58, 4]. However, they are computationally intensive and may require substantial training data. Generative Adversarial Networks (GANs) have also shown potential in the field of image processing. GANs consist of a generator and a discriminator network, where the generator tries to create synthetic data to fool the discriminator, and the discriminator aims to distinguish between real and synthetic data [18]. For image segmentation, GANs can be used to generate realistic images and their corresponding segmentations, which can supplement the training data and improve the robustness of the segmentation models [5]. Vision Transformers (ViTs), on the other hand, are a recent development in deep learning that has shown promise in image segmentation tasks. Unlike CNNs, which rely on convolutional operations, ViTs employ self-attention mechanisms that allow them to model long-range dependencies and global context within images [23, 25]. This approach has demonstrated competitive performance in various computer vision tasks, including remote sensing image segmentation [2], and it is currently outperforming CNNs on remote sensing data [13]. Another key concept in deep learning that can enhance the segmentation process is transfer learning. With transfer learning, a model pre-trained on a large dataset is adapted for a different but related task [49]. For instance, a CNN or ViT trained on a large-scale image recognition dataset like ImageNet can be fine-tuned for the task of remote sensing image segmentation [37, 40]. The advantage of transfer learning is that it can leverage the knowledge gained from the initial task to improve performance on the new task, especially when the amount of labeled data for the new task is limited. One of the main challenges in applying deep learning techniques to remote sensing image segmentation is the need for large volumes of labeled ground-truth data [6]. Acquiring and annotating this data can be time-consuming and labor-intensive, requiring expert knowledge and resources that may not be readily available. Furthermore, the variability and complexity of remote sensing imagery can make the labeling process even more difficult [3]. In light of these issues, it becomes imperative to develop robust, efficient, and accessible solutions that can aid in the processing and analysis of such data. A model that can perform segmentation with zero domain-specific information may offer an important advantage for this process.
In this sense, the Segment Anything Model (SAM) has emerged as a potential tool for assisting in the segmentation process of remote sensing images. SAM's design enables it to generalize to new image distributions and tasks effectively, and it has already resulted in numerous applications [19]. By using minimal human input, such as bounding boxes, reference points, or simply text-based prompts, SAM can perform segmentation tasks without requiring extensive ground-truth data. This capability can reduce the labor-intensive process of manual annotation and be incorporated into the image processing pipeline, potentially accelerating its workflow. SAM has been trained on an enormous dataset of 11 million images and 1.1 billion masks, and it boasts impressive zero-shot performance on a variety of segmentation tasks [19]. Foundation models such as this, which have shown promising advancements in NLP and, more recently, in computer vision, can carry out zero-shot learning. This means they can learn from new datasets and perform new tasks, often by utilizing 'prompting' techniques, even with little to no previous exposure to these tasks. In the field of NLP, "foundation models" refer to large-scale models that are pre-trained on a vast amount of data and are then fine-tuned for specific tasks. These models serve as the "foundation" for various applications [32, 34, 55]. SAM's ability to generalize across a wide range of objects and images makes it particularly appealing for remote sensing applications. Since it can be retrained with a single example of each new class at the time of prediction [61], it demonstrates the model's high flexibility and adaptability. The implementation of a one-shot approach may assist in designing models that learn useful information from a small number of examples, in contrast to traditional models which usually require large amounts of data to generalize effectively. This could potentially revolutionize how we process remote-sensing imagery. As such, by investigating SAM's innovative technology, we may be able to provide more interactive and adaptable remote sensing systems.

## 3 Materials and Methods

In this section, we describe how we evaluated the performance of the Segment Anything Model (SAM), for both the zero- and one-shot approaches, in the context of remote sensing imagery. The method implemented in this study is summarized in Figure 1. The data for this study consisted of multiple aerial and satellite datasets. These datasets were selected to ensure diverse scenarios and a better range of objects and landscapes. This helped in assessing the robustness of SAM and its adaptability to different situations and geographical regions. The study then investigated SAM's segmentation capacity under different prompting conditions. First, we used the general segmentation approach, in which SAM was tasked to segment different objects and landscapes without any guided prompts. This provided a baseline for SAM's inherent zero-shot segmentation capabilities. For this approach, we only evaluated its visual quality, since it segments every possible object in the image, instead of just the ones with ground-truth labels. It is also not guided by any means, thus resulting in the segmentation of unknown classes and serving as just a traditional segmentation filter. In the second scenario, bounding boxes were provided.
These rectangular boxes, highlighting specific areas within the images, were used to restrict SAM's segmentation per object and to assess its proficiency in recognizing and segmenting them. Next, we conducted segmentation using points as prompts. In this setup, a series of specific points within the images were provided to guide SAM's process. This allowed us to test the precision capabilities of SAM. Finally, we experimented with the segmentation process using only textual descriptions as prompts. This was conducted with an implementation of SAM alongside GroundingDINO's method [26]. This permitted an evaluation of these models' capabilities to understand, interpret, and transform textual inputs into precise segmentation outputs. To measure SAM's adaptability and potential to deal with remote sensing imagery, we then performed a one-shot implementation. For each of the datasets, we included an example of the target class for SAM. For that, we adapted the model with a novel combination of the text-prompt approach and the one-shot learning method. Specifically, we selected the best possible example (highest logits) of the target object, using textual prompts to define the object for mask generation. This example was then presented to SAM as the sole representative of the class, effectively guiding its learning process. The rationale behind this combined approach was to leverage the context provided by the text prompts and the efficacy of the one-shot learning method, turning the enhancement of SAM into a fully-automated process.

### Description of the Datasets

We begin by separating our dataset into three categories related to the origin of the platform used for capturing the images: 1. Unmanned Aerial Vehicle (UAV); 2. Airborne; and 3. Satellite. Each of these categories provides unique advantages and challenges in terms of spatial resolution and coverage area dimension. In our study, we aim to evaluate the performance of SAM across these different sources to understand its applicability and limitations in diverse contexts. Their characteristics are summarized in Table 1. We also provide illustrated examples from these datasets in Figure 2, showing how the data is handled, as in bounding boxes and point prompts. The UAV category comprises data with the advantage of very high spatial resolution, returning images and targets with fine details. This makes them particularly suitable for local-scale studies and applications that require high-precision data. However, the coverage area of UAV datasets is limited compared to other data sources. The images comprised mostly single-class objects per dataset, so these problems were tackled in binary form. In the case of linear objects, specifically continuous plantation crop cover, we used multiple points spread across the target, contained within its center and extremes, to ensure that the model was capable of understanding it better. For more condensed targets, such as houses and trees, we used the centered position of the object as a point prompt. The second category is Airborne data, which includes data collected by manned aircraft. These datasets typically offer a good compromise between spatial resolution and coverage area. We processed these datasets with the same approach as the UAV images, since they also consisted of binary class problems. The total quantifiable size of these datasets surpasses 90 Gigabytes and comprises more than 10,000 images and image patches.
Part of the dataset, specifically the aerial one (UAV and Airborne), is currently being made public in the following link for others to use: Geomatics and Computer Vision/Datasets. These datasets cover different area sizes, and their corresponding ground-truth masks were generated and validated by different specialists in the field. The third category consists of Satellite data, which provides the widest coverage and is focused on multi-class problems. The spatial resolution of the satellite data used is generally lower than that of UAV and Airborne data. Furthermore, the quality of the images is more affected by atmospheric conditions, with illumination conditions differing from each other, thus providing additional challenges for the model. These datasets consist of publicly available images from the LoveDA dataset [53] and from the SkySat ESA archive [9], and present a multi-class segmentation problem. To facilitate SAM's evaluation, specifically with the guided prompts (bounding box, point, and text), we conducted a one-against-all approach, in which we separated the classes into individual classifications ("specified class" versus "background").

### Protocol for Promptable Image Segmentation

In this section, we explain how we adapted SAM to the remote sensing domain and how we conducted the promptable image segmentation with it. All of the implemented codes, specifically designed for this paper, were made publicly available in an under-construction educational repository [42]. Also, as part of our work, we are focusing on developing the "segment-geospatial" package [46], which implements features that will simplify the process of using SAM models for geospatial data analysis. This is a work in progress, but it is publicly available and offers a suite of tools for performing general segmentation on remote-sensing images using SAM. The goal is to enable users to engage with this technology with a minimum of coding experience.

Figure 1: Schematic representation of the step-by-step process undertaken in this study to evaluate the efficacy of SAM's approach in remote sensing image processing tasks.

Our geospatial analysis was conducted with the assistance of a custom tool, namely "SamGeo", which is a component of the original module. SAM possesses different models to be used, namely: ViT-H, ViT-L, and ViT-B [19]. These models have different computational requirements and are distinct in their underlying architecture. In this study, we used the ViT-H SAM model, which is the most advanced and complex model currently available, bringing most of the SAM capabilities to our tests. To perform the general prompt, we used the generate method of the SamGeo instance. This operation is simple enough, since it segments the entire image and stores it as an image mask file containing the segmentation masks. Each mask delineates the foreground of the image, with each distinct mask allocated a unique value. This allowed us to classify and segment different geospatial features. The result is a non-classified segmented image that can also be converted into a vector shape. As mentioned, we only evaluated this approach visually, since it was not possible to appropriately assign the segmented regions outside of our reference class.
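As an illustration, the general (prompt-free) segmentation described above comes down to a few calls to the segment-geospatial package. The following is a minimal sketch; the file names are placeholders, and helper names may differ between package versions, so treat it as illustrative rather than the exact code of the study:

```python
from samgeo import SamGeo

# Wrap the ViT-H SAM checkpoint in the SamGeo convenience class.
sam = SamGeo(
    model_type="vit_h",
    checkpoint="sam_vit_h_4b8939.pth",
    automatic=True,  # automatic mask generation: no prompts required
)

# Segment an entire GeoTIFF; each detected region is written to the output
# mask raster with a unique value.
sam.generate(source="orthomosaic.tif", output="masks.tif")

# The mask raster can then be converted to vector polygons; the helper name
# varies by package version (e.g., tiff_to_vector / tiff_to_gpkg).
sam.tiff_to_vector("masks.tif", "masks.gpkg")
```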
For the bounding box prompt, we used the SamGeo instance in conjunction with the objects' shapefile. This approach was used to extract bounding boxes from any multipart polygon geometry, returning a list of geometric boundaries for our image data based on its coordinates. To efficiently process these boundaries, we initialized its predictor instance. In this process, the image was segmented and passed through the predictor along with a designated model checkpoint. Once established, the predictor processed each clip box, creating the masks for the segmented regions. This process enabled each bounding box's contents to be individually examined as instance segmentation masks. These binary masks were then merged and saved as a single mosaic raster to create a comprehensive visual representation of the segmented regions. Although not focused on remote sensing data, the official implementation of this combination is known as Grounded-SAM [14].

The single-point feature prompt was implemented similarly to the bounding-box method. For that, we first defined functions to convert the geodata frame into a list of coordinates [x, y] instead of the previous [x1, y1, x2, y2] ones. We utilized SamGeo again for model prediction, but with the distinction of setting its automatic parameter to "False" and applying the predictor to individual coordinates instead of the bounding boxes. This approach was conducted by iterating through each of the coordinate pairs, predicting features for each instance, and saving the resulting mask into a unique file per point (also resulting in instance segmentation masks). After the mask files were generated, we proceeded to merge these masks into a single mosaic raster file, giving us a complete representation of all the segmented regions from the single-point feature prompt. The text-based prompt differs from the previous approaches, since it required additional steps to be implemented.

| # | Platform | Resolution | Area | Target | Ground Truth | Box | Point | Text | Reference |
|---|----------|------------|------|--------|--------------|-----|-------|------|-----------|
| 00 | UAV | 0.04 m | 70 ha | Tree | Yes | Yes | Centroid | Tree | |
| 01 | UAV | 0.04 m | 70 ha | House | Yes | Yes | Centroid | House | |
| 02 | UAV | 0.01 m | 4 ha | Plantation Crop | Yes | No | Multiple | Plantation | [58] |
| 03 | UAV | 0.04 m | 40 ha | Plantation Crop | Yes | No | Multiple | Plantation | |
| 04 | UAV | 0.09 m | 90 ha | Building | Yes | Yes | Centroid | Building | [10] |
| 05 | UAV | 0.09 m | 90 ha | Car | Yes | Yes | Centroid | Car | |
| 06 | Airborne | 0.20 m | 120 ha | Tree | Yes | Yes | Centroid | Tree | |
| 07 | Airborne | 0.20 m | 120 ha | Vehicle | Yes | Yes | Centroid | Vehicle | |
| 08 | Airborne | 0.45 m | 190 ha | Lake | Yes | Yes | Centroid | Lake | |
| 09 | Satellite | 0.30 m | − | Building; Road; Water; Barren; Forest; Farm | Yes | Yes | Multiple | Building; Road; Water; Barren; Forest; Farm | LoveDA [53] |
| 10 | Satellite | 0.50 m | 480 ha | Building; Street; Water; Vehicle; Tree | Yes | Yes | Yes | Building; Street; Water; Vehicle; Tree | SkySat ESA [9] |

Table 1: Overview of the distinct attributes and specifications of the datasets employed in this study.

Figure 2: Collection of image samples utilized in our research. The top row features UAV-based imagery with bounding boxes and point labels, serving as prompts for SAM. The middle row displays airborne-captured data representing larger regions, with both points and rectangular polygon shapes provided as model inputs. The bottom row reveals satellite imagery, again with bounding boxes and points as prompt inputs, offering a trade-off between lower spatial resolution and wider area coverage.
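Returning to the box and point prompts described earlier in this section, both ultimately rest on SAM's promptable predictor. A minimal sketch using Meta's segment_anything API directly is shown below; the file name and pixel coordinates are placeholders, whereas in the study these prompts were derived from the reference shapefiles:

```python
import numpy as np
from PIL import Image
from segment_anything import SamPredictor, sam_model_registry

# Load the ViT-H checkpoint and embed one RGB tile.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
image = np.array(Image.open("tile.png").convert("RGB"))
predictor.set_image(image)

# Box prompt: one [x1, y1, x2, y2] rectangle (pixel coordinates) per object.
box = np.array([120, 80, 260, 210])
mask_box, _, _ = predictor.predict(box=box, multimask_output=False)

# Point prompt: one [x, y] foreground point per object (label 1 = foreground).
points = np.array([[190, 145]])
mask_point, _, _ = predictor.predict(
    point_coords=points,
    point_labels=np.array([1]),
    multimask_output=False,
)
```

The per-object masks produced this way can then be merged into a single mosaic raster, as done in the study.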
The text-based method blends GroundingDINO's [26] capabilities for zero-shot visual grounding with SAM's object segmentation functionality, using the respective pre-trained models. For instance, once Grounding DINO has detected and classified an object, SAM is used to isolate that object from the rest. As a result, we were able to identify and segment objects within our images based on a specified textual prompt. This procedure opens up a new paradigm in geospatial analysis, harnessing the power of state-of-the-art models to extract image features based only on natural language input. Since remote sensing imagery often contains multiple instances of the same object (e.g., several 'houses', 'cars', 'trees', etc.), we added a looping procedure. The loop identifies the object with the highest probability in the image (i.e., the highest logits), creates a mask for it, removes it from the image, and then restarts the process to identify the next most probable object. This process continues until the model reaches a defined minimum threshold for both detection and text-prompt association. The precise balancing of these thresholds is crucial, with implications for the accuracy of the model, so we manually set them for each dataset based on trial and error. The segmented individual images and their corresponding boxes are subsequently generated, while the resulting segmentation mask is saved and mosaicked.

### One-Shot Text-Based Approach

The one-shot training was conducted following the recommendations in [61], using its PerSAM and PerSAM-F approaches. We began by adapting the text-based approach, which combines the GroundDINO [26] and SAM [19] methods, to return the overall most probable object belonging to the class specified in its description. By doing so, we enable a fully-automated process of identifying a single object and including it in a personalized pipeline for training SAM with this novel knowledge. In this section, we describe the procedures involved in the one-shot training mechanism, as well as the methods used for object identification and personalization. To summarize the whole process, we illustrate the main phases in Figure 3.

Following Figure 3, the initial phase of the one-shot training mechanism involves the model using the object with the highest logits calculated from the text-based segmentation. This ensures the object is accurately recognized and selected for further steps. It is at this point that the text-based approach comes into play, capitalizing on GroundDINO's capabilities for zero-shot visual grounding combined with SAM's object segmentation with its pre-trained models. As such, the selected object becomes the "sample" of the one-shot training process, due to its high probability of belonging to the class specified by the text. Once the object has been identified through this method, the next phase involves creating a single segmented object mask. This mask is used for the retraining of SAM in a one-shot manner. The text-based approach adds value by helping SAM distinguish between the different object instances present in the remote sensing imagery, such as multiple "houses", "cars", or "trees", for example. Each object is identified based on its individual likelihood, leading to the creation of a unique mask for retraining SAM. The third phase comes into play once the object with the highest probability has been identified and its mask has been used for SAM's one-shot training.
The selected input object is removed from the original image, leaving the remaining objects ready for further segmentation. The final phase involves a dynamic, interactive loop, where the remaining objects are continuously segmented until no more objects are detectable by the PerSAM approach [61]. This phase is critical, as it ensures that every potential object within the image is identified and segmented. Here again, the loop aids the process, using a procedure that identifies the next most probable object, creates a mask, removes it from the image, and repeats. This cycle continues until a breakpoint is reached, where it detects that the position of the object is the same as the previous one.

Another important clarification of the one-shot approach regards the choice of the method for its training. An early exploration of both the PerSAM and PerSAM-F methods [61] was conducted to assess their utility in the context of remote sensing imagery. Our investigations have shown that PerSAM-F emerges as the more suitable choice for this specific domain. PerSAM, in its original formulation, leverages one-shot data through a series of techniques such as target-guided attention, target-semantic prompting, and cascaded post-refinement, delivering favorable personalized segmentation performance for subjects in a variety of poses or contexts. However, there were occasional failure cases, notably where the subjects comprised hierarchical structures to be segmented. Examples of such cases in traditional images are discussed in [61], where ambiguity provides a challenge for PerSAM in determining the scale of the mask to output (e.g., a "dog wearing a hat" may be segmented entirely, instead of just the "dog"). In the context of remote sensing imagery, such hierarchical structures are commonly encountered. An image may contain a tree over a house, a car near a building, a river flowing through a forest, and so forth. These hierarchical structures pose a challenge to the PerSAM method, as it struggles to determine the appropriate scale of the mask for the segmentation output. An example of such a case, where a tree covers a car, can be seen in Figure 4.

To address this challenge, we used PerSAM-F, the fine-tuning variant of PerSAM. As previously mentioned, PerSAM-F freezes the entire SAM to preserve its pre-trained knowledge and only fine-tunes 2 parameters within a 10-second training window [61]. Crucially, it enables SAM to produce multiple segmentation results with different mask scales, thereby allowing for a more accurate representation of hierarchical structures commonly found in remote sensing imagery. PerSAM-F employs learnable relative weights for each scale, which adaptively select the best scale for varying objects. This strategy offers an efficient way to handle the complexity of segmentation tasks in remote sensing imagery, particularly when dealing with objects that exhibit a range of scales within a single image. This, in turn, preserves the characteristics of the segmented objects more faithfully. As such, PerSAM-F exhibited better segmentation accuracy in our early experiments, and was thus the method chosen to be incorporated with the text-based approach. Regardless, to evaluate the performance and utility of the text-based one-shot learning method, we conducted a comparative analysis against a traditional one-shot learning approach.
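The looping procedure that drives both the text-prompted segmentation and this final phase can be summarized schematically. In the sketch below, detect_fn and segment_fn are hypothetical stand-ins for the GroundingDINO detection call and the SAM (or PerSAM-F) mask prediction, and the default thresholds are illustrative; as noted earlier, the study tuned them per dataset:

```python
import numpy as np

def iterative_text_segmentation(image, prompt, detect_fn, segment_fn,
                                box_thresh=0.35, text_thresh=0.25):
    """Segment all instances matching a text prompt, one object at a time."""
    masks = []
    work = image.copy()
    prev_box = None
    while True:
        boxes, logits = detect_fn(work, prompt, box_thresh, text_thresh)
        if len(boxes) == 0:
            break  # nothing left above the detection thresholds
        best = int(np.argmax(logits))          # highest-probability object
        box = boxes[best]
        if prev_box is not None and np.allclose(box, prev_box):
            break  # breakpoint: same position as the previous object
        mask = segment_fn(work, box)           # boolean mask for this object
        masks.append(mask)
        work[mask] = 0                         # remove the object from the image
        prev_box = box
    return masks
```

For the one-shot variant, the first mask returned by this loop serves as the automatically selected training sample for PerSAM-F.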
The traditional method used for comparison follows the typical approach of one-shot learning, providing the model with a single example from the ground-truth mask, manually labeled by human experts. To ensure fairness, we provided the model with multiple random samples from each dataset and mimicked the image inputs to return a direct comparison for both approaches. We calculated the evaluation metrics for each input and returned the average value alongside its standard deviation. Since the text approach always uses the same input (i.e., the highest-logits object), we were able to return a single measurement of its accuracy.

### Model Evaluation

The performance of both the zero-shot and one-shot models was measured by evaluating their prediction accuracy against a ground-truth mask. For that, we used metrics like Intersection over Union (IoU), Pixel Accuracy, and the Dice Coefficient. These metrics are commonly used in evaluating image segmentation, as they provide a more nuanced understanding of model performance. For that, we compared pairs of the predicted masks with the ground-truth masks. Intersection over Union (IoU) is a common evaluation metric for object detection and segmentation problems. It measures the overlap between the predicted segmentation and the ground truth [45]. The IoU is the area of overlap divided by the area of the union of the predicted and ground-truth segmentation. A higher IoU means a more accurate segmentation. The equation to achieve it is presented as:

\[IoU=\frac{TP}{TP+FP+FN} \tag{1}\]

Here, TP represents True Positives (the correctly identified positives), FP represents False Positives (the incorrectly identified positives), and FN represents False Negatives (the positives that were missed).

Figure 4: Comparative illustration of tree segmentation using PerSAM and PerSAM-F. On the left, the PerSAM model segments not only the tree but also its shadow and a part of the car underneath it. On the right, the PerSAM-F model, fine-tuned for hierarchical structures and varying scales, accurately segments only the tree, demonstrating its improved ability to discern and isolate the target object in remote sensing imagery.

Figure 3: Visual representation of the one-shot-based text segmentation process in action. The figure provides a step-by-step illustration of how the model identifies and segments the most probable object based on a text prompt, with "car" and "tree" as examples.

Pixel Accuracy is the simplest metric used; it measures the percentage of pixels that were accurately classified [35]. It is calculated by dividing the number of correctly classified pixels by the total number of pixels. This metric can be misleading if the classes are imbalanced. The following equation can be used to return it:

\[Pixel\ Accuracy=\frac{TP+TN}{TP+FP+TN+FN} \tag{2}\]

Here, TN represents True Negatives (the correctly identified negatives). The Dice Coefficient (also known as the Sørensen-Dice index) is another metric used to gauge the performance of image segmentation methods. It is particularly useful for comparing the similarity of two samples. The Dice Coefficient is twice the area of overlap of the two segmentations divided by the total number of pixels in both images (the sum of the areas of both segmentations) [35]. The Dice Coefficient ranges from 0 (no overlap) to 1 (perfect overlap).
The equation to perform it is given as follows:

\[Dice\ Coefficient=2*\frac{TP}{2*TP+FP+FN} \tag{3}\]

We also utilized other metrics, such as the True Positive Rate (TPR) and False Positive Rate (FPR), to measure the effectiveness of SAM, juxtaposed with the accurately labeled class from each dataset. The interpretation of these metrics is as per [43], where the True Positive Rate (TPR) denotes the fraction of TP cases among all actual positive instances, and the False Positive Rate (FPR) signifies the fraction of FP instances out of all actual negative instances. A model with a higher TPR is proficient at correctly identifying target pixels, while a lower FPR indicates that it better avoids incorrect detections. Both metrics are calculated as:

\[\text{TPR}=\frac{TP}{(TP+FN)} \tag{4}\]

\[\text{FPR}=\frac{FP}{(FP+TN)} \tag{5}\]

In light of the nature of SAM, a transformer network, we aimed to preserve the context of our images for the attention mechanism of the model. Instead of cropping the images, specifically the aerial ones, into smaller patches, we chose to either use larger image crops or even entire orthomosaics for processing in one go. This method of implementation, however, substantially increased the amount of time required to perform the inference on our aerial data. For the larger patches, the inference process in GPU time was below 10 minutes for most data, while entire datasets took around 1 to 2 hours to process. For the inference, we used an NVIDIA RTX 3090 with 24 GB of GDDR6X video memory and 10,496 CUDA cores, under the Ubuntu 22.04 operating system. Regardless, the results yielded a detailed insight into the segmentation scores for each prompt (general, bounding box, point, text, PerSAM-F, and text-based PerSAM-F). This analysis helped us better evaluate the efficiency and accuracy of SAM's performance in prompt segmentation against the ground-truth masks, providing a quantitative understanding of it.
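The five metrics above are straightforward to compute from a pair of binary masks. The following small helper is a sketch, not the authors' code:

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Pixel-wise metrics of Equations (1)-(5) for a pair of binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)    # true positives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    tn = np.sum(~pred & ~truth)  # true negatives
    return {
        "IoU": tp / (tp + fp + fn),
        "PixelAccuracy": (tp + tn) / (tp + fp + tn + fn),
        "Dice": 2 * tp / (2 * tp + fp + fn),
        "TPR": tp / (tp + fn),
        "FPR": fp / (fp + tn),
    }
```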
## 4 Results and Discussion

Our exploration of the Segment Anything Model (SAM) for remote sensing tasks involved an evaluation of its performance across various datasets and scenarios. This section presents the results and discusses their implications for SAM's role in remote sensing image analysis. This process commenced with an investigation of SAM's general segmentation approach, which requires no prompts. By merely feeding SAM with remote sensing images, we aimed to observe its inherent ability to detect and distinguish objects on the surface. Examples at different scales are illustrated in Figure 5, where we converted the individual regions to vector format. This approach demonstrates SAM's adaptability and suitability for various applications. However, this method is not guided by a prompt and does not return specific segmentation classes, making it difficult to measure its accuracy against our available labels. As depicted in Figure 5, the higher the spatial resolution of an image, the more accurately SAM segmented the objects. An interesting observation pertained to the processing of satellite images, where SAM encountered difficulties in demarcating the boundaries between contiguous objects (like large fragments of trees or roads). Despite this limitation, SAM exhibited an ability to distinguish between different regions when considering very high spatial resolution imagery, indicative of an interesting segmentation capability that does not rely on any prompts. This approach offers value for additional applications that are based on object regions, such as classification algorithms. Moreover, SAM can expedite the process of object labeling for refining other models, thereby significantly reducing the time and manual effort required for this purpose.

Following this initial evaluation, we proceeded to test SAM's promptable segmentation abilities using bounding boxes, points, and text features. The resulting metrics for each dataset are summarized in Table 2. Having compiled a dataset across diverse platforms, including UAVs, airborne devices, and satellites with varying pixel sizes, we noted that SAM's segmentation efficacy is also quantitatively influenced by the image's spatial resolution. These findings underscore the significant influence of spatial resolution on the effectiveness of different prompt types. For instance, on the UAV platform, text prompts showed superior performance for object segmentation tasks such as trees, with higher Dice and IoU values. However, bounding box prompts were more effective for delineating geometrically well-defined and larger objects like houses and buildings. The segmentation of plantation crops was a unique case. Point prompts performed well at a finer 0.01 m resolution for individual plants. However, as the resolution coarsened to 0.04 m and the plantation types changed, becoming denser with the plant canopy covering entire rows, bounding box prompts outperformed the others. This outcome suggests that, for certain objects, the type of input prompt can greatly influence detection and segmentation in the zero-shot approach. With the airborne platform, point prompts were highly effective at segmenting trees and vehicles at a 0.20 m resolution. This trend continued for the segmentation of lakes at a coarser 0.45 m resolution. It raises the question of whether the robust performance of point prompts in these scenarios is a testament to their adaptability to very high-resolution imagery or a reflection of the target object's specific characteristics. These objects primarily consist of very defined features (like cars and vehicles) or share similar characteristics (as in bodies of water). In the context of satellite-based remote sensing imagery, point prompts proved most efficient for multi-class segmentation at the examined resolutions of 0.30 m and 0.50 m. This can be attributed to the fact that bounding box prompts tend to overshoot object boundaries, producing more false positives compared to point prompts. This finding indicates the strong ability of point prompts to manage a diverse set of objects and categories at coarser resolutions, making them a promising tool even for satellite remote sensing applications. The text-based approach was found to be the least effective, primarily due to the model's difficulty in associating low-resolution objects with words.

Figure 5: Examples of segmented objects using SAM's general segmentation method, drawn from diverse datasets based on their platforms. Objects are represented in random colors. As the model operates without any external inputs, it deduces object boundaries leveraging its zero-shot learning capabilities.
Still, it is important to note that, of all the datasets, the satellite multiclass problem proved to be the most difficult task for the model, with generally lower metrics than the others. Qualitatively, our observations also revealed that bounding boxes were particularly effective for larger objects (Figure 6). However, for smaller objects, SAM tended to overestimate the object size by including shadows in the segmented regions. Despite this overestimation, the bounding box approach still offers a useful solution for applications where an approximate estimate of such larger objects suffices. For these types of objects, a single point or central location does not suffice; they are defined by a combination of features within a particular area. Bounding boxes provide a more spatially comprehensive prompt, encapsulating the entire object, which makes them more efficient in these instances. The point-based approach outperformed the others across our dataset, specifically for distinct objects. By focusing on a singular point, SAM was able to provide precise segmentation results, demonstrating its capability for detailed work (Figure 7). In the plantation dataset with 0.01 m resolution, for instance, when considering individual small plants, the point approach returned better results than bounding boxes. This approach may hold particular relevance for applications requiring precise identification and segmentation of individual objects in an image. Also, when isolating entities like single trees and vehicles, these precise spatial hints might suffice for the model to accurately identify and segment the object. The textual prompt approach also yielded promising results, particularly with very high-resolution images (Figure 8). While it was found to be relatively comparable in performance with the point and bounding box prompts for the aerial datasets, the text prompt approach had notable limitations when used with lower spatial resolution images. The text-based approach also returned worse predictions on the plantation dataset at 0.04 m. This may be associated with the model's limited understanding of the characteristics of specific targets, especially when considering the overhead view of remote sensing images. Since the approach relies on GroundDINO to interpret the text, this may be more of a limitation of that model than of SAM, especially because, when applying the general segmentation, the results were visually better overall on these datasets (Figure 5). Text prompts, though generally trailing behind in performance, still demonstrated commendable results, often closely following the top-performing prompt type.
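To make these prompting modes concrete, the sketch below shows how point and bounding-box prompts are passed to SAM's predictor interface (text prompts are first converted into boxes by GroundDINO, as sketched further below). The checkpoint path, image array, and coordinates are placeholder assumptions:

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(rgb_image)  # HxWx3 uint8 crop, assumed already loaded

# Point prompt: one foreground click on the target object.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[640, 480]]),  # (x, y) pixel coordinates
    point_labels=np.array([1]),           # 1 = foreground, 0 = background
    multimask_output=True,                # candidate masks at several scales
)
point_mask = masks[np.argmax(scores)]     # keep the highest-scoring candidate

# Bounding-box prompt: [x0, y0, x1, y1] drawn around the object.
box_masks, _, _ = predictor.predict(
    box=np.array([600, 440, 700, 520]),
    multimask_output=False,
)
```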
\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline **\#** & **Platform** & **Target** & **Resolution** & **Prompt** & **Dice** & **IoU** & **Pixel Acc.** & **TPR** & **FPR** \\ \hline 00 & UAV & Tree & 0.04 m & Box & 0.888 & 0.799 & 0.960 & 0.942 & 0.036 \\ & & & & Point & 0.918 & 0.848 & 0.976 & 0.916 & 0.014 \\ & & & & Text & 0.922 & 0.852 & 0.981 & 0.921 & 0.012 \\ 01 & UAV & House & 0.04 m & Box & 0.927 & 0.863 & 0.984 & 0.974 & 0.015 \\ & & & & Point & 0.708 & 0.548 & 0.840 & 0.966 & 0.192 \\ & & & & Text & 0.892 & 0.798 & 0.956 & 0.971 & 0.101 \\ 02 & UAV & Plantation & 0.01 m & Box & 0.862 & 0.828 & 0.855 & 0.882 & 0.111 \\ & & & & Point & 0.958 & 0.920 & 0.950 & 0.980 & 0.092 \\ & & & & Text & 0.671 & 0.644 & 0.665 & 0.686 & 0.120 \\ 03 & UAV & Plantation & 0.04 m & Box & 0.801 & 0.689 & 0.952 & 0.944 & 0.104 \\ & & & & Point & 0.727 & 0.571 & 0.935 & 0.934 & 0.065 \\ & & & & Text & 0.441 & 0.328 & 0.499 & 0.450 & 0.061 \\ 04 & UAV & Building & 0.09 m & Box & 0.697 & 0.535 & 0.813 & 0.955 & 0.228 \\ & & & & Point & 0.691 & 0.528 & 0.842 & 0.911 & 0.175 \\ & & & & Text & 0.663 & 0.509 & 0.772 & 0.907 & 0.240 \\ 05 & UAV & Car & 0.09 m & Box & 0.788 & 0.650 & 0.970 & 0.660 & 0.002 \\ & & & & Point & 0.900 & 0.819 & 0.991 & 0.867 & 0.003 \\ & & & & Text & 0.927 & 0.843 & 0.973 & 0.893 & 0.001 \\ 06 & Airborne & Tree & 0.20 m & Box & 0.688 & 0.524 & 0.912 & 0.844 & 0.079 \\ & & & & Point & 0.917 & 0.847 & 0.935 & 0.883 & 0.029 \\ & & & & Text & 0.890 & 0.822 & 0.907 & 0.856 & 0.037 \\ 07 & Airborne & Vehicle & 0.20 m & Box & 0.861 & 0.756 & 0.995 & 0.869 & 0.003 \\ & & & & Point & 0.863 & 0.759 & 0.991 & 0.785 & 0.001 \\ & & & & Text & 0.846 & 0.744 & 0.971 & 0.769 & 0.002 \\ 08 & Airborne & Lake & 0.45 m & Box & 0.574 & 0.403 & 0.983 & 0.988 & 0.017 \\ & & & & Point & 0.972 & 0.945 & 0.999 & 0.991 & 0.001 \\ 09 & Satellite & Multiclass & 0.30 m & Box & 0.391 & 0.225 & 0.945 & 0.226 & 0.004 \\ & & & & Point & 0.823 & 0.567 & 0.878 & 0.678 & 0.037 \\ & & & & Text & 0.740 & 0.510 & 0.791 & 0.610 & 0.039 \\ 10 & Satellite & Multiclass & 0.50 m & Box & 0.261 & 0.150 & 0.936 & 0.151 & 0.005 \\ & & & & Point & 0.549 & 0.378 & 0.870 & 0.452 & 0.042 \\ & & & & Text & 0.494 & 0.340 & 0.783 & 0.407 & 0.044 \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of metrics for the image segmentation task across different platforms, targets, and resolutions, using different prompts for SAM in zero-shot mode. The values in red indicate the best performance for a particular target under specific conditions.

Figure 6: Illustrations of images processed using bounding-box prompts. The first column consists of the RGB image, while the second column demonstrates how the prompt was handled. The ground-truth mask is presented in the third column and the prediction result from SAM in the fourth. The last column indicates the false positive (FP) pixels from the prediction.

Figure 7: Illustrations of images processed using point prompts. The first column presents the RGB image, while the second column demonstrates the handling of the point prompt. The third column showcases the ground-truth mask, and the fourth column shows the prediction result from SAM. The final column highlights the false positive (FP) pixels from the prediction.

Figure 8: Examples of images processed through text-based prompts. The first column contains the RGB image, while the second column indicates the text prompt used for the model.
The ground-truth mask is shown in the third column, with the prediction result from SAM in the fourth. The last column indicates the false positive (FP) pixels from the prediction.

Text prompts offer ease of implementation as their primary advantage. They don't necessitate specific spatial annotations, which are often time-consuming and resource-intensive to produce, especially for extensive remote sensing datasets. However, their effectiveness hinges on the model's ability to translate text-to-image information. Currently, their key limitation is that the underlying models are typically not trained specifically on remote sensing images, leading to potential inaccuracies when encountering remote sensing-specific terms or concepts. Improving the effectiveness of text prompts can be achieved through fine-tuning models on remote sensing-specific datasets and terminologies. This could enable them to better interpret the nuances of remote sensing imagery, potentially enhancing their performance to match or even surpass spatial prompts like boxes and points. Regarding our one-shot approach, we noticed that the model's performance improved in most cases, as evidenced by the segmentation metrics calculated on each dataset. Table 3 presents a detailed comparison of the different models' performance, providing a summary of the segmentation results. Figure 9 offers a visual illustration of example results obtained from both approaches, particularly highlighting the performance of the model. The metrics indicate that, while the PerSAM approach with a human-sampled example may be more appropriate than the proposed text-based approach, this may not always be the case when considering the metrics' standard deviation. This opens up the potential for adopting the fully-automated process instead. However, in some instances, specifically where GroundDINO is not capable of identifying the object to begin with, human labeling provides a more appropriate return. In its zero-shot form, SAM tends to favor selecting shadows in some instances alongside its target, which can hinder its performance in tasks like tree detection. Segmenting objects with similar surrounding elements, especially when dealing with construction materials like streets and sidewalks, can be challenging for SAM, as noticed in our multi-class problem. Moreover, its performance with larger grouped instances, particularly when using the single-point mode, can be unsatisfactory. Also, the segmentation of smaller and irregular objects poses difficulties for SAM independently of the given prompt. SAM may generate disconnected components that do not correspond to actual features, specifically in satellite imagery where the spatial resolution is lower. The text-based one-shot learning approach, on the other hand, automates the process of selecting the example. It uses the text-based prompt to choose the object with the highest probability (highest logits) from the image as the training example. This not only reduces the need for manual input but also ensures that the selected object is highly representative of the specified class due to its high probability. Additionally, the text-based approach is capable of handling multiple instances of the same object class in a more streamlined manner, thanks to the looping mechanism that iteratively identifies and segments objects based on their probabilities.
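A minimal sketch of this selection step is given below, using GroundDINO's reference inference helpers; the file paths and thresholds are illustrative, and `personalize_persam` is a hypothetical stand-in for the PerSAM-F fine-tuning routine rather than a real API:

```python
import torch
from groundingdino.util.inference import load_model, load_image, predict

# Config, weights, and image paths are placeholder assumptions.
dino = load_model("GroundingDINO_SwinT_OGC.py", "groundingdino_swint_ogc.pth")
image_source, image = load_image("uav_scene.png")

# Detect every candidate instance matching the text prompt.
boxes, logits, phrases = predict(
    model=dino, image=image, caption="tree",
    box_threshold=0.35, text_threshold=0.25,
)

# The highest-logit detection becomes the one-shot training example.
example_box = boxes[torch.argmax(logits)]

# personalize_persam(...) stands in for PerSAM-F fine-tuning on that example;
# the personalized model is then looped over the remaining detections.
persam = personalize_persam(image_source, example_box)        # hypothetical helper
masks = [persam.segment(image_source, box) for box in boxes]  # hypothetical API
```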
The one-shot approach, however, excluded some of the objects in the image, favoring only the objects similar to the given sample.

\begin{table} \begin{tabular}{l l l l l l l l l l} \hline \# & **Platform** & **Target** & **Resolution** & **Sample** & **Dice** & **IoU** & **Pixel Acc.** & **TPR** & **FPR** \\ \hline 00 & UAV & Tree & 0.04 m & Baseline & 0.922 & 0.852 & 0.981 & 0.921 & 0.012 \\ 01 & UAV & House & 0.04 m & Baseline & 0.927 & 0.863 & 0.984 & 0.974 & 0.015 \\ 02 & UAV & Plantation Crop & 0.01 m & Baseline & 0.958 & 0.920 & 0.950 & 0.980 & 0.092 \\ 03 & UAV & Plantation Crop & 0.04 m & Baseline & 0.801 & 0.689 & 0.952 & 0.944 & 0.104 \\ 04 & UAV & Building & 0.09 m & Baseline & 0.697 & 0.535 & 0.813 & 0.955 & 0.228 \\ 05 & UAV & Car & 0.09 m & Baseline & 0.927 & 0.843 & 0.973 & 0.893 & 0.001 \\ 06 & Airborne & Tree & 0.20 m & Baseline & 0.917 & 0.847 & 0.935 & 0.883 & 0.029 \\ 07 & Airborne & Vehicle & 0.20 m & Baseline & 0.863 & 0.759 & 0.991 & 0.785 & 0.001 \\ 08 & Airborne & Lake & 0.45 m & Baseline & 0.972 & 0.945 & 0.999 & 0.991 & 0.001 \\ 09 & Satellite & Multiclass & 0.30 m & Baseline & 0.823 & 0.567 & 0.878 & 0.678 & 0.037 \\ 10 & Satellite & Multiclass & 0.50 m & Baseline & 0.549 & 0.378 & 0.870 & 0.452 & 0.042 \\ \hline \end{tabular} \end{table} Table 3: Comparison of segmentation results on different platforms and targets when considering both the one-shot and the text-based one-shot approaches. The baseline values refer to the best metric obtained by the previous zero-shot investigation, be it from a bounding box, a point, or a text prompt. The red colors indicate the best result for each scenario.

In summary, upon comparing these two methods, we found that the traditional one-shot learning approach outperforms the zero-shot learning approach in all datasets. Additionally, the combination of text-based prompting with one-shot learning, even when not improving on it, comes close enough in most cases. This comparison underscores the benefits and potential of integrating state-of-the-art models with natural language processing capabilities for efficient and accurate geospatial analysis. Nevertheless, it is important to remember that the optimal choice between these methods may vary depending on the specific context and requirements of a given task.

Figure 9: Visual illustration of the segmentation results using PerSAM and text-based PerSAM.
The final column highlights the difference in pixels between the text-based PerSAM prediction and its ground truth. The graphic compares the range of the Dice values of both PerSAM and text-based PerSAM, illustrating how the proposed approach remains within the standard deviation of the traditional PerSAM approach, underscoring the potential for most practices to adopt the fully-automated process in such cases.

## 5 Future Perspectives on SAM for Remote Sensing

SAM has several advantages that make it an attractive option for remote sensing applications. First, it offers zero-shot generalization to unfamiliar objects and images without requiring additional training [19]. This capability allows SAM to adapt to the diverse and dynamic nature of remote sensing data, which often consists of varying land cover types, resolutions, and imaging conditions. Second, SAM's interactive input process can significantly reduce the time and labor required for manual image segmentation. The model's ability to generate segmentation masks with minimal input, such as a text prompt, a single point, or a bounding box, accelerates the annotation process and improves the overall efficiency of remote sensing data analysis. Lastly, the decoupled architecture of SAM, comprising a one-time image encoder and a lightweight mask decoder, makes it computationally efficient. This efficiency is crucial for large-scale remote sensing applications, where processing vast amounts of data in a timely manner is of utmost importance. However, our study constitutes an initial exploration of this model, and there is still much to be investigated. In this section, we discuss future perspectives on SAM and how it can be improved upon. Despite its potential, SAM has some limitations when applied to remote sensing imagery. One challenge is that remote sensing data often come in different formats, resolutions, and spectral bands. SAM, which has been trained primarily on RGB images, may not perform optimally with multispectral or hyperspectral data, which are common in remote sensing applications. A possible approach to this issue consists of either adapting SAM to read multiple bands by rotating 3-band combinations or performing fine-tuning for domain adaptation. In our early experiments, a simple example run on different multispectral datasets demonstrated that, although the model has the potential to segment different regions or features, it still needs further exploration. This is something that we intend to explore in future research, but we expect that others may look into it as well. Regardless, the current model can be effectively used in various remote sensing applications. For instance, we verified that SAM can be easily employed for land cover mapping, where it can segment forests, urban areas, and agricultural fields. It can also be used for monitoring urban growth and land use changes, enabling policymakers and urban planners to make informed decisions based on accurate and up-to-date information. Furthermore, SAM can be applied in a pipeline process to monitor and manage natural resources. Its efficiency and speed make it suitable for real-time monitoring, providing valuable information to decision-makers. This is also a feature that could be potentially explored by research going forward with its implementation.
The one-shot technique of SAM, that is, the capacity to generate accurate segmentation from a single example [61], could be further expanded into a few-shot learning scenario. Our experimental results indicated an improvement in performance across most investigated datasets when this approach was utilized, especially considering the borders of the objects. However, it is essential to note that one-shot learning may pose challenges to the generalization capability of the model, especially when dealing with remote sensing data that often exhibit a high degree of heterogeneity and diversity. For instance, a "healthy" tree can be a good sample for the model, but it can bias the model to ignore "unhealthy" trees or canopies with different structures. Expanding the one-shot learning to a few-shot scenario could potentially improve the model's adaptability to different environments or tasks by enabling it to learn from multiple examples (1 to 10) instead of a single one. This would involve using a small set of labeled objects for each land cover type during the training process [48, 24]. On the other hand, a more robust learning approach, which uses a larger number of examples for each class, could further enhance the model's ability to capture the nuances and variations within each class. This approach, however, may require more computational resources and training data, and thus may not be suitable for all applications. Additionally, while SAM is a powerful tool for image segmentation, its effectiveness can be boosted when combined with other techniques. For example, integrating SAM into another ViT framework in a weakly-supervised manner could potentially improve the segmentation result by better handling the spatial-contextual information. However, it's worth noting that such an integration might also bring new challenges [52]. One potential issue could be the increased model complexity and computational requirements, which might limit its feasibility. But, as the training of transformers typically requires large amounts of data, SAM can provide fast and relatively accurate labeled regions for it. Furthermore, one of the key challenges to tackle would be improving SAM's performance when applied to low spatial resolution imagery. As noted in our early experiments, SAM's accuracy tends to decrease when the pixel size of the image is above 1 or 2 meters. This shortcoming could be addressed by coupling SAM with a Super-Resolution (SR) technique [56], creating a two-step process, where the first step involves using an SR model to increase the spatial resolution of the imagery, and the second step involves using the enhanced-resolution image as an input to SAM. Given that SAM performs better with high-resolution images, this process could improve SAM's overall performance with remote sensing imagery that has a lower native resolution. However, it should be noted that SR techniques are not perfect, as they introduce errors in the high-resolution images that are created [56], which, in turn, impacts SAM's performance. Regardless, this approach could be tested and validated rigorously in future research. Lastly, as we explored the integration of SAM with other types of methods, such as GroundDINO [26], we noticed both strengths and limitations, which were already discussed in the previous section. This combination demonstrates a high degree of versatility and accuracy in tasks such as instance segmentation, where GroundDINO's object detection and classification guided SAM's segmentation process.
However, the flexibility of this approach extends beyond these specific models. Any similar models could be swapped in as required, expanding the applications and robustness of the system. Alternatives such as GLIP [22] or CLIP [27] may replace GroundDINO, allowing for further experimentation and optimization [64]. Furthermore, integrating language models like ChatGPT [36] could offer additional layers of interaction, nuance, and understanding, demonstrating the far-reaching potential of combining these expert models. This modular approach underpins a potent and adaptable workflow that could reshape our capabilities in handling remote sensing tasks. Also, when integrated with Geographical Information Systems (GIS), the combined power of models like SAM and others can substantially enhance the user experience and capabilities of these systems. The promptability and modularity of this approach allow for the integration of other models that could offer complementary capabilities. For instance, incorporating language models like ChatGPT could facilitate easier and more intuitive interaction between users and the GIS. This could make the GIS more accessible to non-expert users, as they could interact with the system using natural language prompts instead of complicated technical inputs [41]. Overall, this integration could revolutionize the way users interact with and utilize GIS, making the system more user-friendly, efficient, and versatile. It offers a vision of a new generation of GIS that is more adaptable and intuitive, able to handle diverse tasks and provide richer insights into geographical data. In short, our study focused on demonstrating the potential of SAM's adaptability to the remote sensing domain, as well as presenting a novel, fully-automated approach to retraining the model with one example obtained from the text-based approach. While there is much to be explored, it was important to understand how the model works and how it could be improved upon. To summarize this discussion, there are many potential research directions and implementations for SAM in remote sensing applications, which can be condensed as follows:

* Examining the most effective approaches and techniques for adapting SAM to cater to a variety of remote sensing data forms, including multispectral and hyperspectral data.
* Analysing the potential of coupling SAM with few-shot or multi-shot learning to enhance its adaptability and generalization capability across diverse remote sensing scenarios.
* Investigating potential ways to integrate SAM with prevalent remote sensing tools and platforms, such as Geographic Information Systems (GIS), to augment the versatility and utility of these systems.
* Assessing the performance and efficiency of SAM in real-time or near-real-time remote sensing applications to understand its capabilities for timely data processing and analysis.
* Exploring how domain-specific knowledge and expertise can be integrated into SAM to enhance its ability to understand and interpret remote sensing data.
* Evaluating the potential use of SAM as an alternative to traditional labeling processes, and its integration with other image classification and segmentation techniques in a weakly-supervised manner to boost its accuracy and reliability.
* Integrating SAM with an SR approach to enhance its capability to handle low-resolution imagery, thereby expanding the range of remote sensing imagery it can effectively analyze.
## 6 Conclusions

In this study, we conducted a comprehensive analysis of both the zero- and one-shot capabilities of the Segment Anything Model (SAM) in the domain of remote sensing imagery processing, benchmarking it against aerial and satellite datasets. We innovated by presenting a fully-automated one-shot operation of SAM based on a text-prompt example, a practice that further enhanced its segmentation capabilities in most of our tests. However, it is essential to note that this constitutes an early phase; in this sense, more frameworks and larger, diverse datasets will be crucial for further refining the model and solidifying these findings. Our data also indicated that SAM delivers notable performance when contrasted with the ground-truth masks, thereby underscoring its potential efficacy as a significant resource for remote sensing applications. Our evaluation reveals that the prompt capabilities of SAM (text, point, box, and general), combined with its ability to perform object segmentation with minimal human supervision, can also contribute to a significant reduction in annotation workload. This decrease in human input during the labeling phase may lead to expedited training schedules for other methods, thus promoting more streamlined and cost-effective workflows. Nevertheless, despite the demonstrated generalization, there are certain limitations to be addressed. Under complex scenarios, the model faces challenges, leading to less optimal segmentation outputs by overestimating most of the objects' boundaries. Additionally, SAM's performance metrics display variability contingent on the spatial resolution of the input imagery (i.e., it is prone to increasing mistakes as the spatial resolution of the imagery is lowered). Consequently, identifying and rectifying these constraints is essential for further enhancing SAM's applicability within the remote sensing domain. In conclusion, our analysis provided insights into the operational performance and efficacy of SAM in the sphere of remote sensing segmentation tasks. While SAM exhibits notable promise, there is a tangible scope for improvement, specifically in managing its limitations and refining its performance for task-specific implementations. Future research should be oriented towards improving SAM's functional capabilities and exploring its potential integration with other methods to address a broader array of complex and challenging remote sensing scenarios.

## Supplementary

Here, we provide an open-access repository designed to facilitate the application of the Segment Anything Model (SAM) within the domain of remote sensing imagery. The incorporated codes and packages provide users the means to implement point- and bounding box-based shapefiles in combination with SAM. The repositories also include notebooks that demonstrate how to apply the text-based prompt approach, alongside one-shot modifications of SAM. These resources aim to bolster the usability of the SAM approach in diverse remote sensing contexts and can be accessed via the following online repositories: GitHub: AI-RemoteSensing [42] and GitHub: Segment-Geospatial [46].

## Acknowledgements

This study was financed in part by the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior (CAPES) - Finance Code 001.
The authors are funded by the Support Foundation for the Development of Education, Science, Technology of the State of Mato Grosso do Sul (FUNDECT, 71/009.436/2022), the Brazilian National Council for Scientific and Technological Development (CNPq; 433783/2018-4, 310517/2020-6; 405997/2021-3; 308481/2022-4; 305296/2022-1), and CAPES Print (88881.311850/2018-01). ## Conflicts of Interest The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. ## Abbreviations The following abbreviations are used in this manuscript: \begin{tabular}{l l} AI & Artificial Intelligence \\ CNNs & Convolutional Neural Networks \\ GANs & Generative Adversarial Networks \\ GIS & Geographic Information Systems \\ NLP & Natural Language Processing \\ SAM & Segment Anything Model \\ UAV & Unmanned Aerial Vehicle \\ ViT & Vision Transformer \\ VLM & Visual Language Model \\ \end{tabular}
2307.01076
Analyzing Multiple-Choice Reading and Listening Comprehension Tests
Multiple-choice reading and listening comprehension tests are an important part of language assessment. Content creators for standard educational tests need to carefully curate questions that assess the comprehension abilities of candidates taking the tests. However, recent work has shown that a large number of questions in general multiple-choice reading comprehension datasets can be answered without comprehension, by leveraging world knowledge instead. This work investigates how much of a contextual passage needs to be read in multiple-choice reading tests based on conversation transcriptions and listening comprehension tests to be able to work out the correct answer. We find that automated reading comprehension systems can perform significantly better than random with partial or even no access to the context passage. These findings offer an approach for content creators to automatically capture the trade-off between comprehension and world knowledge required for their proposed questions.
Vatsal Raina, Adian Liusie, Mark Gales
2023-07-03T14:55:02Z
http://arxiv.org/abs/2307.01076v1
# Analyzing Multiple-Choice Reading and Listening Comprehension Tests ###### Abstract Multiple-choice reading and listening comprehension tests are an important part of language assessment. Content creators for standard educational tests need to carefully curate questions that assess the comprehension abilities of candidates taking the tests. However, recent work has shown that a large number of questions in general multiple-choice reading comprehension datasets can be answered without comprehension, by leveraging world knowledge instead. This work investigates how much of a contextual passage needs to be read in multiple-choice reading tests based on conversation transcriptions and listening comprehension tests to be able to work out the correct answer. We find that automated reading comprehension systems can perform significantly better than random with partial or even no access to the context passage. These findings offer an approach for content creators to automatically capture the trade-off between comprehension and world knowledge required for their proposed questions. Vatsal Raina, Adian Liusie, Mark Gales ALTA Institute/Department of Engineering, Cambridge University {vr311,a1826,mjfg}@cam.ac.uk **Index Terms**: machine reading comprehension, listening comprehension, multiple-choice, automatic speech recognition, world knowledge

## 1 Introduction

Multiple-choice reading and listening comprehension tests serve as essential tools for evaluating language proficiency in educational settings [1]. In particular, multiple-choice questions permit fast and automated objective assessment of candidates' abilities. The creation of these standardized tests necessitates the careful selection of questions that accurately assess candidates' comprehension abilities. It is of interest for content creators to develop a framework to categorize the quality of questions used in assessment across several criteria, such as complexity and diversity [2]. However, recent work [3] has identified an issue within general multiple-choice reading comprehension datasets sourced from real tests: many questions can be answered correctly without language learners truly comprehending the passage, merely by relying on prior world knowledge. This work builds upon the concept of world knowledge in reading comprehension and aims to explore the extent to which contextual passages must be read or heard in multiple-choice reading tests (including those based on conversation transcriptions) and listening comprehension assessments in order to deduce the correct answer. For example, a candidate may be able to deduce the correct answer to a large number of the comprehension questions by only reading the first sentence. Typically, language learners may not understand the whole context and may only partially comprehend the sentences. Figure 1 demonstrates three multiple-choice questions with varying degrees of required comprehension: full comprehension, where the whole passage must be read in order to determine the correct answer; partial comprehension, where the correct answer can be deduced from reading only a small part of the context; and zero comprehension, the extreme case where the correct answer can be deduced without reading the context at all, by using world knowledge instead. For instance, in the zero comprehension example in Figure 1, without any need to read the context it is obvious that the answer is _sick children_, as the question asks about charities.
Information about the extent of comprehension required in reading and listening tests can act as a core component in the question assessment framework [2, 4]. The degree of comprehension required can vary with the nature of the comprehension dataset. In this work, we consider a range of publicly available datasets that are very different in nature, including commonsense-based reasoning, logical reasoning, multi-turn dialogue, and speech transcriptions. We make the following contributions in this work:

* Portability of world knowledge and partial comprehension systems from standard multiple-choice reading comprehension to dialogue and speech.
* A thorough investigation of the degree of partial comprehension, from zero comprehension (world knowledge) to full comprehension.

Figure 1: Example questions that can be answered with full, partial and zero comprehension respectively.

We emphasize the need for content creators to carefully and explicitly consider the extent of comprehension required for the questions they generate in order to better capture how language learners may interact with the deployed questions in tests.

## 2 Related work

[3] indicates that world knowledge is prevalent in several standard multiple-choice reading comprehension systems, raising the question of whether machine reading comprehension systems fully leverage the context for the desired comprehension task [5, 6, 7, 8]. [3] further introduces two performance metrics, the effective number of options and the mutual information of the context, to assess the extent to which world knowledge is used in these reading comprehension systems. We extend the work on world knowledge to investigate the spectrum from zero comprehension to full comprehension of real multiple-choice comprehension questions for text-based, dialogue-based and speech-based contexts. Previous work investigated automated approaches to assess the quality of comprehension questions. [2] present a framework to assess the quality of generated multiple-choice questions for comprehension. Four main qualities are identified: grammatical fluidity, answerability, diversity and complexity. Our work on assessing the extent to which the context needs to be read acts as an extension to this framework to capture the comprehensibility of the generated questions. Due to the lack of appropriately annotated speech corpora, several works investigate porting text-based systems for listening comprehension tasks. [9] explores applying a text-based question answering system to the TOEFL listening comprehension multiple-choice test from [10]. [11] further investigates the transfer-learning style approach for extractive comprehension from SQuAD 2.0 [12] to a proprietary spoken question answering task, with a particular focus on the impact of automatic speech recognition (ASR) errors. Our approach ports systems from a multiple-choice reading comprehension task to a multiple-choice listening comprehension task to identify the extent to which comprehension of the context is required.

## 3 Multiple-choice comprehension

### Task

Multiple-choice comprehension is a common assessment technique to assess the comprehension abilities of candidates in standardized tests [13]. Given a context passage, \(C\), and a question, \(Q\), the correct answer must be deduced from a discrete set of \(N\) answer options, \(\{O\}\). Hence, it is required to deduce the correct answer by comprehending the question and using the context passage as the information source to identify which answer option is the most suitable.
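As a minimal sketch of this selection rule in Python, with `score_fn` standing in for the option encoder described in the next subsection (the function name and interface are illustrative assumptions):

```python
import torch

def predict_option(score_fn, context: str, question: str, options: list[str]) -> int:
    """Score each (C, Q, O_i) triple, softmax-normalize over the options,
    and return the index of the most probable answer."""
    scores = torch.tensor([score_fn(context, question, o) for o in options])
    probs = torch.softmax(scores, dim=0)  # probability distribution over options
    return int(torch.argmax(probs))       # predicted answer index
```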
### Machine comprehension

Machine comprehension performs the comprehension task using automated systems. Machine reading and listening comprehension for multiple-choice tests is a well-researched area, with state-of-the-art systems [14, 15, 16, 17] competing with and outperforming humans on public benchmarks [18, 19, 20, 21]. In this work, the machine comprehension system's architecture replicates the standard multiple-choice machine reading comprehension systems from [22, 23], depicted in Figure 2. Each option is separately encoded with the question and the context to generate a score. A softmax layer converts the scores associated with each option into a probability distribution, where at inference time the predicted answer is taken to be the option with the greatest probability. The parameters of the core transformer [24] encoder and the linear layer are shared across all options. Hence, there is no requirement for the number of options at training and inference time to match.

### World knowledge

It is expected that information must be used from both the context passage and the question to determine the correct answer. If the answer can be deduced without the context, it suggests 'world knowledge' [3] is sufficient to answer the question. We train a context-free system where the context is omitted to determine the extent to which world knowledge can be leveraged for comprehension. Table 1 summarizes the main differences between the standard and context-free systems, where [CLS] and [SEP] denote classification and separation tokens respectively.

### Partial context

Language learners can often shortcut reading the whole context passage in comprehension tasks and still correctly answer the question. Hence, we devise a simple approach to investigate the extent to which a context must be comprehended in order to determine the correct answer to standard multiple-choice questions. A standard system (see Table 1) trained with the full context is taken and applied at inference time to questions with only partial access to the context. After applying tokenization of the context, only \(\tau\)% of the context tokens are retained and input to the standard system. \(\tau\) can be varied to determine how much of the context is necessary for comprehension.

## 4 Experiments

### Data

Several multiple-choice reading/listening comprehension datasets are used in this work, including: RACE++ [25], ReClor [22], COSMOSQA [26], DREAM [27] and IBM-Debater [28].

\begin{table} \begin{tabular}{l l} \hline \hline System & Format \\ \hline Standard & [CLS]\(<\)\(C\)\(>\)[SEP]\(<\)\(Q\)\(>\)\(<\)\(O_{i}\)\(>\)[SEP] \\ Context-free & [CLS]\(<\)\(Q\)\(>\)\(<\)\(O_{i}\)\(>\)[SEP] \\ \hline \hline \end{tabular} \end{table} Table 1: Format for multiple-choice comprehension systems.

Figure 2: The architecture for multiple-choice machine comprehension with context, \(C\), question, \(Q\) and \(N\) options, \(\{O\}\).

**RACE++** is a dataset of English reading comprehension questions for Chinese high school students. The questions are collected at three levels: middle school, high school and college level, corresponding to increasing levels of complexity. **COSMOSQA** is a large-scale commonsense-based reading comprehension dataset with four options per question. For this work, 2,985 examples from the development set are used. **ReClor** is a logical reasoning dataset at a graduate student level with four options per question. This is a challenging dataset, as graduate students achieve an accuracy of 63%.
500 examples from the development split are used for this work (the test set is hidden). **DREAM** is a multiple-choice (three options) reading comprehension dataset that focuses on dialogue understanding. These dialogues are multi-turn and multi-party. It contains 10,197 questions and 6,444 dialogues, which were collected from English-as-a-foreign-language examinations. This work uses the 2,041 questions from the test split. The context is constructed by concatenating all dialogue turns into a single text. **IBM-Debater** consists of 200 spontaneous speeches arguing for or against 50 controversial topics. The dataset is structured to form a multiple-choice listening comprehension task by formulating each speech as a question that is aimed at confirming or rejecting the argument in the speech. Hence, each question has a binary class label, with the transcribed speech acting as the context. The transcriptions are available both as manual and as automatic speech recognition (ASR) transcriptions.

### Training details and hyperparameters

Two systems are trained on the large RACE++ training dataset (see Table 1): 1. A standard multiple-choice reading comprehension system with access to the context; 2. A context-free system without access to the context. Both systems are deep ensembles of 3 models that specifically use the large1 ELECTRA [29] pre-trained language model in the form of the multiple-choice machine comprehension architecture of Figure 2.

Footnote 1: Model configuration at: [https://huggingface.co/google/electra-large-discriminator/blob/main/config.json](https://huggingface.co/google/electra-large-discriminator/blob/main/config.json)

Each model has 340M parameters. Grid search was performed for hyperparameter tuning of the standard system, with the initial hyperparameter values set from the systems of [23]. Apart from the default values used for various hyperparameters, the grid search was performed for the maximum number of epochs \(\in\{2,5,10\}\); learning rate \(\in\{2e-7,2e-6,2e-5\}\); batch size \(\in\{2,4\}\). Training was performed for 2 epochs at a learning rate of 2e-6 with a batch size of 4 and inputs truncated to 512 tokens at both training and inference time. Cross-entropy loss was used at training time, with models built using NVIDIA A100 graphical processing units and training time under 4 hours per model. The context-free system had its hyperparameters selected to be identical to the standard system.

### Assessment

Accuracy is used as the standard performance metric for inference on all datasets. The evaluation process aims to assess two aspects of the multiple-choice questions in each dataset: 1. the ability to use world knowledge in order to determine the correct answer, and consequently the effective number of options per question; 2. the extent to which the context must be read/listened to in order to determine the correct answer. The former is assessed by comparing the accuracy of a context-free comprehension system against a standard multiple-choice comprehension system, while the latter is assessed by varying the amount of context available to a standard multiple-choice reading comprehension system at test time.

## 5 Results

Multiple-choice questions are assessed for comprehensibility in terms of both world knowledge and partial access to the context.

### World knowledge

Table 3 presents the prevalence of world knowledge across a range of reading and listening comprehension datasets.
As both the standard and the context-free systems are trained on the RACE++ dataset, Table 3 further presents the portability of the systems to different forms of reading/listening comprehension. As in [3], the reading comprehension datasets RACE++, COSMOSQA and ReClor exhibit a significant presence of world knowledge. In particular, the context-free system on RACE++ achieves an accuracy of 59.1%, more than double the accuracy of a random baseline, despite having no access to the contextual passage. The ported context-free system also outperforms the 25% random baseline for commonsense reasoning and logical reasoning on COSMOSQA and ReClor respectively. Note, ReClor is a more challenging reading comprehension dataset than COSMOSQA and RACE++ [22], confirmed by the standard RACE++-trained system getting an accuracy of 73.2% on COSMOSQA but 48.8% on ReClor. Systems trained directly on COSMOSQA and ReClor observe a similar pattern [3].

\begin{table} \begin{tabular}{l|r r r|r} \hline \hline & TRN & DEV & EVL & \#options \\ \hline RACE++ & 100,388 & 5,599 & 5,642 & 4 \\ COSMOSQA & 25,262 & 2,985 & – & 4 \\ ReClor & 4,638 & 500 & 1,000 & 4 \\ DREAM & 6,116 & 2,040 & 2,041 & 3 \\ IBM-Debater & – & – & 200 & 2 \\ \hline \hline \end{tabular} \end{table} Table 2: Dataset statistics. Relevant examples are underlined.

\begin{table} \begin{tabular}{l|r r r} \hline \hline & Standard & Context-free & Random \\ \hline RACE++ & 86.8 & 59.1 & 25.0 \\ COSMOSQA & 73.2 & 52.8 & 25.0 \\ ReClor & 48.8 & 38.0 & 25.0 \\ DREAM & 86.0 & 46.1 & 33.3 \\ IBM-manual & 65.0 & 50.0 & 50.0 \\ IBM-ASR & 62.0 & 50.0 & 50.0 \\ \hline \hline \end{tabular} \end{table} Table 3: Accuracy of standard and context-free systems trained on RACE++, in-domain and out-of-domain.

From Table 3, both the context-free and the standard systems port across well to dialogues in the DREAM dataset. As before, the DREAM dataset demonstrates the presence of world knowledge, as the context-free system surpasses the random
Hence, the accuracy with 0% access to the context on the plots differs in performance from the context-free system applied to the datasets from Table 2 - the context-free system's performance can expect to be an upperbound of performance with world knowledge as the system has explicitly been trained to try and deduce the correct answer without using the context. It is notable from Figure 3 that both the text-based and dialogue based reading comprehension datasets all start above the random line while the speech-based listening comprehension dataset begins at random accuracy, agreeing with Table 3. Figure 3 depicts that the text-based reading comprehension datasets increase linearly (approximately) with increasing access to the context passage. Such a linear relationship indicates that information required to deduce the correct answer is evenly distributed throughout the context passage. A similar behaviour is observed with DREAM, though the slow start indicates that information may be more disjoint in order to deduce the correct answer as emphasized in the original release of the DREAM dataset [27]. In contrast, a very different shape is observed for the speech transcriptions: there is a sharp increase on the IBM-Debater dataset with increased access to the speech and then the performance plateaus. Such a shape suggests the information is front-heavy where it is possible to deduce the side of the argument made in a speech using the first sentence. Table 4 further investigates the extent to which information is unevenly distributed in the IBM-Debater speeches. From Figure 3, 20% is used as an appropriate operating point to compare the performance with access to only the beginning extract of the context against the end and random extracts. For both the manual and the ASR transcriptions the performance is the highest for the beginning 20% and lowest for the end 20%, confirming the information to deduce the correct answer is concentrated at the beginning of the context. Future work should consider evaluating how performance varies with access to the easiest vs the most difficult sentences as the easiest sections mimic the parts of the context a language learner understands 2. Footnote 2: Initial experiments with sentence complexity based on standard vocabulary levels did not observe a statistically significant difference between the easiest and most difficult 20% according to text readability. Content creators are encouraged to plot similar characteristic graphs for newly proposed questions to gauge the degree of comprehension required by language learners. ## 6 Conclusions This work highlights the trade-off between contextual comprehension and world knowledge in multiple-choice reading and listening comprehension tests. We found that automated reading comprehension systems perform significantly better than random, even with limited access to the context passage. These findings provide content creators with an approach to capture the balance between comprehension and world knowledge in their questions. We further investigated to what extent a context needs to be read before the correct answer can be deduced, finding that it is possible to answer some questions across several reading/listening comprehension datasets with only access to a fraction of the context. Overall, our findings guide content creators in constructing more valid and reliable assessments, ensuring accurate evaluation of language proficiency. 
## 7 Limitations

A limitation for the IBM-Debater dataset is that the contexts were truncated to 512 tokens prior to any experiments, despite an average length of approximately 1000 tokens, in order to use the standard pretrained language model fine-tuned on RACE++.

\begin{table} \begin{tabular}{l|c c} \hline \hline & Manual & ASR \\ \hline Beginning [0-20\%] & 64.5 & 65.5 \\ Random & 58.0 & 57.0 \\ End [80-100\%] & 52.5 & 55.5 \\ \hline \hline \end{tabular} \end{table} Table 4: Accuracy on IBM-Debater with 20% access to the context.

Figure 3: Accuracy with partial context access. Points are plotted at 10% intervals.
2303.03619
Detection of ~100 days periodicity in the gamma-ray light curve of the BL Lac 4FGL 2022.7+4216
Study of quasi-periodic oscillations (QPO) in blazars is one of the crucial methods for gaining insights into the workings of the central engines of active galactic nuclei. QPOs with various characteristic time scales have been observed in the multi-wavelength emission of blazars, ranging from the radio to gamma-ray frequency bands. In this study, we carry out a comprehensive variability analysis of the BL Lac object 4FGL 2022.7+4216 detected by the \textit{Fermi-}LAT, over a period of more than three years, from April 27, 2019 to August 09, 2022. By utilizing multiple widely-used methods of time-series analyses, we detect the presence of quasi-periodic fluctuations with a period of $\sim$100 days with a confidence level exceeding $4\sigma$. This is the first time such a variability feature pertaining to this source is being reported. We propose that the observed QPO may be related to the precession of the blazar jet with a high Lorentz factor or to the motion of a plasma blob through the helical structure of the jet. However, for a decisive conclusion on the physical origin of such fluctuation, further multi-wavelength complementary observations, especially Very Long Baseline Interferometric observations, would be required.
Banerjee, Anuvab, Sharma, Ajay, Mandal, Avijit, Das, Avik Kumar, Bhatta, Gopal, Bose, Debanjan
2023-03-07T03:08:11Z
http://arxiv.org/abs/2303.03619v1
Detection of \(\sim 100\) days periodicity in the gamma-ray light curve of the BL Lac 4FGL 2022.7+4216 ###### Abstract Study of quasi-periodic oscillations (QPO) in blazars is one of the crucial methods for gaining insights into the workings of the central engines of active galactic nuclei. QPOs with various characteristic time scales have been observed in the multi-wavelength emission of blazars, ranging from the radio to gamma-ray frequency bands. In this study, we carry out a comprehensive variability analysis of the BL Lac object 4FGL 2022.7+4216 detected by the _Fermi_-LAT, over a period of more than three years, from April 27, 2019 to August 09, 2022. By utilizing multiple widely-used methods of time-series analyses, we detect the presence of quasi-periodic fluctuations with a period of \(\sim\)100 days with a confidence level exceeding 4\(\sigma\). This is the first time such a variability feature pertaining to this source is being reported. We propose that the observed QPO may be related to the precession of the blazar jet with a high Lorentz factor or to the motion of a plasma blob through the helical structure of the jet. However, for a decisive conclusion on the physical origin of such fluctuation, further multi-wavelength complementary observations, especially Very Long Baseline Interferometric observations, would be required. keywords: BL Lacertae objects: individual (4FGL 2022.7+4216) - galaxies: active - galaxies: jet - methods: observational

## 1 Introduction

Blazars are a subclass of active galactic nuclei (AGN) with their relativistic jets pointed toward our line of sight. Blazar continuum emission exhibits rapid flux variability over a wide range of time scales, such that multifrequency variability studies offer significant insights into the location, size, and underlying physical processes of the emission regions. Moreover, as the high-energy \(\gamma\)-ray emission of blazars originates from their jets, the exploration of \(\gamma\)-ray variability provides clues regarding the jet dynamics of such systems. Even though blazar variability is often of a stochastic nature, such that the power spectral density can be fairly represented by a power law over a wide range of temporal frequencies (see Bhatta & Dhital, 2020; Sobolewska et al., 2014, and the references therein), there have been claims of the detection of quasi-periodic oscillations (QPO) in different wavebands with time scales ranging from a few days to years (e.g. Rani et al., 2009; Bhatta, 2019; Gupta et al., 2019; Bhatta, 2021). In particular, since the \(\gamma\)-rays in blazars are widely believed to originate from the highly collimated relativistic jets, QPO detection in \(\gamma\)-rays is crucial to infer the jet dynamics and particle acceleration mechanisms. It was, however, pointed out that many such detections last for only 2-4 cycles and that their significance is likely to be overestimated (Gupta, 2014). Owing to the continuous monitoring of the \(\gamma\)-ray sky by _Fermi_-LAT, a few significant detections of \(\gamma\)-ray QPOs have nevertheless been reported; namely, a 34.5-day QPO in PKS 2247-131 (Zhou et al., 2018), a \(\sim\)47-day QPO in 3C454.3 (Sarkar et al., 2021), and a \(\sim\)7.6-day QPO in CTA 102 (Sarkar et al., 2020). Through a systematic search for periodicities using a comprehensive likelihood estimation on a large sample of \(\gamma\)-ray detected blazars, Peñil et al. (2020) found only 11 sources showing strong signatures of periodicity at \(>4\sigma\) significance.
The BL Lacertae object 4FGL J2202.7+4216, at redshift \(z=0.069\), was detected in a high flux state on May 1, 2019, with a daily averaged flux reaching \((1.5\pm 0.2)\times 10^{-6}\) photons cm\({}^{-2}\)s\({}^{-1}\) by _Fermi_-LAT (Garrappa & Buson, 2019). Subsequently, on August 19, 2020, _Fermi_-LAT captured the source at a daily averaged flux of \((2.8\pm 0.2)\times 10^{-6}\) photons cm\({}^{-2}\)s\({}^{-1}\) (Ojha & Valverd, 2020). The \(\gamma\)-ray flaring activity coincided with optical brightening, where an R magnitude \(<12\) was registered by several optical monitoring campaigns (Grishina & Larionov, 2020; Jankowsky & Wagner, 2020; Steinke et al., 2020). Strong intra-night optical variability was detected by the Automatic Telescope for Optical Monitoring (ATOM) monitoring program (Jankowsky & Wagner, 2020). The detection of a 356 GeV photon with a strong possibility of being associated with the source has also been reported (Garrappa & Buson, 2019). Therefore, the source can be regarded as a plausible candidate for a very high energy (VHE) emitter. In this work, we present a study of the _Fermi_-LAT gamma-ray light curve of the BL Lac object 4FGL 2022.7+4216, spanning more than three years. By employing multiple methods of time-series analysis, we report the presence of a QPO with a characteristic timescale of \(\sim\)100 days. In Section 2, we provide an outline of the data acquisition and processing method used for the _Fermi_-LAT telescope, including relevant details. In Section 3, the analyses and results of the time-series methods, that is, the LSP, WWZ and auto-regressive methods, are presented. In Section 4, the result and its implications are discussed in the light of the standard model of AGN, and conclusions are summarized.

## 2 Observations and Data Reduction

A study of the source 4FGL 2022.7+4216 has been carried out using Fermi-LAT data. Fermi is a space-based gamma-ray observatory carrying two instruments: the Large Area Telescope (LAT) and the Gamma-ray Burst Monitor (GBM). The LAT has a large effective area of \(>8000\)\(cm^{2}\) at \(\sim 1\) GeV. It is a pair-conversion detector that mainly operates in the energy range 20 MeV - 300 GeV and has a FOV of 2.4 sr (Atwood et al., 2009), covering about 20% of the sky at any time, and the whole sky every three hours. In this study, the source was analyzed in the time domain MJD 58552-59847 and was selected with a 15\({}^{\circ}\) circular region of interest (ROI) centered at RA: 330.68 and Dec: 42.2778. The recommended Fermi science tools 'FERMITOOLS' package was used to analyze the Fermi-LAT data of the source. The data were obtained in the energy range 20 MeV - 300 GeV from the Fermi PASS 8 database and filtered using the gtselect FERMITOOLS tool with the constraints evclass=128 and evtype=3; to prevent contamination from the Earth limb, we set a criterion that the zenith angle should be less than 90\({}^{\circ}\). We filtered the data with the standard filter '(DATA_QUAL \(>0\)) && (LAT_CONFIG == 1)' using the gtmktime tool to obtain high-quality data in the good time intervals (GTIs). The integrated livetime as a function of sky position and off-axis angle, and the exposure, were computed using the gtltcube and gtexposure tasks respectively.
An unbinned likelihood analysis was performed using the gtlike tool (Cash, 1979; Mattox et al., 1996), which provided the significance of each source within the region of interest (ROI) in the form of test statistics (TS \(\sim\sigma^{2}\)). The 7-day binned light curve was obtained by integrating the source fluxes for the intervals where TS \(>25\) (above \(\sim 5\sigma\) significance; Figure 1). The Galactic and extragalactic diffuse \(\gamma\)-ray emissions were modeled using the two files gll_iem_v07.fits and iso_P8R3_SOURCE_V3_v1.txt, and the adopted instrument response functions were P8R3_SOURCE_V3, to obtain the spectrum of the source. The iterative fitting of the light curve was done using the ENRICO software (Sanchez & Deil, 2013). ## 3 Data Analysis & Results We adopted several quantitative tests to search for a QPO in the gamma-ray light curve of the source, namely the 'Weighted Wavelet Z-transform (WWZ)' method, the 'Lomb-Scargle Periodogram (LSP)' method, and the 'REDFIT' method. Below we describe the results obtained by the application of these methods. ### Lomb-Scargle Periodogram The LSP, introduced by Lomb (Lomb, 1976) and extended later by Scargle (Scargle, 1982), is a variant of the traditional discrete Fourier transform (DFT). However, it has a distinct advantage over the DFT in that, for uneven sampling, it reduces the effect of sampling irregularities by fitting sinusoidal waves directly to the data. We compute the LSP using the astropy LombScargle class by considering the minimum and maximum temporal frequencies to be \(1/T\) and \(1/2\Delta t\) respectively, where \(T\) is the total observation period and \(\Delta t\) is the time-binning. The resulting LSP shows a prominent peak at 0.01 days\({}^{-1}\), corresponding to a period of \(\sim\)100 days. However, it is quite well known that the variability observed in blazar light curves is also associated with underlying red noise, which can lead to apparent periodic behavior for a few cycles within the low-frequency regime (Vaughan, 2005). We, therefore, attempt to determine the statistical significance of the periodic feature using the method proposed by Emmanoulopoulos et al. (2013). We approximate the observed power spectrum by a power-law model and simulate 1000 light curves with the best-fit power spectral slope and the same flux distribution as the original light curve. The periodicity feature at 0.01 days\({}^{-1}\) is found to be \(>4\sigma\) significant (Top panel of Figure 2). Furthermore, it is conventional to test the presence of periodicity using the Generalized Lomb-Scargle Periodogram (GLSP)1 as well, which accounts for the measurement errors in the analysis. We obtain the periodicity with the same \(\sim\)100 day period using the GLSP as well, which further strengthens our claim. Footnote 1: [https://pyastronomy.readthedocs.io/en/latest/pyTimingDoc/pyPeriodDoc/gls.html](https://pyastronomy.readthedocs.io/en/latest/pyTimingDoc/pyPeriodDoc/gls.html) Statistically speaking, however, since we do not have a priori expectations of a particular periodicity, it is more rigorous to estimate the 'global significance', which is the fraction of surrogate light curves showing an LSP peak larger than the observed confidence level at any frequency value. In this way, it is ensured that the peaked component is searched over a larger frequency interval, and consequently, we are sampling from a larger population of 'false positive' signals. A minimal sketch of this significance procedure is given below.
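The sketch assumes arrays `t` and `flux` holding the binned light curve and a fitted power-law slope `alpha`; for brevity it simulates pure power-law noise with the Timmer & König (1995) recipe rather than the full Emmanoulopoulos et al. (2013) scheme, and evaluates the simulated series on the observed time stamps.

```python
# Hedged sketch: LSP of the observed light curve and a simulation-based
# estimate of the global significance. `t`, `flux`, `alpha` are assumed given.
import numpy as np
from astropy.timeseries import LombScargle

def simulate_powerlaw(alpha, n, dt):
    """Evenly sampled series with PSD ~ f^-alpha (Timmer & Koenig 1995)."""
    f = np.fft.rfftfreq(n, d=dt)[1:]                 # drop the zero frequency
    amp = f ** (-alpha / 2.0)                        # amplitude ~ sqrt(PSD)
    re = np.random.normal(size=f.size)
    im = np.random.normal(size=f.size)
    spec = np.concatenate(([0.0], amp * (re + 1j * im)))
    return np.fft.irfft(spec, n=n)

T = t.max() - t.min()
dt = np.median(np.diff(t))
freqs = np.linspace(1.0 / T, 1.0 / (2.0 * dt), 1000)  # 1/T to 1/(2*dt)
obs_power = LombScargle(t, flux).power(freqs)

peak_max = []
for _ in range(1000):
    sim = simulate_powerlaw(alpha, t.size, dt)
    # rescale to the observed mean and variance before computing the LSP
    sim = (sim - sim.mean()) / sim.std() * flux.std() + flux.mean()
    peak_max.append(LombScargle(t, sim).power(freqs).max())

# global significance: fraction of surrogates whose highest peak at ANY
# frequency stays below the observed maximum
global_significance = np.mean(obs_power.max() > np.array(peak_max))
```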
If we do not restrict the peak to reside within a particular frequency range, then we find that the peaked component at 100 days is associated with \(>99\%\) global confidence. ### Weighted Wavelet Z-Transform The standard LSP method has the limitation that it attempts to fit the sinusoidal profile across the entire domain of observation and does not account for the fact that features in real astrophysical observations could be time-dependent, i.e. the amplitude and frequency could evolve over time. Therefore, in order to characterize the periodicity features and their evolution, the wavelet transform method turns out to be a more suitable tool; it convolves the light curve with a time- and frequency-dependent kernel and attempts to localize the periodicity feature in time and frequency. For the purpose of our analysis, we use the Morlet kernel (Grossmann & Morlet, 1984) with the functional form: \[f[\omega(t-\tau)]=\exp[i\omega(t-\tau)-c\omega^{2}(t-\tau)^{2}] \tag{1}\] and the WWZ map is given by, \[W[\omega,\tau:x(t)]=\omega^{1/2}\int x(t)f^{*}[\omega(t-\tau)]dt \tag{2}\] We use publicly available software2 to estimate the WWZ power as a function of frequency and time. In the time-frequency plane, the color-scaled WWZ map demonstrates a decipherable concentration of power around 0.01 days\({}^{-1}\), which is apparent in the average WWZ power as well (Bottom left panel of Figure 2). We estimate the significance of this peak WWZ power in the same way, by simulating 1000 light curves according to the best-fit PSD model and resampling the simulated light curves according to the sampling of the source light curve. The peak significance was found to be \(>4\sigma\) here as well (Bottom right panel of Figure 2). Figure 2: Results from the LSP and WWZ methods. ### Redfit In the REDFIT method, the unevenly spaced time series data are fitted with a first-order autoregressive process (AR1), avoiding interpolation and its inevitable bias in the time domain (Schulz & Mudelsee, 2002). This method is also used to test the significance of the flux peaks in a time series against the background of red noise in the first-order autoregressive process. The usage of the AR1 process is justified by the autoregressive nature of the emission flux of blazars (Schulz & Mudelsee, 2002; Kushwaha et al., 2020), where the current emission flux is dependent on its previous flux state. We use the REDFIT program to estimate the spectrum using LSP (Lomb-Scargle periodogram) and WOSA (Welch-overlapped-segment-averaging) procedures with the number of WOSA segments \(n_{50}=1\); a Welch window was chosen to reduce spectral leakage. The bias-corrected power spectrum alongside the theoretical and simulation-generated AR(1) processes is provided in Figure 3. It shows a prominent peak at about 100 days with a confidence level of 99% (red curve), which is the maximum significance provided by the REDFIT program. ## 4 Discussion and Conclusions In this work, we report the detection of a \(\sim\)100-day periodicity pertaining to the \(\gamma\)-ray detected BL Lac object 4FGL J2202.7+4216 using three different methods of time series analysis. Below we illustrate a few physical scenarios which can give rise to periodic behaviour of the \(\gamma\)-ray light curve and thereby infer the plausible mechanisms operative in our context on the basis of the time scale.
* In the case of binary supermassive black hole (SMBH) systems, if the secondary black hole pierces the accretion disk of the primary black hole, a QPO may be observed as a consequence of the impact flashes (Valtonen et al., 2008). Such a QPO with a period of \(\sim\)12 years has been reported earlier in the context of OJ 287 (Valtonen et al., 2008). However, the time scale corresponding to this process is \(\sim\) years, and, therefore, the present detection of a 100-day periodicity is difficult to explain under the purview of this scenario. * Since blazar emission is dominated by jets, it is quite likely that the QPO signature will have some bearing on the jet emission features. The precession of blazar jets could be induced by the interaction with a secondary source and could manifest itself as quasi-periodicity owing to the varying Lorentz factor (Begelman et al., 1980). Furthermore, the orientation of the jet could also be influenced by the Lense-Thirring precession of the inner edge of the disk. However, such mechanisms typically result in periodicity on \(\sim\) 1-2 year time scales (Rieger, 2007) and fall outside the ballpark of the present detection. However, for blazars with jets closely aligned with our line of sight, the detected periodicity could be significantly shorter because of the Doppler boosting effect (Rieger, 2004). Such a mechanism has been invoked to explain the 34.5-day QPO in the case of the BL Lac PKS 2247-131 (Zhou et al., 2018). In our case, therefore, such a mechanism could be operative if the jet is associated with large Lorentz factors. * The variable Doppler factor arising from the movement of a plasma blob along the internal helical structure of the jet could be another plausible physical mechanism of jet-driven quasi-periodicity (Camenzind & Krockenberger, 1992). Such helical structures could come from the interaction of the jet with the surrounding medium (Godfrey et al., 2012) or from hydrodynamic instability effects (Hardee & Rosen, 1999). Depending upon parameters like the pitch angle, the viewing angle, and the Doppler boosting factor, the variability time scale can range from \(\sim\) a few days to \(\sim\) months (Rani et al., 2009). The observed 100-day periodicity could be a result of such structural effects of the jet. Given the time scale of the QPO, we infer it is most likely originating from a precessing jet with a high Lorentz factor or from the motion of a plasma blob along a curved jet. In the case of the one-zone leptonic model where the plasma blob moves along the postulated helical trajectory, the time dependence of the viewing angle will cause a varying Doppler factor and consequent intensity variation even without intrinsic changes in the jet emission. The time dependence of the viewing angle reads as \[\cos\theta_{\rm obs}(t)=\sin\psi\sin\phi\,\cos(2\pi t/P_{\rm obs})+\cos\psi\cos\phi \tag{3}\] where \(\psi\) is the jet angle relative to our line of sight and \(\phi\) is the pitch angle of the blob. \(P_{\rm obs}\) is the observed periodicity (Sobacchi et al., 2016). The Doppler factor varies with time as \(\delta(t)=1/(\Gamma(1-\beta\cos\theta_{\rm obs}(t)))\), where \(\Gamma=1/\sqrt{1-\beta^{2}}\) is the bulk Lorentz factor of the jet motion and \(\beta=v/c\), with \(v\) the jet speed. The periodicity in the rest frame of the blob is estimated by \[P_{\rm rf}=\frac{P_{\rm obs}}{1-\beta\cos\psi\cos\phi} \tag{4}\] For typical values of \(\phi=2^{\circ}\), \(\psi=5^{\circ}\) and \(\Gamma=8.5\), the rest frame periodicity becomes \(\sim\)24 years for \(P_{\rm obs}\sim 100\) days as we have obtained in our case. During one period, the blob traverses a distance \(D=c\beta P_{\rm rf}\cos\phi\sim 7\) parsec. Since the prominent QPO signature of \(\sim\)100 days is observed throughout the entire domain of \(\sim\)1200 days in the WWZ map, we expect roughly \(\sim\)12 cycles of oscillations during this period. Therefore, the projected distance during these 12 cycles is estimated as \(D_{P}\sim 12D\sin\psi=7.2\) pc. Such parsec scale helical jets have been found in the case of several other blazar sources as well (Vicente et al., 1996; Tateyama et al., 2002).
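As a quick sanity check, the numbers quoted above follow directly from Equations 3 and 4; the short sketch below reproduces them (the small offset from the quoted 7.2 pc comes from rounding \(D\) to 7 pc in the text).

```python
# Numerical check of Eq. 4 and the derived distances, using the
# parameter values adopted in the text.
import numpy as np

P_obs = 100.0                                  # observed period [days]
phi, psi = np.radians(2.0), np.radians(5.0)    # pitch and viewing angles
Gamma = 8.5                                    # bulk Lorentz factor
beta = np.sqrt(1.0 - 1.0 / Gamma**2)           # jet speed in units of c

P_rf = P_obs / (1.0 - beta * np.cos(psi) * np.cos(phi))  # Eq. 4 [days]
c = 1.0 / (3.26156 * 365.25)                   # speed of light [pc/day]
D = c * beta * P_rf * np.cos(phi)              # distance per cycle [pc]
D_p = 12 * D * np.sin(psi)                     # projected distance, ~12 cycles

print(P_rf / 365.25, D, D_p)                   # ~24.2 yr, ~7.3 pc, ~7.7 pc
```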
Figure 3: Power spectrum and the significance levels using the REDFIT method. The blue and magenta curves denote the theoretical AR(1) spectrum and the mean simulated AR(1) spectrum respectively. The green, orange and red curves represent the 90, 95, and 99% significance levels respectively. Roy et al. (2021) considered that the inclination angle of the jet axis relative to the line of sight (\(\psi\)) could be time-dependent, which can explain the time variation of the amplitude of oscillation. Therefore, the amplitude variation would then be a direct offshoot of the geometric bending structure of the jet. However, a more comprehensive investigation, including detailed very long baseline interferometry (VLBI) monitoring, needs to be undertaken to confirm the presence of such curvature within a length scale of 7 pc. ## Acknowledgements This study made use of the _Fermi_-LAT data, obtained from the Fermi Science Support Center (FSSC), distributed by NASA's Goddard Space Flight Center (GSFC). D. Bose acknowledges the support of Ramanujan Fellowship-SB/S2/RIN-038/2017. A. Sharma and A. Mandal are grateful to S. N. Bose National Centre for Basic Sciences under the Department of Science and Technology (DST), Government of India, for providing the necessary support to conduct this research. ## Data Availability The work uses publicly available data from _Fermi_-LAT.
2309.01055
Integration of Vision-based Object Detection and Grasping for Articulated Manipulator in Lunar Conditions
The integration of vision-based frameworks to achieve lunar robot applications faces numerous challenges such as terrain configuration or extreme lighting conditions. This paper presents a generic task pipeline using object detection, instance segmentation and grasp detection, that can be used for various applications by using the results of these vision-based systems in a different way. We achieve a rock stacking task on a non-flat surface in difficult lighting conditions with a very good success rate of 92%. Eventually, we present an experiment to assemble 3D printed robot components to initiate more complex tasks in the future.
Camille Boucher, Gustavo H. Diaz, Shreya Santra, Kentaro Uno, Kazuya Yoshida
2023-09-03T02:18:35Z
http://arxiv.org/abs/2309.01055v1
# Integration of Vision-based Object Detection and Grasping for Articulated Manipulator in Lunar Conditions ###### Abstract The integration of vision-based frameworks to achieve lunar robot applications faces numerous challenges such as terrain configuration or extreme lighting conditions. This paper presents a generic task pipeline using object detection, instance segmentation and grasp detection, that can be used for various applications by using the results of these vision-based systems in a different way. We achieve a rock stacking task on a non-flat surface in difficult lighting conditions with a very good success rate of 92%. Eventually, we present an experiment to assemble 3D printed robot components to initiate more complex tasks in the future. ## I Introduction The ability to observe and understand the environment has been extended to robotic systems with artificial intelligence, and machine learning has demonstrated its use in achieving impressive outcomes in various fields, including image and data processing for robotic applications. Imitating the human ability to detect and grasp any sort of object has posed challenges for applications such as transporting large objects, construction and automation. The combination of machine vision and robotics to replicate such grasping requires precise target detection, localization and manipulation. This paper aims to tackle this challenge in lunar conditions, with limited lighting, various craters, environment changes and irregular objects like in Fig. 1. A lot of unpredictable situations can occur in a lunar mission without any possibility of human assistance. The robots must achieve missions of exploration, scientific experiments, construction, etc., using accurate and robust neural networks. We aim to demonstrate that a generic software architecture using the vision-based neural networks YOLOv8 (You Only Look Once) [1] and GPD (Grasp Pose Detection) [2], as shown in Fig. 2, can be used to achieve numerous applications like rock stacking or autonomous robot assembling, and this by just modifying the dataset and the manipulation pipeline. The goal is, therefore, not to improve the existing neural networks but to integrate and use them efficiently to perform the aforementioned tasks. We will first establish the state of the art on the different elements of the generic pipeline: object detection, instance segmentation and grasp detection. A brief overview of the system used for this paper will be given in the second part. Then, we will present our vision-based frameworks: YOLOv8 used on custom datasets, and GPD. Subsequently, we will explain the ROS integration and two applications - rock stacking and robot assembling - obtained by just changing the way the results from the vision-based system are used. Eventually, our experiments will be presented, followed by an analysis and the outline for the future. ## II State of the Art ### _Object Detection_ In visual object recognition, the use of Convolutional Neural Networks (CNN) has led to new challenges. The detectors can be classified into two categories: two-stage or regional-proposal-based algorithms and single-stage ones. One-stage frameworks have the advantage of processing the entire image in a single pass, making them more computationally efficient and better suited for real-time detection [3]. In 2015, J. Redmon _et al._ presented a new one-shot framework, YOLO [4].
J. Terven _et al._ analyzed YOLO's evolution, examining the innovations and contributions in each iteration from the original YOLO to the new version YOLOv8 of January 2023 [5]. The first version performed faster than any existing object detector but the localization error was larger compared with state-of-the-art methods such as the region-based Fast R-CNN [6]. Through the years, YOLO has evolved to stand out as a state-of-the-art real-time object detection framework with its remarkable balance of speed and accuracy. Fig. 1: Robot-xArm7, gripper with camera and second camera fixed at the base, stacking rocks in a moon-like environment. It has then been used in numerous fields such as autonomous vehicles with object tracking, like pedestrians [7] and other obstacles [8], the surveillance and security fields [9] or the medical field with cancer detection [10]. D. Reis _et al._ demonstrated the use of the latest version YOLOv8 for detecting flying objects in real time in a challenging environment [11]. ### _Instance Segmentation_ Along with the object detection challenge, semantic segmentation and instance segmentation are also much-discussed problems. While object detection aims to classify and give the location, the goal of semantic segmentation is to label every pixel into a class according to the region within which it is enclosed. A. M. Hafiz _et al._ defined the instance segmentation problem as the task of providing simultaneous solutions to object detection as well as semantic segmentation [12]. They reviewed in 2020 the evolution of instance segmentation up to Mask R-CNN [13], YOLACT [14] and TensorMask [15]. As for object detection, the one-shot models are said to be faster than the two-stage ones, and therefore more suitable for real-time use. ### _Grasping Detection_ To allow robots to achieve various tasks and reproduce human behaviours, the challenge of reliably grasping and handling objects, like household items, mechanical parts or packages, is extremely important. The research on robotic systems for manipulation tasks has mainly focused on human-robot interaction, and at first, systems were lacking in the autonomous part of grasping and placing an unknown object in an unstructured environment. Mahler _et al._ proposed DexNet, a grasp system with a 93% grasp success rate, which takes depth images as input and gives grasps in the plane as output, i.e. with a single degree of orientation freedom around the gravity axis [16]. Morrison _et al._ [17] and Viereck _et al._ [18] studied the problem of grasping dynamically moving objects and proposed closed-loop systems with grasp success rates of 83% and 88.9%. Gualtieri _et al._ proposed the GPD framework [19][2] that takes point cloud data as input and produces 6-DOF grasp poses as output. Their system incorporates a new method for generating grasp hypotheses that, relative to prior methods, does not require a precise segmentation of the object to be grasped and can generate hypotheses on any visible surface. Their system gives very good results, especially for dense environments, with a grasp success rate of 89%. In the final step of their work, they also discussed the idea of combining object and grasp detection. They made experiments on household objects, but only evaluated the accuracy of the object detection on the grasped objects; the grasping strategy was not based on the object detection results as proposed here.
## III System Overview Our robotic system comprises an articulated arm xArm7 (7-DOF) from UFactory. It is equipped with a parallel gripper with two fingers; the robotic arm is fixed on a table next to the sandbox. The vision system is made up of two Intel RealSense D435 cameras which retrieve RGB-D (color and depth) images. To recreate lunar-like conditions with uneven surfaces we use sand and an artificial light source as shown in Fig. 1. For the manipulations we use various irregular objects such as polystyrene rocks and 3D printed robot components like head, body, joint, etc. (Fig. 4). These objects are challenging because of their irregularities in shape, size, color and weight. The computer used is equipped with an Intel i9-13900KF CPU with 24 cores and an NVIDIA GeForce RTX 4090 GPU with 24 GB of memory. The software system shown in Fig. 2 is comprised of low-level and medium-level packages, like MoveIt, for the controls, motion planning, etc., and a high level with various applications like object detection or robot assembling. ## IV Vision-based frameworks ### _YOLOv8_ To perform object detection and instance segmentation, we choose YOLOv8 [1], whose new architecture is well summarized by J. Terven _et al._ [5]. Solawetz _et al._ explained the improvements over the previous versions, such as the anchor-free detection and the new convolutions [20]. This version has a high mean Average Precision (mAP) (respectively 50.2 and 53.9 mAP50-95 for its medium model and larger one) while maintaining a low inference time on the COCO (Common Objects in Context) dataset [21]. Another positive highlight is that YOLOv8 can be used both with a command line interface and with a PIP package, which is very useful for ROS integration and for all the tasks like training, validation, prediction, etc. ### _Custom dataset and YOLOv8 training_ In order for our object detection results to be applicable in lunar robotic applications, the model needs to perform efficiently in a challenging environment with shadows, high exposure, occlusion, and on miscellaneous objects such as robot parts, screws, bolts, various types of rocks, etc. The construction and training of a custom dataset, considering the identified complexities, are as important as the model choice to achieve highly accurate results. Fig. 2: Generic software architecture based on vision-based frameworks. To better highlight the importance of a custom dataset, especially considering the lighting in a lunar scenario, we compare the mean Average Precision of two models. The first one is YOLOv8m, the YOLOv8 medium model, trained on coco128 [22], a sub-dataset of 128 images from the COCO dataset. The second is YOLOv8m trained on a custom dataset. We add to the coco128 images 62 new pictures of 6 objects (bottle, laptop, mouse, scissors, spoon and fork) in more complex lighting conditions than in the original dataset. Example images from these datasets are shown in Fig. 3, where three objects are shown (scissors, mouse and bottle). For the validation, we ensure to use objects different from the ones used for training, and different lighting conditions. We also make different validation sets by adding several augmentation steps which degrade image quality. We can see in Table I that even with the more complex validation set (brightness, exposure and 10% of noise) the model trained with the custom dataset outperforms the original one.
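Such a comparison can be run directly through the YOLOv8 PIP interface; the following is a minimal sketch in which the weight and dataset YAML paths are placeholders.

```python
# Hedged sketch of the two-model comparison, assuming the ultralytics
# PIP package; all file paths below are placeholders.
from ultralytics import YOLO

baseline = YOLO('yolov8m_coco128.pt')                # trained on coco128
custom = YOLO('runs/detect/custom/weights/best.pt')  # trained on custom data

for model in (baseline, custom):
    # evaluate on the degraded validation set (brightness/exposure/noise)
    metrics = model.val(data='hard_validation.yaml')
    print(metrics.box.map)                           # mAP50-95
```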
Therefore, during the construction of our datasets, we take particular care to include a wide variety of images in different lighting and exposure conditions, occluded and cropped objects, different colors, sizes and shapes, etc. For the different applications, we constructed two main datasets, each with different challenges. The first contains imitation rocks made of polystyrene, with the main challenges being the lighting conditions and uneven surfaces, Fig. 4 (a) and (b). During the dataset creation, the sand, the lighting and the exposure conditions were modified. We also add different augmentation steps: +/- 25% of brightness and exposure and 5% of noise. The augmentation enables the dataset to be artificially enlarged using label-preserving transformations to reduce over-fitting on image data. We include real rocks in the validation to get more reliable accuracy results. Instance segmentation will be performed on this dataset; the validation results will be presented in Section VI. The second dataset contains 3D printed robot components - head, body, joint, legs, etc. (Fig. 4 (c)) - with two main challenges. The first one is low inter-class variance: components which look very similar to each other compared to the rest of the labels; for example, the difference between a joint and a body joint consists in being attached or not to a robot body. The second challenge is overlapping classes and occlusions. Using the YOLOv8 results we are able to recognize the robot components and determine their state: if the algorithm detects a body, it will need to get its associated joints; furthermore, if a body joint is detected it needs to be classified as available or not (i.e., whether another part is already attached to it), and likewise. The labelling rules are very important and need to be defined before the annotations; in this dataset, for example, we decide to define a leg as a joint plus a foot, and a body as the main body part plus its body joints, as we can see in Fig. 4. After creating the datasets we train them on YOLOv8 models. The robot dataset is composed of 528 images with 12 classes, which are split into 90% as the training set and 10% as the validation set. We train over different numbers of epochs to detect the moment where the model stops improving and begins overfitting. For this model it is around 50 epochs (about 3 times the number of classes). We also decide to keep the original training hyper-parameters: since the dataset is not very large, we do not want to over-fit the model by tuning the parameters. The hyper-parameters are: batch size of 16, AdamW as the optimizer, momentum of 0.937, weight decay of 0.0005 and learning rate of 0.000667. We perform the training of the small, medium and large YOLOv8 models and then evaluate to determine an optimal trade-off between inference speed and mAP50-95. We can see in Table II a noticeable mAP50-95 increase between the small and the medium model but not a significant improvement between the medium and the large. Fig. 4: Custom datasets and YOLOv8 predictions: (a) moonrock segmentation, mask deformed by strong exposure; (c) robot components detection. Fig. 3: Object detection on bottles, mice and scissors in difficult lighting. On the validation set all the models have an average total speed (pre-process + inference + post-process) under 10 milliseconds/image, which fits perfectly with detection in real time. Regarding the results, we decide to choose the medium model YOLOv8m.
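A training run with the hyper-parameters quoted above can be launched in a few lines with the same PIP package; the dataset YAML path below is a placeholder.

```python
# Minimal training sketch with the hyper-parameters quoted in the text,
# assuming the ultralytics PIP package; the dataset path is a placeholder.
from ultralytics import YOLO

model = YOLO('yolov8m.pt')             # medium model, pretrained weights
model.train(
    data='robot_components.yaml',      # placeholder custom dataset file
    epochs=50,                         # where over-fitting starts for us
    batch=16,
    optimizer='AdamW',
    lr0=0.000667,
    momentum=0.937,
    weight_decay=0.0005,
)
metrics = model.val()                  # mAP50-95 on the validation split
```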
We test on different SDR videos and we observe a total speed of 0.71 + 7.34 + 0.89 = 8.94 milliseconds, i.e. more than 60 fps (frames per second), which is consistent with real-time use. ### _GPD configuration and tuning_ For the object manipulations, the results obtained with YOLOv8 are not enough and we need to improve the grasping strategy. We integrate the GPD package for the grasp detection for several reasons; firstly, because it can be easily integrated into ROS with a package in C++ and Python. Moreover, since GPD operates with point cloud input, we can easily manipulate this point cloud using the results obtained from object detection. In addition, since GPD is not limited to detecting planar grasps, it can more easily generate side grasps, which can be needed for some rocks or robot components; it thus better ensures autonomous grasping in any kind of situation. The GPD library allows configuring several parameters related to i) the geometry of the gripper and grasp descriptor, ii) pre-processing of the point cloud, iii) grasp candidates generation, iv) filters and selection. For the hand geometry, we first test with the actual dimensions of our UFactory gripper; however, we find that we can get more successful grasps using smaller dimensions, according to the size of the objects. This also reduces the computation time, since the grasp descriptor is based on the volume inside the gripper. For ii), we set the workspace parameters to match the field of view of the point cloud from the _pre-grasp position_ in order to generate candidates only around the observed object of interest. It is also necessary to set the _sample_above_plane_ parameter to filter out candidates on the table plane. For iii), the first important parameter is the _hand_axes_ defining the main axis of the generated candidates. We select a vertical orientation that facilitates the actual planning and execution of the trajectory. Second are the _number of orientations_ and _number of samples_ generated around the selected axis; we find that 5 orientations and 100 samples are sufficient to find valid grasp candidates in real time. For iv), we enable the _filter by approach direction_ in the z axis, again to facilitate the planning and execution; setting the number of selected grasps to 20 is also enough to ensure a real-time selection of valid candidates. ## V Integration on xArm7 ### _Rock stacking in a moon-like environment_ In this part, we focus on the use of the vision-based framework results for the rock stacking task. For lunar exploration, we want robots to be able to autonomously recognize interesting rocks and pick and place them for analysis. We also want the robots to achieve construction tasks. Therefore, our first application is to autonomously stack small and medium rocks in these lunar conditions. We use our vision-based general framework shown in Fig. 2 for specific sub-tasks as described in Fig. 5. The first step is to classify all the detected rocks by size to stack them in decreasing order; the sorting is done using the area of the object's mask given by the instance segmentation. Next, we move from pure detection to a real application. Indeed, the instance segmentation gives results in pixels but the xArm moves in the real world. To obtain usable data, we deproject the pixel results to point coordinates in millimeters (mm) using the depth information and the intrinsic parameters of the RealSense camera. The intrinsic matrix K contains the focal lengths and the principal point.
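The deprojection itself follows the standard pinhole model; the sketch below illustrates it with placeholder intrinsics (in practice, pyrealsense2 offers the equivalent rs2_deproject_pixel_to_point() helper, with the intrinsics read from the camera at runtime).

```python
# Pinhole-model deprojection of a pixel to camera-frame coordinates.
import numpy as np

def pixel_to_point(u, v, z, fx, fy, cx, cy):
    """Deproject pixel (u, v) with depth z [mm] into camera-frame X, Y, Z."""
    x = (u - cx) * z / fx    # fx, fy: focal lengths from the matrix K
    y = (v - cy) * z / fy    # cx, cy: principal point from the matrix K
    return np.array([x, y, z])

# Illustrative call with placeholder intrinsics and pixel values.
point_cam_mm = pixel_to_point(u=412, v=305, z=640.0,
                              fx=615.0, fy=615.0, cx=320.0, cy=240.0)
```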
We eventually transform these coordinates from the camera frame to the xArm's frame using the TF ROS package to retrieve the final coordinates in the robot's workspace (Fig. 6). Now that we can deproject specific points from pixels to coordinates, we can grasp the rock. To perform a better grasping, with oriented rocks for instance, we use the GPD package described in the previous section. The final step is to actually stack the rock. After grasping, a new instance segmentation is performed from the _eye on base_ camera's frames and the deprojection process can be repeated to calculate the height. The xArm is sent to the determined final position, with an accurate \(z\) value given the rock height and the depth of the stacking point. ### _Modular robot model assembly task_ The goal of this task is to assemble the prototype of our modular robot model composed of the modular components shown in Fig. 4(c). This prototype was selected to test our algorithms and demonstrate the challenging task of grasping. In order to implement the assembly using our general framework integrating GPD and YOLOv8 for task planning, we need to implement the specific sequence and use the specific modules described in Fig. 5. Fig. 5: Rock stacking and assembly tasks vision-based frameworks. The main steps for this task are the _Get object pose_ that implements the call to the _Object2workspacePose_ class to retrieve the object pose in the workspace, Fig. 7 a).
We measured an average alignment error (distance from bottom rock's center to the top one) of 25 mm. To correct this error, we should use the second camera to get a feedback of the grasp and maybe track the rock while it is stacking up. The rock-stacking task in a moon-like environment faces several challenges. The first is to provide highly accurate results in object detection and instance segmentation in difficult light conditions (strong light variations, shadow occlusions or exposure and brightness); we show that constructing a custom dataset and training the YOLOv8 model on it can overcome these difficulties. Then, we manipulate irregular rocks so we perform instance segmentation to sort the masks' area to get an accurate size classification and we integrate GPD package in the grasping strategy to autonomously grasp almost any kind of rock. Finally, we tackle the non flat surface challenge, making the rocks' height calculation difficult, by introducing a second camera to perform instance segmentation. ### _Robot assembling_ For this task we aim to assemble the robot model shown in Fig. 1, that consist of the parts shown in Fig. 4c). The success of the assembling depends mainly on three factors, associated to the main steps presented on Fig. 7, i) the accuracy and stability of the object pose detection -Table IV-, ii) the success of the selected grasp -Table V- and iii) the visibility of the grasped joint -Table V-, which depends on the actual grasped orientation. We evaluate these factors separately for the head and the leg objects. For i) we put the objects in a fixed position and recorded the position for 144871 samples, and calculated the standard deviation for every coordinate as shown in Table IV. We can see that for the head and legs, the maximum error is 0.37 mm, which is pretty accurate to define the grasp trajectory. However, for the body joints the maximum error is 36.16 mm; this is due Fig. 6: Scheme of the _pixel to point_ process. Fig. 7: Main steps for the robot assembly task. to the small size of the joints and the noise in the depth frame used for the de-projection of the pose. To evaluate the accuracy of the selected grasps, we repeat the grasping sequence from different initial positions of the objects, the results are shown in Table V. The success in this step is very dependent on the selected grasp, which due to the GPD implementation is inherently stochastic. More fine tuning of the parameters can be done for a specific object but it will affect the performance of other objects. For the evaluation of the visibility of the grasped joint, after a successful grasp of the object, we send the _eef_ to the fixed _pre-assembly_ position and calculate the success ratio of pose detection, the results are shown in Table V. The main challenge of this task is to be able to detect and manipulate small objects, as well as detecting the target positions for assembling. We approach this problem using a real-time system that allows us to calculate several validity checks to improve the success ratios of the assembly pipeline. Another challenge for this experiment is the non-optimal trajectories generated by MoveIt for some cases, which we solve by planning directly in the joint space for those particular cases. We partially tackle occlusion problems by having two cameras, however, there are still limitations particularly in assembling the legs. We plan to improve this by using a camera on a second arm in the near future. 
The final critical issue is the limitation of YOLOv8 to provide non-oriented bounding boxes, which is required for a more precise assembly. We plan to do more post-processing of the YOLOv8 results in order to estimate the angle. Even though we use a very simple and small robot model, we achieved high success rates in the assembly process, so for the future assembly tasks of real robots and bigger parts we expect to improve our results. ## VII Conclusion This paper is the first milestone for the integration of our vision-based systems on robotic manipulators for lunar applications. The explanations of our software framework, its integration for xArm7 and our experiments demonstrate how an integration of the same vision-based software can be used for various robotic applications. The results of these vision-based frameworks can be used in many other ways to improve the performance. For instance, using real-time instance segmentation for tracking the pose of objects for a better manipulation, like stacking or assembling. The next achievement is to autonomously assemble a full-scale modular robot. In addition to the presented software, additional features can be introduced such as a second arm, assembly sequence planning as well as communication between several robots.
2301.05411
Do the Defect Prediction Models Really Work?
You may develop a potential prediction model, but how can I trust your model that it will benefit my software?. Using a software defect prediction (SDP) model as a tool, we address this fundamental problem in machine learning research. This is a preliminary work targeted at providing an analysis of the developed binary SDP model in real-time working environments.
Umamaheswara Sharma B, Ravichandra Sadam
2023-01-13T06:53:04Z
http://arxiv.org/abs/2301.05411v1
# Do the Defect Prediction Models Really Work? ###### Abstract "You may develop a potential prediction model, but how can I trust your model that it will benefit my software?". Using a software defect prediction (SDP) model as a tool, we address this fundamental problem in machine learning research. This is a preliminary work targeted at providing an analysis of the developed binary SDP model in real-time working environments. Software Defect Prediction, Machine Learning, Probabilistic Bounds, Real-time Analysis. ## I Introduction Due to the rapid development of complex and critical software systems, testing has become a tough challenge. Software defect prediction (SDP) models are being developed to alleviate this challenge [1, 2, 3, 4, 5, 6]. The primary objectives in developing SDP models are to reduce the testing time, cost, and effort to be spent on the newly developed software project [7]. The task of SDP models is to predict the defect proneness of newly developed software modules. Once an efficient SDP model has been developed, any organisation may utilise its services. However, it is evident from the machine learning (ML) literature that, in general, the developed prediction models may produce misclassifications on unseen data [8]. As a result of either a misclassification (from the prediction model) or ineffective testing, the occurrence of any malfunction in the software modules may cause problems ranging from inconvenience to the loss of life [9]. It is more likely that a software fails when the prediction model wrongly predicts a defective module. To know how feasible the SDP models are in real-time working environments, we provide a theoretical analysis using probabilistic bounds. In a nutshell, the proofs proceed by computing the deviation of a random variable (which is modeled as the hazard rate of a software that utilises SDP models) far from the estimated hazard rate of a manually tested software. Additionally, the proofs are also provided in terms of the measure called reliability. ## II Preliminaries We begin by characterizing the chances of failures in the system arising from the predictions of SDP models. There are many ways a system can fail [9, 10], of which the primary possible instance is when a defective module is predicted as clean. In such cases, in real-time working environments, the tester may miss the defective module. Now, the following assumption ensures failure incidents from each false negative module on the test set: **Assumption 1**.: _Misclassification of each defective module can cause one failure in the software._ This assumption enables us to count the total failures on the test set and on the newly developed project. Since general testing procedures do not prompt all the defects [10], the following assumption ensures the presence of failures in any software that is tested by using SDP models: **Assumption 2**.: _The integration test, system test, or acceptance test do not prompt the defects for the misclassified defective modules._ Now, to measure the percentage of occurrences of the failure cases on the test set, we use a measure called the false omission rate (FOR). The FOR is the ratio of the total number of false negatives over the total number of predicted clean modules.
This is given as: \[FOR=\frac{\text{False Negatives}}{\text{Predicted Cleans}}=\frac{\text{FN}}{ \text{FN+TN}} \tag{1}\] Since only predicted clean modules may contain hidden defects, the measure FOR is well suited to estimating the percentage of failure occurrences on the test set. However, in real-time testing, FNs do not provide sufficient information about the failures in software because the actual class label for a predicted clean module is unknown. Hence, we model the actual class for each predicted clean module as a random variable. For any newly developed software \(S=\{M_{1},M_{2},\cdots,M_{n}\}\) with \(n\) modules, let us assume \(l\) (\(l\leq n\)) modules are predicted as being from the clean class. Now, the following random variable is used to represent the failure case arising from a wrongly predicted defective module, \(M_{i}\): \[X_{i}=\begin{cases}1,&\text{if the module $M_{i}$ is classified wrongly as clean}\\ 0,&\text{if the module $M_{i}$ is classified correctly as clean}\end{cases} \tag{2}\] To guarantee that, for any module \(M_{i}\), \(X_{i}\) takes a value in \(\{0,1\}\) with an identical probability, the following assumption must hold true. **Assumption 3**.: _The SDP model is trained on the historical data of the software project(s)._ In general, SDP models are developed on the historical data of software projects, assuming similar data distributions for the training set, test set, and the population set [1, 2, 3, 4, 5, 6, 7]. From Assumption 3, since the SDP model does not change dynamically, each predicted clean module goes into the wrong class with the same FOR value. That is, the FOR is treated as the probability that each predicted clean module may actually belong to the defective class. This is given as: \[p=\textit{FOR} \tag{3}\] This probability is used to define the failure distribution of the software project. Hence, from Equations 2 and 3, the probability distribution of the random variable \(X_{i}\) is represented as: \[\textbf{Pr}[X_{i}=1]=p,\text{ and, }\textbf{Pr}[X_{i}=0]=1-p,\text{ for }1 \leq i\leq l.\] Now, to count the total failure instances from the prediction model, the following assumption ensures independence between the tested modules: **Assumption 4**.: _The SDP model provides predictions for independent observations (software modules)._ In fact, all the SDP models assume independence between the data points [1, 2, 3, 4, 5, 6, 7]. Since each predicted clean module has an identical probability, each prediction becomes a Bernoulli trial [11]. Now, the sum of \(l\) identical Bernoulli trials follows a binomial distribution [11, 12]. This is given as: \[X=\sum_{i=1}^{l}X_{i} \tag{4}\] Now, the mean of the random variable \(X\) is derived as follows (using linearity of expectation): \[\mathbb{E}[X]=\mathbb{E}\Big{[}\sum_{i=1}^{l}X_{i}\Big{]}=\sum_{i=1}^{l} \mathbb{E}[X_{i}]=\sum_{i=1}^{l}\big{[}1\cdot\textbf{Pr}[X_{i}=1]+0\cdot \textbf{Pr}[X_{i}=0]\big{]}=\sum_{i=1}^{l}p=lp \tag{5}\] So far, we have modelled the occurrence of failures (which we also call the hazard rate later in the paper) as a random variable and estimated the expected number of failures (wrong predictions for the defective modules) in a software. It is worth noting that, with no loss of generality, the predicted defective modules will be tested by the tester.
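To make these quantities concrete, the following sketch computes the FOR and the expected number of failures from a confusion matrix; the counts are purely illustrative and are not taken from any dataset used in this work.

```python
# Illustrative computation of Equations 1, 3 and 5 from hypothetical
# confusion-matrix counts on a test set.
FN, TN = 12, 188             # false negatives / true negatives (illustrative)
l = FN + TN                  # number of modules predicted clean
p = FN / (FN + TN)           # Equations 1 and 3: p = FOR
expected_failures = l * p    # Equation 5: E[X] = lp
print(p, expected_failures)  # 0.06 and 12.0
```

On a newly developed project, \(l\) would be the number of modules the model predicts as clean, while \(p\) is still estimated from the test set.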
The following assumption ensures the presence of failures in some portion of the software after its release: **Assumption 5**.: _For some portions of the software other than the predicted clean modules, the hazard rate follows the Weibull distribution._ Here, the hazard rate is defined as the instantaneous rate of failures in a software system [9]. According to Hartz et al. (in [13]), the hazards in a software (or a part of a software) may not be estimated with a single function. Hence, in order to fit various hazard curves, it is useful to investigate a hazard model of the form known as the Weibull distribution. Here, we assume a Weibull distribution of the hazard rate for the rest of the software modules (other than the predicted clean modules). Now, for a software that is tested by using both the SDP model and the testers, the total estimated hazards (\(\hat{z}(t)\)) are calculated as: \[\hat{z}(t)=X+\hat{K}t^{\hat{m}},\text{ for some }\hat{K}>0,\hat{m}>-1,\text{ and time }t>0 \tag{6}\] Here, \(\hat{z}(t)\) is the hazard rate of a software that is tested by using the SDP model (and whose predicted defective modules are later serviced by the testers). Here, \(\hat{K}t^{\hat{m}}\) is assumed to be the hazard rate, represented in terms of the Weibull distribution, for the software modules other than the predicted clean modules. The parameters \(\hat{K},\hat{m}\), and \(t\) take real values, and the inequality constraints for these parameters are adopted directly from Lyu's work [9]. Hence, from Assumptions 1 and 5, for the total software modules, the resultant hazard model (\(\hat{z}(t)\)) is the sum of the hazard rates of the sub-parts of the software. Now the expected hazard rate of the software is derived as: \[\mathbb{E}[\hat{z}(t)]=\mathbb{E}[X+\hat{K}t^{\hat{m}}]=\mathbb{E}[X]+\hat{K} t^{\hat{m}}=lp+\hat{K}t^{\hat{m}} \tag{7}\] To demonstrate the feasibility of SDP models in the real-time scenario, the following assumption must be met: **Assumption 6**.: _An identical software is used for both the case of testing using the SDP model and that of manual testing._ This is an important assumption in providing proof for the feasibility of SDP models in the real-time scenario. For a software, from Equation 6, we know that the hazard rate is \(\hat{z}(t)=X+\hat{K}t^{\hat{m}}\). Assume the same software is tested by the testers, for which we have a Weibull distribution of the hazard rate: \[z(t)=Kt^{m},\text{ for some }K>0,m>-1,\text{ and time }t>0 \tag{8}\] The definition of the parameters \(K,m\) and \(t\) is similar to the definition of the parameters in Equation 6. Note that, at time \(t\), the two hazard functions \(z(t)\) and \(\hat{z}(t)\) describe the instantaneous rate of failures in the software when tested manually and with SDP, respectively. Now, for any software, the proofs (given in Section III) provide tight bounds on the deviation of a random variable far below the corresponding hazard rate estimated with manual testing. Similarly, another possible approach is to find the deviation of the random variable (expressed in terms of reliability) far above the reliability of the manually tested software. Here, reliability is defined as the probability of failure-free software operation for a specified period of time in a specified environment [9].
The relation between reliability and hazard rate is given below [9]: \[R(t)=e^{-\int_{0}^{t}z(x)dx} \tag{9}\] where \(R(t)\) is the software's reliability at time \(t\), and \(z(t)\) is the hazard rate. Using Equation 9, we can derive the reliability values from numerous hazard models in some time interval [0,\(t\)]. Here, we assume that the two identical software systems (having different testing scenarios--one is manually tested and the other is tested using SDP) are deployed at time 0. Now, the reliability of the manually tested software is defined by the Weibull model of the hazard rate (\(z(t)\)). **Lemma 1**.: _For the Weibull hazard model of a software \(z(t)=Kt^{m},\) for some \(K>0,m>-1\) and time \(t>0\), its reliability is:_ \[R(t)=e^{\frac{-Kt^{(m+1)}}{m+1}}\] Proof.: Substituting the value of \(z(t)\) (from Equation 8) in \(R(t)\) (that is, in Equation 9) yields: \[R(t)=e^{-\int_{0}^{t}Kx^{m}\ dx} \tag{10}\] Simplifying Equation 10 satisfies the Lemma. The proofs in Section III are valid by ensuring that the probability value (\(p\)) lies in the interval (0,1). Hence, the following assumption must hold true: **Assumption 7**.: _The SDP model should produce at least one false negative and one true negative on the test set._ ## III The Proofs ### _The tight lower bound in terms of hazard rate_ In Section II, we modelled the number of hazard (failure) instances in a software that is tested by using the SDP model as a random variable (that is, \(\hat{z}(t)=X+\hat{K}t^{\hat{m}}\)). Now, the following theorem bounds the deviation of the random variable \(X+\hat{K}t^{\hat{m}}\) below the value of the hazard rate of a manually tested software, \(Kt^{m}\) (in fact, far below the expectation, \(\mu\)). **Theorem 1**.: _Let \(X_{1},X_{2},\ldots,X_{l}\) be independent Bernoulli trials such that, for \(1\leq i\leq l\), \(Pr[X_{i}=1]=p\), where \(0<p<1\). Also let there exist parameters \(K>0,m>-1,\hat{K}>0,\hat{m}>-1\), and time \(t>0\). Then for \(X=\sum_{i=1}^{l}X_{i}\), \(\hat{z}(t)=X+\hat{K}t^{\hat{m}}\), \(\mu=\mathbb{E}[X+\hat{K}t^{\hat{m}}]=\hat{K}t^{\hat{m}}+lp\), and for the Weibull hazard model of a manually tested software, \(Kt^{m}\):_ \[Pr[X+\hat{K}t^{\hat{m}}<Kt^{m}]<e^{\frac{-(lp-Kt^{m}+2\hat{K}t^{\hat{m}})^{2}}{2(\hat{K}t^{\hat{m}}+lp)}}\] Proof.: As before, \(Pr[X+\hat{K}t^{\hat{m}}<Kt^{m}]\) can be rewritten as: \[Pr[X+\hat{K}t^{\hat{m}}<Kt^{m}]=Pr[X<Kt^{m}-\hat{K}t^{\hat{m}}] \tag{11}\] We know that for some \(0<\delta\leq 1\), and \(\mu\), using the Chernoff bound, the lower tail bound for the sum of independent Bernoulli trials, \(X\), that deviates far from the expectation \(\mu\) is [14]: \[Pr[X<(1-\delta)\mu]<e^{\frac{-\mu\delta^{2}}{2}} \tag{12}\] Here, the value \((1-\delta)\mu\) represents the left-side marginal value from the expectation \(\mu\) with the band length \(\delta\). Now, we wish to obtain a tight bound on the probability that the random variable \(X+\hat{K}t^{\hat{m}}\) deviates far below the hazard rate of a manually tested software, \(Kt^{m}\). In Equation 11, for some \(K>0,\hat{K}>0,m>-1,\hat{m}>-1\), and \(t>0\), the value \(Kt^{m}-\hat{K}t^{\hat{m}}\) is assumed to be below the expectation, \(\mu\), in a given time period \([0,t]\).
Now, from Equations 11 and 12: \[(1-\delta)\mu=Kt^{m}-\hat{K}t^{\hat{m}}\] \[\Rightarrow\delta=1-\frac{Kt^{m}-\hat{K}t^{\hat{m}}}{\mu} \tag{13}\] From Equation 6, we know that the expected number of hazards in a software which uses the SDP models is: \[\mu=\mathbb{E}[X+\hat{K}t^{\hat{m}}]=\hat{K}t^{\hat{m}}+lp\] Now, substituting \(\delta\) (from Equation 13) and \(\mu\) in Equation 12 results in the tight lower bound for the deviation of the random variable \(X+\hat{K}t^{\hat{m}}\) below the hazard rate of a manually tested software. This is expressed below: \[Pr[X+\hat{K}t^{\hat{m}}<Kt^{m}]<e^{\frac{-[\hat{K}t^{\hat{m}}+lp]\big{[}1-\frac{Kt^{m}-\hat{K}t^{\hat{m}}}{\hat{K}t^{\hat{m}}+lp}\big{]}^{2}}{2}} \tag{14}\] Simplifying the above equation will ensure the proof. Thus, we have from Theorem 1 that the probability of the occurrence of fewer hazards in the software that uses SDP than the total hazards in the same software tested by a human is exponentially small in \(l,\hat{K},\hat{m}\), and \(t\), implying that at larger values of these parameters, the bound becomes tighter. ### _The tight upper bound in terms of reliability_ In this section, we provide a lemma that calculates the reliability of a software that is tested using the SDP model. **Lemma 2**.: _Let \(X_{1},X_{2},\ldots,X_{l}\) be independent Bernoulli trials; also let there exist parameters \(\hat{K}>0,\hat{m}>-1\), and time \(t>0\). Then for \(X=\sum_{i=1}^{l}X_{i}\) and \(\hat{z}(t)=X+\hat{K}t^{\hat{m}}\), its reliability is:_ \[\hat{R}(t)=e^{-\left[Xt+\frac{\hat{K}t^{\hat{m}+1}}{\hat{m}+1}\right]}\] Proof.: From Equation 6, we have the hazard rate of the software which is tested by using the SDP model. Now substituting the value of \(\hat{z}(t)\) (from Equation 6) in Equation 9, we have: \[\hat{R}(t)=e^{-\int_{0}^{t}[X+\hat{K}x^{\hat{m}}]dx} \tag{15}\] Here, \(\hat{R}(t)\) is a random variable used to represent the reliability of the software which is tested using the predictions of the SDP model. Now, simplifying Equation 15 will result in the reliability of the software that is tested by using the SDP model. Now, the expected reliability of a software (which uses SDP models), \(\mu_{\hat{R}}\) or \(\mathbb{E}[\hat{R}(t)]\), is derived as: \[\mathbb{E}[\hat{R}(t)]\ =\ \mathbb{E}\big{[}e^{-\left[Xt+\frac{\hat{K}t^{\hat{m}+1}}{\hat{m}+1}\right]}\big{]}\ =\ \mathbb{E}\big{[}e^{-Xt}\big{]}e^{-\frac{\hat{K}t^{\hat{m}+1}}{\hat{m}+1}} \tag{16}\] We observe that: \[\mathbb{E}\big{[}e^{-Xt}\big{]}=\mathbb{E}\big{[}e^{-t\sum_{i=1}^{l}X_{i}}\big{]}=\mathbb{E}\big{[}\prod_{i=1}^{l}e^{-tX_{i}}\big{]} \tag{17}\] Since the \(X_{i}\) are independent, the random variables \(e^{-tX_{i}}\) are also independent. It follows that \(\mathbb{E}\big{[}\prod_{i=1}^{l}e^{-tX_{i}}\big{]}=\prod_{i=1}^{l}\mathbb{E}\big{[}e^{-tX_{i}}\big{]}\). Now using these facts in Equation 16 gives: \[\mathbb{E}[\hat{R}(t)]=e^{-\frac{\hat{K}t^{\hat{m}+1}}{\hat{m}+1}}\prod_{i=1}^{l}\mathbb{E}\big{[}e^{-tX_{i}}\big{]} \tag{18}\] Here, the random variable \(e^{-tX_{i}}\) assumes the value \(e^{-t}\) with probability \(p\), and the value 1 with probability \(1-p\). Now computing \(\mathbb{E}\big{[}e^{-tX_{i}}\big{]}\) from these values, we have that: \[\prod_{i=1}^{l}\mathbb{E}\big{[}e^{-tX_{i}}\big{]}=\prod_{i=1}^{l}\big{[}pe^{-t}+1-p\big{]}=\prod_{i=1}^{l}\big{[}1+p(e^{-t}-1)\big{]} \tag{19}\] Now we use the inequality \(1+x<e^{x}\) with \(x=p(e^{-t}-1)\) to obtain the expected reliability.
\[\mu_{\hat{R}}=\mathbb{E}[\hat{R}(t)]<e^{\left[lp(e^{-t}-1)-\frac{\hat{K}t^{\hat{m}+1}}{\hat{m}+1}\right]} \tag{20}\] The intuition behind using the inequality is that it provides an easy computation and does not harm the final bound in the following theorem. Now, by using Lemmas 1 and 2, the following theorem provides a bound for the deviation of the random variable \(e^{-\left[Xt+\frac{\hat{K}t^{\hat{m}+1}}{\hat{m}+1}\right]}\) above the reliability of a manually tested software, \(e^{-\frac{Kt^{m+1}}{m+1}}\). **Theorem 2**.: _Let \(X_{1},X_{2},\ldots,X_{l}\) be independent Bernoulli trials such that, for \(1\leq i\leq l\), \(Pr[X_{i}=1]=p\), where \(0<p<1\). Also let there exist parameters \(K>0,m>-1,\hat{K}>0,\hat{m}>-1\), and time \(t>0\). Then for \(X=\sum_{i=1}^{l}X_{i}\), \(\hat{R}(t)=e^{-\left[Xt+\frac{\hat{K}t^{\hat{m}+1}}{\hat{m}+1}\right]}\), \(\mathbb{E}[\hat{R}(t)]=\mu_{\hat{R}}<e^{\left[lp(e^{-t}-1)-\frac{\hat{K}t^{\hat{m}+1}}{\hat{m}+1}\right]}\), and for the Weibull distribution of the reliability function, \(e^{\frac{-Kt^{m+1}}{m+1}}\):_ \[Pr\Big{[}e^{-\left[Xt+\frac{\hat{K}t^{\hat{m}+1}}{\hat{m}+1}\right]}>e^{-\frac{Kt^{m+1}}{m+1}}\Big{]}<e^{-\Big{[}\mu_{\hat{R}}-\Big{[}\frac{Kt^{m}}{m+1}-\frac{\hat{K}t^{\hat{m}}}{\hat{m}+1}\Big{]}\Big{]}^{2}\frac{1}{2\mu_{\hat{R}}}}\] Proof.: The proof for this upper tail is very similar to the proof for the lower tail, as we saw in Theorem 1. As before, \[Pr\Big{[}e^{-\left[Xt+\frac{\hat{K}t^{\hat{m}+1}}{\hat{m}+1}\right]}>e^{-\frac{Kt^{m+1}}{m+1}}\Big{]}=Pr\Big{[}Xt<\frac{Kt^{m+1}}{m+1}-\frac{\hat{K}t^{\hat{m}+1}}{\hat{m}+1}\Big{]}=Pr\Big{[}X<\frac{Kt^{m}}{m+1}-\frac{\hat{K}t^{\hat{m}}}{\hat{m}+1}\Big{]} \tag{21}\] Now, we wish to obtain a tight bound on the probability that the random variable \(e^{-\left[Xt+\frac{\hat{K}t^{\hat{m}+1}}{\hat{m}+1}\right]}\) deviates far from the value \(e^{\frac{-Kt^{m+1}}{m+1}}\). In Equation 21, for some \(K>0,\hat{K}>0,m>-1,\hat{m}>-1\), and \(t>0\), the value \(\frac{Kt^{m}}{m+1}-\frac{\hat{K}t^{\hat{m}}}{\hat{m}+1}\) is assumed to be below the expectation, \(\mu_{\hat{R}}=\mathbb{E}[\hat{R}(t)]\), in a given time period \([0,t]\). Now, equating Equations 12 and 21, we get: \[(1-\delta)\mu_{\hat{R}}=\frac{Kt^{m}}{m+1}-\frac{\hat{K}t^{\hat{m}}}{\hat{m}+1}\] \[\Rightarrow\delta=1-\Big{[}\frac{Kt^{m}}{m+1}-\frac{\hat{K}t^{\hat{m}}}{\hat{m}+1}\Big{]}\frac{1}{\mu_{\hat{R}}} \tag{22}\] Now, substitute the value of \(\delta\) (from Equation 22) in Equation 12 to obtain the tight upper bound (expressed in terms of a lower tail) for the deviation of the random variable \(e^{-\left[Xt+\frac{\hat{K}t^{\hat{m}+1}}{\hat{m}+1}\right]}\) from the reliability (which is derived from the Weibull model of the hazard rate) of a manually tested software, \(e^{\frac{-Kt^{m+1}}{m+1}}\). This is expressed below: \[Pr\Big{[}e^{-\left[Xt+\frac{\hat{K}t^{\hat{m}+1}}{\hat{m}+1}\right]}>e^{\frac{-Kt^{m+1}}{m+1}}\Big{]}<e^{-\mu_{\hat{R}}\Big{[}1-\frac{1}{\mu_{\hat{R}}}\Big{[}\frac{Kt^{m}}{m+1}-\frac{\hat{K}t^{\hat{m}}}{\hat{m}+1}\Big{]}\Big{]}^{2}\frac{1}{2}} \tag{23}\] After simplification, we get: \[Pr\Big{[}e^{-\left[Xt+\frac{\hat{K}t^{\hat{m}+1}}{\hat{m}+1}\right]}>e^{\frac{-Kt^{m+1}}{m+1}}\Big{]}<e^{-\Big{[}\mu_{\hat{R}}-\Big{[}\frac{Kt^{m}}{m+1}-\frac{\hat{K}t^{\hat{m}}}{\hat{m}+1}\Big{]}\Big{]}^{2}\frac{1}{2\mu_{\hat{R}}}} \tag{24}\] Substituting the expected reliability \(\mu_{\hat{R}}\) from Equation 20 in Equation 24 accomplishes the proof.
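The magnitude of these bounds is easy to illustrate numerically. The sketch below evaluates the Theorem 1 bound for a set of purely hypothetical parameter values (the Theorem 2 bound can be evaluated analogously from Equations 20 and 24).

```python
# Numerical illustration of the Theorem 1 bound; all parameter values
# below are hypothetical and chosen only for illustration.
import numpy as np

l, p, t = 200, 0.05, 10.0    # predicted-clean modules, FOR, time
K, m = 2.0, 0.5              # Weibull hazard of the manually tested software
K_h, m_h = 1.5, 0.5          # Weibull part of the SDP-tested software

mu = K_h * t**m_h + l * p                      # E[X + K_h * t**m_h]
delta = 1.0 - (K * t**m - K_h * t**m_h) / mu   # Equation 13
assert 0 < delta <= 1                          # range where Eq. 12 applies
bound = np.exp(-mu * delta**2 / 2.0)           # Theorem 1 upper bound
print(bound)                                   # ~2.8e-3
```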
Thus, we have from Theorem 2 that the probability of obtaining better reliability in the software that uses SDP than in the same software tested by a human is exponentially small in \(l,\hat{K},\hat{m}\), and \(t\), implying that, as in Theorem 1, the bound becomes tighter at larger values of these parameters.

## IV Future Plans

Theorems 1 and 2 provide preliminary bounds for the post-analysis of a binary classification model (the SDP model) in real-time working environments. We believe that providing a critique of the developed binary classification model in its real-time working environment is novel in machine learning theory and has the potential to provide insight into the feasibility of other applications (such as safety-critical applications, for example, tumour prediction systems for medical diagnosis, online fraud detection, etc.). Within the scope of this work, the possible extensions of Theorems 1 and 2 are numerous. A few examples include: 1) bounds that become more specific to the application if state-of-the-art hazard (and reliability) models are used in the construction of the proofs; 2) new bounds derived if the random variable \(X\) is assumed to be a function of the time \(t\); and 3) new bounds derived assuming dependency among the random variables (relaxing Assumption 4), in which case we would derive bounds that account for the presence of cascading failures in the software caused by the SDP model.
2308.09720
On the Unexpected Abilities of Large Language Models
Large Language Models (LLMs) are capable of displaying a wide range of abilities that are not directly connected with the task for which they are trained: predicting the next words of human-written texts. In this article, I review recent research investigating the cognitive abilities developed by LLMs and their relation to human cognition. I discuss the nature of the indirect process that leads to the acquisition of these cognitive abilities, their relation to other indirect processes, and the implications for the acquisition of integrated abilities. Moreover, I propose the factors that enable the development of abilities that are related only very indirectly to the proximal objective of the training task. Finally, I discuss whether the full set of capabilities that LLMs could possibly develop is predictable.
Stefano Nolfi
2023-08-09T09:15:07Z
http://arxiv.org/abs/2308.09720v2
# On the Unexpected Abilities of Large Language Models

###### Abstract

Large language models are capable of displaying a wide range of abilities that are not directly connected with the task for which they are trained: predicting the next words of human-written texts. In this article, I discuss the nature of this indirect acquisition process and its relation to other known indirect processes. I argue that an important side effect of such indirect acquisition is the development of integrated abilities. I discuss the extent to which the abilities developed by large language models are predictable. Finally, I briefly discuss the relation between the cognitive skills acquired by these systems and human cognition.

## 1 Introduction

Large language models (LLMs) such as GPT-3 (Brown et al., 2020), PaLM (Chowdhery et al., 2022) and LLaMA (Touvron et al., 2023) consist of large neural networks containing hundreds of billions (or more) of parameters, trained on hundreds of terabytes of human-written text data. More specifically, given a sequence of tokens \(x=\{x_{1},\ldots,x_{n}\}\), where tokens encode words or word-parts, LLMs are trained autoregressively to predict the target token \(x_{i}\) based on the preceding tokens \(x_{<i}\). They are based on the Transformer neural network architecture (Vaswani et al., 2017), in which multi-head attention layers are stacked in a very deep neural network. After the training process described above, LLMs are usually subjected to an additional training process that can be realized by using a reinforcement learning with human feedback method, which uses a relatively small training set compiled manually by humans (Ouyang et al., 2022). The goal of this additional training is to align the behavior of the model with human values, i.e., to make the text generated by the model more helpful, honest and harmless. As a result of the training process, LLMs develop a wide range of abilities and skills that are not directly connected with the task of predicting the next words. In this article, I illustrate some of these abilities, discuss how they are acquired, why their development was unexpected, and to what extent their appearance can be predicted. Finally, I discuss some of the differences between human and LLM intelligence. Alternative LLMs differ in several respects, such as the number of parameters, the size and composition of the training data, the training time, and the procedure used to align their output with human values. They differ in performance both quantitatively and qualitatively. Moreover, some LLMs, such as GPT-4 and PaLM, process multimodal information including images. To discuss the general properties of these models, I will focus my analysis on the abilities that can be induced by training these systems with text only.

## 2 The abilities of LLMs

Large Language Models (LLMs) are capable of displaying a wide range of abilities that are not directly connected with the task for which they are trained: predicting the next words of human-written texts. One of the most striking skills of LLMs is formal linguistic competence (Mahowald et al., 2023), i.e., the capacity to produce and comprehend language. These systems can produce text that is hard to distinguish from human output and are capable of correctly discriminating grammatical vs. ungrammatical sentences, passing challenging tests designed by the natural language research community (Warstadt et al., 2020; Warstadt & Bowman, 2022).
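Returning for a moment to the training setup described in the Introduction: the proximal objective from which all of the abilities reviewed in this section emerge is nothing more than a next-token cross-entropy loss. The following minimal PyTorch sketch makes that objective concrete; the `model` argument is a placeholder assumption for any network mapping token ids of shape (batch, sequence) to vocabulary logits, and does not refer to any specific LLM implementation.

```python
import torch
import torch.nn.functional as F

def next_token_loss(model, tokens):
    """Cross-entropy for predicting token x_i from the preceding tokens x_<i.

    tokens: integer tensor of shape (batch, seq) holding token ids.
    model:  assumed to map (batch, seq) ids -> (batch, seq, vocab) logits.
    """
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift by one position
    logits = model(inputs)                           # (batch, seq-1, vocab)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),         # flatten all positions
        targets.reshape(-1),                         # gold next-token ids
    )
```

Minimizing this single scalar loss over a large text corpus is, in essence, the entire proximal objective; every ability discussed in this section has to arise as a by-product of it.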
The acquisition of this linguistic competence is perhaps not too surprising, considering that these systems are trained on a massive collection of human-written text. On the other hand, the quality of the competence acquired largely surpasses what linguists could imagine only 5 years ago and falsifies past claims stating that statistical approaches would never be able to capture the complex syntactic and semantic features of language (Pinker & Prince, 1988; Petroni et al., 2019; Everaert et al., 2015). LLMs trained on large text corpora also acquire large amounts of factual knowledge (Petroni et al., 2019; Roberts et al., 2020; Elazar et al., 2021), which enables them to achieve state-of-the-art results on open-domain question answering benchmarks without accessing external knowledge (Roberts et al., 2020). This remarkable ability, too, is not particularly surprising, since the unstructured text used to train them contains a wealth of non-linguistic information, such as "the capital of Italy is Rome" and "two plus three is five".

In addition to formal linguistic competence and factual knowledge, LLMs display a large set of additional competences that have surprised everyone, including the developers of these systems. Below are some of the most remarkable examples.

LLMs can perform dynamical semantic operations, i.e., understand how the meaning of a sentence alters the context described in the preceding sentences (Li et al., 2021). For example, consider the sentence "You see an open chest. The only thing in the chest is an old key. There is a locked wooden door leading east" followed by the second sentence "You pick up the key." An LLM can understand that these two sentences can be followed by the sentence "Next, you use the key to unlock the door" or "Next, you drop an apple on the ground" but cannot be followed by the sentence "Next, you remove an apple from the chest", since the previous sentences imply implicitly that the chest is empty (Li et al., 2021).

LLMs display theory of mind skills that enable them to infer the mental states of the characters described in a text. More specifically, they attribute thoughts, desires and goals to characters, posit intentions, and explain the actions of the characters based on their goals (Kosinski, 2023). Indeed, GPT-4 managed to solve nearly all (95%) of 40 classic false-belief tasks widely used to test theory of mind skills in humans (Kosinski, 2023). LLMs also display a certain ability to recognize affordances (Jones et al., 2022), i.e., to discriminate the actions that an agent can and cannot perform with an object, and to perform logical reasoning (Talmor et al., 2022; Creswell et al., 2022), although their performance is still limited compared to humans.

LLMs learn and use representations of the outside world, at least to some extent. Indeed, they acquire internal representations of color words that closely mirror the properties of human color perception (Abdou et al., 2021; Patel & Pavlick, 2022; Sogaard, 2023). Moreover, they can internally represent the spatial layout of the setting of a story (Patel & Pavlick, 2022; Bubeck et al., 2023) and update such representations as more related information is revealed (Li et al., 2021). They are thus capable of extracting physical knowledge indirectly from written text, without observing the world directly and without interacting with it.
LLMs can modify the text that they generate in a remarkably flexible manner based on the language content provided as input -- a property that is referred to as in-context learning (Brown et al., 2020). This enables them to learn on the fly to perform a task described through instructions and/or through examples. For a review of known skills and performance, see Srivastava et al. (2022) and Bubeck et al. (2023).

## 3 Indirect acquisition of skills

We might wonder how predicting the next words of human-written text can promote the development of a large set of complex cognitive skills. The answer is that these skills are acquired indirectly, driven by the need to predict the next words accurately. This is because accurate prediction of the next words requires a deep comprehension of the preceding text, and this comprehension requires the possession and use of cognitive skills. Therefore, the development of these cognitive skills is induced indirectly.

Indirect acquisition processes of this kind are known to occur in other adaptive processes, such as natural evolution and individual learning. The body structure of bacteria, plants, animals, and humans evolved as a side effect of the attempt of small molecules to replicate as efficiently as possible (Dawkins, 2016). Moreover, the behavioral skills developed by natural organisms, such as walking, flying, escaping predators, communicating, etc., also evolved as a side effect of the attempt of these molecules to replicate as efficiently as possible. Thus, also in the case of natural evolution, structural properties and skills emerge indirectly as a result of the attempt to maximize a different objective. The spontaneous acquisition of skills that are not rewarded directly but that are instrumental to a function that is rewarded directly is observed also in artificial evolving systems. For example, the ability to communicate through a self-organizing communication system emerges spontaneously in groups of robots selected for the ability to forage (Nolfi & Mirolli, 2010). Finally, indirect acquisition processes characterize individual learning too. For example, dexterous manipulation skills that enable a robotic hand to rotate and translate objects emerge spontaneously in robots trained to solve the Rubik's cube problem (Akkaya et al., 2019).

As an example of indirect skill acquisition in LLMs, let's consider theory of mind and reasoning skills. To predict as accurately as possible the words that the characters of a story will say next, the system should infer the goals of the characters from their behavior and should differentiate between the events that happened in the story and the events of which the characters are aware. In other words, the system should acquire an ability to reason and to identify the mental states of the characters, and an ability to imagine the actions that they can take to achieve their own goals. Clearly, the more indirect the relationship is between the skills that need to be acquired and the task that is rewarded directly, the lower the probability that the skills are acquired. From this point of view, the development of complex cognitive skills that are related only very indirectly to the task of predicting the next words remains surprising. The successful acquisition of those skills can be explained by considering a set of enabling factors that characterize the LLM domain.
A first enabling factor is the highly informative nature of the prediction error, i.e., the fact that it provides a very reliable measure of the knowledge and skills of the system. This implies that improvements and regressions of the system's skills always lead to decreases and increases of the error, respectively, and vice versa. A second enabling factor is the predictability of human language, granted by its symbolic and non-dynamical nature. The predictability of dynamical systems, such as robots interacting physically with their external environment, is limited by the complex-system nature of their dynamics. This is due to the fact that minor differences tend to produce large effects over time. The task of predicting the next word benefits from the absence of this limitation, which is granted by the non-dynamical nature of language. A third enabling factor is the availability of a huge set of data ready to be used for training. Indeed, the remarkable abilities of LLMs manifest only when their size and training set are scaled up to huge dimensions. Overall, these factors can explain why LLMs manage to acquire cognitive skills that are related in a very indirect manner to the task of predicting the next words.

Indirect acquisition of cognitive skills also allows for achieving another remarkable result: the acquisition of integrated capabilities, i.e., the acquisition of skills that are organized to work in synergy with the other acquired skills. This is due to the fact that the skills are acquired in parallel, i.e., are co-shaped. Moreover, it is due to the fact that the adaptive advantage of each skill depends also on the way in which the skill interacts with the other existing skills. As an example, we can consider the tight integration of the factual knowledge and the linguistic competence of LLMs. The integrated nature of these capabilities allows the knowledge possessed by LLMs to be recovered through natural language queries. Another example is the acquisition of integrated theory of mind and reasoning skills, which make it possible to infer the goals of the characters of a story from their behavior and to guess the actions that the characters might take to achieve their goals. This level of integration could not be obtained in other ways, e.g., by building modular systems composed of multiple modules responsible for different skills or by rewarding each skill directly.

Embodied theories of language acquisition in humans (Cisek, 1999; Kolodny & Edelman, 2018) postulate a similar indirect acquisition process. More specifically, they postulate that language originates in humans as a form of action that has the function of influencing the behaviors of conspecifics (and perhaps also of the self (Mirolli & Parisi, 2019)). It starts with the production of a crying behavior that allows the infant to obtain the parent's attention and care, and proceeds with the discovery of alternative vocalizations that enable it to obtain other specific intended outcomes. Interestingly, this implies that although the acquisition of language and associated skills in humans and LLMs differs in important respects (we will return to this in Section 5), it relies on an indirect process in both cases.

## 4 Predictability and emergence

An important issue that needs to be investigated is to what extent the skills acquired by LLMs are predictable.
This question has important implications for AI safety and alignment, since the impossibility of predicting the abilities that will be developed by larger models implies that these models could acquire undesired and dangerous capabilities without warning (Schaeffer et al., 2023). The performance of large language models scales as a power-law with model size, dataset size, and amount of computation used for training (Kaplan et al., 2020). This implies that the overall performance (prediction error) of these systems is predictable. In other words, it implies that the overall performance that can be obtained by increasing the size of the model and/or the training time can be extrapolated based on the performance displayed by models that are smaller or less trained. However, the specific abilities that will be developed by a model of a certain size are not necessarily predictable.

The interest in this topic was raised by the publication of an influential article by Wei et al. (2022) entitled "Emergent abilities of large language models". The authors observe several abilities, such as in-context learning, instruction following, step-by-step reasoning, and arithmetic skills, that are not present in smaller models and appear only in larger models. Moreover, they show that the acquisition process of these abilities in sufficiently large models is characterized by a sharp transition between a phase in which the system does not show any progress in the ability and a subsequent phase in which the system progresses. According to the authors, these two observations imply that predicting the abilities that will be acquired by scaled-up models will be impossible. In other words, they claim that the abilities that will be developed by scaled-up models cannot be extrapolated from what we know about smaller models. Schaeffer et al. (2023) later questioned the presence of sharp transitions during the training process and showed that the occurrence of sharp or more continuous transitions in the acquisition process depends on the metric used to evaluate the performance. The occurrence of sharp transitions might thus be induced by the utilization of specific metrics.

Here, I would like to question the unpredictability hypothesis with a different argument. The set of abilities that can be used to comprehend human-written text is closed and is probably restricted to the set of abilities possessed by the humans who wrote the text used to train the models. If this hypothesis holds, we should expect that models trained to predict the next words of human-written text would not develop alien abilities, i.e., abilities unknown to humankind. The reason why the abilities necessary to comprehend human-written text are restricted to the abilities possessed by humans is that human language is an artifact of humans themselves, which has been shaped in its form by the cognitive abilities of human speakers. The same reason could explain why the cognitive abilities possessed by humans should be sufficient to comprehend human language. Notice, however, that this does not imply that LLMs cannot surpass human intelligence. I will discuss this aspect in the next section. This contrasts with other machine learning methods and domains in which the set of solutions that could be found is open and is not constrained by the solutions possessed by humans. In particular, it contrasts with trial-and-error learning methods in which the learning system attempts to maximize a utility function by interacting actively with an external environment.
For example, it contrasts with AlphaZero (Silver et al., 2017), a system that learns to play the game of Go by playing against itself. The training is realized by introducing random variations in the actions produced by the system and by increasing the probability of producing the varied actions that increase the chances of winning the game. Indeed, the analysis of the behavior discovered by this system revealed strategies that were previously unknown to humans (Silver et al., 2017). Notice that, by contrast, LLMs do not interact with an external environment. They are passively exposed to a sequence of words, and they do not have the possibility to alter their next observations through their actions. Moreover, they do not learn by trial-and-error. They learn by minimizing the discrepancy between the predicted words and the words that actually follow.

Future studies investigating the course of the learning process in LLMs can improve our understanding of the dependency relationships among abilities and shed light on the order in which they tend to be developed. The preliminary analysis reported in Srivastava et al. (2022) indicates that skills having a compositional nature progress only after the acquisition of the required sub-skills. For example, LLMs acquire an ability to identify chess moves corresponding to mates only after they develop a good ability to discriminate between valid and invalid moves (Srivastava et al., 2022). Another type of study that could provide insights into the developmental course concerns comparing the outcome of multiple training sessions of the same model.

## 5 LLMs versus Natural Intelligence

LLMs differ from humans in several respects. Probably the most striking difference is that humans acquire much of their knowledge and skills actively, by interacting sub-symbolically with the external environment and by interacting sub-symbolically and symbolically with other humans. LLMs instead acquire much of their knowledge and skills passively, by being exposed to symbolic information only (human-written text). This difference triggered a heated debate between those who think that LLMs use words without really understanding their meaning and those who think that they have a genuine comprehension. The former group stresses the limitations of current LLMs, e.g., their difficulty with causal and multi-step compositional reasoning (Dziri et al., 2023), their limited sensitivity to affordances with respect to humans (Jones et al., 2022), and their inability to distinguish their own output from factual knowledge (Ortega et al., 2021). More generally, they claim that LLMs could not have a real understanding since they do not have any direct experience of the world they talk about. The latter group, on the other hand, stresses that current models already display remarkable performance and that current limitations are likely to be overcome by future models. More generally, they claim that knowledge of the physical world can be extracted indirectly from the traces left in human-written text. The possibility of extracting knowledge indirectly is supported by evidence collected on colorblind people, who associate colors with the same emotions as sighted people (Saysani et al., 2021). Moreover, the claim that language contains sufficient information is supported by the fact that LLMs trained also with images do not acquire better representations than models trained with language data only (Yun et al., 2021).
Whether acquiring knowledge passively, without interacting with the external environment, and acquiring knowledge from language input only limits the quality of the representations that can be acquired thus represents an open question for the moment (Pezzulo et al., 2023). Similarly, the relative importance that direct and indirect knowledge acquisition have in the case of humans still represents an open question (Borghi et al., 2023).

A second difference concerns the amount of the training data. State-of-the-art LLMs are exposed to language corpora that exceed the language data experienced by a typical human by several orders of magnitude (Warstadt et al., 2023). The data used to train LLMs are also much wider in content than the information accessed by a typical human. Clearly, this implies that LLMs can have a much wider knowledge than humans and can acquire a much larger set of abilities than individual humans. The possibility of transferring knowledge among skills enables systems possessing a larger set of skills to outperform systems possessing a smaller set of abilities. Consequently, LLMs possessing a large set of skills can potentially outperform humans even though they acquired their skills by processing human data. An example of this kind of transfer learning is provided by the experiments of Lee et al. (2022), in which the authors trained a single transformer network to play N different Atari games by imitating the strategy of N players trained on the N different games through reinforcement learning. The system trained to perform multiple games managed to outperform some of its teachers on some of the corresponding games. Possibly, LLMs possessing a large set of skills could also acquire, through fine-tuning, new abilities, unrepresented in humans, by combining sets of skills that do not coexist in any individual human. The larger the set of abilities of LLMs becomes, the greater the chances become that their skills can be combined in novel ways.

A third difference concerns computational properties. The human brain has a much greater number of neurons and connections than state-of-the-art LLMs. Moreover, natural neurons are much more complex than artificial neurons. On the other hand, the transformer architecture used by LLMs can process thousands of words at once without any loss of information, while human brains process sequences of items sequentially and have limited short-term memory.

Finally, a fourth difference concerns the fact that humans have values, beliefs, goals, and desires while LLMs do not. LLMs can acquire implicit values and goals, e.g., to be helpful, honest, and harmless, through a subsequent fine-tuning process. This can be realized through a human-mediated reinforcement learning method which operates by weakening or strengthening alternative model outputs according to preference judgements produced by human testers (Ouyang et al., 2022). Alternatively, it can be realized by manually specifying a list of rules or principles, by fine-tuning the system through a supervised learning procedure based on a training set composed of self-generated critiques and revisions, and by further fine-tuning the model through a reinforcement learning process that automatically judges the alternative outputs produced by the system (Bai et al., 2022). However, these methods are far from perfect. They can fail in subtle and surprising ways and do not guarantee that the model will operate according to the intended values and goals in all circumstances.
Notice, however, that contrary to what one might expect, LLMs do not necessarily express the values encoded in the text used to train them, especially when the behavior of the pretrained model is steered through prompting or fine-tuning (Bowman, 2023). In other words, the values expressed by an LLM do not necessarily reflect the average of the values expressed in its training data. Indeed, exposing models to more examples of unwanted behavior during pretraining can improve their ability to avoid producing such behavior after fine-tuning (Korbak et al., 2023).

## 6 Conclusion

The emergence of a series of cognitive abilities in transformer neural networks trained to predict the next words of human-written text came as a surprise even to the developers of LLMs. Nobody anticipated such a remarkable result, namely that the development of several linguistic and cognitive abilities could be induced indirectly by the attempt to guess the next words as accurately as possible. We were aware of indirect acquisition processes of this kind. We know that in natural evolution, the attempt to maximize the probability of reproducing led to the development of a wide range of species possessing many skills. Moreover, we know that in trial-and-error learning processes, the acquisition of a given skill can induce the development of abilities that are instrumental to that skill. On the other hand, nobody hypothesized that a similar indirect process could be obtained by attempting to predict the next words.

As claimed in this article, this surprising result can be explained by considering the non-dynamical nature of language data and the highly informative nature of the prediction error. The non-dynamical nature of language rules out the problem affecting dynamical systems, namely the fact that minor variations at a certain state can have huge effects over time. The highly informative nature of the prediction error ensures the efficacy of the learning process and also enables the acquisition of skills that are related in a very indirect manner to the prediction task. As claimed in Section 3, the indirect nature of the acquisition process also allows well-integrated solutions to develop, i.e., solutions in which each acquired ability is shaped so as to work in synergy with the other acquired skills. The exploitation of an indirect acquisition process thus plays a crucial role. Interestingly, an indirect acquisition process seems to characterize human cognition too. Indeed, humans acquire communication and cognitive skills, at least in part, in the attempt to alter the behaviors of their conspecifics with the purpose of achieving their own goals. LLMs thus resemble natural intelligence in that respect, although they differ in the other important respects summarized above.

Finally, I discussed the extent to which the abilities that can be developed by LLMs are predictable. I hypothesized that the indirect acquisition of these abilities implies that the set of abilities that can be acquired is restricted to the set that is necessary and sufficient to understand human-written text. Consequently, the set of abilities that can be acquired is probably restricted to the set of abilities possessed by the humans who wrote the text used for training. Moreover, I clarified that this hypothesis does not contradict the possibility of achieving super-human performance.
Indeed, the integration of knowledge and skills possessed by multiple humans, which exceed those possessed by any single human, can allow LLMs to outperform individual humans.

## Acknowledgment

I acknowledge financial support from PNRR MUR project PE0000013-FAIR.
2303.04176
Inducing superconductivity in bilayer graphene by alleviation of the Stoner blockade
External magnetic fields conventionally suppress superconductivity, both by orbital and paramagnetic effects. A recent experiment has shown that in a Bernal stacked bilayer graphene system, the opposite occurs -- a finite critical magnetic field is necessary to observe superconducting features occurring in the vicinity of a magnetic phase transition. We propose an extraordinary electronic-correlation-driven mechanism by which this anomalous superconductivity manifests. Specifically, the electrons tend to avoid band occupations near high density of states regions due to their mutual repulsion. Considering the nature of spontaneous symmetry breaking involved, we dub this avoidance Stoner blockade. We show how a magnetic field softens this blockade, allowing weak superconductivity to take place, consistent with experimental findings. Our principal prediction is that a small reduction of the Coulomb repulsion would result in sizable superconductivity gains, both in achieving higher critical temperatures and expanding the superconducting regime. Within the theory we present, magnetic field and spin-orbit coupling of the Ising type have a similar effect on the Bernal stacked bilayer graphene system, elucidating the emergence of superconductivity when the system is proximitized to a $\rm WSe_2$ substrate. We further demonstrate in this paper the sensitivity of superconductivity to disorder in the proposed scenario. We find that a disorder that does not violate Anderson's theorem may still induce a reduction of $T_c$ through its effect on the density of states, establishing the delicate nature of the Bernal bilayer graphene superconductor.
Gal Shavit, Yuval Oreg
2023-03-07T19:00:19Z
http://arxiv.org/abs/2303.04176v2
# Inducing Superconductivity in Bilayer Graphene by Alleviation of the Stoner Blockade

###### Abstract

External magnetic fields conventionally suppress superconductivity, both by orbital and paramagnetic effects. A recent experiment [1] has shown that in a Bernal stacked bilayer graphene system, the opposite occurs - a finite critical magnetic field is necessary to observe superconducting features occurring in the vicinity of a magnetic phase transition. We propose an extraordinary electronic-correlation-driven mechanism by which this anomalous superconductivity manifests. Specifically, the electrons tend to avoid band occupations near high density of states regions due to their mutual repulsion. Considering the nature of spontaneous symmetry breaking involved, we dub this avoidance _Stoner blockade_. We show how a magnetic field softens this blockade, allowing weak superconductivity to take place, consistent with experimental findings. Our principal prediction is that a small reduction of the Coulomb repulsion would result in sizable superconductivity gains, both in achieving higher critical temperatures and expanding the superconducting regime. Within the theory we present, magnetic field and spin-orbit coupling of the Ising type have a similar effect on the Bernal stacked bilayer graphene system, elucidating the emergence of superconductivity when the system is proximitized to a \(\mathrm{WSe}_{2}\) substrate. We further demonstrate in this paper the sensitivity of superconductivity to disorder in the proposed scenario. We find that a disorder that does not violate Anderson's theorem may still induce a reduction of \(T_{c}\) through its effect on the density of states, establishing the delicate nature of the Bernal bilayer graphene superconductor.

## I Introduction

A superconductor subject to an external magnetic field usually suffers deterioration of its superconducting properties: the superconducting gap and transition temperature are suppressed, vortices are introduced into the bulk of the material, and resistivity increases [2; 3]. The magnetic field's most harmful aspect is its orbital effect on the superconducting condensate. This effect can be almost entirely suppressed when the magnetic field is applied parallel to thin films (whose thickness is much smaller than the London penetration depth) or in two-dimensional materials, e.g., graphene. Yet, in these materials, the magnetic field's adverse effect may persist in the form of pair breaking due to the Zeeman effect. Namely, if the electron pairs that make up the superconducting condensate have opposite spins, then the Zeeman coupling to their spin magnetic moment eventually eliminates superconductivity. In the case of conventional spin-singlet superconductors, the Pauli-Chandrasekhar-Clogston limit [4; 5] sets the critical field strength at \(\Delta/\left(\sqrt{2}\mu_{B}\right)\) (\(\Delta\) is the superconducting gap, \(\mu_{B}\) is the Bohr magneton). In recent years, superconductors that are not very sensitive to magnetic fields have emerged. These materials are very thin, up to a single atomic layer, and have a non-singlet superconducting order parameter facilitated by their multi-orbital band structure and electronic correlations. The most notable examples are few-layer transition metal dichalcogenides, presumably hosting so-called Ising superconductivity [6; 7; 8], and twisted graphene multilayers [9; 10; 11]. A recent experiment [1] has discovered an even more extreme example of the effect of magnetic fields.
Remarkably, the authors found that in electrically-biased Bernal-stacked bilayer graphene (BLG), superconductivity emerges on the hole-doped side of the charge neutrality point only _above_ a critical in-plane magnetic field strength (which also exceeds the Pauli limiting field). This material's superconducting regime appears to lie close to a magnetic phase transition, making the phenomenon even more peculiar. We present and study the following scenario as a possible explanation of the magnetic-field-induced superconductivity in BLG. In the absence of an external magnetic field, an electrical displacement field modifies the BLG (non-interacting) band structure such that the density may be tuned to the vicinity of a van-Hove singularity (vHS) with a large density of states (DOS). However, when Coulomb interactions between the electrons are introduced, due to the large DOS, a Stoner-like phase transition occurs so that some bands are occupied more than others. In this spontaneously reconstructed distribution of the occupations, none of the bands is near the vHS, and the interaction energy is minimized. We find that applying an external parallel magnetic field weakens this "Stoner Blockade" effect, allowing the system to park near configurations with a larger DOS. Analyzing a first-order phase transition under general considerations, we find that this is a generic outcome to be expected when applying a field that couples to the order parameter. The presence of the large normal-state DOS enables, in turn, the stabilization of a superconducting phase, whose \(T_{c}\) is large enough to be observed experimentally. Thus, our theory gives rise to superconductivity residing exactly around the phase transition line, as is experimentally observed. A straightforward prediction of the theory we present is that a slight suppression of the Coulomb repulsion by, e.g., tuning the strength of screening by a nearby metallic gate (cf. Refs. [12; 13; 14]), can lead to a dramatic expansion of the parameter regime supporting superconductivity. The novel Stoner blockade mechanism we present has two additional appealing features hinting at its relevance to BLG. First, it easily generalizes to the scenario where the in-plane field is replaced by an Ising spin orbit coupling (ISOC) term in the band structure. It thus accounts for some of the phenomenology found in other experiments [15; 16], where enhanced superconductivity was measured in BLG proximate to a \(\mathrm{WSe}_{2}\) monolayer. Second, we demonstrate that within this framework, due to the required high DOS in our scenario, only pristine high-mobility state-of-the-art devices would display superconducting behavior, even in the presence of protection by the so-called Anderson's theorem [17]. This somewhat resolves the issue of the scarcity of superconducting BLG devices to date, which required recent major advances in device quality. The rest of the manuscript is organized as follows. In Sec. II we describe how electron interactions give rise to a forbidden range of Fermi-level energies close to the vHS within a simple Hartree-Fock picture. We sketch how this can be detrimental to superconductivity and how an in-plane magnetic field partially alleviates the blockade. The superconductivity calculations, taking into account the instantaneous Coulomb repulsion and a retarded pairing mechanism, are described in Sec. III. The residual pair-breaking orbital effect of the magnetic field is also considered.
The case of ISOC and the importance of (non-pair-breaking) disorder are discussed in Sec. IV. Finally, we conclude our discussion in Sec. V, and comment on several open questions.

## II Stoner blockade

### Normal state "cascade"

In this work we focus on studying the Hamiltonian

\[H=H_{0}+H_{\mathrm{int}}+\mathcal{H}_{\mathrm{SB}}, \tag{1}\]

where \(H_{0}\) describes the low-energy dispersion of electrons in BLG, \(H_{\mathrm{int}}\) is a phenomenological short-range interaction Hamiltonian, and \(\mathcal{H}_{\mathrm{SB}}\) is an \(SU\left(2\right)\) symmetry-breaking operator to be discussed later on. We define \(\Psi_{\mathbf{k}}\) as an 8-spinor of fermionic annihilation operators at momentum \(\mathbf{k}\), with pseudo-spin (layer), valley, and spin degrees of freedom, described by Pauli matrices \(\sigma_{i}\), \(\tau_{i}\), and \(s_{i}\), respectively. The single particle Hamiltonian may be written as [18; 19],

\[H_{0}=\sum_{\mathbf{k}}\Psi_{\mathbf{k}}^{\dagger}\left(h_{0}+h_{\mathrm{tri}}+h_{\mathrm{Dis}}+h_{\mathrm{p.h.}}\right)\Psi_{\mathbf{k}}. \tag{2}\]

Here, the matrix \(h_{0}\) accounts for the quadratic band touching in each valley, \(h_{\mathrm{tri}}\) describes the trigonal warping due to sub-leading interlayer tunneling, \(h_{\mathrm{Dis}}\) describes the potential difference between the layers induced by an electric displacement field, and \(h_{\mathrm{p.h.}}\) accounts for particle-hole asymmetric terms. The different terms are given by

\[h_{0}=-\frac{v^{2}}{\gamma_{1}}\left[\left(k_{x}^{2}-k_{y}^{2}\right)\sigma_{x}+2k_{x}k_{y}\sigma_{y}\tau_{z}\right], \tag{3a}\]
\[h_{\mathrm{tri}}=v_{3}\left(k_{x}\sigma_{x}\tau_{z}-k_{y}\sigma_{y}\right), \tag{3b}\]
\[h_{\mathrm{Dis}}=-U\biggl{(}1-2\frac{v^{2}k^{2}}{\gamma_{1}^{2}}\biggr{)}\sigma_{z}, \tag{3c}\]
\[h_{\mathrm{p.h.}}=\left(2\frac{vv_{4}}{\gamma_{1}}+\Delta^{\prime}\frac{v^{2}}{\gamma_{1}^{2}}\right)k^{2}. \tag{3d}\]

In the expressions above \(k^{2}=k_{x}^{2}+k_{y}^{2}\), and \(2U\) is the potential difference between the graphene layers. Here we use the parameters \(v=1.1\times 10^{6}\frac{\mathrm{m}}{\mathrm{sec}}\), \(v_{3}=1.3\times 10^{5}\frac{\mathrm{m}}{\mathrm{sec}}\), \(v_{4}=4.8\times 10^{4}\frac{\mathrm{m}}{\mathrm{sec}}\), \(\gamma_{1}=381\,\mathrm{meV}\), and \(\Delta^{\prime}=22\,\mathrm{meV}\). In the presence of a large displacement field a gap opens in the band structure at charge neutrality, and the DOS features pronounced van-Hove singularities. An example of the valence band DOS (which will be our focus since it is where superconductivity was observed in experiments) is shown in Fig. 1a. Next, electronic interactions in our Hamiltonian are given by

\[H_{\mathrm{int}}\!=\!\frac{1}{\Omega}\sum_{\mathbf{q}}\left(\frac{U_{C}}{2}N_{\mathbf{q}}N_{-\mathbf{q}}+U_{V}n_{\mathbf{q}}^{+}n_{-\mathbf{q}}^{-}+J\mathbf{S}_{\mathbf{q}}^{+}\cdot\mathbf{S}_{-\mathbf{q}}^{-}\right), \tag{4}\]

where \(N_{\mathbf{q}}=\sum_{\mathbf{k}}\Psi_{\mathbf{k}+\mathbf{q}}^{\dagger}\Psi_{\mathbf{k}}\), \(n_{\mathbf{q}}^{\pm}=\sum_{\mathbf{k}}\Psi_{\mathbf{k}+\mathbf{q}}^{\dagger}\frac{1\pm\tau_{z}}{2}\Psi_{\mathbf{k}}\), \(\mathbf{S}_{\mathbf{q}}^{\pm}=\sum_{\mathbf{k}}\Psi_{\mathbf{k}+\mathbf{q}}^{\dagger}\,\mathbf{s}\,\frac{1\pm\tau_{z}}{2}\Psi_{\mathbf{k}}\) (with \(\mathbf{s}\) the vector of spin Pauli matrices), and \(\Omega\) is the system area.
The structure of \(H_{\mathrm{int}}\) is the most general form of short-range interactions which respect the symmetry of the system: time-reversal, \(SU(2)\) spin symmetry (in the absence of magnetic fields or spin-orbit coupling), and the \(U(1)\) charge and (approximate) valley symmetries. The interaction term proportional to \(U_{C}\) is a structure-less density-density interaction, which is entirely \(SU(4)\) symmetric in valley-spin space, and is considered to be dominant as compared to the other two terms. The term proportional to \(U_{V}\) accounts for possible differences between intravalley and intervalley density-density interactions and will be set to zero throughout this work, as it is non-essential for correctly capturing the phenomenology we aim to study. Finally, \(J\) is the intervalley Hund's coupling between electron spins in opposite valleys. The experimental phenomenology in hBN-encapsulated BLG (also in rhombohedral trilayer graphene) is most consistent with the Hund's interaction being ferromagnetic, i.e., \(J<0\) [1; 20].

We may now analyze the model of Eq. (1) using a variational Hartree-Fock approach, similar to the ones employed in Refs. [20; 21; 22]. Our interest lies on the hole-doped side of charge neutrality in the system, where the peculiar superconducting phenomenon was experimentally observed. Moreover, for the physical effect illustrated in this work it is sufficient to consider flavor symmetry broken phases, i.e., order parameters which are some combination of \(\tau_{z}\) and \(s_{z}\) alone. Our analysis thus proceeds as follows. At a given chemical potential \(\mu\), the grand-potential \(\Phi=\left\langle H-\mu N_{0}\right\rangle_{\mathrm{H.F.}}\) is minimized, where \(\left\langle\right\rangle_{\mathrm{H.F.}}\) denotes the expectation value calculated using the variational wavefunction

\[\left|\Psi\right\rangle_{\mathrm{H.F.}}=\prod_{i}\left(\prod_{\begin{subarray}{c}\mathbf{k}\\ \epsilon_{\mathbf{k}}\geq\mu_{i}\end{subarray}}c_{i,\mathbf{k}}\right)\left|\mathrm{CN}\right\rangle, \tag{5}\]

where \(c_{i,\mathbf{k}}\) annihilates an electron of flavor \(i\) in the valence band at momentum \(\mathbf{k}\) with energy \(\epsilon_{\mathbf{k}}\), \(\left|\mathrm{CN}\right\rangle\) is the flavor-symmetric charge-neutral Fermi-sea, and \(\mu_{i}\leq 0\) are the four variational parameters corresponding to the four spin-valley flavors, with the index \(i=(\tau,s)\) combining the spin and valley indices (see Appendix A). Obtaining the different \(\mu_{i}\), we calculate the flavor resolved densities \(\nu_{i}=-\frac{1}{\Omega}\sum\limits_{\mu_{i}<\epsilon_{\mathbf{k}}<0}1\), and their relation to the total density \(n=\sum_{i}\nu_{i}\). We focus on the vicinity of the valence band vHS in the high displacement field regime, where the anomalous superconductivity was observed. Fig. 1b demonstrates the typical pattern of phase transitions we observe with a reasonable choice of parameters, consistent with the experimental picture. The system favors a flavor-symmetric phase at lower hole densities, transitions into a two-fold symmetric spontaneously spin-polarized phase with increased doping, and then becomes flavor symmetric again as more holes are doped into it. The electrons spontaneously develop flavor polarization so as to avoid an energetically unfavorable phase, where the Fermi levels of all the bands sit at a high DOS region. The interaction energy cost of such a phase, given that \(U_{C}\) is strong enough, triggers a Stoner-like transition.
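As an aside, the position and weight of the van-Hove peak driving this transition follow directly from the single-particle model of Eqs. (2)-(3). The sketch below estimates the per-flavor valence-band DOS (cf. Fig. 1a) by brute-force evaluation on a momentum grid; the \(\hbar v\) values are our own conversions of the quoted velocities to meV\(\cdot\)nm, and the momentum window and grid resolution are arbitrary choices, so this reproduces Fig. 1a only qualitatively.

```python
import numpy as np

# Single-particle valence-band DOS of Eqs. (2)-(3), one valley (tau_z = +1).
# hbar*velocity products in meV*nm (approximate conversions); momenta in 1/nm.
hv, hv3, hv4 = 724.0, 85.6, 31.6       # hbar*v, hbar*v_3, hbar*v_4
gamma1, Dprime, U = 381.0, 22.0, 60.0  # meV; U = 60 meV as in Fig. 1

kmax, N = 0.2, 1200                    # k-space window (1/nm) and grid size
k = np.linspace(-kmax, kmax, N)
kx, ky = np.meshgrid(k, k)
k2 = kx**2 + ky**2

dx = -hv**2 / gamma1 * (kx**2 - ky**2) + hv3 * kx   # sigma_x coefficient
dy = -hv**2 / gamma1 * 2 * kx * ky - hv3 * ky       # sigma_y coefficient
dz = -U * (1 - 2 * hv**2 * k2 / gamma1**2)          # sigma_z coefficient
eph = (2 * hv * hv4 / gamma1 + Dprime * hv**2 / gamma1**2) * k2

E_val = eph - np.sqrt(dx**2 + dy**2 + dz**2)        # valence-band dispersion

# DOS per flavor and unit area: histogram of band energies,
# each grid point carrying a k-space weight (dk/2pi)^2.
dk = k[1] - k[0]
w = np.full(E_val.shape, (dk / (2 * np.pi))**2)
hist, edges = np.histogram(E_val, bins=400, weights=w)
dos = hist / (edges[1] - edges[0])                  # states / (meV * nm^2)
print("van Hove peak near E =", round(edges[np.argmax(dos)], 2), "meV")
```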
The ferromagnetic intervalley Hund's interaction \(J\) is responsible for the specific pattern of flavor polarization, where valley degeneracy is preserved, yet the electrons spin polarize. Tracking the evolution of the individual flavor densities, one notices an interesting feature. The flavors tend to avoid certain densities, which encompass the vHS (Fig. 1a). We term this interaction-induced blocking of certain flavor-resolved densities the _Stoner blockade_. Unsurprisingly, the extent of the blockaded region is directly related to the strength of repulsive interactions, as demonstrated in Fig. 1c.

Figure 1: (a) Density of states per flavor of the valence band, computed from the Hamiltonian \(H_{0}\) [Eq. (2)]. The grey rectangle demarcates the blockaded region in panel (b). (b) Flavor resolved densities \(\nu_{i}\) as a function of total electron filling, calculated by the variational Hartree-Fock method. Spontaneous spin polarization develops in the system approaching the van-Hove filling from either side. The gray rectangle emphasizes the forbidden range of flavor density due to the strong electronic interactions. Here, \(U_{C}=1.8\) eV\(\cdot\)nm\({}^{2}\), \(J=0.25\) eV\(\cdot\)nm\({}^{2}\). (c) Extent of the Stoner blockade with varying interaction strength \(U_{C}\). The van Hove filling is marked by a dashed blue line. Throughout this figure we use \(U=60\) meV.

### Blocking superconductivity

Let us briefly discuss the implications of the demonstrated Stoner blockade (detailed calculations of superconductivity within our model are carried out in Sec. III). The experimentally measured critical temperature of the superconducting phase, of order \(\mathcal{O}\left(10\,\mathrm{mK}\right)\), is much smaller than other typical energy scales of the system. The bandwidth of the graphene \(\pi\)-electrons is \(\mathcal{O}\left(\mathrm{eV}\right)\), the interlayer potential difference due to the displacement field (in the relevant parameter regime where superconductivity is observed) is \(\mathcal{O}\left(50\,\mathrm{meV}\right)\), and the distance between the vHS and the top of the valence band, due to the trigonal warping [see Eq. (3b)], is \(\mathcal{O}\left(10\,\mathrm{meV}\right)\) [1]. It is thus instructive to examine the expression for the weak coupling superconducting critical temperature, \(T_{c}\sim\omega_{c}\exp\left(-\frac{1}{\tilde{g}\mathcal{N}}\right)\), with the effective pairing interaction \(\tilde{g}\), the pairing interaction cutoff \(\omega_{c}\), and the Fermi level DOS \(\mathcal{N}\). The dimensionless coupling constant is assumed to be rather small, \(\tilde{g}\mathcal{N}\ll 1\). The important observation is that the critical temperature is extremely sensitive to slight changes in the Fermi level DOS in this case. Quantitatively, one may relate the change in critical temperature \(\delta T_{c}\) to a DOS variation \(\delta\mathcal{N}\),

\[\frac{\delta T_{c}}{T_{c}}=\frac{1}{\tilde{g}\mathcal{N}}\times\frac{\delta\mathcal{N}}{\mathcal{N}}. \tag{6}\]

Thus, in a weak-coupling scenario there is a huge "lever factor" converting DOS changes into modifications of \(T_{c}\). As a consequence of the above considerations, blocking the high DOS regions of individual flavor fillings can catastrophically weaken superconductivity. Conversely, relief of the Stoner blockade, even by a modest amount, may produce more robust superconductivity with higher \(T_{c}\). We now move on to discuss a natural way to lift the blockade - via introducing a Zeeman term.
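Before doing so, it is worth putting rough numbers on the lever factor of Eq. (6). The sketch below evaluates \(T_{c}\sim\omega_{c}e^{-1/(\tilde{g}\mathcal{N})}\) for a few relative DOS changes; the coupling \(\tilde{g}\mathcal{N}=0.1\) is an arbitrary weak-coupling assumption, while \(\omega_{c}=0.6\) meV is the cutoff quoted in the caption of Fig. 3.

```python
import math

gN = 0.1        # assumed dimensionless coupling g*N (weak-coupling regime)
omega_c = 0.6   # pairing cutoff in meV (value quoted in Fig. 3)

Tc = omega_c * math.exp(-1.0 / gN)   # reference critical temperature
for dN in (0.01, 0.05, 0.10):        # relative increase of the Fermi level DOS
    Tc_new = omega_c * math.exp(-1.0 / (gN * (1.0 + dN)))
    print(f"dN/N = {dN:.0%} -> dTc/Tc = {(Tc_new - Tc) / Tc:+.0%}")
```

For this assumed coupling, even a 5% gain in DOS already raises \(T_{c}\) by more than half its value, illustrating why alleviating the blockade matters so much.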
### Softening the phase transitions

Let us examine the width of the Stoner blockaded region \(\Delta\nu_{b}\) more carefully. To that end, we introduce a simple model for the free energy exhibiting a first-order phase transition and a jump in the densities and magnetization. Approximating the vHS as symmetric around the singular filling \(n_{\mathrm{vHS}}\), we can relate the blockade to a _first-order jump in magnetization_ \(\Delta m\) (Appendix B),

\[\Delta\nu_{b}\approx\Delta m-\frac{1}{2}\left|n_{\mathrm{vHS}}-n^{c}\right|, \tag{7}\]

with \(n^{c}\) the density at which the phase transition spontaneously occurs, and we defined the magnetization \(m\equiv\sum_{\tau,s}\left(s_{z}\right)_{ss}\nu_{\tau s}\). The upshot of the crude estimate in the expression (7) is that reducing the first-order magnetization jump immediately shrinks the blockaded region. It is well-known that a spontaneous first-order transition is softened by a perturbation that couples linearly to the order parameter. In the case of spin magnetization, this is clearly just a Zeeman magnetic field. Consider the following simple free-energy density, expanded around the phase transition point,

\[f\left(m\right)=f_{0}+\alpha m^{2}-\frac{1}{2}\beta m^{4}+\frac{1}{3}\gamma m^{6}-Bm. \tag{8}\]

For simplicity, as we are only interested in the qualitative properties of the phase transition, in Eq. (8) \(m\) is the dimensionless order parameter (magnetization), \(\alpha,\beta,\gamma>0\) have units of energy density, and \(B\) is the Zeeman-like energy density. Notice \(\alpha\) is the parameter that controls the transition (in our case, the relevant parameter is the electron density). In terms of the above parameters, the \(B=0\) transition, where the minimum of \(f\) is at \(m\neq 0\), occurs at \(\alpha_{c}=\frac{3}{16}\frac{\beta^{2}}{\gamma}\), and the magnetization jumps by a magnitude \(\Delta m^{0}=\sqrt{\frac{3\beta}{4\gamma}}\). By calculating the magnetic susceptibility \(dm/dB\) on both sides of the transition (at \(\alpha_{c}\) it equals \(8\gamma/3\beta^{2}\) on the symmetric side and \(2\gamma/3\beta^{2}\) on the polarized side), we find the small field dependence of this jump (Appendix B),

\[\Delta m\approx\Delta m^{0}-2\frac{\gamma}{\beta^{2}}B. \tag{9}\]

One thus recovers the expected effect: a finite magnetic field significantly softens the first-order phase transition. Let us now turn to include this effect explicitly within our model by introducing the Zeeman coupling

\[\mathcal{H}_{\mathrm{SB}}^{\mathrm{Zeeman}}=-V_{Z}\sum_{\mathbf{k}}\Psi_{\mathbf{k}}^{\dagger}s_{z}\Psi_{\mathbf{k}}, \tag{10}\]

which explicitly breaks the spin \(SU\left(2\right)\) symmetry of \(H_{0}+H_{\mathrm{int}}\). Repeating our variational analysis with finite \(V_{Z}\), we find precisely the behavior expected from the above simplified considerations. Namely, the jump in magnetization gradually decreases on both sides of the transition, as illustrated in Fig. 2a. In Fig. 2b we demonstrate the effect of finite Zeeman coupling on the so-called blockade. The flavor-resolved densities now encroach into the territory forbidden in the \(V_{Z}=0\) case. Thus, the normal state Fermi level DOS may become higher with applied in-plane magnetic fields.

Figure 2: (a) The magnitude of the discontinuous magnetization jump at the phase transition points, as a function of Zeeman coupling [Eq. (10)]. As expected from a first-order magnetization transition, the jump softens with an increase in the magnetic field. Here, the magnetization is defined \(m\equiv\sum_{\tau,s}\left(s_{z}\right)_{ss}\nu_{\tau s}\). (b) Flavor resolved densities \(\nu_{i}\) as a function of total electron filling, with \(V_{Z}=0.05\) meV. The gray rectangle marks the forbidden range of flavor density when \(V_{Z}=0\) (see Fig. 1b). Notice that in the vicinity of the transition, some flavors occupy a previously-forbidden region. Other than \(V_{Z}\neq 0\), the parameters used in this Figure are identical to the ones in Fig. 1b.

## III Superconductivity

Having established the relevant phenomenon naturally arising in the non-superconducting normal state of BLG, we explore its effects on superconductivity in this system. Our starting point for this discussion will be the result of the variational Hartree-Fock approach. For simplicity, we assume superconductivity emerges within a spin-polarized sector, neglecting the sector whose Fermi energy resides far away from the vHS. Furthermore, we assume intervalley electron pairing, as finite-momentum pairing is generically considered less favorable due to its sensitivity to disorder effects.
For simplicity, we assume superconductivity emerges within a spin-polarized sector, neglecting the sector whose Fermi energy resides far away from the vHS. Furthermore, we assume intervalley electron pairing, as finite Figure 2: (a) The magnitude of the discontinuous magnetization jump at the phase transition points, as a function of Zeeman coupling [Eq. (10)]. As expected from a first-order magnetization transition, the jump softens with an increase in the magnetic field. Here, the magnetization is defined \(m\equiv\sum_{\tau,s}\sigma_{s}^{ss}\nu_{\tau s}\). (b) Flavor resolved densities \(\nu_{i}\) as a function of total electron filling, with \(V_{Z}=0.05\) meV. The gray rectangle marks the forbidden range of flavor density when \(V_{Z}=0\) (see Fig. 1b). Notice that in the vicinity of the transition, some flavors occupy a previously-forbidden region. Other than \(V_{Z}\neq 0\), the parameters used in this Figure are identical to the ones in Fig. 1b. momentum pairing is generically considered less favorable due to its sensitivity to disorder effects. ### Tolmachev-Anderson-Morel approach Projecting on to the valence bands of \(H_{0}\) We consider the action \[\mathcal{S} =\sum_{n,\mathbf{k},\tau}\left(\xi_{\mathbf{k},\tau}-i\omega_{n} \right)\bar{e}_{n\mathbf{k}\tau}c_{n\mathbf{k}\tau}\] \[+\sum_{n,m,\ell,\mathbf{k},\mathbf{k}^{\prime},\mathbf{q},\tau, \tau^{\prime}}V_{\mathbf{q}}\bar{c}_{n+\ell,\mathbf{k}+\mathbf{q}\tau}c_{n \mathbf{k}\tau}\bar{c}_{m-\ell,\mathbf{k}^{\prime}-\mathbf{q}\tau^{\prime}}c_{ m\mathbf{k}^{\prime}\tau^{\prime}}, \tag{11}\] where \(c_{n\mathbf{k}\tau}\) is a fermionic Grassman variable corresponding to a fermion with Matsubara frequency \(\omega_{n}=\pi\left(2n+1\right)T\), momentum \(\mathbf{k}\) at valley \(\tau\), \(\xi_{\mathbf{k},\tau}=\epsilon_{\mathbf{k}\tau}-\bar{\mu}\), \(\epsilon_{\mathbf{k}\tau}\) is the electronic spectrum of the valley \(\tau\) valence band, \(\bar{\mu}\) is the Fermi energy of the relevant sector, and \(V_{\mathbf{q}}\) is a generalized interaction projected onto the BLG valence bands. We simplify the interaction term by replacing the general \(V_{\mathbf{q}}\) with a single short-range term \(V\), and decouple the interaction term in the Cooper channel (pairing between electrons of opposite momenta at opposite valley) via a Hubbard-Stratonovich transformation, such that the action reads \[\tilde{\mathcal{S}} =\sum_{n,\mathbf{k},\tau}\left(\xi_{\mathbf{k},\tau}-i\omega_{n} \right)\bar{e}_{n\mathbf{k}\tau}c_{n\mathbf{k}\tau}+\frac{1}{V}\bar{\Delta}\Delta\] \[+i\sqrt{\frac{T}{\Omega}}\sum_{n,\mathbf{k},\tau}\left(\Delta \bar{c}_{n\mathbf{k}\tau}\bar{c}_{-n,-\mathbf{k},-\tau}+\bar{\Delta}c_{-n,- \mathbf{k},-\tau}c_{n\mathbf{k}\tau}\right), \tag{12}\] where \(\Delta\) is the superconducting order parameter. Our analysis thus proceeds in two steps. We first assume an upper energy cutoff on the action \(\Lambda_{0}\), at which the interaction is repulsive. The initial value of \(V\) will be determined by the screened Coulomb interaction, \(V_{\mathbf{q}}\approx 2\pi e^{2}/\left(\epsilon_{r}\left|\mathbf{q}\right|\right)\) (\(\epsilon_{r}\) is the dielectric constant). Considering only low-momentum scattering (intervalley scattering is largely suppressed), the relevant momenta are of order \(k_{F}\sim\sqrt{\left|n\right|/4}\) (accounting for the four flavors), which in our regime of interest is \(\sim 1/20\) nm\({}^{-1}\). 
However, the Thomas-Fermi momentum \(q_{TF}=2\pi e^{2}\mathcal{N}\left(\bar{\mu}\right)/\epsilon_{r}\) is of order \(\mathcal{O}\left(1\,\mathrm{nm}^{-1}\right)\) (considering \(\epsilon_{r}=4\) for hBN and the vicinity of the vHS), i.e., much larger than \(k_{F}\). Physically, this indicates that the combination of large Fermi energy DOS with low electron density means the Coulomb repulsion is quite efficiently screened. We will thus replace \(V_{\mathbf{q}}\approx\mathcal{N}^{-1}\left(\bar{\mu}\right)\) henceforth. Similar considerations were discussed in Refs. [23; 24]. We will also include the effects of the Hund's interaction, which is attractive in the spin-polarized Cooper channel, such that the initial interaction is

\[V\left(\Lambda_{0}\right)=\mathcal{N}^{-1}\left(\bar{\mu}\right)-J. \tag{13}\]

We note that \(V\left(\Lambda_{0}\right)\) is still positive, since we take the subleading interaction term \(J\) to be much smaller than the dominant Coulomb repulsion energy scale. In the first step, we integrate out high energy electrons down to \(\omega^{*}\), the scale at which retarded attractive interactions come in. In the case of acoustic phonon mediated attraction, this would be the Debye frequency. Keeping only the leading term in \(\Delta\), since we are interested only in the vicinity of the superconducting transition, the effective interaction at this point is somewhat reduced,

\[V\left(\omega^{*}\right)^{-1}=V\left(\Lambda_{0}\right)^{-1}+\frac{1}{2\Omega}\sum^{*}\frac{1+\mathrm{sgn}\left(\xi_{\mathbf{k},\tau}\xi_{-\mathbf{k},-\tau}\right)}{\left|\xi_{\mathbf{k},\tau}\right|+\left|\xi_{-\mathbf{k},-\tau}\right|}, \tag{14}\]

where the sum \(\sum^{*}\) is over energies \(\omega^{*}<\left|\xi_{\mathbf{k},\tau}\right|\leq\Lambda_{0}\). We have taken the limit \(T\to 0\) in the above expression, since the energies integrated over are assumed to be much higher than the temperature. Notice that we generally allow \(\xi_{\mathbf{k},\tau}\neq\xi_{-\mathbf{k},-\tau}\), which is excluded by \(H_{0}\), but will be made possible by orbital effects of the magnetic field. In the next step, we introduce the attraction \(g\) at the scale \(\omega^{*}\), and calculate the vertex function \(\chi_{\mathrm{SC}}\) by integrating out the remaining electrons (assuming \(\left|g\right|>V\left(\omega^{*}\right)\)),

\[\chi_{\mathrm{SC}}^{-1}=\left[\left|g\right|-V\left(\omega^{*}\right)\right]^{-1}-\frac{1}{\Omega}\sum_{\left|\xi_{\mathbf{k},\tau}\right|\leq\omega^{*}}\frac{1-f\left(\xi_{\mathbf{k},\tau}\right)-f\left(\xi_{-\mathbf{k},-\tau}\right)}{\xi_{\mathbf{k},\tau}+\xi_{-\mathbf{k},-\tau}}, \tag{15}\]

where \(f\left(x\right)=1/\left(1+e^{x/T}\right)\). We will extract \(T_{c}\) as the temperature at which the vertex function diverges, i.e.,

\[\chi_{\mathrm{SC}}^{-1}\left(T_{c}\right)=0. \tag{16}\]

### Orbital magnetic field effect

When an in-plane magnetic field is applied to a BLG device, the Zeeman term coupling to electron spins is not the only perturbation to the Hamiltonian.
### Orbital magnetic field effect

When an in-plane magnetic field is applied to a BLG device, the Zeeman term coupling to the electron spins is not the only perturbation to the Hamiltonian. As a finite flux penetrates the space between the graphene layers, one should modify \(H_{0}\to H_{0}+\sum_{\mathbf{k}}\Psi_{\mathbf{k}}^{\dagger}h_{\mathrm{orb}}\Psi_{\mathbf{k}}\), with [25] \[h_{\mathrm{orb}}=\frac{2v^{2}}{\gamma_{1}}\left(\mathbf{b}\times\mathbf{k}\right)_{z}\left(\frac{v_{4}}{v}\sigma_{z}+2\frac{U}{\gamma_{1}}\right), \tag{17}\] where \(\mathbf{b}=e\mathbf{B}d/2\), \(\mathbf{B}\) is the in-plane magnetic field, \(d\) is the interlayer separation, and we only consider leading-order terms in \(v_{4}/v\) and \(U/\gamma_{1}\). For the definitions of the different parameters, see Eq. (3). We have verified numerically that \(h_{\mathrm{orb}}\) has a negligible effect on the phase transitions studied in Sec. II.1 for experimentally relevant magnetic fields of order \(\mathcal{O}\left(1\,\mathrm{T}\right)\) or less. Notice that \(h_{\mathrm{orb}}\) is odd in momentum \(\mathbf{k}\) and even with respect to valley. The relevant effect of this term regarding superconductivity is to make \(\epsilon_{\mathbf{k},\tau}\neq\epsilon_{-\mathbf{k},-\tau}\), resulting in a non-negligible pair-breaking effect. Although the orbital energy is rather small compared to the Zeeman energy associated with the magnetic field (due to the small layer separation \(d\) and the relevant Fermi momenta), it becomes important compared to the tiny superconducting \(T_{c}\)s which are presumably realized in experimental devices. As can be seen in Fig. 3, this leads to a narrowing of the superconducting region with increased \(B\), whereas the pure Zeeman effect would not lead to such an effect. The latter can be understood from the fact that the Fermi-level DOS grows monotonically with magnetic field, just as the Stoner blockade picture would imply. An important consequence of the Stoner-blockaded superconductivity at zero field, the mechanism that we propose here, is an _extraordinary sensitivity to the Coulomb repulsion strength_. Let us compare panels (a) and (b) in Fig. 3, where in the latter we slightly reduce the interaction parameter \(U_{C}\) by a mere 10% compared to the former. As one might expect from the discussion in Sec. II, the Fermi levels in the reduced-repulsion-strength scenario may come much closer to the vicinity of the vHS, significantly increasing the Fermi-level DOS \(\mathcal{N}\left(\bar{\mu}\right)\). In turn, thanks to an effective weak-pairing lever factor [along the lines of Eq. (6)], this gives rise to an enhancement of superconductivity. Both the superconducting transition temperature and the regions where superconductivity is stabilized are enhanced. We stress that the change here was made to \(U_{C}\) alone, which determines the variational ground state. The initial coupling in the Cooper channel, \(V\left(\Lambda_{0}\right)\), remains unaltered between panels (a)-(b). Thus, the effect we demonstrate in Fig. 3 is not due to introducing additional attraction in the superconducting channel, but rather due to a _modification of the normal-state properties_.

## IV Refinements

### Ising spin-orbit coupling

Inspired by the experiment in Ref. [15], we consider replacing the Zeeman term by a substrate-induced Ising spin-orbit coupling (ISOC), i.e., \[\mathcal{H}_{\mathrm{SB}}^{\mathrm{ISOC}}=-\lambda_{\mathrm{ISOC}}\sum_{\mathbf{k}}\Psi_{\mathbf{k}}^{\dagger}s_{z}\tau_{z}\Psi_{\mathbf{k}}, \tag{18}\] which promotes so-called spin-valley locking, with the spin in the out-of-plane direction. The Stoner blockade mechanism explored in this work may also help explain the findings of Ref. [15] regarding the stabilization of superconductivity in BLG when an ISOC-inducing substrate (e.g., \(\mathrm{WSe}_{2}\)) is included.
To demonstrate this, it is instructive to consider the following two limits.

1. _Opposite-sign (antiferromagnetic) Hund's interaction._- As we demonstrate explicitly in Appendix A, flipping the sign of the intervalley Hund's term, \(J\to-J\), exactly maps the scenario of spontaneous spin polarization (\(\langle s_{z}\rangle_{\mathrm{H.F.}}\neq 0\)) in the presence of a Zeeman term to the polarization of different spin-valley locked sectors (\(\langle s_{z}\tau_{z}\rangle_{\mathrm{H.F.}}\neq 0\)) in the presence of ISOC. Whereas this limit is quite extreme, a plausible mechanism for the sign change of this term in the presence of a substrate is discussed in Appendix C. Thus, a moderate amount of ISOC, \(\lambda_{\mathrm{ISOC}}\sim\mathcal{O}\left(1\,\mathrm{meV}\right)\) as measured in experiments, acts as an effective magnetic field of the order of 10 Tesla in the context of the Stoner blockade (although without the adverse orbital effects). One would thus expect Stoner-blockaded superconductivity to be much stronger in this scenario, as compared to the Zeeman-triggered one. This is of course entirely consistent with experimental results thus far [15; 16].
2. _\(\lambda_{\mathrm{ISOC}}\rightarrow\infty\)._- If the ISOC overwhelms the other energy scales in the problem, one may consider the scenario where one spin-valley sector is inert as it occupies some remote low-DOS region, whereas the other two flavors may develop some valley polarization near the van-Hove filling (see Fig. 4). This will thus suppress pairing between electrons in this sector, in what we will denote as a "mini-blockade". Compared with the spontaneously developed spin polarization in the \(\lambda_{\mathrm{ISOC}}=0\) case, one has half the DOS. Thus, whereas the former blockade is dominated by the interaction \(U_{C}\), the mini-blockade is effectively controlled by \(U_{C}/2\). As one gleans from Fig. 1c, this significantly reduces the blockaded region, hence stabilizing superconductivity.

Whereas these two limits are probably far from exact in the experimental scenario, they help us make sense of ISOC-enhanced superconductivity in the context of the Stoner blockade mechanism presented here. We provide a more quantitative example of the cascade of phase transitions in the presence of ISOC (comparable in magnitude to experiments) in Fig. 4.

Figure 3: Superconductivity near the ferromagnetic phase transition boundary. Left panels: The DOS at the Fermi level in the superconducting sector as a function of density and in-plane magnetic field. Right panels: The corresponding superconducting transition temperature, calculated by the methods of Sec. III. Both features follow closely the phase transition line, as one would expect within the Stoner blockade mechanism. (a) Calculations with Coulomb repulsion parameter \(U_{C}=1.8\) eV\(\cdot\)nm\({}^{2}\). (b) Same as (a), with \(U_{C}\) reduced by 10%. Notice the colorscales are identical for panels (a) and (b), emphasizing the immense potential impact of slight modifications of the Coulomb interaction strength. For example, the maximal \(T_{c}\) increased by a factor of \(\sim 2.5\) (66 mK to 158 mK) after the 10% reduction in \(U_{C}\). Other parameters used: \(U=60\) meV, \(J=0.25\) eV\(\cdot\)nm\({}^{2}\), \(\Lambda_{0}=25\) meV, \(\omega_{c}=0.6\) meV, and \(g=0.63\) eV\(\cdot\)nm\({}^{2}\) (the last three parameters are defined in Sec. III.1).
For the most part, the ISOC splits the occupation of the two spin-valley locked sectors by an amount which gradually increases as the two sectors' Fermi energies approach the vHS, as expected. This effect is similar, up to a change of flavor labels, to the separation of spin-polarized sectors in the presence of a large Zeeman energy. Sufficiently close to the vHS, we observe the intra-sector mini-blockade dominated by the smaller \(\sim U_{C}/2\) repulsion. Qualitatively, \(\lambda_{\mathrm{ISOC}}\) should increase monotonically with the displacement field, up to some saturation field. This is due to the spin-orbit term originating in the proximity to a \(\mathrm{WSe}_{2}\) layer, which is maximized when the valence-band electron wavefunctions are entirely layer polarized (\(\left\langle\sigma_{z}\right\rangle\approx 1\) in our notation). Therefore, our theory predicts the superconducting region to expand and \(T_{c}\) to increase with the growing displacement field, consistent with the experimental scenario [15; 16].

### Insufficiency of Anderson's theorem

In this Section, we argue that the superconductivity described here is remarkably delicate and disorder-sensitive, even though the simple pairing channel we consider is nodeless, and even in the presence of so-called "protection" by Anderson's theorem [17]. Disorder that leads to density fluctuations will blunt the vHS, reduce the DOS in its vicinity, and cause a decrease in the superconducting critical temperature. Within our spin-polarized valley-degenerate subspace, and neglecting orbital magnetic field effects, we write the mean-field superconducting Hamiltonian as \[H_{\mathrm{SC}}=\sum_{\mathbf{k}}C_{\mathbf{k}}^{\dagger}\left(\bar{\xi}_{\mathbf{k}}\nu_{z}+\delta\xi_{\mathbf{k}}\tau_{z}+\Delta\tau_{x}\nu_{x}\right)C_{\mathbf{k}}, \tag{19}\] with the Nambu spinor \(C_{\mathbf{k}}=\left(c_{\mathbf{k},+},c_{\mathbf{k},-},c_{-\mathbf{k},+}^{\dagger},c_{-\mathbf{k},-}^{\dagger}\right)^{T}\), and the Pauli matrices \(\tau_{i}\) and \(\nu_{i}\) operating on valley and particle-hole space, respectively. We also defined \(\bar{\xi}_{\mathbf{k}}=\left(\xi_{\mathbf{k},+}+\xi_{\mathbf{k},-}\right)/2\) and \(\delta\xi_{\mathbf{k}}=\left(\xi_{\mathbf{k},+}-\xi_{\mathbf{k},-}\right)/2\). It is instructive to apply a unitary transformation (along the lines of Ref. [26]) \(C_{\mathbf{k}}\rightarrow\mathcal{U}C_{\mathbf{k}}\), with \(\mathcal{U}=\frac{1+\nu_{z}}{2}+\frac{1-\nu_{z}}{2}\tau_{x}\). The transformed Hamiltonian at momentum \(\mathbf{k}\) is \(h_{\mathbf{k}}=\left(\bar{\xi}_{\mathbf{k}}+\delta\xi_{\mathbf{k}}\tau_{z}\right)\nu_{z}+\Delta\nu_{x}\). This Hamiltonian has intrinsic particle-hole symmetry, i.e., it anti-commutes with the antiunitary \(\mathcal{P}=\nu_{y}\tau_{y}\mathcal{K}\) (\(\mathcal{K}\) is the complex conjugation operator). Notice that although the phase we consider is spin-polarized, the Hamiltonian still possesses a residual spinless time-reversal symmetry, \(\mathcal{T}=\tau_{x}\mathcal{K}\). The key observation regarding disorder here is that any perturbation to the normal state which is \(\mathcal{T}\)-symmetric (and also adheres to \(\mathcal{P}\) by construction) is proportional to \(\nu_{z}\). Thus, such perturbations anti-commute with the superconducting order parameter term \(\Delta\nu_{x}\). In this scenario, it has been shown [17; 27; 28] that the only change to the self-consistent superconducting gap equation is the replacement of the DOS of the pristine Hamiltonian \(H_{0}\) by that of the perturbed normal state.
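The unitary rotation and the anticommutation property just invoked are pure Pauli-matrix algebra, and can be checked directly; the sketch below (with arbitrary numerical values for \(\bar{\xi}\), \(\delta\xi\) and \(\Delta\)) is only a consistency check of the statements above.

```python
import numpy as np

# 2x2 Pauli matrices; operators act on valley (tau) x particle-hole (nu) space.
s0, sx, sz = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])
tau = lambda m: np.kron(m, s0)    # valley sector
nu = lambda m: np.kron(s0, m)     # Nambu (particle-hole) sector

xi_bar, d_xi, Delta = 1.0, 0.3, 0.2   # arbitrary test values
H = xi_bar * nu(sz) + d_xi * tau(sz) + Delta * tau(sx) @ nu(sx)   # Eq. (19)

U = (np.eye(4) + nu(sz)) / 2 + (np.eye(4) - nu(sz)) / 2 @ tau(sx)
h = U.conj().T @ H @ U            # rotated Hamiltonian at fixed momentum
target = (xi_bar * np.eye(4) + d_xi * tau(sz)) @ nu(sz) + Delta * nu(sx)
assert np.allclose(h, target)     # h = (xi_bar + d_xi tau_z) nu_z + Delta nu_x

# T-symmetric normal-state perturbations are proportional to nu_z (possibly
# times tau_z); they anticommute with the pairing term Delta nu_x:
for V in (nu(sz), tau(sz) @ nu(sz)):
    assert np.allclose(V @ nu(sx) + nu(sx) @ V, 0)
print("rotation and anticommutation checks passed")
```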
For example, Eq. (15) will be modified as (notice once more that we do not consider the orbital effect of the magnetic field in this section) \[\chi_{\mathrm{SC}}^{-1}=\left[\left|g\right|-V\left(\omega^{*}\right)\right]^{-1}-2\int_{0}^{\omega^{*}}d\xi\,\tilde{\mathcal{N}}\left(\xi\right)\frac{\tanh\left(\frac{\xi}{2T}\right)}{\xi}, \tag{20}\] where \(\tilde{\mathcal{N}}\) is the DOS in the presence of disorder. In graphene, the relevant sources of disorder are ripples [29; 30], charge impurities [31], and strain variations [32]. It has been argued that strain disorder, which acts as a random gauge field [33], is the dominant type of disorder in state-of-the-art graphene devices [34], and plays an important role in twisted graphene multilayers [35]. In any case, these all inherently preserve the spinless time-reversal symmetry, so that one may apply Anderson's theorem to the superconductivity discussed above. However, _this does not necessarily mean superconductivity persists in the presence of such disorder_. To simplify the remaining discussion and illustrate our point, we ascribe a single parameter to describe the strength of disorder in the system - the charge inhomogeneity \(\delta n\). This quantity is usually extracted as roughly the width of the resistance peak of a graphene device at charge neutrality and zero displacement field [36]. It has been previously shown to be directly related to the mobility in monolayer graphene and BLG devices [34], and thus will provide a useful metric for our discussion.

Figure 4: (a) Flavor-resolved densities \(\nu_{i}\) (calculated by the variational Hartree-Fock method) in the presence of strong ISOC, \(\lambda_{\mathrm{ISOC}}=0.7\) meV, consistent with Ref. [15]. Here, \(U_{C}=1.8\) eV\(\cdot\)nm\({}^{2}\), \(J=-0.1\) eV\(\cdot\)nm\({}^{2}\). (b) Zoom-in on one of the blockaded regions close to the van-Hove filling. The blockaded region, where the intra-spin-valley sector spontaneously polarizes, thus suppressing intervalley pairing, is demarcated by a gray rectangle. Notice the y-axis scale is the same as in Fig. 1b, showing the blockaded region is significantly smaller due to the large ISOC.

We consider only the effect of DOS broadening brought on by inhomogeneity. We thus broaden the computed DOS by convolution with a normal distribution with standard deviation \(\sigma\approx\delta n/2.355\) (such that \(\delta n\) is the full-width-at-half-maximum of the distribution). Examples of the DOS broadening can be found in the inset to Fig. 5. One clearly sees that the immediate casualty of the broadening is the vHS, which loses much of its sharpness. Recalling Eq. (6), this suppression of the DOS peak may have dire consequences for superconductivity in this system. Let us now demonstrate this point. We repeat our procedure of extracting the superconducting \(T_{c}\) from Sec. III as a function of disorder. The effect on the critical temperature in the scenario where the Fermi energy is close to the vHS is illustrated in Fig. 5. Superconductivity, in this case, is quite delicate and sensitive to even a moderate amount of charge inhomogeneity. As a consequence, the theory presented here predicts (or rather post-dicts) that only exceptionally high-quality devices are expected to display the unique phenomenon of delicate superconductivity triggered by a magnetic field. These are devices where the charge inhomogeneity is of order \(10^{10}\) cm\({}^{-2}\) or lower. This is precisely the order of magnitude of disorder in current state-of-the-art devices [37; 38; 39], providing a sensible explanation for the relative elusiveness of superconductivity in BLG devices.
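The broadening step itself is straightforward to reproduce. The sketch below applies it to a toy DOS with a vHS-like peak; the functional form of the toy DOS is an assumption made purely for illustration.

```python
import numpy as np

def broaden_dos(n_grid, dos, delta_n):
    """Convolve a DOS curve with a normal distribution of FWHM delta_n,
    i.e. standard deviation sigma = delta_n / 2.355."""
    sigma = delta_n / 2.355
    dn = n_grid[1] - n_grid[0]                     # uniform grid spacing
    kx = np.arange(-5 * sigma, 5 * sigma + dn, dn)
    kernel = np.exp(-0.5 * (kx / sigma) ** 2)
    kernel /= kernel.sum()                         # preserve total weight
    return np.convolve(dos, kernel, mode="same")

# Toy example: a sharp van Hove-like peak, blunted by growing inhomogeneity.
n = np.linspace(-1.0, 1.0, 2001)                   # density, arbitrary units
dos = 1.0 + 3.0 / np.sqrt(np.abs(n) + 1e-3)        # assumed vHS-like shape
for dn_fwhm in (0.02, 0.05, 0.10):
    print(dn_fwhm, round(broaden_dos(n, dos, dn_fwhm).max(), 1))
```

As in the inset of Fig. 5, the peak height (and hence the Cooper logarithm it feeds into) drops quickly with growing \(\delta n\).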
## V Conclusions

In this work, we have presented a theory of delicate superconductivity which is brought to light by either a finite magnetic field or spin-orbit coupling of the Ising type. We dub the underlying mechanism at work here the Stoner blockade. Namely, strong electronic correlation tends to cause spontaneous polarization and reconstruction of the Fermi surfaces, steering them away from the vicinity of the vHS and high-DOS fillings. Coupled with weak-enough electron pairing interactions, this scenario is devastating for superconductivity, which would have otherwise persisted in the non-reconstructed case (i.e., at zero Coulomb repulsion). However, we have shown that external perturbations, e.g., in-plane magnetic fields, may significantly alleviate the blockade under the right circumstances. Moreover, the highest Fermi-level DOS, and thus the strongest superconductivity, would be expected to occur in the vicinity of the symmetry-breaking phase transition. Such circumstances are consistent with the experimental observations in BLG [1; 15; 16]. It thus becomes entirely reasonable to have a scenario where superconductivity is absent (or too weak to detect) unless a strong enough magnetic field is applied, in which case superconductivity stabilizes on the magnetic phase transition line. This is precisely the previously-enigmatic phenomenology of BLG. The mechanism depicted here gives rise to a very distinct prediction that may be tested experimentally. Namely, a small modification of the electron-electron repulsion strength, achieved by, e.g., changing the distance of the BLG to a nearby metallic gate, is expected to have outsize effects on superconductivity, as illustrated in Fig. 3. By virtue of allowing the Fermi energy to come closer to the vHS, the reduced repulsion will lead to an appreciable increase in \(T_{c}\), and to a lowering of the critical Zeeman energy required for superconductivity - up to a point where superconductivity may be detected without a magnetic field at all. This prediction is in contrast to previous works discussing BLG superconductivity, where changing the distance to a metallic gate either weakly influences \(T_{c}\) [23], or has the opposite effect altogether [40; 41]. Our work highlights the universality of Stoner-blockaded superconductivity in BLG. We have demonstrated that in-plane magnetic field and Ising SOC perturbations to the BLG Hamiltonian are on equal footing in terms of bypassing the blockade and revealing a superconducting phase. These two types of perturbations differ, though, in detail. An in-plane magnetic field has a non-negligible orbital effect on superconductivity: despite the intervalley pair-breaking effect being tiny, it is still sizable compared to the small superconducting transition temperature. As we have shown, the presence of the secondary Hund's-type interaction also introduces subtle differences between the two \(SU\left(2\right)\) symmetry-breaking perturbations, which will manifest through subtle details in the flavor-resolved Fermi surface structure in the normal state. Moreover, the induction of Ising SOC in BLG, which requires close proximity to a \(\mathrm{WSe}_{2}\) layer, may further modify the Hamiltonian in important ways and depend on various details of the stacking itself [42].
These are expected to bear an impact on the phase diagram, which we leave to future investigations.

Figure 5: Main panel: Superconducting \(T_{c}\) as a function of the charge inhomogeneity \(\delta n\) induced by time-reversal-symmetric disorder. The calculation was done for \(U=60\) meV and \(\tilde{\mu}=-57.85\) meV, with parameters \(\Lambda_{0}=25\) meV, \(\omega_{c}=0.6\) meV, \(J=0.25\) eV\(\cdot\)nm\({}^{2}\), and \(g=0.65\) eV\(\cdot\)nm\({}^{2}\). Inset: Comparison of the pristine DOS to the broadened DOS for several values of \(\delta n\) (indicated by the legend).

Several noteworthy issues related to the experimental phenomenology of intrinsic superconductivity in BLG are not addressed by the unusual mechanism presented here. The precise nature of the electron pairing glue, be it phonons [23; 43] or a Kohn-Luttinger-like mechanism [44; 40; 41], is intentionally kept ambiguous. Various possibilities may comfortably fit within our framework, which requires only that the pairing glue itself is weak enough such that there is an appreciable lever effect with regard to small Fermi-level DOS modifications [Eq. (6)]. The related issue of an unconventional nodal pairing symmetry, e.g., p-wave (cf. Ref. [45]), is also unresolved. For the sake of clarity, we considered just the simplest possible pairing channel, yet the conclusions drawn here are expected to generalize. Experimentally, it is evident that superconductivity favors the vicinity of the phase transition boundary closer to charge neutrality over the boundary at higher hole-doping. This is observed both in Refs. [1; 15], where superconductivity appears only there, and in Ref. [16], where the superconductor is far more robust closer to charge neutrality. In our theory, a weak inherent asymmetry in the DOS around the vHS does exist, leading to a small asymmetry in the phase transition itself. Notice, for example, Fig. 2a depicting the magnitude of the magnetization jump - it is somewhat smaller for the superconductivity-favoring region, consistent with the Stoner blockade picture. However, the differences we find in our calculations are not sufficiently large to account for the significant experimental disparity between the two regions enclosing the spin-polarized phase. This might suggest that the small effect we observe in our phenomenological description of the system is greatly enhanced by the nature of the pairing itself, its dependence on electron density, or the Fermi-surface topology [44; 45]. We finally comment on the zero-field normal state, which is observed to be more resistive near the magnetic phase transition; the origin of this behavior is not yet well understood. Our analysis does not exclude the possibility of a correlated insulator that onsets at a low enough temperature [40] or the emergence of an intervalley coherent spontaneous order [46; 47]. We would like, however, to put forward another possibility, which is natural given the mechanism explored here. Namely, the formation of a micro-emulsion of the fully-symmetric and spin-polarized phases, which is argued to inevitably occur in the vicinity of a first-order phase transition of the kind discussed here [48]. Since the two constituent phases have different densities, magnetizations, and Fermi-surface topologies, it is reasonable to expect that domain walls should contribute to the overall resistivity. In this scenario, the resistivity would peak near the phase transition as long as superconductivity has not emerged.
The qualitative and quantitative feasibility of this crudely-described mechanism requires further investigation.

###### Acknowledgements.

This project was partially supported by grants from the ERC under the European Union's Horizon 2020 research and innovation programme (grant agreement LEGOTOP No 788715), the DFG CRC SFB/TRR183, the BSF and NSF (2018643), the ISF (1335/16), and the ISF Quantum Science and Technology (2074/19).

## Appendix A Variational Hartree-Fock calculation

This appendix explains the details of the Hartree-Fock calculations we have performed. We begin by computing the grand potential associated with the normal-state Hamiltonian, Eq. (1), at a given chemical potential, \(\Phi=\left\langle H-\mu N_{0}\right\rangle_{\mathrm{H.F.}}\), where \(\left\langle\cdot\right\rangle_{\mathrm{H.F.}}\) denotes the expectation value calculated using the variational wavefunction appearing in Eq. (5), describing possible flavor-symmetry-breaking phases. We define the flavor-resolved densities and kinetic energies, denoted by indices (\(\tau,s\)) for (valley, spin), as \[\nu_{\tau s}=\int_{0}^{\mu_{\tau s}}d\mathcal{N}\left(\epsilon\right),\;\;\;\mathcal{E}_{\tau s}=\int_{0}^{\mu_{\tau s}}d\mathcal{N}\left(\epsilon\right)\epsilon, \tag{A1}\] where \(\mathcal{N}\left(\epsilon\right)\) is the density of states per flavor obtained from Eq. (2), and \(\mu_{\tau s}\) are the variational chemical potentials [denoted by \(\mu_{i}\) with a single flavor index in Eq. (5)]. Combining the different ingredients of \(H\), accounting for the possible Zeeman and Ising spin-orbit terms, a straightforward calculation allows one to obtain the grand-potential density, \[\frac{\Phi}{\Omega}=\sum_{\tau s}\left[\mathcal{E}_{\tau s}+\left(-\mu+V_{Z}s_{z}^{ss}+\lambda_{\mathrm{ISOC}}s_{z}^{ss}\tau_{z}^{\tau\tau}\right)\nu_{\tau s}\right]+\frac{1}{2}\sum_{\tau s\tau^{\prime}s^{\prime}}\nu_{\tau s}\left[U_{C}\left(1-\delta^{ss^{\prime}}\delta^{\tau\tau^{\prime}}\right)+\left(U_{V}-J\left(\delta^{ss^{\prime}}-s_{x}^{ss^{\prime}}\right)\right)\tau_{x}^{\tau\tau^{\prime}}\right]\nu_{\tau^{\prime}s^{\prime}}. \tag{A2}\] We now compare two scenarios of possible flavor symmetry breaking. (i) _Spin polarized (SP), valley degenerate, \(\lambda_{\rm ISOC}=0\)_ - In this case there are two distinct \(\mu_{\tau s}\), one for each spin. We denote \(\nu_{\uparrow}\equiv\nu_{+,\uparrow}=\nu_{-,\uparrow}\), \(\nu_{\downarrow}\equiv\nu_{+,\downarrow}=\nu_{-,\downarrow}\), \(\mathcal{E}_{\uparrow}\equiv\mathcal{E}_{+,\uparrow}=\mathcal{E}_{-,\uparrow}\), and \(\mathcal{E}_{\downarrow}\equiv\mathcal{E}_{+,\downarrow}=\mathcal{E}_{-,\downarrow}\), so that the grand potential \(\Phi_{\rm SP}\) is \[\frac{\Phi_{\rm SP}}{\Omega}=2\left(\mathcal{E}_{\uparrow}+\mathcal{E}_{\downarrow}\right)-2\mu\left(\nu_{\uparrow}+\nu_{\downarrow}\right)+2V_{\rm Z}\left(\nu_{\uparrow}-\nu_{\downarrow}\right)+\left(\nu_{\uparrow}+\nu_{\downarrow}\right)^{2}\left(\frac{3}{2}U_{C}+U_{V}\right)-\left(\nu_{\uparrow}-\nu_{\downarrow}\right)^{2}\left(\frac{U_{C}}{2}+J\right). \tag{A3}\]
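As an illustration of how \(\Phi_{\rm SP}\) is handled in practice, the sketch below minimizes it over the two variational chemical potentials for a toy per-flavor DOS; both the DOS shape and the parameter values are assumptions chosen only to show the mechanics of the variational calculation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy per-flavor valence-band DOS with a vHS-like peak (an assumed shape).
eps = np.linspace(-0.2, 0.0, 4001)                       # energy [eV]
de = eps[1] - eps[0]
dos = 0.2 + 0.05 / ((eps + 0.06) ** 2 / 1e-4 + 1.0)      # [1/(eV nm^2)]

def nu_E(mu_i):
    """Flavor density and kinetic energy up to mu_i, as in Eq. (A1)."""
    m = eps <= mu_i
    return dos[m].sum() * de, (dos[m] * eps[m]).sum() * de

def phi_sp(mus, mu, Vz, Uc, Uv, J):
    """Grand-potential density of the spin-polarized ansatz, Eq. (A3)."""
    (nu_u, E_u), (nu_d, E_d) = nu_E(mus[0]), nu_E(mus[1])
    n, m = nu_u + nu_d, nu_u - nu_d
    return (2 * (E_u + E_d) - 2 * mu * n + 2 * Vz * m
            + n ** 2 * (1.5 * Uc + Uv) - m ** 2 * (0.5 * Uc + J))

res = minimize(phi_sp, x0=[-0.05, -0.07], method="Nelder-Mead",
               args=(-0.06, 0.0, 1.8, 0.1, 0.25))        # illustrative values
print("optimal (mu_up, mu_down):", res.x)  # unequal values signal polarization
```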
(ii) _Spin-valley locked (SVL), \(V_{Z}=0\)_ - Here, we denote \(\nu_{1}\equiv\nu_{+,\uparrow}=\nu_{-,\downarrow}\), \(\nu_{2}\equiv\nu_{+,\downarrow}=\nu_{-,\uparrow}\), \(\mathcal{E}_{1}\equiv\mathcal{E}_{+,\uparrow}=\mathcal{E}_{-,\downarrow}\), \(\mathcal{E}_{2}\equiv\mathcal{E}_{+,\downarrow}=\mathcal{E}_{-,\uparrow}\), and we find \(\Phi_{\rm SVL}\), \[\frac{\Phi_{\rm SVL}}{\Omega}=2\left(\mathcal{E}_{1}+\mathcal{E}_{2}\right)-2\mu\left(\nu_{1}+\nu_{2}\right)+2\lambda_{\rm ISOC}\left(\nu_{1}-\nu_{2}\right)+\left(\nu_{1}+\nu_{2}\right)^{2}\left(\frac{3}{2}U_{C}+U_{V}\right)-\left(\nu_{1}-\nu_{2}\right)^{2}\left(\frac{U_{C}}{2}-J\right). \tag{A4}\] Notice that, up to a change of labels of the different flavors, \(\Phi_{\rm SVL}\) is identical to \(\Phi_{\rm SP}\) with the replacements \(V_{Z}\rightarrow\lambda_{\rm ISOC}\) and \(J\rightarrow-J\).

## Appendix B First-order phase transition and the Stoner Blockade

Our purpose here is to relate the magnetization jump at the phase transition points to the Stoner-blockaded region of flavor-resolved densities. For simplicity, we will assume that the transition is symmetric around the vHS density \(n_{\rm vHS}\), such that the two critical densities \(n_{1}^{c}\) and \(n_{2}^{c}\) are equidistant from \(n_{\rm vHS}\), and the magnetization jump \(\Delta m\) is also identical at both transition points, see Fig. 6. It thus becomes clear that the size of the blockaded region is \[\Delta\nu_{b}=\left(\frac{1}{4}n_{1}^{c}+\frac{\Delta m}{2}\right)-\left(\frac{1}{4}n_{2}^{c}-\frac{\Delta m}{2}\right)=\Delta m+\frac{1}{2}\left|n_{\rm vHS}-n^{c}\right|, \tag{B1}\] where we have suppressed the number index of the critical point on the right-hand side for simplicity, and recovered Eq. (7). For completeness, we detail the calculation of the magnetization jump at the transition. Starting from the free-energy density in Eq. (8) at \(B=0\), we find \[\frac{\partial f}{\partial m^{2}}=\alpha-\beta m^{2}+\gamma m^{4}. \tag{B2}\] Combining the conditions for the phase transition, \[\left.\frac{\partial f}{\partial m^{2}}\right|_{\Delta m^{0},\,\alpha_{c}}=0,\;\;\;f\left(\Delta m^{0}\right)=f\left(0\right), \tag{B3}\] one obtains \(\left(\Delta m^{0}\right)^{2}=\frac{3\beta}{4\gamma}\) and \(\alpha_{c}=\frac{3\beta^{2}}{16\gamma}\). We now examine the magnetic susceptibility near the transition, allowing for an infinitesimal \(B\). We write the saddle-point equation \(\partial f/\partial m=0\), and consider small variations of \(m\) and \(B\). We find \[2dm\left(\alpha-3\beta m^{2}+5\gamma m^{4}\right)=dB. \tag{B4}\] Plugging in \(\alpha=\alpha_{c}\), the susceptibilities on the two sides of the transition are \[\left.\frac{dm}{dB}\right|_{m=0}=\frac{8\gamma}{3\beta^{2}},\;\;\;\left.\frac{dm}{dB}\right|_{m=\Delta m^{0}}=\frac{2\gamma}{3\beta^{2}}. \tag{B5}\] The difference in susceptibilities on the two sides of the transition is responsible for the reduced magnetization jump, which is linear in \(B\) (at small \(B\)), given by Eq. (9).
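These Landau-theory manipulations are easy to verify symbolically. The sketch below reconstructs \(f\) from Eq. (B2), imposes the first-order conditions of Eq. (B3), and recovers the critical values and the two susceptibilities of Eq. (B5).

```python
import sympy as sp

a, b, g, m2 = sp.symbols("alpha beta gamma m2", positive=True)

# Free energy reconstructed from Eq. (B2): df/d(m^2) = alpha - beta m^2 + gamma m^4.
f = sp.integrate(a - b * m2 + g * m2 ** 2, m2)

# First-order transition: f(Dm0^2) = f(0) = 0 together with df/d(m^2) = 0.
sol = sp.solve([sp.Eq(f, 0), sp.Eq(a - b * m2 + g * m2 ** 2, 0)],
               [m2, a], dict=True)[0]
print(sol)   # {m2: 3*beta/(4*gamma), alpha: 3*beta**2/(16*gamma)}

# Susceptibilities from Eq. (B4): dm/dB = 1 / (2 (alpha - 3 beta m^2 + 5 gamma m^4)).
chi = 1 / (2 * (sol[a] - 3 * b * m2 + 5 * g * m2 ** 2))
print(sp.simplify(chi.subs(m2, 0)), sp.simplify(chi.subs(m2, sol[m2])))
# -> 8*gamma/(3*beta**2) and 2*gamma/(3*beta**2), as in Eq. (B5)
```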
## Appendix C Modification of the Hund's coupling

Here we demonstrate the mechanism by which the sign of the intervalley Hund's term may change in the presence of the substrate. It has been shown in Ref. [42] that proximity to a \(\mathrm{WSe}_{2}\) substrate tends to induce short-range _attractive_ interactions between electrons in the proximate layer. Let us write a particular piece of this interaction, \[H_{\mathrm{inter}}=\frac{1}{2\Omega}\sum_{\mathbf{k},\mathbf{k}^{\prime},\mathbf{q}}\sum_{s,s^{\prime},\tau}\tilde{U}A^{\dagger}_{\tau s\mathbf{k}}A_{\tau s\mathbf{k}+\mathbf{q}}A^{\dagger}_{\tilde{\tau}s^{\prime}\mathbf{k}^{\prime}}A_{\tilde{\tau}s^{\prime}\mathbf{k}^{\prime}-\mathbf{q}}, \tag{C1}\] where \(\tilde{U}\) is the strength of the induced attraction (simplified to be extremely short-range), and \(A_{\tau s\mathbf{k}}\) annihilates an electron in layer \(A\), valley \(\tau\), spin \(s\), and momentum \(\mathbf{k}\). Employing the Fierz identity \(\delta^{\alpha\beta}\delta^{\mu\nu}=2\delta^{\alpha\nu}\delta^{\mu\beta}-\mathbf{s}^{\alpha\beta}\cdot\mathbf{s}^{\mu\nu}\) with respect to the spin indices in Eq. (C1), we may extract an intervalley Hund's term \[H_{\mathrm{Hund}}=-\frac{1}{\Omega}\sum_{\mathbf{k},\mathbf{k}^{\prime},\mathbf{q}}\tilde{U}\left(A^{\dagger}_{+,\alpha,\mathbf{k}}\mathbf{s}^{\alpha\beta}A_{+,\beta,\mathbf{k}+\mathbf{q}}\right)\cdot\left(A^{\dagger}_{-,\mu,\mathbf{k}^{\prime}}\mathbf{s}^{\mu\nu}A_{-,\nu,\mathbf{k}^{\prime}-\mathbf{q}}\right). \tag{C2}\] Crucially, the minus sign signals that for attraction, \(\tilde{U}<0\), the induced intervalley Hund's interaction is _antiferromagnetic_, i.e., of the opposite sign of the presumed intrinsic ferromagnetic one in the absence of the \(\mathrm{WSe}_{2}\) substrate. Let us finally note that when projecting Eq. (C2) onto the valence-band electrons, one must consider momentum-dependent form factors. However, in the regime of interest where superconductivity is observed, and where we perform our analysis, the applied vertical electric displacement field polarizes the valence-band electrons almost completely onto the \(A\) layer. Thus, one expects these form factors can be fairly approximated by unity.
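The Fierz rearrangement used above is a purely algebraic identity on the spin indices, and can be confirmed numerically in a few lines:

```python
import numpy as np

d = np.eye(2)                                    # Kronecker delta
s = [np.array([[0, 1], [1, 0]]),                 # spin Pauli vector s
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]])]

# delta^{ab} delta^{mn} = 2 delta^{an} delta^{mb} - s^{ab} . s^{mn}
lhs = np.einsum("ab,mn->abmn", d, d)
rhs = 2 * np.einsum("an,mb->abmn", d, d) \
    - sum(np.einsum("ab,mn->abmn", si, si) for si in s)
assert np.allclose(lhs, rhs)
print("Fierz identity verified on all index combinations")
```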
2306.10905
Determining cancer cells division strategy
Heterogeneity in the size distribution of cancer cell populations has been recently linked to drug resistance and invasiveness. However, despite many progresses have been made in understanding how such heterogeneous size distributions arise in fast-proliferating cell types -like bacteria and yeast-, comprehensive investigations on cancer cell populations are still lacking, mainly due to the difficulties of monitoring the proliferation over the time scales typical of mammalian cells. From a reductionist cell dynamics point of view, the strategies allowing size homeostasis are roughly grouped into three classes, \emph{i.e.} timer, sizer, or adder. These strategies are empirically distinguishable given the phenomenological measurable relationship between the cell size at birth and at division, which requires following the proliferation at the single-cell level. Here, we show how it is possible to infer the growth regime and division strategy of leukemia cell populations using live cell fluorescence labeling and flow cytometry in combination with a quantitative analytical model where both cell growth and division rates depend on powers of the cell size. Using our novel approach, we found that the dynamics of the size distribution of leukemia Jurkat T-cells is quantitatively reproduced by (i) a sizer-like division strategy, with (ii) division times following an Erlang distribution given by the sum of at least three independent exponentially-distributed times and (iii) fluctuations up to 15\% of the inherited fraction of size at division with respect to the mother cell size. Finally, we note that our experimental and theoretical apparatus can be easily extended to other cell types and environmental conditions, allowing for a comprehensive characterization of the growth and division models different cells can adopt.
Mattia Miotto, Simone Scalise, Marco Leonetti, Giancarlo Ruocco, Giovanna Peruzzi, Giorgio Gosti
2023-06-19T13:05:30Z
http://arxiv.org/abs/2306.10905v2
# Determining cancer cells division strategy

###### Abstract

Cell size exhibits a huge variability across different kinds of cells. However, cells of isogenic populations have a well-defined typical size, preserved despite the complex and noisy machinery of cellular processes. How this is achieved is a question that remains still largely unanswered. From a reductionist cell dynamics point of view, the strategies allowing size homeostasis are roughly grouped into three classes, _i.e._ timer, sizer or adder. These strategies are empirically distinguishable given the phenomenological measurable relationship between the cell size at birth and at division. Here, we show how it is possible to infer the growth regime and division strategy using flow cytometric data of properly marked cells and forward scattering as a proxy for cell size. In particular, comparing the experimental data with the prediction of a minimal mathematical model, we found that (i) leukemia Jurkat T-cells in standard growth conditions follow a sizer-like strategy, with (ii) division times following an Erlang distribution given by the sum of at least three independent exponentially-distributed times. Moreover, our work shows that (iii) the dynamics of the size distribution is reproduced by a minimal model where both cell growth and division rates depend on powers of the cell size. Finally, we note that our experimental and theoretical apparatus can be easily extended to other cell types and environmental conditions, allowing for a comprehensive characterization of the growth and division models different cells can adopt.

## I Introduction

Life-sustaining processes like transcription, translation and metabolism are dependent on the cell size [1; 2], as cell volume and surface area affect molecular reactions and nutrient exchanges [3]. As a result, under steady-state conditions, isogenic populations tend to maintain cell sizes around typical, type-specific values, although significant cell-to-cell variability is often observed [4]. Despite decades of research, there is still little to no consensus on how cell populations achieve such size homeostasis [3], mostly due to the difficulty of directly measuring cell size for sufficiently long times and numbers of cells [5]. Studies on bacteria and yeast populations showed that size homeostasis can be obtained by modulating the amount of growth produced during the cell cycle in such a way that, on average, cells that are larger at birth grow less than small ones. The strategies that cellular populations may adopt to reach and maintain such size homeostasis have been roughly classified into three distinct models, depending on whether divisions occur after a certain time (timer), upon reaching a certain size (sizer), or after the size of the cell has increased by a finite volume (adder). In particular, a phenomenological linear relation has been observed between the size at birth (\(s_{b}\)) and the size at division (\(s_{d}\)) [6]: \[s_{d}=as_{b}+\eta \tag{1}\] where \(\eta\) represents the biological noise, while the slope, \(a\), defines the size-control model. Usually, experiments look at the quantity \(\Delta=\left\langle s_{d}\right\rangle-\left\langle s_{b}\right\rangle=\left(a-1\right)\left\langle s_{b}\right\rangle+\left\langle\eta\right\rangle\).
Thus, for \(a=2\), the size at division is directly proportional to the size at birth; the timer model hypothesizes that the cell size is controlled by a cell-cycle timer that sets a time limit for the growth phase, and once the time limit is reached, the cell divides. Intuitively, if growth is exponential, such a mechanism makes big cells proliferate faster than small ones, thus producing a divergent size variance. Only a linear growth regime under specific constraints on division symmetry is compatible with this homeostatic strategy [7]: if divisions are not symmetric, the average size would be stable but the variance would still increase with time. For \(a=1\), a certain volume is added which is uncorrelated with the initial size. In particular, the adder model proposes that the cell size increases by a constant amount during each cell cycle, regardless of the initial size. This behavior has been proposed for various populations of bacteria and cyanobacteria, and for budding yeast populations [8; 9]. Finally, if \(a=0\), the size at division is completely set by the stochastic term, which is called a sizer mechanism. The sizer model suggests that the cell size is determined by a threshold size, and once the cell reaches this threshold, it triggers the cell division process. A perfect sizer mechanism has been found for organisms like the fission yeast _S. pombe_ [10; 11]. To determine which model is followed by different cell types, and to determine the cell size distribution in both bacterial and eukaryotic cell populations, various experimental techniques have been developed [12; 13], comprising time-lapse microscopy [14], single-cell tracking [15], and gene tagging [16]. In particular, progress in the quantitative measurement of single-cell features, using, for instance, mother machines, allows for the characterization of the growth and division strategy of fast-proliferating cell populations, like bacteria and yeasts (whose doubling times are on the order of hours). For other cell types, like mammalian ones, it is harder to track cells across several divisions due to lower division rates and more stringent growth conditions. Parallel to the experimental progress, different models of cell size regulation (with few key parameters) have been proposed, from the pioneering works of [17; 18], which compared analytical predictions with cell-count experiments, to more recent works that proposed mathematical models to interpret both population- and single-cell-based data [8; 19; 20; 21; 22; 23; 24; 25; 26; 27].
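The meaning of the slope in Eq. (1) is easy to appreciate with a toy simulation: generating noisy \((s_{b},s_{d})\) pairs under idealized timer, adder and sizer rules and regressing \(s_{d}\) on \(s_{b}\) recovers \(a\approx 2,1,0\), respectively. The sketch below assumes exponential growth for the timer case and log-normal birth sizes; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(strategy, n=20000, noise=0.05):
    """Birth/division sizes for idealized timer, adder and sizer strategies."""
    s_b = rng.lognormal(mean=0.0, sigma=0.15, size=n)   # birth sizes (assumed)
    if strategy == "timer":    # exponential growth for one doubling time
        s_d = 2.0 * s_b
    elif strategy == "adder":  # a constant volume is added each cycle
        s_d = s_b + 1.0
    else:                      # sizer: division at a fixed target size
        s_d = np.full(n, 2.0)
    return s_b, s_d + noise * rng.standard_normal(n)    # biological noise eta

for strategy in ("timer", "adder", "sizer"):
    s_b, s_d = simulate(strategy)
    a = np.polyfit(s_b, s_d, 1)[0]     # slope of s_d = a s_b + eta, Eq. (1)
    print(f"{strategy}: fitted a = {a:.2f}")
```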
In this paper, we propose a novel experimental protocol coupled with a minimal mathematical model to determine the homeostatic strategy adopted by populations of liquid tumor cells. In particular, the experimental protocol makes use of flow cytometry measurements, which yield information on both the cell size, via the collected forward scattering signal [28], and the cell lineage and partition noise, through fluorescent markers that selectively bind internal cell organelles [29]. Experimental data are compared with the predictions of a minimal model where the variation of a single parameter allows the exploration of the different size-homeostasis strategies. In particular, we show that (i) using forward scattering as a proxy for cell size [30] allows us to observe the dynamics of cell size distributions, which are in qualitative agreement with those shown both by numerical simulations of agent-based systems, in which each agent can grow and divide, and by a minimal model based on a population balance equation. (ii) A simple exponential distribution of division times cannot reproduce the observed dynamics, which instead requires an Erlang distribution given by the sum of at least three independent exponentially distributed intermediate times. Finally, (iii) stratifying data according to cell generations allows us to fully infer the homeostatic strategy adopted by leukemia cells. Overall, our results provide insight into the mechanisms of cell size control and may contribute to the development of novel therapies for diseases that are characterized by abnormal cell size distributions.

Figure 1: **Experimental protocol.** **a)** Schematic representation of a fluorescence-activated cell sorter (FACS) working principle. Stained cells pass one at a time in front of a laser source that excites the fluorophores of the markers. The forward scattered (FS) and side scattered (SS) light, together with the one emitted by the dye fluorophores, is collected and analyzed. Eventually, a sorter divides cells according to certain thresholds on the measured intensities. **b)** Side scattering vs forward scattering intensities for the initial population of marked cells, together with the distribution of CTV (CellTrace Violet, cytoplasm marker) intensity. **c)** Schematic representation of the time-course protocol: the initial sorted population is kept in culture (see Methods) and samples are collected at different time points and analyzed via a flow cytometer that collects FSC, SSC and CTV intensities for each analyzed cell. **d)** Time evolution of the population CTV fluorescence intensity. Colors from purple to dark green represent different time points along the experimental time course, from time zero to 72 hours. The inset shows an example of Gaussian Mixture fitting of the CTV distribution, where four different generations have been identified. **e)** Density distribution of the forward scattering intensity of the cell population at different times. Colors range from purple to dark green as the time goes from zero (sorting of cells, i.e. start of the experiment) to 90 hours.

## II Results

### Experimental protocol

To follow the dynamics of the cell size distribution, we developed a protocol based on flow cytometry measurements. Extending the procedure we previously proposed to measure the partition noise of cellular compounds [29], we made use of CellTrace Violet (CTV), a fluorescent dye, to mark the cell cytoplasm (see the Methods section) and follow the proliferation of the population by looking at the dynamics of the fluorescence and forward scattering signals in time via a series of flow cytometry measurements. As depicted in Figure 1 and explained in more detail in the Methods, marked cells are first sorted (see panel a), so that an initial population with a narrow CTV distribution is selected (Figure 1b). The sorted population is then collected and cultured in standard growth conditions (see Methods). Samples of the population are then collected at different times, recording both CTV and forward scattering intensities for each analysed cell (see Figure 1c). Figure 1d shows the evolution of the distributions of \(\log_{2}\) CTV fluorescence intensities. The purple curve (lower-right corner of the panel) corresponds to the time-zero, post-sorting distribution.
Looking at the CTV intensity at different times during the proliferation of the cell population, one observes a progressive shift of the initial fluorescence and the appearance of multi-peaked distributions. As CTV homogeneously binds to cytoplasmic proteins, we expect that, upon division, the fluorescence of each mother cell is divided between the two daughters. Each division produces two daughters; thus, on average, the CTV distribution of the daughter cells has half the mean fluorescence of the mother distribution (see inset in Figure 1d). Parallel to the evolution of the CTV intensity, we track the evolution of the FS intensity. As can be seen from Figure 1e, the distribution shifts toward higher values than those presented at the initial time point (purple curve), while its variance increases. This behaviour can be explained recalling that the initial population is sorted, i.e. an aliquot of cells is sampled from a population that has been kept growing for a week and has thus reached the homeostatic size distribution.

### Forward scattering evolution probes cell size dynamics

To identify the different generations from the CTV intensity profiles, we applied a Gaussian Mixture fitting of the form \(P(\ln x)=\sum_{g}w_{g}N(\ln x,\bar{x}_{g},\sigma_{g})\), combined with an Expectation-Maximization algorithm (see Methods for details). An example of the result of the fitting procedure is shown in Figure 2a. From left to right, the distributions of the \(\log_{2}\) of the CTV intensity for different time points are shown in grey, while the Gaussian distributions obtained as best fits of the experimental data are reported in different colors corresponding to the different identified generations. Together with the mean and variance of each Gaussian, the fitting procedure yields the probability of each cell to belong to the various identified generations (we note that the EM procedure gives the fraction of cells found in each generation at every acquisition time). Since for each cell both CTV and FS signals are measured at each time point, we can use the information coming from the GM procedure to identify the subpopulations corresponding to different generations from the FS distributions. Figure 2b displays the FS intensity distributions of the same population from which the CTV distributions [shown in panel a)] were measured. In this case, grey curves mark the total population, while colored ones correspond to the different generations. Comparing the distributions of the same generation in different snapshots, one clearly notes that newer generations have distributions shifted toward smaller FS values with respect to older ones. In particular, the distributions (dots) of the granddaughter cells are reported in Figure 2c, together with the best fits of a normal distribution for each distinct time point. Mean and variance as a function of the snapshot times are reported in Figure 2d. Finally, we evaluate the Pearson correlation coefficients between CTV and FS intensities as a function of time (Figure 2e). Results show that the correlation is null just after the sorting procedure and then reaches high positive values (around 0.6) at the following time points. This behaviour appears reasonable since cytoplasm volume scales with the cell size; on the other hand, an almost zero correlation after sorting may indicate that the rates of CTV uptake are not strongly dependent on cell size.
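The generation-calling step described above is standard enough that a compact sketch conveys it; the following assumes log-transformed CTV intensities and uses an off-the-shelf EM implementation of the Gaussian mixture fit (the synthetic data, with generations one \(\log_{2}\) unit apart, are of course only a stand-in for real measurements).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def call_generations(ctv, n_gen):
    """Fit P(log2 x) = sum_g w_g N(log2 x; xbar_g, sigma_g) via EM; return
    per-cell posterior generation probabilities and generation fractions w_g.
    Successive generations sit ~1 apart in log2 units (CTV halves at division)."""
    x = np.log2(ctv).reshape(-1, 1)
    gm = GaussianMixture(n_components=n_gen, random_state=0).fit(x)
    order = np.argsort(gm.means_.ravel())[::-1]   # brightest = generation 0
    return gm.predict_proba(x)[:, order], gm.weights_[order]

# Synthetic check: three generations, means 1 apart in log2 CTV.
rng = np.random.default_rng(1)
ctv = 2.0 ** np.concatenate([rng.normal(10.0 - g, 0.25, 3000) for g in range(3)])
probs, fractions = call_generations(ctv, n_gen=3)
print("generation fractions:", np.round(fractions, 2))   # ~[0.33, 0.33, 0.33]
```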
### Minimal model for size dynamics

At odds with single-cell size measurements, where the homeostatic strategy can be inferred simply by looking at the slope of the linear relation between the birth and division sizes of the cells, the interpretation of the data collected by our experimental protocol requires a comparison with a proper theoretical framework, able to account for the time evolution of the cell size distribution under the different possible growth and division regimes, i.e. the different size-homeostasis strategies. To allow for such a comparison, we aimed at modeling the growth and division dynamics of a population of cells growing in a controlled environment. Each cell of the population is characterized by its size, \(s\), which changes in time as the cell grows and divides, and by its generation, \(g\); monitoring the population at different times, we can thus follow the evolution of the size distribution, possibly stratified by generation. Since from each mother cell two daughters are generated, at each duplication the marked compounds split between the two daughters with a certain ratio, which is affected by division noise. Referring to \(n_{g}(s(t),t)\) as the number of cells in the population having size \(s\) at time \(t\) and having divided \(g\) times, this number evolves in time according to \[\frac{\partial n_{g}(s,t)}{\partial t}+\frac{\partial\left(g(s)\cdot n_{g}(s,t)\right)}{\partial s}=-\gamma(s)n_{g}(s,t)+2\int_{0}^{\infty}d\eta\;\gamma(\eta)\;\phi(s|p\eta)\;n_{g-1}(\eta,t) \tag{2}\] where \(g(s)\) and \(\gamma(s)\) are the size-dependent growth and division rates, respectively, and \(\phi(x|py)\) quantifies the probability that a daughter cell inherits a fraction \(p\) of the mother cell size \(y\). See Appendix A for the derivation of Eq. 2. Along the lines of previous works [19; 21; 25; 31], we assumed that both growth and division rates are given by powers of the cell size, i.e. \(g(s)=\frac{ds}{dt}=\lambda s^{\alpha}\) and \(\gamma(s)=ks^{\beta}\).

Figure 2: **Analysis of forward scattering and CTV intensity data.** **a)** Density distributions (grey curves) and best fits of a Gaussian Mixture (coloured curves) of the CTV fluorescence intensity of a Jurkat population at different times during its proliferation. From left to right, snapshots at 0, 19, 23, 27 and 43 hours from the CTV staining process. Curves colored from blue to purple are ordered according to the identified generations. **b)** Same as in a) but for the measured forward scattering intensities. Coloured curves highlight the subpopulations corresponding to the different generations that have been identified thanks to the Gaussian Mixture fitting of the CTV intensities. Intensities have been rescaled by a factor \(10^{5}\). **c)** Density distributions (dots) and best normal fits (lines) of the rescaled forward scattering intensity of the granddaughter cells (second generation) measured at different times during the proliferation of the population. The values of the \(R^{2}\) for each fit are reported in the figure legend. **d)** Mean and variance of the size of the granddaughter subpopulation as a function of time. Size is quantified by the rescaled forward scattering intensity. Marker sizes are comparable with the standard error of the mean and variance at each time point. **e)** Pearson correlation coefficient of CTV and forward scattering intensities for the different snapshots.
It can be easily shown that, with this minimal assumption, it is possible to recover all the main size-homeostatic strategies by tuning the parameter \(\omega=\beta-\alpha\) (see Appendix D). If we now look at the variation of the total number of cells at generation \(g\), we have \[\dot{N}_{g}=\frac{d}{dt}N_{g}(t)=\int_{0}^{\infty}\frac{\partial}{\partial t}n_{g}(s,t)\,ds \tag{3}\] which can be recast as \[\dot{N}_{g}=\int ds\,\Big[-\frac{\partial}{\partial s}\left(\lambda s^{\alpha}n_{g}(s,t)\right)-ks^{\beta}n_{g}(s,t)+2k\int d\eta\;\eta^{\beta}\;\phi(s|p\eta)\;n_{g-1}(\eta,t)\Big]. \tag{4}\] The first term of the integral vanishes, since either the size or the number of cells is zero at the integration extrema; thus the above equation becomes \[\frac{\dot{N}_{g}}{N_{g}}=-k\left\langle s^{\beta}\right\rangle_{g}+2k\left\langle s^{\beta}\right\rangle_{g-1}\frac{N_{g-1}}{N_{g}}=\Phi_{g}. \tag{5}\] The fraction \(N_{g-1}/N_{g}\) is difficult to measure experimentally; thus, we want to recast it in a more handy form. To do so, we define the fraction of cells belonging to a given generation at each time as \[P_{g}(t)=\frac{N_{g}(t)}{\sum_{q}N_{q}(t)}. \tag{6}\] Eq. 5 can be used to compute the dynamics of the fractions \(P_{g}\). In fact, we have that [26]: \[\dot{P}_{g}=\frac{d}{dt}\left(\frac{N_{g}(t)}{\sum_{q}N_{q}(t)}\right)=\frac{\dot{N}_{g}\left(\sum_{q}N_{q}\right)-N_{g}\sum_{q}\dot{N}_{q}}{\left(\sum_{q}N_{q}\right)^{2}}=\frac{\dot{N}_{g}}{\sum_{q}N_{q}}-P_{g}\frac{\sum_{q}\dot{N}_{q}}{\sum_{q}N_{q}}. \tag{7}\] Using Eq. 5 to make the \(\dot{N}_{q}\) explicit, we obtain \[\dot{P}_{g}=-k\left\langle s^{\beta}\right\rangle_{g}P_{g}+2k\left\langle s^{\beta}\right\rangle_{g-1}P_{g-1}+P_{g}\sum_{q}\left(k\left\langle s^{\beta}\right\rangle_{q}P_{q}-2k\left\langle s^{\beta}\right\rangle_{q-1}P_{q-1}\right). \tag{8}\]

Figure 3: **Model of size homeostasis.** **a)** Schematic representation of cell growth and division. A mother cell with starting size \(s_{b}\) grows up to a size \(s_{d}\) and then splits into two daughter cells whose starting sizes are fractions of the mother cell size. **b)** Probability density distribution of the division times, \(\tau_{d}\), as a function of the number of tasks a cell must accomplish before dividing. The time to accomplish a single task is assumed to be exponentially distributed. **c)** From left to right, schematic representation of the sizer, timer and adder mechanisms: cells grow (i) until a certain size is reached, for a certain time interval, or until a determined amount of size is added to the starting one. **d)** Rescaled difference between size at division and birth, \(\Delta=s_{d}-s_{b}\), vs rescaled birth size for the three size-homeostasis models. Both quantities are rescaled by the respective mean values. Light green dots represent the values obtained via a numerical simulation of a cell population in each regime, while orange, red and blue dots are obtained binning over the x-axis. **f)** Fraction of cells having divided \(g\) times since the initial time of the simulation as a function of simulation time. From left to right, cells grow and divide according to a sizer, timer, or adder strategy, respectively.
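For the special case \(\beta=0\), the hierarchy in Eq. 8 closes exactly: \(\langle s^{\beta}\rangle_{g}=1\), the sum over \(q\) collapses to \(-k\), and the fractions obey \(\dot{P}_{g}=2k(P_{g-1}-P_{g})\), i.e. a Poisson cascade of rate \(2k\). The sketch below integrates this closed system and checks it against the Poisson form; it is meant only as a sanity check of the general equations.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import poisson

k, G = 1.0, 12    # division-rate constant and number of generations tracked

def dP(t, P):
    """Eq. 8 with beta = 0: dP_g/dt = 2k (P_{g-1} - P_g), with P_{-1} = 0."""
    Pm1 = np.concatenate(([0.0], P[:-1]))
    return 2 * k * (Pm1 - P)

P0 = np.zeros(G)
P0[0] = 1.0                        # all cells start in generation zero
sol = solve_ivp(dP, (0.0, 2.0), P0, rtol=1e-9, atol=1e-12, dense_output=True)

t = 1.5   # compare with the closed form P_g(t) = Poisson(g; mean 2kt)
print(np.allclose(sol.sol(t), poisson.pmf(np.arange(G), 2 * k * t), atol=1e-6))
```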
It remains to obtain expressions for the dynamics of the moments. Again, we can start from Eq. 5: we introduce the probability of finding a cell with size \(s\) at time \(t\) and generation \(g\) as \[\rho_{g}(s,t)=\frac{n_{g}(s,t)}{N_{g}(t)}, \tag{9}\] so that \[\dot{n}_{g}=\dot{N}_{g}\rho_{g}+N_{g}\dot{\rho}_{g}. \tag{10}\] Substituting the right-hand side of Eq. 2 for \(\dot{n}_{g}\) and Eq. 5 for \(\dot{N}_{g}\), \[-\frac{\partial}{\partial s}\left(\lambda s^{\alpha}n_{g}(s,t)\right)-ks^{\beta}n_{g}(s,t)+2k\int d\eta\;\eta^{\beta}\;\phi(s|p\eta)\;n_{g-1}(\eta,t)=\Phi_{g}N_{g}\rho_{g}+N_{g}\dot{\rho}_{g}, \tag{11}\] and, reordering and dividing by \(N_{g}\), we get \[\dot{\rho}_{g}=-\Phi_{g}\rho_{g}-\frac{\partial}{\partial s}\left(\lambda s^{\alpha}\rho_{g}\right)-ks^{\beta}\rho_{g}+2k\int d\eta\;\eta^{\beta}\;\phi(s|p\eta)\,\rho_{g-1}(\eta,t)\frac{N_{g-1}}{N_{g}}. \tag{12}\] Without loss of generality, one can express \(\phi\) as \[\phi(s|p\eta)=\int_{0}^{1}dp\;\pi(p)\,\delta(s-p\eta), \tag{13}\] where \(\pi(p)\) is a general probability density for the fraction of inherited cell size. Thanks to Eqs. 12 and 13, we can easily compute the evolution equations for the moments of the distribution: \[\frac{d}{dt}\left\langle s^{i}\right\rangle_{g}=\lambda\cdot i\cdot\left\langle s^{(\alpha+i-1)}\right\rangle_{g}-\Phi_{g}\left\langle s^{i}\right\rangle_{g}-k\left\langle s^{(\beta+i)}\right\rangle_{g}+2k\left\langle p^{i}\right\rangle_{\pi}\left\langle s^{(\beta+i)}\right\rangle_{g-1}\frac{P_{g-1}}{P_{g}}, \tag{14}\] where \(\left\langle p^{i}\right\rangle_{\pi}\) refers to the i-th moment of \(\pi(p)\) (see Appendix C for details on how the last term is obtained). Equations 8 and 14 fully describe the dynamics of the cell population; however, except for some specific sets of parameters, this set of equations is not closed: the time derivative of the i-th moment may contain higher moments, depending on the values of \(\alpha\) and \(\beta\). In particular, the set is closed in the case of a division rate that does not depend on the cell size (i.e. \(\beta=0\)). To solve the system, we must therefore choose a moment-closure strategy. To do so, we exploit our findings on the FS distributions stratified by generation: as the latter are fairly well fitted by normal distributions, we assume that the single-generation size distributions have normal moments (note that this is clearly not the case for the total population size distribution) and opt for a normal moment closure (see SI for details). To validate the obtained relations and test the adopted moment closure, we compare the solution of the differential equations with the results of stochastic simulations of an agent-based model, where an initial population of cells grows and divides following the same growth- and division-rate functional forms used in Eq. 2. To associate a proper division time to each cell, a Gillespie procedure has been adopted; see the Methods section for a detailed description of the stochastic simulation protocol. The outcomes of the simulations are recapitulated in Figure 3. In particular, Figure 3a provides a schematic representation of the life cycle of a single agent, i.e. cell, in the population: a cell is born with an initial size, \(s_{b}\); it grows according to a certain growth rate, \(g(s)\), for a set of times, \(\tau_{q}\) with \(q=1,\dots,Q\), which encode a series of independent intermediate tasks the cell has to carry out before the actual division. From a biological point of view, such tasks can be linked to the phases of the cell cycle. Upon reaching the division size, \(s_{d}\), the mother cell splits into the two daughter cells, one inheriting a fraction \(p\) of the mother volume and the other keeping the remaining \(1-p\) fraction.
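A compact version of such an agent-based simulation is sketched below. With \(\alpha=1\), the waiting time of each of the \(Q\) tasks can be sampled exactly by inverting the time-integrated rate \(ks(t)^{\beta}\) along the deterministic growth trajectory, which is the Gillespie logic adapted to time-dependent propensities; the parameter values and the initial size distribution are illustrative assumptions.

```python
import heapq
import numpy as np

rng = np.random.default_rng(2)
lam, k, beta, Q, sigma_p = 1.0, 1.0, 2.0, 3, 0.05   # illustrative parameters

def time_to_divide(s_b):
    """Total waiting time of Q sequential tasks, each firing at rate k*s^beta
    while the cell grows as s(t) = s_b * exp(lam * t) (alpha = 1)."""
    t, s = 0.0, s_b
    for _ in range(Q):
        x = rng.exponential()                       # unit-mean exponential clock
        dt = np.log1p(beta * lam * x / (k * s ** beta)) / (beta * lam)
        t, s = t + dt, s * np.exp(lam * dt)
    return t

t_end, queue = 3.0, []
for s0 in rng.lognormal(0.0, 0.1, 100):             # initial population
    heapq.heappush(queue, (time_to_divide(s0), 0.0, s0, 0))  # (t_div, t_birth, s_b, g)

gens = []                                           # generations alive at t_end
while queue:
    t_div, t_b, s_b, g = heapq.heappop(queue)
    if t_div >= t_end:
        gens.append(g)
        continue
    s_d = s_b * np.exp(lam * (t_div - t_b))         # size at division
    p = np.clip(rng.normal(0.5, sigma_p), 0.05, 0.95)  # inherited fraction
    for f in (p, 1.0 - p):
        heapq.heappush(queue, (t_div + time_to_divide(f * s_d), t_div, f * s_d, g + 1))

print("fraction per generation:", np.round(np.bincount(gens) / len(gens), 3))
```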
To begin with, we verified that such a framework reproduces the expected sizer, timer and adder behaviors (see Figure 3c,d) upon varying the \(\omega=\beta-\alpha\) parameter. Indeed, simulations with \(\alpha=1\) and \(\beta\) taking values 2, 0, or 1 produced the expected trends for sizer, timer and adder, respectively. Next, we compared the solution of the model, in the normal closure approximation, with the results of the agent-based stochastic simulations. In Figure 3f, we show the results for the fractions of cells found in different generations as a function of time. There is perfect accord between model and simulations, as testified by \(R^{2}\) values of \(\sim 0.9\). These results confirm both the analytical calculations and the choice of the moment closure for this kind of dynamical process.

### Model parameters govern distinct and measurable aspects of cell dynamics

Next, we proceeded to characterize the role of the different model parameters and their effects on the quantities we are able to measure experimentally, i.e. the population size distribution moments, their per-generation stratifications and the relative abundances of cells in different generations during the dynamics. At first, we focused on the mean size of the whole population. Measuring the mean FS as a function of time, we found that it reaches a well-defined oscillating NESS (non-equilibrium steady state), which does not depend on the initial value (see Figure 4a). Comparing the trend shown by the experimental data with those of the model (Figure 4b1-b2), we found that the asymptotic mean size is not a function of the starting mean sizes (see Figure 4b1) but is modulated by the ratio of the rate coefficients, \(\lambda/k\). In particular, Figure 4b2 clearly shows that the higher the ratio \(\lambda/k\), the higher the population mean size in the long-time limit. Looking at the trend of the coefficient of variation, CV, instead, we found that the size of the Jurkat population displays a CV oscillating around a value slightly higher than twenty percent, as shown in Figure 4c. A comparison with the model trends again shows that this behaviour is qualitatively reproduced by the model. Moreover, the key parameter modulating the variance of the cell size distribution is the exponent of the division rate, \(\beta\): the higher the exponent, the lower the fluctuations of the cell sizes (see Figure 4d2). To explore the role of the remaining model parameters, i.e. the number of intermediate tasks and the division noise, we moved to consider the time evolution of the fractions of cells per generation. Figure 4e shows the results for three time courses (note that times have been rescaled as described in the Methods section to compare different repetitions of the measurement) as dots. As one can see, the experimental data exhibit a trend qualitatively similar to those obtained solving Eqs. 8 and shown in Figure 3f. Notably, the maximum fraction of first-generation cells observed in the population is \(0.85\pm 0.05\). This value is not compatible with a single-task model of cell growth and division, but requires a cell division time given by the sum of at least 2 independent exponentially distributed times. Indeed, this can be seen comparing the predicted maximum fractions of daughter cells obtained solving Eqs. 8. The inset in Figure 4e displays the maximum fraction values obtained by changing the number of tasks, T, from one to six.
Finally, we quantified the role of division noise in terms of the shapes of the generation-fraction curves. As discussed in the previous sections, given the high correlation between TS and CTV intensity, we assume that the size at division follows the same statistics as the cytoplasmic components. In a previous work, we found that Jurkat cells partition their cytoplasm symmetrically [29]. Thus, we assumed that the fraction of inherited volume is itself a random variable with a normal distribution, centered at 1/2 and having a certain variance, \(\sigma_{p}^{2}\) (see inset in Figure 3f). Solving the model equations with different values of \(\sigma_{p}\), while keeping all other parameters fixed, gives the trend reported in Figure 3f. It can be seen that the higher the level of division noise, the more the maximum fraction of cells per generation decreases, while the same generation tends to persist for longer times, i.e. smaller cells can be produced that require longer times to divide. Note that this is reflected in an increase of the total population variance.

### Size dynamics behaves according to a size-like homeostatic strategy

Finally, we compared the predictions of the model against the collected data. As discussed in the previous section, Eqs. 14 and 8 depend on four parameters governing the growth and division rates, two parameters fixing the first two moments of the initial size distribution, the number of intermediate tasks, Q, and the variance of the inherited size fraction. In particular, the mean and variance of the initial size distribution are directly measurable from the starting post-sorting forward scattering distribution, while the exponent of the growth rate is fixed to 1, as required to reproduce the exponential growth dynamics compatible with the trend of the mother cell size variance evolution. Note that we re-scaled forward scattering intensities by the mean of the starting population in order to work with smaller numbers. Figure 5 shows the results of the best fit between the experimental data and the model. In particular, we performed a standard \(\chi^{2}\) minimization of the squared residues of the fractions of cells in the different generations (Figure 5a) and of the asymptotic mean and variance of the whole population size (Figure 5b1-b2). We opted to leave the mean size per generation out of the scoring function, to act as an independent validation of the model. As one can see from Figure 5c, the parameters of the best fit also reproduce the \(\langle s\rangle_{g}(t)\) trends very well (as also testified by the value of \(R^{2}=0.85\)). Notably, the optimal value of the \(\beta\) exponent of the cell division rate is equal to 5, which indicates a near-adder strategy for size homeostasis (see Figure 5d). Note that in the proposed model, a \(\beta\) of 1 would have indicated a perfect adder strategy, while \(\beta\to\infty\) would indicate perfect size sensing.

## III Discussion

Cell size is a phenotype that exhibits a huge variability across different kinds of cells. Typical sizes of bacteria are in the range of 1-10 \(\mu m\), eukaryotic cells have linear sizes of 5-100 \(\mu m\), while neuronal cells can reach sizes of up to some meters. Besides such inter-kind size heterogeneity, cells of isogenic populations have a well-defined typical size [32]. How this typical size is preserved, despite the complex and noisy machinery of cellular processes at play in proliferating cells, is a question that still remains largely unanswered [33].
Over the years, several models have been proposed to explain how cells regulate their size. These models provide insights into the molecular mechanisms that govern cell growth and division, and help researchers identify key regulatory pathways that could be targeted for therapeutic purposes. In particular, (i) cells that divide after a certain time from birth are said to follow a timer process; (ii) if division takes place when the cell reaches a certain size, one speaks of a sizer model; while (iii) an adder mechanism consists of adding a certain volume which does not depend on the birth size. The 'canonical' way to assess the size homeostatic strategy adopted by a certain cell type is based on the trend shown by the size at birth vs that at division. To obtain such a relation, one has to follow the proliferation of single cells and measure the size of the same cell at its birth and just after its entry into the mitotic phase.

Figure 4: **Role of the model key parameters.** **a)** Mean of the rescaled size distribution, \(\langle s\rangle\), as a function of time for three Jurkat cell populations that have been sorted for low, medium and high values of forward scattering intensity at time zero. Dotted horizontal lines mark the average values \(\langle s\rangle\) reaches after 2 days of proliferation. **b1)** Mean size of the population as a function of time, obtained solving Eqs. 14 for different values of the mean size of the population at time zero, \(\langle s(0)\rangle\). **b2)** Same as in panel b1) but for different values of the ratio \(\lambda/\kappa\). **c)** Same as in panel a) but for the coefficient of variation, CV. **d1)** Same as in panel b1) but for the CV. **d2)** Same as in panel d1) but varying the division rate exponent, \(\beta\). **e)** Fraction of mother (blue), daughter (green) or granddaughter (red) sub-populations as a function of time. The maximum observed fractions of cells for the three different generations are reported in the figure inset. Maximum values are computed by fitting the mother fraction with a sigmoidal function, while daughter and granddaughter ones are fitted with two normal distributions. Expected maximum fractions obtained solving Eqs. 8 for different numbers of intermediate tasks, T, are shown in shades of purple. Maxima increase as a function of T. **f)** Fraction of cells belonging to different generations as a function of time, obtained solving equations 8 for different levels of noise in the cell size division between daughter cells. The fraction of inherited volume, p, is described by a normal distribution centered at 1/2 and with different variances, as shown in the inset.

While such a procedure provides a reliable way to determine the homeostatic behavior and a sure way to track cell lineages, it also has the limitation of requiring the measurement of the birth and division sizes, i.e. of following the dynamics of individual cells. To address these problems, we sought an alternative/complementary procedure able to provide high statistics while preserving cell growth conditions. We propose an experimental protocol that does not explicitly consider birth and division size but instead utilizes flow cytometry data in combination with a minimal mathematical model to determine the growth and division mechanism of Jurkat cancer T-cells. Our main finding is that a model based on power functions of the size for division and growth rates successfully reproduces the key features of cell size dynamics, including an Erlang distribution of division times (tasks) and a size-like strategy.
Interestingly, our results are in accordance with those of Tzur et al. [34], who show that the growth rate is size-dependent throughout the cell cycle in lymphoblasts. Moreover, the model we present is not limited to exponential growth, a major assumption in most of the analytical modelizations present in the literature [25]. In fact, while this common assumption holds for various cell types, it is not universal. For example, the evolution of the size distributions in Schizosaccharomyces pombe (fission yeast) is a case where the increase of cell size with time after birth is non-exponential [35; 36]. Analyzing the maximum fraction of cells found in the first generation, we found that in order for the model to correctly reproduce the observations, the cell division time has to be given by the sum of a minimum of three independent and exponentially distributed intermediate times. Notably, this is in accordance with the evidence Chao and coworkers provide of the human cell cycle as a series of uncoupled, memory-less phases [37]. In particular, it is well known that the cell cycle is canonically described as a series of four consecutive phases: G1, S, G2, and M. In single cells, the duration of each phase varies, but the quantitative laws that govern phase durations are not well understood. Using time-lapse microscopy, they found that each phase duration follows an Erlang distribution and is statistically independent from the other phases. Interestingly, we found that the subpopulation distributions are well described by Gaussians; thus a good description of the size dynamics is provided by following the evolution of the first and second moments. Finally, we measured the division noise as CTV noise through correlation analysis, finding that the shapes of the generation fraction curves were compatible with a symmetrical division with fluctuations around the mean of up to ten percent.

Figure 5: **Model vs experimental data.** **a)** Measured fractions of cells in different generations as a function of time (dots) and curves given by the best fit of the minimal model, described by Eqs. 14 and Eqs. 8. **b1-b2)** Same as in panel a) but for the mean re-scaled total size and its CV given by the forward scattering measurements. **c)** Mean rescaled size for different generations as a function of time (dots) and trends given by the best fit solution of the model (lines). **d)** Rescaled difference between size at division and birth, \(\Delta=s_{d}-s_{b}\), vs rescaled birth size for the best fit solution of the model. Grey dots represent the values obtained via a numerical simulation of a cell population, while red dots are obtained by binning over the x-axis. Best fit parameters are: \(\lambda/\kappa=3.2\), \(Q=5\), \(\alpha=1\), and \(\beta=6\).

We note that our results depend on both the generation fractions and the FSS signal. While the fraction signals are reasonably solid, the forward scattering can only be considered a proxy for cell size; thus future work should focus on finding a better descriptor. From the analytical point of view, further investigations of both the fluctuations of the key parameters (e.g. \(\kappa\) and \(\lambda\)) and of the combination of different strategies in different tasks should be performed. In conclusion, we proposed an experimental and theoretical apparatus to characterize the growth and division of leukemia cells. We found that (i) while following a size-like homeostatic strategy, Jurkat cells (ii) need to complete a certain number of tasks before dividing, which are independent and exponentially distributed.
(iii) Experimental data are well reproduced by a minimal model that depends on relatively few, physically meaningful parameters.

## IV Materials and Methods

### Cell culture

E6.1 Jurkat cells (kindly provided by Dr. Nadia Peragine, Department of Cellular Biotechnologies and Hematology, Sapienza University of Rome) were used as a cell model for the proliferation study and maintained in RPMI-1640 complete culture media containing 10% FBS, penicillin/streptomycin plus glutamine at 37 °C in 5% CO2. Upon thawing, cells were passaged once prior to amplification for the experiment. Cells were then harvested, counted, washed twice in serum-free solutions and re-suspended in PBS for further staining.

### Cell fluorescent dye labeling

To track cell proliferation by dye dilution, establishing the daughter progeny of a completed cell cycle, cells were stained with CellTrace Violet stain (CTV, C34557, Life Technologies, Paisley, UK), typically used to monitor multiple cell generations, and MitoTracker Deep Red 633 (M22426, Molecular Probes, Eugene, USA). To determine cell viability prior to dye staining, the collected cells were counted with a hemocytometer using the dye exclusion test with Trypan Blue solution, an impermeable dye not taken up by viable cells. To reduce the time that cells are in incubation with the different dyes, we optimized the protocol by performing the simultaneous staining of CTV and MitoTracker Deep Red. For the dye co-staining, highly viable \(20\times 10^{6}\) cells were incubated in a 2 ml solution of PBS containing both CTV (1/1000 dilution according to the manufacturer's instructions) and MitoTracker Deep Red (used at a final concentration of 200 nM) for 25 min at room temperature (RT), mixing every 10 min to ensure homogeneous cell labeling. Afterward, complete media was added to the cell suspension for an additional 5 min incubation before the final washing in PBS.

### Cell sorting

Jurkat cells labeled with the dyes were sorted using a FACSAriaIII (Becton Dickinson, BD Biosciences, USA) equipped with Near UV 375 nm, 488 nm, 561 nm, and 633 nm lasers and FACSDiva software (BD Biosciences, version 6.1.3). Data were analyzed using FlowJo software (Tree Star, versions 9.3.2 and 10.7.1). Briefly, cells were first gated on single cells, by doublet exclusion with morphology parameters, both side and forward scatter, area versus width (A versus W). The unstained sample was used to set the background fluorescence for each channel. For each fluorochrome, a sorting gate was set around the max peak of fluorescence of the dye distribution [38]. In this way, the collected cells were enriched for the highest fluorescence intensity for the markers used. Following isolation, an aliquot of the sorted cells was analyzed at the same instrument to determine the post-sorting purity and population width, resulting in an enrichment \(>\) 99% for each sample.

### Time course kinetics for dye dilution assessment

The sorted cell population was seeded into a single well of a 6-well plate (BD Falcon) at \(1\times 10^{6}\) cells/well and kept in culture for up to 72 hours. To monitor multiple cell divisions, an aliquot of the cells in culture was analyzed at 18, 24, 36, 48, 60, and 72 hours for the fluorescence intensity of the CTV dye with the LSRFortessa flow cytometer. In order to set the time zero of the kinetics, prior to culturing, a small aliquot of the collected cells was analyzed immediately after sorting at the flow cytometer. The unstained sample was used to set the background fluorescence as described above.
Every time an aliquot of cells was collected for analysis, an equal volume of fresh media was added back to the culture.

### Expectation-Maximization and the Gaussian Mixture Model

We used the Expectation-Maximization (EM) algorithm to detect the clusters in Gaussian Mixture Models [39]. The EM algorithm is composed of two steps: the Expectation (E) step and the Maximization (M) step. In the E-step, for each data point \(\mathbf{f}\), we used our current guess of \(\pi_{g}\), \(\mu_{g}\), and \(\sigma_{g}\) to estimate the posterior probability that the cell belongs to generation \(g\), given that its fluorescence intensity was measured as \(\mathbf{f}\): \(\gamma_{g}=\mathrm{P}(g|\mathbf{f})\). In the M-step, we use the fact that the gradient of the log-likelihood of \(p(\mathbf{f_{i}})\) with respect to \(\pi_{g}\), \(\mu_{g}\), and \(\sigma_{g}\) can be computed; consequently, the expressions for the optimal values of \(\pi_{g}\), \(\mu_{g}\), and \(\sigma_{g}\) depend on \(\gamma_{g}\). It can be shown that, under certain smoothness conditions, the iterative computation of the E-step and M-step leads to a locally optimal estimate of the parameters \(\pi_{g}\), \(\mu_{g}\), and \(\sigma_{g}\), and returns the posterior probability \(\gamma_{g}\), which weights how much each point belongs to each of the clusters. Here, we used this model to perform cluster analysis and detect the peaks which correspond to the different generations. Then, we estimated \(\pi_{g}\), E\([f_{g}]\), and Var\([f_{g}]\) from these clusters.

### Gillespie simulation

To validate the mathematical model that was formulated, stochastic simulations of the growing and dividing cell population were carried out. Note that through simulation we can also access the birth and division sizes, and can therefore compare the trends of \(\Delta\) vs \(\langle s_{b}\rangle\). In particular, simulations were performed starting from \(N=1000\) initial cells, with initial sizes randomly sampled from a normal distribution of mean \(\mu_{s}\) and variance \(\sigma_{s}^{2}\). For each cell, a division time is extracted from the probability distribution \(P(t_{d})\) via inverse transform sampling. For the considered system, \(P(t_{d})\) is given by [21]: \[P(t_{d})=1-\exp\left(-\int_{0}^{t_{d}}h(s(t))\,dt\right) \tag{15}\] Upon division, each cell is split into two new daughter cells, inheriting a fraction \(p\) and \((1-p)\) of the mother size, respectively.

### Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

### Code Availability

All code used to produce the findings of this study is available from the corresponding author upon request. The code for the Gaussian Mixture algorithm is available at [https://github.com/ggosti/fcGMM](https://github.com/ggosti/fcGMM).

### Author contributions statement

M.M., G.G. and G.P. conceived the research; M.L. and G.R. contributed additional ideas; G.P. and S.S. performed experiments; M.M., S.S., and G.G. analyzed data; M.M. performed analytical calculations, numerical simulations and statistical analysis; all authors discussed the results; all authors wrote and revised the paper.

### Competing Interests

The authors declare no competing interests.

### Acknowledgements

M.M. and G.R. thank the European Research Council Synergy grant ASTRA (n. 855923) for support. M.L. and G.G. thank Project LOCALSCENT, Grant PROT. A0375-2020-36549, Call POR-FESR "Gruppi di Ricerca 2020".
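As an illustration of the E- and M-steps described in the Methods above, here is a minimal one-dimensional EM sketch for a Gaussian mixture over fluorescence values, with one component per generation. It is a generic textbook implementation, not the fcGMM code; the initialization and the fixed iteration count are deliberate simplifications.

```python
import numpy as np

def gmm_em_1d(f, n_gen, n_iter=200):
    """Fit a 1D Gaussian mixture to fluorescence values f with EM.
    Returns weights pi, means mu, variances var and responsibilities gamma."""
    pi = np.full(n_gen, 1.0 / n_gen)
    mu = np.quantile(f, np.linspace(0.1, 0.9, n_gen))  # crude initialization
    var = np.full(n_gen, f.var() / n_gen)
    for _ in range(n_iter):
        # E-step: responsibilities gamma[i, g] = P(g | f_i)
        d = f[:, None] - mu[None, :]
        logp = -0.5 * d**2 / var - 0.5 * np.log(2 * np.pi * var) + np.log(pi)
        gamma = np.exp(logp - logp.max(axis=1, keepdims=True))
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: closed-form updates weighted by the responsibilities
        w = gamma.sum(axis=0)
        pi = w / len(f)
        mu = (gamma * f[:, None]).sum(axis=0) / w
        var = (gamma * (f[:, None] - mu) ** 2).sum(axis=0) / w
    return pi, mu, var, gamma
```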
## Appendix A Derivation of the population size equation

To derive the population size equation in Eq. 2, we start by considering the general case of a phase space spanned by state vectors, \(\vec{x}=(x_{1},x_{2},\dots,x_{N})\). An infinitesimal volume in this N-dimensional space is \(\delta V=\delta x_{1}\delta x_{2}\cdot\ldots\cdot\delta x_{N}\), so that the area of each hyper-plane is \(\delta A_{i}=\frac{\delta V}{\delta x_{i}}\). Finally, we introduce the number of particles in the volume as \(\delta N(\vec{x})=n(\vec{x})\delta V\), where \(n(\vec{x})\) is the particle density. Imposing the general balance condition, the accumulation of particles in each volume must equal the flow of particles entering that volume, plus the ones that are generated (e.g. by cell division), minus the ones exiting the volume. Thus, one finds that: \[\frac{\partial}{\partial t}(\delta N)=-\sum_{i=1}^{N}\delta(nu_{i})\delta A_{i} +\mathcal{G}\delta V \tag{16}\] In particular, the accumulation on the left is given by the rate of change of the number of particles in the control volume, \(\frac{\partial}{\partial t}(\delta N)=\frac{\partial}{\partial t}(n(\vec{x}) \delta V)\). The first term on the right side instead models the total net flow of particles into the control volume. In fact, it is the sum of the differences between the in- and out-flows of particles across all hyper-planes. Each in-flow is given by \(n\vec{u}\cdot(\delta A_{i}\hat{e}_{i})=nu_{i}\delta A_{i}\), the scalar product of the flux and the hyper-plane perpendicular to the flow. Note that \(\vec{u}\) is the velocity vector, while \(\hat{e}_{i}\) is the unit vector parallel to the \(x_{i}\) component of the state vector. The out-flow can be defined as \((nu_{i}+\delta(nu_{i}))\delta A_{i}\). Finally, \(\mathcal{G}\) contains all production/degradation processes taking place inside the infinitesimal volume. Substituting all terms in Eq. 16, one obtains \[\frac{\partial}{\partial t}(n(\vec{x})\delta V)=-\sum_{i=1}^{N}\delta(nu_{i}) \frac{\delta V}{\delta x_{i}}+\mathcal{G}\delta V \tag{17}\] Dividing each term by \(\delta V\) and taking the limit \(\delta x\to 0\), we get the final form: \[\frac{\partial n}{\partial t}+\vec{\nabla}\cdot(\vec{u}n)-\mathcal{G}=0 \tag{18}\]

## Appendix B Growth and division with intermediate tasks

The model derived in the main text produces an exponential distribution of the division times [21]. To retrieve an Erlang-like distribution, we introduce a series of tasks/checkpoints the cell has to go through before starting the division. Each of these checkpoints has an exponential distribution of times. To do so, we can modify the population balance equation by adding a second index that accounts for the checkpoint state of the cell, i.e. for the intermediate task the cell is performing: \(\rho_{g,q}\) now indicates the density of cells that divided \(g\) times and passed the \(q\)-th checkpoint.
The population balance equation becomes \[\frac{\partial}{\partial t}n_{g,q}(s,t)+\frac{\partial}{\partial s }\left(\Lambda n_{g,q}(s,t)\right)=-\gamma(s)n_{g,q}(s,t)+\\ +\begin{cases}\int_{0}^{\infty}d\eta\ \gamma(\eta)\ \delta(s-\eta)\ n_{g,q-1}(\eta,t)&\text{if }q>0\\ 2\int_{0}^{\infty}d\eta\ \gamma(\eta)\ \phi(s|p\eta)\ n_{g-1,0}(\eta,t)&\text{if }q=0.\end{cases} \tag{25}\] If we now look at the variation of the total number of cells at generation \(g\) and checkpoint \(q\), we have: \[\dot{N}_{g,q}=\frac{d}{dt}N_{g,q}(t)=\int_{0}^{\infty}\frac{\partial}{\partial t }n_{g,q}(s,t)\,ds \tag{26}\] which yields \[\frac{\dot{N}_{g,q}}{N_{g,q}}=-k\left\langle s^{\beta}\right\rangle_{g,q}+k \left\langle s^{\beta}\right\rangle_{g,q-1}\frac{N_{g,q-1}}{N_{g,q}}=\Phi_{g,q }\quad\text{if}\quad q>0 \tag{27}\] or \[\frac{\dot{N}_{g,0}}{N_{g,0}}=-k\left\langle s^{\beta}\right\rangle_{g,0}+2k \left\langle s^{\beta}\right\rangle_{g-1,0}\frac{N_{g-1,0}}{N_{g,0}}=\Phi_{g,0 }\quad\text{if}\quad q=0 \tag{28}\] Similarly, \[P_{g,q}(t)=\frac{N_{g,q}(t)}{\sum_{h,w}N_{h,w}(t)} \tag{29}\] Eqs. 27 and 28 can be used to compute the dynamics of the fractions, \(P_{g,q}\), since \[\dot{P}_{g,q}=\frac{d}{dt}\left(\frac{N_{g,q}(t)}{\sum_{h,w}N_{h,w}(t)}\right)= \frac{\dot{N}_{g,q}\sum_{h,w}N_{h,w}-N_{g,q}\sum_{h,w}\dot{N}_{h,w}}{\left( \sum_{h,w}N_{h,w}\right)^{2}}=\frac{\dot{N}_{g,q}}{\sum_{h,w}N_{h,w}}-P_{g,q} \frac{\sum_{h,w}\dot{N}_{h,w}}{\sum_{h,w}N_{h,w}} \tag{30}\] Making \(\dot{N}_{g,q}\) explicit through Eqs. 27 and 28, we obtain: \[\dot{P}_{g,q}=-k\left\langle s^{\beta}\right\rangle_{g,q}P_{g,q} +k\left\langle s^{\beta}\right\rangle_{g,q-1}P_{g,q-1}+\\ +P_{g,q}\sum_{h,w}\left(k\left\langle s^{\beta}\right\rangle_{h,w}P_{h,w}-2 ^{\delta_{w,0}}k\left\langle s^{\beta}\right\rangle_{h\cdot Q+w-1}P_{h\cdot Q +w-1}\right) \tag{31}\] for \(q>0\), while \[\dot{P}_{g,0}=-k\left\langle s^{\beta}\right\rangle_{g,0}P_{g,0 }+2k\left\langle s^{\beta}\right\rangle_{g-1,0}P_{g-1,0}+\\ +P_{g,0}\sum_{h,w}\left(k\left\langle s^{\beta}\right\rangle_{h,w}P_{h,w}-2 ^{\delta_{w,0}}k\left\langle s^{\beta}\right\rangle_{h\cdot Q+w-1}P_{h\cdot Q +w-1}\right) \tag{32}\] for \(q=0\), where, to obtain a more compact description, we changed notation in the last term of the equations: the two-index notation \(X_{g,h}\) is equivalent to the one-index notation \(X_{g\cdot Q+h}\), while \(\delta_{w,0}\) is the Kronecker delta.
Finally, the last term of both equations can be simplified by noting that the summation telescopes: \[\sum_{h,w}k\left(\left\langle s^{\beta}\right\rangle_{h,w}P_{h,w}-2 ^{\delta_{w,0}}\left\langle s^{\beta}\right\rangle_{h\cdot Q+w-1}P_{h\cdot Q+w -1}\right)=\\ =k(\left\langle 0,0\right\rangle+\left\langle 0,1\right\rangle- \left\langle 0,0\right\rangle+\cdots\left\langle 1,0\right\rangle-2\left\langle 0,Q \right\rangle+\left\langle 1,1\right\rangle-\\ -\left\langle 1,0\right\rangle\cdots)=-k\sum_{h}\left\langle s^{\beta} \right\rangle_{h,0}P_{h,0} \tag{33}\] where \(\left\langle g,q\right\rangle\) is shorthand for \(\left\langle s^{\beta}\right\rangle_{g,q}P_{g,q}\). To obtain expressions for the dynamics of the new moments, we can start from the probability of finding a cell with size \(s\) at time \(t\), generation \(g\) and checkpoint \(q\), \[\rho_{g,q}(s,t)=\frac{n_{g,q}(s,t)}{N_{g,q}(t)} \tag{34}\] and \[\dot{n}_{g,q}=\dot{N}_{g,q}\rho_{g,q}+N_{g,q}\dot{\rho}_{g,q} \tag{35}\] With a few calculations, one gets to the evolution equations for the distribution moments: \[\left\langle\dot{s}^{i}\right\rangle_{g,q} =\lambda\cdot i\cdot\left\langle s^{(\alpha+i-1)}\right\rangle_ {g,q}-\Phi_{g,q}\left\langle s^{i}\right\rangle_{g,q}-k\left\langle s^{(\beta+ i)}\right\rangle_{g,q}+\\ +k\left(2\left\langle p^{i}\right\rangle_{\pi}\right)^{\delta_{q,0}}\left\langle s^{(\beta+ i)}\right\rangle_{g\cdot Q+q-1}\frac{P_{g\cdot Q+q-1}}{P_{g,q}} \tag{36}\]

## Appendix C Division noise

The last term of Eq. 14 is obtained as follows: \[2\int ds\,s^{i}\int d\eta\,\gamma(\eta)K(s|p\eta)\rho(\eta)=\\ =2\int ds\ s^{i}\int d\eta\,\gamma(\eta)\int_{0}^{1}dp\ \pi(p)\delta(s-p\eta)\rho(\eta)=\\ =2\int ds\ s^{i}\int\frac{df}{p}\gamma(f/p)\int_{0}^{1}dp\ \pi(p)\delta(s-f)\rho(f/p)=\\ =2\int_{0}^{1}dp\ \frac{\pi(p)}{p}\int ds\ s^{i}\gamma(s/p)\rho(s/p)=\\ =2\left\langle s^{i}\gamma(s)\right\rangle_{\rho}\int_{0}^{1}dp\ \frac{\pi(p)}{p}\,p^{i+1}=\\ =2\left\langle s^{i}\gamma(s)\right\rangle_{\rho}\left\langle p^{i}\right\rangle_{\pi} \tag{37}\] where we performed two changes of variables, i.e. \(p\eta=f\) and \(s/p=v\).

## Appendix D Derivation of size-homeostasis strategies

We start by considering the two observables commonly measured in mother machine experiments, i.e. the division and birth sizes, \(s_{d}\) and \(s_{b}\), respectively. In particular, the average size at division can be expressed as \[\langle s_{d}\rangle=\int ds_{b}ds_{d}\,s_{d}\,\rho(s_{b},s_{d})=\int ds_{b}\, \rho(s_{b})\int_{s_{b}}^{\infty}s_{d}\,\rho(s_{d}|s_{b})\,ds_{d} \tag{38}\] where \(\rho(s_{b},s_{d})\) is the joint distribution of the birth and division sizes, and we make use of the chain rule with the conditional probability \(\rho(s_{d}|s_{b})\) of having a division size \(s_{d}\) given that the size at birth was \(s_{b}\). Using probability conservation, we have that \[\rho(s_{d}|s_{b})ds_{d}=\rho_{d}(t|s_{b})dt \tag{39}\] and \[\rho(s_{d})=\frac{\rho_{d}(t_{d})}{\frac{ds_{d}}{dt}}=\frac{\rho_{d}(t_{d})}{g(s_{d})} \tag{40}\] where \(\rho_{d}(t|s_{b})\) is the probability density function for a cell of initial size \(s_{b}\) to divide at time \(t\), and \(\frac{ds}{dt}=g(s)\) is the definition of the growth rate. This in turn is given by \[\rho_{d}(t|s_{b})=\frac{dP_{d}(t|s_{b})}{dt}=\frac{d}{dt}\left(1-e^{-\int h(s( t^{\prime}))dt^{\prime}}\right) \tag{41}\] where \(h\) is the rate of division; the associated probability can be obtained as one minus the probability of not dividing, which evolves in time as \[\frac{dP_{0}}{dt}=-h(t)P_{0}\,,\qquad P_{0}(t)=e^{-\int_{0}^{t}h\,dt^{\prime}} \tag{42}\] Let us now assume that the growth and division rates are of the form \(g(s)=\lambda s^{\alpha}\) and \(h(s)=\kappa s^{\beta}\), respectively.
Thus, one has \[\rho(s_{d})=\exp\left(-\int_{s_{b}}^{s_{d}}\frac{\kappa}{\lambda}s^{\beta- \alpha}ds\right)\frac{\kappa}{\lambda}s_{d}^{\beta-\alpha} \tag{43}\] We can see from Eq. 43 that the division size distribution depends only on the ratio between the rate coefficients and on the difference of their exponents. Moreover, Eq. 38 can be solved analytically, and it yields different scenarios depending on the value of the exponent difference.

#### d.0.1 Adder-like behaviour

To begin with, we consider the simplest case, i.e. the one where the two exponents are equal (\(\beta-\alpha=0\)). In this case, Eqs. 38 and 43 give: \[\langle s_{d}\rangle=\int ds_{b}\,\rho(s_{b})\int_{s_{b}}^{\infty}s_{d}\exp \left(-\frac{\kappa}{\lambda}(s_{d}-s_{b})\right)\frac{\kappa}{\lambda}\,ds_{d} =\int ds_{b}\,\rho(s_{b})\left(s_{b}+\frac{\lambda}{\kappa}\right)=\langle s_{b} \rangle+\frac{\lambda}{\kappa} \tag{44}\] Thus, we end up with \[\Delta=\langle s_{d}\rangle-\langle s_{b}\rangle=\frac{\lambda}{\kappa} \tag{45}\] which is the constant trend one finds in an adder-like growth and division regime.

#### d.0.2 Timer-like regime

One can repeat the same calculations for the case in which \(\beta-\alpha<0\), that is to say, when the exponent associated with growth is bigger than the one regulating division. In particular, if we consider \(\beta-\alpha=-1\), Eqs. 38 and 43 give \[\langle s_{d}\rangle=\int ds_{b}\,\rho(s_{b})\int_{s_{b}}^{\infty}s_{d}\exp \left(-\frac{\kappa}{\lambda}\ln\left(\frac{s_{d}}{s_{b}}\right)\right)\frac{ \kappa}{\lambda}\frac{ds_{d}}{s_{d}}=\int ds_{b}\,\rho(s_{b})\,s_{b}^{\frac{ \kappa}{\lambda}}\,\frac{\kappa}{\lambda}\int_{s_{b}}^{\infty}\left(\frac{1}{s _{d}}\right)^{\frac{\kappa}{\lambda}}ds_{d}=\frac{\kappa}{\kappa-\lambda} \left\langle s_{b}\right\rangle \tag{46}\] and \[\Delta=\frac{\lambda}{\kappa-\lambda}\left\langle s_{b}\right\rangle \tag{47}\] The pure timer model is obtained when \(\kappa=2\lambda\). If \(\beta-\alpha\) is smaller than zero but different from \(-1\), we obtain: \[\langle s_{d}\rangle=\int ds_{b}\,\rho(s_{b})\int_{s_{b}}^{\infty}s_{d}\exp \left(-\frac{\kappa}{\lambda(\beta-\alpha+1)}\left(s_{d}^{\beta-\alpha+1}-s_{b }^{\beta-\alpha+1}\right)\right)\frac{\kappa}{\lambda}s_{d}^{\beta-\alpha}\,ds_{d} \tag{48}\]

#### d.0.3 Sizer-like regime

Finally, if we consider \(\beta-\alpha>0\), Eqs. 38 and 43 give \[\langle s_{d}\rangle=\int ds_{b}\,\rho(s_{b})\int_{s_{b}}^{\infty}ds_ {d}\,s_{d}\exp\left(-\int_{s_{b}}^{s_{d}}\frac{\kappa}{\lambda}s^{\beta-\alpha} ds\right)\frac{\kappa}{\lambda}s_{d}^{\beta-\alpha} \tag{49}\] so that \[\Delta\simeq\sqrt{\frac{\pi}{2\beta}}-\left\langle s_{b}\right\rangle \tag{50}\]
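As a quick numerical sanity check of the adder result \(\Delta=\lambda/\kappa\) (Eq. 45), one can sample \(s_{d}-s_{b}\) from the exponential kernel appearing in Eq. 44; this is a toy script with arbitrary rate values, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

lam, kappa, n = 1.0, 0.5, 200_000
s_b = rng.lognormal(mean=0.0, sigma=0.2, size=n)   # toy birth sizes
# For beta = alpha, rho(s_d | s_b) is exponential in (s_d - s_b)
s_d = s_b + rng.exponential(lam / kappa, size=n)

print(f"<s_d> - <s_b> = {s_d.mean() - s_b.mean():.4f} (prediction {lam/kappa})")
```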
2305.01380
The Readiness of EVN Telescopes for the SKA-VLBI Era
The application of VLBI to scientific problems has undergone a relentless expansion since its conception, yet the potential for further expansion is still large. We are on the cusp of revolutionary progress given the arrival of a host of next-generation instruments. Over the last few years the community has been working hard to ensure the SKA design includes the capability to enable multiple simultaneous tied-array beams, which is a crucial technology to deliver ultra-precise astrometry and improve survey speed capabilities. However, to reach the full potential requires that the network of antennas is upgraded to match the SKA capabilities. We identify multiple-pixel technology, on large telescopes and connected arrays, as a crucial missing component and here will make recommendations for the upgrade path of the partner EVN (and other network) telescopes. Our feasibility studies on SKA-VLBI suggest an order of magnitude improvement in the precision and also in the frequency range at which astrometry can be performed today, if the full network has the required capabilities.
María J. Rioja, Richard Dodson
2023-05-02T12:54:48Z
http://arxiv.org/abs/2305.01380v1
# The Readiness of EVN Telescopes for the SKA-VLBI Era

###### Abstract:

The application of VLBI to scientific problems has undergone a relentless expansion since its conception, yet the potential for further expansion is still large. We are on the cusp of revolutionary progress given the arrival of a host of next-generation instruments. Over the last few years the community has been working hard to ensure the SKA design includes the capability to enable multiple simultaneous tied-array beams, which is a crucial technology to deliver ultra-precise astrometry and improve survey speed capabilities. However, to reach the full potential requires that the network of antennas is upgraded to match the SKA capabilities. We identify multiple-pixel technology, on large telescopes and connected arrays, as a crucial missing component and here will make recommendations for the upgrade path of the partner EVN (and other network) telescopes. Our feasibility studies on SKA-VLBI suggest an order of magnitude improvement in the precision and also in the frequency range at which astrometry can be performed today, if the full network has the required capabilities.

## 1 Introduction

Precision astrometry measurements add a new dimension to the research of many astrophysical fields. They have provided deep insight into the astrophysical processes in a huge range of environments, and are a probe for fundamental properties of the Universe. VLBI has traditionally provided the highest-precision astrometric measurements, and the next generation of instruments holds the potential for an order of magnitude of improvement. However, the current conventional astrometric methods are limited by systematics in most cases. Hence we need a commensurate improvement in methods, as well as in sensitivity, for the next generation of instruments. Since the beginnings of VLBI, there has been a relentless quest for increased astrometric accuracy and wider applicability. In the last decade there have been huge strides in achieving the full astrometric potential, arising from advanced phase referencing (PR) calibration strategies [i.e. GeoBlocks: 8] that accurately compensate for the dominant tropospheric propagation residual errors, and that have led to measurements with a few tens of micro-arcseconds of precision, most notably at \(\sim\)22 GHz. Instead, the field of astrometry at lower frequencies (\(<\)8 GHz) has lagged behind, with an order of magnitude larger errors at L-band (\(\sim\)1.6 GHz). This regime is dominated by residual ionospheric propagation effects that pose a rather different set of challenges. Chief among those challenges is the fact that the ionospheric effects have a strong spatial structure (i.e. they are direction-dependent), which limits the use of observations of a reference source (necessarily along a different line of sight than that of the target) to correct for the atmospheric errors. We note as an aside the multi-frequency advanced PR calibration strategy [3], where ionospheric residuals along the line of sight from the target are removed with "ICE-blocks" without using a reference source; thus the direction-dependent corrections are not required. The SKA, currently under construction, will focus on the lower frequencies and will revitalise all aspects of VLBI astronomy at these wavelengths through joint observations with EVN telescopes.
Among these, the ultra-precise astrometric capability is of great importance and is a scientifically driven motivation in the SKA-VLBI era; the goal is to improve by an order of magnitude both the astrometric accuracy and the range of applicable frequencies [4, 7]. The high sensitivity and long baselines of SKA-VLBI observations will result in a much reduced thermal noise level and high spatial resolution. Therefore this goal is achievable, as long as a sufficiently accurate ionospheric phase-calibration strategy is in place. This paper discusses the way to achieve this using a next-generation method, namely MultiView [12], and an upgrade of the network telescopes to implement its requirements for optimal performance. Section 2 describes the basics of MultiView, presents current empirical demonstrations with existing instruments and our estimates for the expected performance in the SKA-VLBI era. Section 3 describes the technological developments relevant to astrometry for the telescope network, namely multiple-pixel capabilities. Section 4 presents the conclusions.

## 2 cm/m-VLBI microarcsecond astrometry using MultiView methods

MultiView [12] is a next-generation calibration method that has the potential for optimum correction of the dominant residual ionospheric errors, which are the main challenge and limit the PR measurements in the SKA frequency range. The result is ultra-high-precision astrometry measurements with wide applicability, that is, to many sources and across a much wider frequency range than can be used today [10]. Standard PR strategies using a single calibrator result in uncorrected systematic residual phase errors caused by the spatial structure in the atmospheric propagation effects (even when the sources are close, i.e. \(\sim 1^{o}\)). These systematics strongly affect the analysis and impose limits on the astrometric accuracy. Figure 2 illustrates these astrometric limits across the frequency range. Instead, MultiView calibration uses observations of multiple calibrators surrounding the target and combines their phases with spatial interpolation, effectively resulting in a virtual calibrator at \(\sim 0^{o}\) angular separation. PR performance increasingly deteriorates towards the low-frequency regime, with errors much larger than at the higher frequencies, such as 22 GHz, limiting its application. A clear example of the residual ionospheric propagation effects becoming the dominant source of errors at observing frequencies less than \(\sim 8\) GHz is to be found in the analysis of 6.7 GHz methanol maser observations in the BeSSeL project, using PR with GeoBlocks strategies similar to those used for 22 GHz water maser observations. The precision of the results from 2016 [14] is significantly worse than that of the re-analysis [13] with a variation on the MultiView strategy [9]. MultiView calibrators can be further away than for PR because the solutions are interpolated rather than transferred. This has been demonstrated at 1.6 GHz [12], 8 GHz [2; 6] and 6.7 GHz [5] with calibrators up to \(6^{o}\) from the target. Encouraged by the outstanding performance, explorations of PR corner cases are under way, with observations at very low elevations and at very low (0.3 GHz) and high (43 GHz) frequencies, where we are studying the application to next-generation instruments such as SKA, ngVLA-LONG and FAST. For maximum precision one requires simultaneous observations of (nearby) calibrators uniformly distributed around the target.
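To make the interpolation step concrete, the following is a minimal sketch of a MultiView-style planar fit (not from the paper; the offsets and phases are made-up illustrative numbers): the residual phases of the surrounding calibrators are fitted, in a least-squares sense, with a plane over the sky offsets, and evaluating the plane at the target position yields the phase of the 'virtual calibrator'. Using four or more calibrators makes the fit over-determined.

```python
import numpy as np

# Calibrator offsets from the target (degrees) and their residual
# phases (radians) for one antenna and one time stamp (toy values).
offsets = np.array([[ 1.0,  0.5],
                    [-0.8,  0.9],
                    [ 0.2, -1.1],
                    [-1.1, -0.4]])
phases = np.array([0.30, 0.12, -0.25, -0.10])

# Over-determined least-squares fit of a plane: phi = a*dx + b*dy + c
A = np.column_stack([offsets, np.ones(len(offsets))])
(a, b, c), *_ = np.linalg.lstsq(A, phases, rcond=None)

# At the target the offset is zero, so the interpolated phase is just c
print(f"virtual-calibrator phase at target: {c:.3f} rad")
```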
Nominally, following from empirical ionospheric spatial-structure studies with the MWA [11], the expected residual errors with MultiView would be about 1 mTECU. See Figure 1 for an example of the ionospheric phase screens as observed with the MWA. Residuals at this level would result in MultiView systematic astrometric errors at the 1 micro-as level above \(\sim\)5 GHz (see Table 1), with the final error comprising the additional contributions set by the thermal noise or measurement errors, the dynamic range of the image, and the stability of the reference points selected within the sources for multi-epoch comparisons. This precision is more than an order of magnitude improvement over the current limits. The improvement that MultiView can provide to VLBI astrometric observations has been demonstrated with an increasing number of empirical astrometric measurements, showing outstanding performance compared to standard PR methods and reaching the thermal noise limit of current VLBI networks, as predicted by our error analysis. Thus we are confident in our estimates of an order of magnitude improvement for SKA-VLBI, assuming an upgraded network of antennas that matches the SKA capabilities. This is predicated on the sensitivity improvement from the increased collecting area (and bandwidth) and the quasi-perfect compensation of systematic atmospheric effects, as provided by MultiView (see Figure 3). Note that MultiView is expected to achieve an order of magnitude improvement compared to in-beam PR. As stated above, we expect the ultimate limit, after correcting for the currently dominant atmospheric propagation errors, to be related to intrinsic source structure effects and the definition and stability of the reference points over time. To alleviate their impact, and other potential phase-ambiguity-related issues in the analysis, we plan for an over-determined fit to the phase plane, i.e. to use more than the minimum number of three calibrators surrounding the target, and a dense network (to provide uv-coverage) of moderate to large sized telescopes for joint MultiView observations.

## 3 Implementation of MultiView: Technological requirements for the telescope network

Optimum MultiView performance comes from simultaneous high-SNR observations of the multiple sources involved. This translates into observational requirements, namely sensitive observations with large collecting areas and (relatively) wide FoVs. Multiple-pixel technology is fundamental to bring the two requirements together, for large telescopes and connected arrays across the network. These MultiView observational requirements have driven the community efforts that resulted in an Engineering Change Proposal, which has been approved, to ensure that the SKA design includes the innovative capability to enable multiple simultaneous tied-array beams from the connected phased-up array and/or from subarraying. These support a number of concurrent pencil beams, ranging between 4 full-sensitivity VLBI beams with 2.6 GHz bandwidth and up to 46 beams in total, with bandwidth tradeoffs [for details see 4].

Figure 1: Example of an ionospheric residual phase screen as observed over the MWA at 88 MHz after direction-independent and bandpass calibration.
Shown as a mesh are the station-based calibration phases above the site for one direction, color-coded in degrees, with the X and Y axes in meters for the 128 stations (at the nodes of the mesh). The wireframe shows the second-order fit to the data, underlining the high curvature of the surface. The typical residuals to a planar fit are about 1 mTECU. See Rioja & Dodson [11] for details.

Figure 2: Illustration of theoretical systematic limits for a range of astrometric techniques (solid lines: conventional PR in black and in-beam PR in brown; Advanced Tropospheric Calibration (ATC) in green; Advanced Ionospheric Calibration (AIC) in blue) compared to actual observational results from the literature, shown with symbols, taken from Rioja & Dodson [10]. This emphasises how these limits are greater than the thermal limit for a dynamic range of 100:1 (dotted line), and that the systematics dominate observationally except for the next-generation methods, MultiView (red) and SFPR (pink). See Rioja & Dodson [10] for details.

Figure 3: The theoretical thermal (dotted line) and systematic (solid lines) limits for the next generation of instruments and methods, taken from Rioja & Dodson [10]. This illustrates the astrometric improvements of MultiView, which matches the thermal limit of a dynamic range of 1000:1. Some suggested astrometric projects are indicated. See Rioja & Dodson [10] for details.

These beams can simultaneously point in any direction within the individual single-antenna FoV with the full sensitivity of the connected array, or have even wider separations by using subarraying. The remaining missing component is to upgrade the rest of the network of large telescopes and connected arrays to match the capabilities of SKA. To deliver this upgrade path for the partner EVN (and other) telescopes we need to both define the technological multiple-pixel VLBI beam requirements and carry out practical end-to-end demonstrations to discover the key issues. These two activities necessarily depend on each other, as one sets what can be done and the other sets what must be done.

\begin{table} \begin{tabular}{|c|c||l|l|} \hline Frequency & MultiView error & No. in-beam & No. in-beam \\ \(\nu\) & \(\sigma\Delta\theta^{MV}\) & sources for & sources for \\ (GHz) & (\(\mu\)as) & current arrays & SKA-VLBI \\ \hline 0.3 & 150 & 1.2\({}^{\dagger}\) & 14\({}^{\dagger}\) \\ 0.9 & 17 & 3.5 & 15 \\ 1.6 & 6 & 2.9 & 5.5 \\ 5.0 & \(\sim\)1 & 0.4 & 0.4 (6) \\ 8.0 & \(\sim\)1 & 0.1 & 0.1 (2) \\ 15.0 & \(\sim\)1 & 0.0 & 0.0 (0.4) \\ \hline \end{tabular} \end{table} Table 1: Table to characterise the performance of MultiView and its feasibility for current and next-generation instruments, across the SKA spectrum, taken from Rioja & Dodson [10]. Col. 1 is the observing frequency, Col. 2 is the estimated systematic astrometric error using MultiView, as discussed in Rioja & Dodson [10]. Col. 3 is the number of 100\(\sigma\) calibrator sources for current arrays, expected within the primary beam of a single-pixel 20 m antenna if the FoV \(<1^{o}\), and within \(1^{o}\) otherwise (marked with \({}^{\dagger}\)), calculated using the source count prediction from Bonaldi et al. [1]. Col. 4 is the same as Col. 3, but using the sensitivity of SKA-VLBI Phase-1 (in brackets, of Phase-2 at the higher frequencies) [4], counting sources strong enough to exceed the MultiView systematic limits (\(\sim 1000\sigma\); see Rioja & Dodson [10] for details).
We note that the number of in-beam sources for ngVLA-LONG observations would fall between the SKA Phase 1 and Phase 2 estimates. Based on Cols. 3 & 4, simultaneous (e.g. within the primary beam, in-beam) MultiView would be feasible at frequencies <1.4, <2 and <6.7 GHz with the sensitivities of current VLBI, SKA-VLBI Phase 1 and Phase 2, respectively. At higher frequencies, MultiView is possible using nodding observations or using simultaneous observations at sites with multi-antenna (e.g. ngVLA-LONG) or subarraying (e.g. SKA) capabilities.

Figure 4: Left: The CSIRO MKII PAF installed at the Effelsberg telescope prime focus. MPIfR are now developing their own version with increased capabilities. Of particular importance for multiple-pixel VLBI will be the ability of the PAF beams to track the same point in the sky during the parallactic angle rotation. Right: The APERTIF PAF in WSRT.

The benefits of the innovative multiple-pixel technologies available to increase the FoV, such as Phased Array Feeds (PAFs) and multi-beam receivers, are recognized within the EVN, with the largest telescopes already equipped or with plans to be. For example, the Effelsberg (100 m, Germany) and Lovell (76 m, UK) telescopes are equipped with CSIRO MKII PAFs, the Sardinia Radio Telescope (64 m, Italy) has multi-beam receivers, and the WSRT array is equipped with the APERTIF PAF system. A 100 m telescope such as Effelsberg with a 25-beam PAF has the same FoV as a 20 m telescope, and a connected array with multiple tied-array beams, such as WSRT, has the same FoV as its individual elements. Nevertheless, their application to VLBI has not been implemented so far, and plans to do so have a very low priority. Figure 4 shows the current PAFs on Effelsberg and WSRT. Such an upgrade will benefit the operations of the EVN as a stand-alone instrument, as well as providing an order of magnitude improvement in astrometric precision for SKA-VLBI observations. Other than SKA, the multiple-pixel capability is part of the design of other next-generation instruments, such as FAST and ngVLA-LONG.

## 4 Conclusions

The arrival of sensitive next-generation instruments brings exciting opportunities and challenges for VLBI observations [10]. Among the former is the realization of ultra-precise astrometric measurements that will enable the addressing of a host of innovative open questions in astrophysics. The list of challenges includes the readiness of the telescope network to reach the astrometric potential in joint observations with existing telescopes. This paper is mainly concerned with an upgrade of the EVN telescopes, for the improvement and benefit of joint observations with SKA and also as a stand-alone instrument. The proposed multiple-pixel capability for large telescopes and arrays is predicated on the next-generation calibration methods and their observational requirements. MultiView was originally conceived to address the poor astrometric performance of conventional PR methods at low (<8 GHz) frequencies. MultiView has clearly demonstrated superior performance, with increased astrometric precision from removing the dominant ionospheric errors, and wide applicability, to many sources, at frequencies \(<\)8 GHz using existing instruments. Based on this outstanding performance, a number of efforts are ongoing to extend the MultiView method beyond its original scope, to higher and lower frequencies, as well as to better characterise the performance limits in corner cases.
These include astrometric observations at very low elevations and with ever wider angular separations. The ongoing investigations at higher frequencies show very promising outcomes for the correction of the tropospheric propagation medium effects, as well as the ionospheric effects. SKA has adopted the new technologies required for the next-generation calibration method MultiView, i.e. multiple tied-array beam technologies in this particular case. It is imperative that we ensure that the keystone telescopes that make up the EVN are equally prepared for these new techniques. Thus it is urgent to develop a roadmap for the telescope network that includes fleshing out the requirements of the individual EVN partners and of the various technological multiple-pixel solutions. As part of this, further practical end-to-end demonstrations are vital to define what is needed (e.g. number of beams, angular range), which will affect the technological options. These technological upgrades will impact the VLBI observations of the EVN with the SKA, but also the capabilities of the EVN as a stand-alone array, both for astrometry and for survey speed. Furthermore, these considerations are also of interest for FAST-VLBI, ngVLA-LONG and space VLBI astrometry.
2310.15143
Hyperparameter optimization of hp-greedy reduced basis for gravitational wave surrogates
In a previous work we introduced, in the context of gravitational wave science, an initial study on an automated domain-decomposition approach for reduced basis through hp-greedy refinement. The approach constructs local reduced bases of lower dimensionality than global ones, with the same or higher accuracy. These ``light'' local bases should imply both faster evaluations when predicting new waveforms and faster data analysis, in particular faster statistical inference (the forward and inverse problems, respectively). In this approach, however, we have previously found important dependence on several hyperparameters, which do not appear in global reduced basis. This naturally leads to the problem of hyperparameter optimization (HPO), which is the subject of this paper. We tackle the problem through a Bayesian optimization, and show its superiority when compared to grid or random searches. We find that for gravitational waves from the collision of two spinning but non-precessing black holes, for the same accuracy, local hp-greedy reduced bases with HPO have a lower dimensionality of up to $4 \times$ for the cases here studied, depending on the desired accuracy. This factor should directly translate in a parameter estimation speedup, for instance. Such acceleration might help in the near real-time requirements for electromagnetic counterparts of gravitational waves from compact binary coalescences. In addition, we find that the Bayesian approach used in this paper for HPO is two orders of magnitude faster than, for example, a grid search, with about a $100 \times$ acceleration. The code developed for this project is available as open source from public repositories.
Franco Cerino, Andrés Diaz-Pace, Emmanuel Tassone, Manuel Tiglio, Atuel Villegas
2023-10-23T17:48:11Z
http://arxiv.org/abs/2310.15143v1
# Hyperparameter optimization of hp-greedy reduced basis for gravitational wave surrogates

###### Abstract

In a previous work [1] we introduced, in the context of gravitational wave science, an initial study on an automated domain-decomposition approach for reduced basis through hp-greedy refinement. The approach constructs local reduced bases of lower dimensionality than global ones, with the same or higher accuracy. These "light" local bases should imply both faster evaluations when predicting new waveforms and faster data analysis, in particular faster statistical inference (the forward and inverse problems, respectively). In this approach, however, we have previously found important dependence on several hyperparameters, which do not appear in global reduced basis. This naturally leads to the problem of hyperparameter optimization (HPO), which is the subject of this paper. We tackle the problem through a Bayesian optimization, and show its superiority when compared to grid or random searches. We find that for gravitational waves from the collision of two spinning but non-precessing black holes, for the same accuracy, local hp-greedy reduced bases with HPO have a lower dimensionality of up to \(4\times\) for the cases here studied, depending on the desired accuracy. This factor should directly translate in a parameter estimation speedup, for instance. Such acceleration might help in the near real-time requirements for electromagnetic counterparts of gravitational waves from compact binary coalescences. In addition, we find that the Bayesian approach used in this paper for HPO is two orders of magnitude faster than, for example, a grid search, with about a \(100\times\) acceleration. The code developed for this project is available as open source from public repositories.1

Footnote 1: This paper is an invited contribution to the Special Issue “Recent Advances in Gravity: A Themed Issue in Honor of Prof. Jorge Pullin on his 60th Anniversary”.

## I Introduction

For several problems, in particular data-driven ones as is the case of this paper, surrogate models have proved useful to make both predictions and analyses on new data computationally faster. These models are constructed by learning from a limited dataset, obtained for example from high-fidelity simulations or experiments. This paper uses the reduced basis approach to construct surrogates; for a review see Ref. [2]. Parameter estimation (PE) of the source of gravitational waves is a key aspect of gravitational wave (GW) science [3; 4; 5; 6; 7; 8; 9; 10]; its goal is to infer the properties of, for example, the black holes or neutron stars involved in a binary collision [11; 12; 13; 14; 15; 16; 17; 18]. Along this line, speeding up PE can enable the possibility of measuring electromagnetic counterparts of gravitational waves in the presence of a neutron star [19; 20]. This counterpart refers to the electromagnetic signal(s) received after a gravitational wave. Bayesian inference is a standard approach in PE [21; 22; 23; 24; 25; 26; 27] and several software tools have been developed in GW science, such as LALInference [28], PyCBC [29] and Bilby [30; 31]. The main factors contributing to the PE computational costs are waveform evaluations and likelihood computations. One way to overcome the first one is through surrogate models. Analogously, likelihood evaluations can be sped up through the use of reduced order quadratures (ROQ) [32; 33; 34], which are based on reduced order models and the Empirical Interpolation method [35].
Several efficiency improvements for PE have also been reported using standard machine learning (ML) techniques [36; 37; 38; 39; 40; 41; 42]. Even though the acceleration of likelihood evaluations (and of PE thereof) using ROQ is significant, it is not yet enough to allow for the follow-up of electromagnetic counterparts. One further acceleration being proposed is the use of focused reduced order quadratures (FROQ) [43], which are built from a reduced basis in a neighborhood of the parameter region found by the trigger (detection) pipeline, as opposed to a global basis covering the whole parameter domain of physical possibilities. Since the parameter region is smaller, the basis has a lower dimensionality, and the cost of evaluating ROQs is linear with respect to the dimensionality of the basis. In a recent paper [1] we proposed a divide-and-conquer approach to build accurate local reduced bases of low dimensionality in an automated way, which can be seen as complementary to FROQ. More precisely, we use a data-driven version of hp-greedy reduced basis [44],1 a methodology that adaptively partitions the parameter space and builds local reduced bases. In that reference we emphasized that the hp-greedy approach has significant online speed improvements, given that it obtains a set of reduced bases with lower dimensionality and equal or higher accuracy than a global basis. At the same time, the hp-greedy approach is more complex than the standard reduced basis one. In particular, in [1] we also found that there are hyperparameters to which the resulting models are very sensitive and which do not appear (or are irrelevant) in the standard reduced basis framework. We have identified the two most relevant ones to optimize for:

Footnote 1: The hp-greedy approach, as well as reduced basis, was originally introduced in the context of parameterized partial differential equations.

1. The seed \(\hat{\Lambda}_{0}\) used to initialize the greedy reduced-basis construction. This was largely unexpected, since it has been consistently reported in the literature in the past that it has no relevance in global reduced basis;2 see for example Figure 1 and its associated discussion in Ref. [45].

Footnote 2: Which can be intuitively understood since, in that case, the greedy approach is a _global_ optimization algorithm.

2. The maximum depth \(l_{max}\) of the resulting tree (briefly described in the next section), which limits the number of recursive partitions. As with any tree in ML, deeper trees lead to higher accuracies when training, but at the same time they risk overfitting.

The previous discussion motivates the subject of this paper: our approach of hp-greedy reduced basis requires an efficient search for an optimal choice of the hp-greedy hyperparameters. This is referred to as hyperparameter optimization (HPO) in the ML field. Here we follow a Bayesian approach; more precisely, Sequential Model-Based Optimization (SMBO) through a Tree-Structured Parzen Estimator (TPE) [46], as implemented in the OPTUNA open source package [47]. The rest of the paper is organized as follows. In Section II we briefly review the hp-greedy reduced basis approach. In Section III we state the hyperparameter optimization problem and describe how we approach it using Bayesian optimization, SMBO and TPE. We also include a benchmark comparison of HPO with a function commonly used in optimization tests.
In Section IV we present our results for the collision of two non-precessing, aligned-spin black holes, the same physical setup that we used in our previous work [1]. We close in Section V with the conclusions of this work and potential future paths. ## II hp-greedy reduced basis The reduced basis-greedy approach finds a quasi-optimal (in a rigorous mathematical sense) low-dimensional approximation of a dataset with respect to the Kolmogorov n-width [48; 49]. When applied to gravitational waves, the dataset consists of waveforms parameterized, for example, by the mass and spin of each object in the case of a compact binary coalescence. To further accelerate online evaluations and data analysis, a divide-and-conquer strategy can be pursued, which recursively partitions the parameter space and builds local reduced bases of lower dimensionality. We therefore proposed [1] a data-driven version of the _hp-greedy_ approach [44] as a way of improving the construction of reduced bases within gravitational wave science. In summary, hp-greedy generalizes the standard reduced-basis framework, allowing for partitioning of the parameter space by combining reduced basis functions (p-refinement) with an adaptive grid (h-refinement). Each subspace can be assigned to a node in a binary tree structure: if a subspace is partitioned, each of the resulting subspaces is associated with a child node. In this way, the root of the tree represents the full parameter space. For more details see [1]. The process builds a set of local reduced bases with several stopping criteria for partitioning the parameter domain: a maximum dimension \(n_{max}\) for each local basis, a maximum depth \(l_{max}\) of the tree, and an accuracy threshold \(\epsilon_{tol}\) as defined by Eq. (5). Until one of these stopping criteria is met, the domain gets partitioned, and reduced bases for the resulting subdomains are built. Figure 1 shows an example of a partition structure for a domain with \(l_{max}=2\). The hp-greedy approach is driven by the idea that if the greedy error decreases slowly, leading to a large number of iterations, domain partitioning can help to improve accuracy; this is similar in spirit to spectral elements in numerical solutions of partial differential equations [50]. The choice of the partitioning is influenced by the rate of error reduction, which varies depending on the problem. Numerical experiments have demonstrated the effectiveness of hp-greedy for gravitational waves, reducing basis dimensionality while maintaining accuracy (cf. Fig. 12 of Ref. [1]). The algorithm performs a binary domain decomposition, and for traversing the resulting tree one can take advantage of its structure; we discuss this in Section V. hp-greedy is particularly useful for problems with physical discontinuities in the parameter space (cf. Sec. III of Reference [1]). An interesting finding of our experiments with hp-greedy is that the initial seed of the algorithm _does_ affect the partitioning and subsequent reduced bases, and _significantly_ impacts their _accuracy_. This differs from the standard global reduced basis approach, in which the seed choice is irrelevant; see for example Figure 1 of [45] and the corresponding discussion. Hence, the optimization of hyperparameters such as \(l_{max}\) and the seed \(\hat{\Lambda}_{0}\) turns out to be crucial in hp-greedy. This observation is the key motivation for this paper. 
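To make the recursion concrete, the following is a minimal, self-contained Python sketch of the two ingredients just described: a plain greedy reduced basis, and its recursive h-refinement into a binary tree. It is illustrative only; the function names, the rule of splitting the (1D) domain at the last greedy point, and the choice of seeds for the children are our assumptions, not the actual implementation of Refs. [1; 44].

```
import numpy as np

def project_error(h, basis):
    """Squared l2 error of representing h in the span of an orthonormal basis."""
    residual = h - sum(np.vdot(b, h) * b for b in basis)
    return float(np.linalg.norm(residual) ** 2)

def greedy_basis(train, seed, n_max, eps_tol):
    """Plain greedy reduced basis; `train` maps parameter -> normalized waveform."""
    basis = [train[seed] / np.linalg.norm(train[seed])]
    chosen = [seed]
    while len(basis) < n_max:
        # Greedy step: pick the worst-represented training waveform.
        errs = {p: project_error(h, basis) for p, h in train.items()}
        p_star = max(errs, key=errs.get)
        if errs[p_star] < eps_tol:
            break
        # Gram-Schmidt step on the selected waveform.
        residual = train[p_star] - sum(np.vdot(b, train[p_star]) * b for b in basis)
        basis.append(residual / np.linalg.norm(residual))
        chosen.append(p_star)
    return basis, chosen

def hp_greedy(train, seed, n_max, l_max, eps_tol, depth=0):
    """Recursively partition a 1D parameter domain into a binary tree of bases."""
    basis, chosen = greedy_basis(train, seed, n_max, eps_tol)
    max_err = max(project_error(h, basis) for h in train.values())
    if max_err < eps_tol or depth >= l_max or len(train) < 2:
        return {"basis": basis, "children": None}  # leaf node
    anchor = chosen[-1]  # assumed splitting rule: split at the last greedy point
    left = {p: h for p, h in train.items() if p <= anchor}
    right = {p: h for p, h in train.items() if p > anchor}
    if not left or not right:
        return {"basis": basis, "children": None}
    return {"basis": basis, "anchor": anchor,
            "children": [hp_greedy(left, min(left), n_max, l_max, eps_tol, depth + 1),
                         hp_greedy(right, min(right), n_max, l_max, eps_tol, depth + 1)]}
```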
The optimization task can be carried out through hyperparameter optimization in the ML sense. ## III Hyperparameter optimization An HPO problem can be stated as follows: given a cost function \(f:X\to\mathbb{R}\) which returns the maximum validation error of a model trained with a combination of hyperparameters \(\mathbf{x}\in X\), the goal is to find \(\mathbf{x}^{*}\) such that \[\mathbf{x}^{*}=\operatorname*{arg\,min}_{\mathbf{x}\in X}f(\mathbf{x})\,.\] In our context, we are interested in the optimal combination of hyperparameters from a domain \(X\) that yields the minimum representation error on a validation set. For a discussion of our results on test sets, see Section IV.3. In the hp-greedy approach, each value of the tuple \(\mathbf{x}\) represents a configuration of the two relevant hyperparameters for our scenario: \[\mathbf{x}=(l_{max},\hat{\Lambda}_{0})\,,\] for fixed values of \(n_{max}\). We decided to keep the latter fixed since the evaluation cost of both a surrogate based on reduced bases and the computation of likelihoods using ROQ is linear in the dimensionality of the basis, so we place it in a different hierarchy. In practice, the cost function (which we have not defined yet) does not have a closed-form expression; instead, it results from training a model and evaluating the representation error on a validation set. This aspect usually makes the optimization problem computationally expensive. Several HPO approaches have been and are being developed, and one of the driving motivations nowadays is deep neural networks. Two well-known HPO techniques are grid and random searches, although they are often inefficient in computational terms since the whole space is blindly explored. A more promising technique in this regard is Bayesian optimization [51; 52], which was chosen for our problem. It attempts to minimize the number of evaluations of \(f\) needed to find \(\mathbf{x}^{*}\) and falls within the category of _Sequential Model-Based Optimization_ (SMBO) [46; 53]. In this work, we rely on the _Tree-Structured Parzen Estimator_ (TPE) [46] algorithm for Bayesian optimization, because it is one of the simplest such algorithms: it works well with discrete search spaces, scales linearly with the number of dimensions, and is computationally cheaper than alternatives such as Gaussian processes (see [46] for more details). For the SMBO implementation, we used the Python package OPTUNA [47]. In Section III.4 we compare our results using Bayesian optimization with those of grid and random searches to quantify the advantages and computational savings of the former. Figure 1: Schematic domain decomposition and its associated tree representation. Figure from Ref. [44]. ### Bayesian optimization In essence, Bayesian optimization is an adaptive method that uses the information from previous evaluations of the cost function \(f\) to decide which value of \(\mathbf{x}\) to evaluate next, with the goal of reducing the number of evaluations of \(f\) needed to find a (in general, local) minimum; see Figure 2. To explain the intuition behind this method, we begin with a description of SMBO. ### Sequential Model-Based Optimization (SMBO) The general idea is to approximate the cost function \(f\) with a substitute model \(\mathcal{M}\). Let us start with a set of observations \[D=\left\{(\mathbf{x}^{(1)},y^{(1)}),\cdots,(\mathbf{x}^{(k)},y^{(k)})\right\}, \tag{1}\] with \(y^{(j)}=f(\mathbf{x}^{(j)})\). Starting from this set, the substitute model \(\mathcal{M}\) is fitted. 
Next, using the predictions of the model, an acquisition function \(S\) is maximized. This function chooses the next set of hyperparameters \(\mathbf{x}_{i}\in X\) at which to evaluate \(f\), and the pair \((\mathbf{x}_{i},f(\mathbf{x}_{i}))\) is added to the observation set \(D\). After that, \(\mathcal{M}\) is adjusted again, and the process is repeated for a fixed number of iterations. This procedure is captured by the pseudocode given in Algorithm 1. Figure 2: Three iterations of a Bayesian optimization for a cost function with one parameter. The dashed line shows the actual cost function, and the solid one the mean value of a statistical model (in this case using Gaussian processes). The blue area shows the uncertainty of the model, which approaches zero at the points where the observations are made. Underneath, in orange, the acquisition function, which shows the next point to evaluate. Figure taken from [54]. Using Bayes' theorem, if \(P(y|\mathbf{x})\) is the posterior probability, \(P(y)\) the prior and \(P(\mathbf{x}|y)\) the likelihood, then \[P(y|\mathbf{x})=\frac{P(\mathbf{x}|y)\ P(y)}{P(\mathbf{x})}\,.\] In a Bayesian approach to SMBO, \(P(y|\mathbf{x})\) is the prediction of the model, with \(y\) being an evaluation of \(f(\mathbf{x})\). We mentioned that, for selecting the points to evaluate, an acquisition function \(S\) is maximized. Several proposals exist for choosing the acquisition function. In this work, we use the _Expected Improvement_ (EI) [55] criterion: if \(y^{*}\) is a reference value, then EI with respect to \(y^{*}\) is defined as \[EI_{y^{*}}(\mathbf{x}):=\int_{-\infty}^{\infty}\max(y^{*}-y,0)P(y|\mathbf{x})\ dy\,. \tag{2}\] ### Tree-Structured Parzen Estimator The _Tree-Structured Parzen Estimator_ (TPE) [46] is a strategy to model \(P(x_{i}|y)\) for each \(x_{i}\in X_{i}\) (that is, each \(x_{i}\) represents a different hyperparameter) from two distributions built using the observations \(D\) (1): \[P(x_{i}|y)=\begin{cases}\ell(x_{i})&\text{if }y<y^{*}\\ g(x_{i})&\text{if }y\geq y^{*}\,.\end{cases} \tag{3}\] Here the densities \(\{\ell(x_{i}),g(x_{i})\}\) are built from two sets \(\{D_{\ell},D_{g}\}\subset D\), such that \(D_{\ell}\) has all the observations with \(y<y^{*}\), \(D_{g}\) the remaining ones, and \(D=D_{\ell}\cup D_{g}\). The reference value \(y^{*}\) is a quantile \(\gamma\in(0,1)\) of the observed values, so that \(P(y<y^{*})=\gamma\). This means that \(y^{*}\) is a value between the best and worst \(y\) found up to a given iteration (e.g., if \(\gamma\) is equal to \(0.5\), then \(y^{*}\) is the median of the observed values of \(y\)). Building \(\ell(x_{i})\) and \(g(x_{i})\) amounts to adjusting the model (line 3 in Algorithm 1), and then using (3) in the definition of the expected improvement (2). In order to maximize the expected improvement (step 4 in Algorithm 1) one has to choose a value of \(x_{i}\) that maximizes the ratio \(\ell(x_{i})/g(x_{i})\)[46], \[x_{i}^{*}=\operatorname*{arg\,max}_{x_{i}}\left(\ell(x_{i})/g(x_{i})\right)\,. \tag{4}\] In summary, the TPE algorithm constructs two probability density functions: i) \(\ell(x_{i})\) using "good" observations (\(y<y^{*}\)), and ii) \(g(x_{i})\) using "bad" observations (\(y\geq y^{*}\)). These functions are updated every time the objective function is evaluated (at every iteration of the algorithm), and the new \(x_{i}\) is chosen by maximizing \(\ell(x_{i})/g(x_{i})\), implying that the new \(x_{i}\) is much more likely to correspond to a good observation than to a bad one. 
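To illustrate this loop end to end, here is a minimal OPTUNA sketch in the spirit of our setup. The search ranges mirror those used later in Section IV.2, but the objective body is a stand-in: a real run would build an hp-greedy basis for the suggested configuration and return its maximum validation error, Eq. (5). It is not the project's actual code.

```
import optuna

def objective(trial):
    # The two hp-greedy hyperparameters of Section III.
    l_max = trial.suggest_int("l_max", 0, 7)
    seed_index = trial.suggest_int("seed_index", 0, 399)  # index of Lambda_0
    # Stand-in cost: a real objective would train an hp-greedy basis for
    # (l_max, seed_index) and return its maximum validation error, Eq. (5).
    return (l_max - 3) ** 2 + ((seed_index - 200) / 100.0) ** 2

study = optuna.create_study(direction="minimize",
                            sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```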
All density functions are constructed using Parzen estimators [56]. For each point \(x_{i}\), a truncated normal distribution centered at that point is added. A way to think of a Parzen window is to visualize it as a smoothed histogram. The choice of truncated normal distributions for the kernel function follows the original TPE paper [46]. For more details about the implementation of the Parzen estimator in OPTUNA see Ref. [57]. ### A comparison between HPO, grid and random searches We compare grid and random searches against Bayesian optimization. To this end, let us consider the Himmelblau function, a widely used benchmark in optimization and ML, as the objective function to be minimized. This function is shown in Figure 3 and is defined as follows: \[f(x,y)=(x^{2}+y-11)^{2}+(x+y^{2}-7)^{2}.\] We treat the Himmelblau function as if it represented the validation error of some ML model, where each input of the function represents a set of hyperparameters of the algorithm and the output is the resulting error. The objective is to obtain a set of values that minimizes the function. We show results for three methods: grid and random searches, and Bayesian optimization. For each method, 100 evaluations were used. We then assess which one performed best. Figure 4 shows the search patterns resulting from each approach. After 100 trials, Bayesian optimization found the lowest value among the three methods (recall that the four global minima are at \(f=0\)), with \(f=0.09\), while random search found \(f=1.71\) and grid search found \(f=3.68\). As can be seen from Fig. 4, Bayesian optimization has the advantage of exploring the search space adaptively, leading to faster convergence when compared to grid or random searches. This is so because neither grid nor random search keeps evidence from older trials, which makes them less effective than a Bayesian approach for this task. This example showcases the key features of Bayesian optimization: efficiency (i.e., a low number of trials to reach a minimum), adaptive search, and a relatively low computational effort. ## IV Hyper-optimized hp-greedy reduced bases for gravitational waves In this section we apply hp-greedy to build accurate and low-dimensional representations of gravitational waves, optimizing the search of hyperparameters with a Bayesian approach, as described in the previous section. Figure 3: Plot of the Himmelblau function. It has 4 global minima, at \((-3.78,-3.28)\), \((-2.81,3.13)\), \((3.58,-1.85)\) and \((3,2)\), all with the same value \(f(x,y)=0\). ### Physical Setup The waveforms used to train hp-greedy and perform HPO were obtained from NRHybSur3dq8 [58]. This is a surrogate model for hybridized non-precessing numerical relativity and post-Newtonian waveforms within the parameter range of mass ratios \(1\leq q\leq 8\) and dimensionless spins \(-0.8\leq\chi_{1z},\chi_{2z}\leq 0.8\). We focus in this work on the dominant angular mode \(l=m=2\) of the waveforms, which we sampled in the late inspiral and merger phases, with \(t\in[-2750,100]M\) and a time step of \(\Delta t=0.1M\). Additionally, we normalized the waveforms with respect to the \(\ell_{2}\) norm to emphasize structural characteristics rather than size or amplitude. In this paper we focus on two distinct cases: * 1D Case: This scenario involves no spin, where the sole free parameter is the mass ratio, \(q:=m_{1}/m_{2}\). * 2D Case: Two spins aligned in the same direction and with equal magnitudes, \(\chi_{1z}=\chi_{2z}\), are added to the 1D scenario. 
In both cases we generated multiple training sets of different sizes, as shown in Figure 6. As error metric for the reduced basis representation \(\tilde{h}_{\lambda}(t)\) of a waveform \(h_{\lambda}(t)\) labeled by the parameter \(\lambda\), we use the maximum over the parameter space of the squared \(\ell_{2}\) norm, \[\epsilon:=\max_{\lambda}\left\|h_{\lambda}(\cdot)-\tilde{h}_{\lambda}(\cdot) \right\|^{2}, \tag{5}\] where \(\tilde{h}_{\lambda}(t):=\mathcal{P}h_{\lambda}(t)\) is the orthogonal projection of \(h_{\lambda}(t)\) onto the span of the basis. For the quadrature involved in the computation of the norm, we use Riemann's rule. ### Optimization Methods Compared In Section III.4 we compared the TPE algorithm with random and grid searches. Here we benchmark these methods in the context of gravitational waves using a small dataset in the 1D case. We used 400 waveforms for training and 800 for validation, all equally spaced in the 1D parameter domain (\(1<q<8\)). The hyperparameters being optimized were \(l_{max}\) (with \(0\leq l_{max}\leq 7\)) and the seed \(\hat{\Lambda}_{0}\), giving rise to a search space with 3,200 different configurations: 400 different seeds and 8 possible \(l_{max}\) values. Since we have a discrete search space, we can go through all the different configurations with one grid search. We divide the comparison into two parts: 1. The speed of convergence of TPE compared to random search, and how consistent it is through multiple runs. 2. The time difference between grid search and one run of the TPE optimization. Figure 4: Optimizations with grid search (left), random search (center) and Bayesian optimization (right). Contours represent level curves of the Himmelblau function, the blue dots the positions of the different evaluations, and the red crosses the best trial of each case. As can be seen, Bayesian optimization tends to concentrate its trials around the minima. In Figure 5 we show the results of running 150 optimizations for TPE and random search: each point is the median of the best validation error found over the 150 optimizations at a given time, and the shaded area represents the interquartile range, from Q1 to Q3 (the 0.25 and 0.75 quantiles, respectively). The black dashed lines show the best error found with grid search. We can see that the TPE curve is always below the random one, and that the shaded area shrinks drastically around 70 seconds of optimization for the TPE method. The latter shows that TPE consistently finds good results after this point. The optimum value found by grid search was a validation error of \(1.23\times 10^{-7}\). This value was found in 19% of the TPE runs, while the other 81% found the second-best value, \(1.59\times 10^{-7}\). On the other hand, none of the random search runs was able to find the optimum value, and only 10% found the second-best value of \(1.59\times 10^{-7}\). These results show that TPE can find better results than random search, in less time, and more consistently. Grid search, with its 3,200 configurations, took 9.8 hours to complete, while 50 iterations of the TPE optimization took about 5 minutes. _This is a difference of two orders of magnitude: a speedup factor of about \(117\times\)_. In a more realistic optimization task, e.g. using \(5,000\) waveforms for training and \(10,000\) for validation, and a total of \(50,000\) hyperparameter configurations, 100 iterations of TPE would take around 16 hours. Using these values we can estimate that a grid search would take, in contrast, around \(8,000\) hours, or _11 months to complete_. 
Thus grid search is not a viable method for finding an optimal configuration in any realistic scenario. All of these runs were performed on the Serafin cluster from CCAD-UNC 3, where each node consists of two 200 W AMD EPYC 7532 processors, each with 32 Zen2 cores, and 128 GiB of DDR4-3200 RAM. Footnote 3: Full details of the Serafin cluster at [https://ccad.unc.edu.ar/equipamiento/cluster-serafin/](https://ccad.unc.edu.ar/equipamiento/cluster-serafin/). ### Optimized hp-greedy reduced bases versus global ones Here we present our results of Bayesian HPO for hp-greedy bases of the gravitational waveform setup described in Section IV.1, for the hyperparameters \[\{\hat{\Lambda}_{0}\,,l_{max}\},\] with fixed maximum dimensionalities \(n_{max}\) of each reduced basis. The accuracy threshold in all cases is chosen to be \(\epsilon_{tol}=10^{-7}\), which is the systematic error of the underlying model, NRHybSur3dq8 [58]. The _learning curves_ of Figure 6 show the validation error achieved by each optimized hp-greedy reduced basis; the intent of these plots is to determine when a training set is dense enough. For example, in the 1D case around \(2,000\) training samples are enough, while in the 2D case this number grows to \(\sim 15,000\); the latter is much smaller than \(2,000\times 2,000\), which shows that there is increased redundancy in the waveforms as the dimensionality grows. Figure 5: Evolution of the best validation error found for TPE and random search with 400 waveforms for training and 800 for validation. The dashed lines represent the median of the best error found for 150 optimizations at a given time, while the shaded area indicates the interquartile range, from Q1 to Q3 (0.25 and 0.75). The black line depicts the optimum error found in the grid search. We are interested in the smallest \(n_{max}\) for each case, since this implies the fastest online waveform evaluation and data analysis in general: these are \(n_{max}=3,10\) for the 1D and 2D cases, respectively. When compared to global bases, the hyperparameter-optimized hp-greedy bases are \(4-5\) times smaller, which should translate into a \((4-5)\times\) speedup both in waveform evaluation and, most importantly, in parameter estimation. ## V Discussion In this paper we continued our work on local, unstructured reduced bases for gravitational waves using hp-greedy refinement, with the aim of accelerating both waveform predictions and inference (especially parameter estimation). In Ref. [1] we found that there are new hyperparameters to be optimized, which do not appear in global reduced basis. As usual in the ML context, parameters are learned from training, while hyperparameters remain fixed during each training stage. Figure 6: Left and right panels: 1D and 2D learning curves for gravitational waves. Each curve represents the validation error of hyperparameter-optimized hp-greedy reduced bases for a fixed \(n_{max}\) and varying training sample sizes. Each dot represents an hp-greedy basis optimized with respect to \(l_{max}\) and \(\hat{\Lambda}_{0}\). The dashed horizontal black line represents the value of \(\epsilon_{tol}=10^{-7}\). Figure 7: Test errors comparing global reduced bases with local, hp-greedy ones. The resulting structure of hp-greedy reduced bases is that of a binary tree. In our simulations, though limited in size, we have empirically found that the trees of hp-greedy refinement end up being nearly balanced. 
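The next paragraphs describe how such a (nearly balanced) tree is queried online. As a preview, here is a minimal traversal sketch consistent with the toy tree layout of the code shown in Section II; again, this is an illustration under our assumptions, not the data structure actually used in [1]:

```
def find_leaf(node, lam):
    """Descend from the root to the leaf whose subdomain contains lam.

    For a balanced tree with n leaves this costs O(log n) comparisons,
    versus O(n) for a direct search over all local bases.
    """
    while node["children"] is not None:
        left, right = node["children"]
        node = left if lam <= node["anchor"] else right
    return node["basis"]  # the local reduced basis to use for lam
```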
When a representation is needed for a given parameter, the corresponding local basis can be searched for in the representation tree in an efficient way, avoiding the computational cost of a direct/naive search. To do so, two sequential steps are needed: i) find the subspace containing the local reduced basis, and ii) use that basis for the representation. The search uses \(\lambda\) as input and the hp-greedy tree structure to traverse the tree from the root to a leaf node which contains the queried value of \(\lambda\) in its subspace (see equation (4.13) of [44] for more details). The advantage of this approach is the low computational cost of finding the required subspace; for example, if there are \(n\) subspaces and the tree is balanced, the computational cost is of order \(\mathcal{O}(\log\,n)\). There are several stopping criteria for subdividing the parameter domain and avoiding overfitting. In this work we have taken them to be \(n_{max}\) (the maximum dimensionality of each local reduced basis), \(l_{max}\) (the maximum depth of the tree) and the error threshold \(\epsilon_{tol}\); in practice the latter should not be smaller than the underlying systematic error of the data. Our results show that using a Bayesian approach is a promising path to HPO. Nonetheless, there are other alternatives, such as evolutionary programming or genetic algorithms, which were left out of the scope of this paper and should be analyzed in future work. In conjunction with the computations outlined in the paper, we have made the corresponding code available on GitHub. These repositories contain the codebase for the Bayesian optimization [59] and the hp-greedy models [60] used in the paper. ## VI Acknowledgments This work was partially supported by CONICET and by project PICT-2021-00757, Argentina. It used computational resources from CCAD - Universidad Nacional de Cordoba ([https://ccad.unc.edu.ar/](https://ccad.unc.edu.ar/)), which are part of SNCAD - MinCyT, Republica Argentina. MT thanks the Horace Hearne Institute for Theoretical Physics at LSU for its hospitality during the conference "Workshop on Gravity: classical, quantum, theoretical and experimental" in March 2023, where part of this work was done.
2302.01327
Dual PatchNorm
We propose Dual PatchNorm: two Layer Normalization layers (LayerNorms), before and after the patch embedding layer in Vision Transformers. We demonstrate that Dual PatchNorm outperforms the result of exhaustive search for alternative LayerNorm placement strategies in the Transformer block itself. In our experiments, incorporating this trivial modification often leads to improved accuracy over well-tuned Vision Transformers and never hurts.
Manoj Kumar, Mostafa Dehghani, Neil Houlsby
2023-02-02T18:56:25Z
http://arxiv.org/abs/2302.01327v3
# Dual PatchNorm ###### Abstract We propose Dual PatchNorm: two Layer Normalization layers (LayerNorms), before and after the patch embedding layer in Vision Transformers. We demonstrate that Dual PatchNorm outperforms the result of exhaustive search for alternative LayerNorm placement strategies in the Transformer block itself. In our experiments, incorporating this trivial modification often leads to improved accuracy over well-tuned Vision Transformers and never hurts. ## 1 Introduction Layer Normalization (Ba et al., 2016) is key to Transformer's success in achieving both stable training and high performance across a range of tasks. Such normalization is also crucial in Vision Transformers (ViT) (Dosovitskiy et al., 2020), which closely follow the standard recipe of the original Transformer model. Following the "pre-LN" strategy in Baevski & Auli (2019) and Xiong et al. (2020), ViTs place LayerNorms before the self-attention layer and MLP layer in each Transformer block. We explore the following question: Can we improve ViT models with a different LayerNorm ordering? First, across five ViT architectures on ImageNet-1k (Russakovsky et al., 2015), we demonstrate that an exhaustive search of LayerNorm placements between the components of a Transformer block does not improve classification accuracy. This indicates that the pre-LN strategy in ViT is close to optimal. Our observation also applies to other alternate LayerNorm placements: NormFormer (Shleifer et al., 2021) and Sub-LN (Wang et al., 2022), which, in isolation, do not improve over strong ViT classification models. Second, we make an intriguing observation: placing additional LayerNorms before and after the standard ViT-projection layer, which we call Dual PatchNorm (DPN), can improve significantly over well-tuned ViT baselines. Experiments across three different datasets with varying numbers of examples demonstrate the efficacy of DPN. Interestingly, our qualitative experiments show that the LayerNorm scale parameters upweight the pixels at the center and corners of each patch. ```
hp, wp = patch_size[0], patch_size[1]
x = einops.rearrange(
    x, "b (ht hp) (wt wp) c -> b (ht wt) (hp wp c)", hp=hp, wp=wp)
x = nn.LayerNorm(name="ln0")(x)
x = nn.Dense(output_features, name="dense")(x)
x = nn.LayerNorm(name="ln1")(x)
``` Dual PatchNorm consists of a two-line change to the standard ViT-projection layer. ## 2 Related Work Kim et al. (2021) add a LayerNorm after the patch embedding and show that this improves the robustness of ViT against corruptions on small-scale datasets. Xiao et al. (2021) replace the standard Transformer stem with a small number of stacked stride-two \(3\times 3\) convolutions with batch normalizations and show that this improves the sensitivity to optimization hyperparameters and the final accuracy. Xu et al. (2019) analyze LayerNorm and show that the derivatives of the mean and variance contribute more to final performance than forward normalization does. Beyer et al. (2022a) consider Image-LN and Patch-LN as alternative strategies to efficiently train a single model for different patch sizes. Wang et al. (2022) add extra LayerNorms before the final dense projection in the self-attention block and the non-linearity in the MLP block, with a different initialization strategy. Shleifer et al. (2021) instead propose extra LayerNorms after the final dense projection in the self-attention block, with a LayerNorm after the non-linearity in the MLP block. 
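For concreteness, the snippet above can be expanded into a self-contained module. The following sketch assumes Flax's linen API and the einops package, with illustrative names and shapes (`PatchEmbedDPN`, `output_features`); it is not the authors' released code.

```
import einops
import flax.linen as nn
import jax
import jax.numpy as jnp

class PatchEmbedDPN(nn.Module):
    """Patch embedding with Dual PatchNorm: LN before and after the projection."""
    patch_size: tuple  # (hp, wp)
    output_features: int

    @nn.compact
    def __call__(self, x):
        hp, wp = self.patch_size
        # Rearrange the image into a sequence of flattened patches.
        x = einops.rearrange(
            x, "b (ht hp) (wt wp) c -> b (ht wt) (hp wp c)", hp=hp, wp=wp)
        x = nn.LayerNorm(name="ln0")(x)                    # LN on raw patches
        x = nn.Dense(self.output_features, name="dense")(x)
        x = nn.LayerNorm(name="ln1")(x)                    # LN on visual tokens
        return x

# Example: embed a batch of two 224x224 RGB images with 16x16 patches.
module = PatchEmbedDPN(patch_size=(16, 16), output_features=768)
params = module.init(jax.random.PRNGKey(0), jnp.ones((2, 224, 224, 3)))
tokens = module.apply(params, jnp.ones((2, 224, 224, 3)))  # shape (2, 196, 768)
```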
## 3 Background ### Patch Embedding Layer in Vision Transformer Vision Transformers (Dosovitskiy et al., 2020) consist of a patch embedding layer (PE) followed by a stack of Transformer blocks. The PE layer first rearranges the image \(x\in\mathcal{R}^{H\times W\times 3}\) into a sequence of patches \(x_{p}\in\mathcal{R}^{\frac{HW}{P^{2}}\times P^{2}}\) where \(P\) denotes the patch size. It then projects each patch independently with a dense projection to constitute a sequence of "visual tokens" \(\mathbf{x}_{\mathbf{t}}\in\mathcal{R}^{\frac{HW}{P^{2}}\times D}\). \(P\) controls the trade-off between the granularity of the visual tokens and the computational cost in the subsequent Transformer layers. ### Layer Normalization Given a sequence of \(N\) patches \(\mathbf{x}\in\mathcal{R}^{N\times D}\), LayerNorm as applied in ViTs consists of two operations: \[\mathbf{x} =\frac{\mathbf{x}-\mu(x)}{\sigma(x)} \tag{1}\] \[\mathbf{y} =\gamma\mathbf{x}+\beta \tag{2}\] where \(\mu(x)\in\mathcal{R}^{N},\sigma(x)\in\mathcal{R}^{N},\gamma\in\mathcal{R}^{D}, \beta\in\mathcal{R}^{D}\). First, Eq. 1 normalizes each patch \(\mathbf{x}_{\mathbf{i}}\in\mathcal{R}^{D}\) of the sequence to have zero mean and unit standard deviation. Then, Eq. 2 applies learnable shifts and scales \(\beta\) and \(\gamma\) which are shared across all patches. ## 4 Methods ### Alternate LayerNorm placements Following Baevski & Auli (2019) and Xiong et al. (2020), ViTs incorporate LayerNorm before every self-attention and MLP layer, commonly known as the pre-LN strategy. For each of the self-attention and MLP layers, we evaluate 3 strategies: place LayerNorm before (pre-LN), after (post-LN), or before and after (pre+post-LN), leading to nine different combinations. ### Dual PatchNorm Instead of adding LayerNorms to the Transformer block, we propose to apply LayerNorms in the stem alone, both before and after the patch embedding layer. In particular, we replace \[\mathbf{x}=\text{PE}(\mathbf{x}) \tag{3}\] with \[\mathbf{x}=\text{LN}(\text{PE}(\text{LN}(\mathbf{x}))) \tag{4}\] and keep the rest of the architecture fixed. We call this Dual PatchNorm (DPN). ## 5 Experiments ### Setup We train ViT architectures (with and without DPN) in a supervised fashion on 3 different datasets with varying numbers of examples: ImageNet-1k (1M), ImageNet-21k (21M) and JFT (4B) (Zhai et al., 2022a). In our experiments, we apply DPN directly on top of the baseline ViT recipes without additional hyperparameter tuning. We split the ImageNet train set into a train and a validation split, and use the validation split to arrive at the final DPN recipe. ImageNet-1k: We train 5 architectures: Ti/16, S/16, S/32, B/16 and B/32, using the AugReg (Steiner et al., 2022) recipe for 93,000 steps with a batch size of 4096, and report the accuracy on the official ImageNet validation split, as is standard practice. We additionally evaluate an S/16 baseline (S/16+) with more optimal hyperparameters on ImageNet (Beyer et al., 2022b). ImageNet-21k: We adopt a similar setup as in ImageNet-1k. We report ImageNet 25-shot accuracies in two training regimes: 93K and 930K steps. JFT: We evaluate the ImageNet 25-shot accuracies of 3 variants (B/32, B/16 and L/16) in 2 training regimes (220K and 1.1M steps) with a batch size of 4096. In this setup, we do not use any additional data augmentation or mixup regularization. On ImageNet-1k, we report the \(95\%\) confidence interval across 3 independent runs. 
On ImageNet-21k and JFT, because of the expensive training runs, we train each model once and report the mean 25-shot accuracy with a \(95\%\) confidence interval across 3 random seeds. ### DPN versus alternate LayerNorm placements Each Transformer block in ViT consists of a self-attention (SA) and an MLP layer. Following the pre-LN strategy (Xiong et al., 2020), LN is inserted before both the SA and MLP layers. We first show that the default pre-LN strategy in ViT models is close to optimal by evaluating alternate LN placements on ImageNet-1k. We then contrast this with the performance of NormFormer, Sub-LN and DPN. For each SA and MLP layer, we evaluate three LN placements: Pre, Post and Pre+Post, which leads to nine total LN placement configurations. Additionally, we evaluate the LayerNorm placements in NormFormer (Shleifer et al., 2021) and Sub-LN (Wang et al., 2022), which add additional LayerNorms within each of the self-attention and MLP layers in the Transformer block. Figure 1: The plot displays the accuracy gains of different LayerNorm placement strategies over the default pre-LN strategy. Each blue point (**Other LN placement**) corresponds to a different LN placement in the Transformer block. None of the placements outperform the default Pre-LN strategy on ImageNet-1k (Russakovsky et al., 2015). Applying DPN (black cross) provides consistent improvements across all 5 architectures. Figure 1 shows that none of the placements significantly outperform the default pre-LN strategy, indicating that it is close to optimal. NormFormer provides some improvements on ViT models with a patch size of 32. DPN, on the other hand, provides consistent improvements across all 5 architectures. ### Comparison to ViT In Table 1 (left), DPN improves the accuracy of B/16, the best ViT model, by 0.7, while S/32 obtains the largest accuracy gain of 1.9. The average gain across all architectures is 1.4. DPN improves all architectures trained on ImageNet-21k (Table 1, right) and JFT (Table 2) in the shorter training regimes, with average gains of 1.7 and 0.8 respectively. In the longer training regimes, DPN improves the accuracy of the best-performing architectures on JFT and ImageNet-21k by 0.5 and 0.4, respectively. In three cases, Ti/16 and S/16 on ImageNet-21k and B/16 on JFT, DPN matches or leads to marginally worse results than the baseline. Nevertheless, across a large fraction of ViT models, simply employing DPN out-of-the-box on top of well-tuned ViT baselines leads to significant improvements. ### Finetuning with DPN We finetune four models trained on JFT-4B at two resolutions on ImageNet-1k: (B/32, L/16) \(\times\) (220K, 1.1M) steps, at resolutions \(224\times 224\) and \(384\times 384\). On B/32, we observe a consistent improvement across all configurations. On L/16, the baselines without DPN match the transfer performance with DPN. ### Contrastive Learning We adopt the LiT contrastive-learning setup (Zhai et al., 2022b) and evaluate models trained with DPN on zero-shot ImageNet accuracy. We evaluate 4 frozen image encoders: 2 architectures (B/32 and L/16) trained with 2 schedules (220K and 1.1M steps). We reuse standard hyperparameters and train only the text encoder, using a contrastive loss, for 55,000 steps with a batch size of 16,384. Table 3 shows that on B/32, DPN improves over the baselines in both setups, while on L/16 DPN provides an improvement when the image encoder is trained with the shorter training schedule. 
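A small aside before the ablations that follow, which toggle the two LayerNorm operations of Eqs. (1) and (2) independently: here is a plain NumPy sketch of those two steps (our illustration; the epsilon placement is a simplification of typical implementations).

```
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-6):
    """Eqs. (1)-(2): per-token standardization plus learnable scale/shift.

    x: (N, D) sequence of patches/tokens; gamma, beta: (D,) shared params.
    The "No learnable" ablation keeps only the first step; the
    "Only learnable" ablation keeps only the second.
    """
    mu = x.mean(axis=-1, keepdims=True)     # per-token mean, shape (N, 1)
    sigma = x.std(axis=-1, keepdims=True)   # per-token std, shape (N, 1)
    x_hat = (x - mu) / (sigma + eps)        # Eq. (1): normalization
    return gamma * x_hat + beta             # Eq. (2): learnable affine map
```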
\begin{table} \begin{tabular}{c c c} \hline \hline Arch & Base & DPN \\ \hline \hline \multicolumn{3}{c}{93K Steps} \\ \hline Ti/16 & \(52.2\pm 0.07\) & \(\mathbf{53.6}\pm 0.07\) \\ S/32 & \(54.1\pm 0.03\) & \(\mathbf{56.7}\pm 0.03\) \\ B/32 & \(60.9\pm 0.03\) & \(\mathbf{63.7}\pm 0.03\) \\ S/16 & \(64.3\pm 0.15\) & \(\mathbf{65.0}\pm 0.06\) \\ B/16 & \(70.8\pm 0.09\) & \(\mathbf{72.0}\pm 0.03\) \\ \hline \multicolumn{3}{c}{930K Steps} \\ \hline Ti/16 & \(\mathbf{61.0}\pm 0.03\) & \(\mathbf{61.2}\pm 0.03\) \\ S/32 & \(63.8\pm 0.00\) & \(\mathbf{65.1}\pm 0.12\) \\ B/32 & \(72.8\pm 0.03\) & \(\mathbf{73.1}\pm 0.07\) \\ S/16 & \(\mathbf{72.5}\pm 0.1\) & \(\mathbf{72.5}\pm 0.1\) \\ B/16 & \(78.0\pm 0.06\) & \(\mathbf{78.4}\pm 0.03\) \\ \hline \hline \end{tabular} \end{table} Table 1: **Left:** ImageNet-1k validation accuracies of five ViT architectures with and without Dual PatchNorm after 93,000 steps. **Right:** We train ViT models on ImageNet-21k in two training regimes: 93K and 930K steps with a batch size of 4096. The table shows their ImageNet 25-shot accuracies with and without Dual PatchNorm. ### Ablations and Analysis Is normalizing both the inputs and outputs of the embedding layer optimal? In Eq. 4, DPN applies LN to both the inputs and outputs of the embedding layer. We assess three alternate strategies: **Pre**, **Post** and **Post PosEmb** (Radford et al., 2021). **Pre** applies LayerNorm only to the inputs, **Post** only to the outputs and **Post PosEmb** to the outputs after being summed with positional embeddings. Table 4 displays the accuracy changes for these alternate strategies. **Pre** is unstable on B/32, leading to a significant drop in accuracy. Additionally, **Pre** obtains minor drops in accuracy on S/32 and Ti/16. **Post** and **Post PosEmb** achieve worse performance on the smaller models B/32, S/32 and Ti/16. Our experiments show that applying LayerNorm to both inputs and outputs of the embedding layer is necessary to obtain consistent improvements in accuracy across all ViT variants. Normalization vs Learnable Parameters: As seen in Sec. 3.2, LayerNorm constitutes a normalization operation followed by learnable scales and shifts. We also ablate the effect of each of these operations in DPN. Applying only learnable scales and shifts without normalization leads to a significant decrease in accuracy across all architectures (see **Only learnable** in Table 4). \begin{table} \begin{tabular}{c c c c c c} \hline \hline & B/16 & S/16 & B/32 & S/32 & Ti/16 \\ \hline Pre & -0.1 & 0.0 & -2.6 & -0.2 & -0.3 \\ Post & 0.0 & -0.2 & -0.5 & -0.7 & -1.1 \\ Post PosEmb & 0.0 & -0.1 & -0.4 & -0.9 & -1.1 \\ \hline Only learnable & -0.8 & -0.9 & -1.2 & -1.6 & -1.6 \\ RMSNorm & 0.0 & -0.1 & -0.4 & -0.5 & -1.7 \\ No learnable & -0.5 & 0.0 & -0.2 & -0.1 & -0.1 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablations of various components of DPN. **Pre:** LayerNorm only to the inputs of the embedding layer. **Post:** LayerNorm only to the outputs of the embedding layer. **No learnable:** Per-patch normalization without learnable LayerNorm parameters. **Only learnable:** Learnable scales and shifts without standardization. Additionally, removing the 
learnable parameters leads to unstable training on B/16 (**No learnable** in Table 4). Finally, removing the centering and bias parameters, as done in RMSNorm (Zhang & Sennrich, 2019), reduces the accuracy of B/32, S/32 and Ti/16. We conclude that while both normalization and learnable parameters contribute to the success of DPN, normalization has a higher impact. \begin{table} \begin{tabular}{c c c} \hline \hline Arch & Base & DPN \\ \hline \multicolumn{3}{c}{220K steps} \\ \hline B/32 & \(63.8\pm 0.03\) & \(\mathbf{65.2}\pm 0.03\) \\ B/16 & \(72.1\pm 0.09\) & \(\mathbf{72.4}\pm 0.07\) \\ L/16 & \(77.3\pm 0.00\) & \(\mathbf{77.9}\pm 0.06\) \\ \hline \multicolumn{3}{c}{1.1M steps} \\ \hline B/32 & \(70.7\pm 0.1\) & \(\mathbf{71.1}\pm 0.09\) \\ B/16 & \(\mathbf{76.9}\pm 0.03\) & \(76.6\pm 0.03\) \\ L/16 & \(80.9\pm 0.03\) & \(\mathbf{81.4}\pm 0.06\) \\ \hline \hline \end{tabular} \begin{tabular}{c c c c c} \hline \hline Arch & Resolution & Steps & Base & DPN \\ \hline B/32 & \(224\) & \(220\)K & \(77.6\pm 0.06\) & \(\mathbf{78.3}\pm 0.00\) \\ B/32 & \(384\) & \(220\)K & \(81.3\pm 0.09\) & \(\mathbf{81.6}\pm 0.00\) \\ B/32 & \(224\) & \(1.1\)M & \(80.8\pm 0.1\) & \(\mathbf{81.3}\pm 0.00\) \\ B/32 & \(384\) & \(1.1\)M & \(83.8\pm 0.03\) & \(\mathbf{84.1}\pm 0.00\) \\ \hline \hline L/16 & \(224\) & \(220\)K & \(\mathbf{84.6}\pm 0.06\) & \(\mathbf{84.5}\pm 0.13\) \\ L/16 & \(384\) & \(220\)K & \(\mathbf{86.4}\pm 0.00\) & \(\mathbf{86.4}\pm 0.03\) \\ L/16 & \(224\) & \(1.1\)M & \(\mathbf{86.6}\pm 0.03\) & \(\mathbf{86.5}\pm 0.13\) \\ L/16 & \(384\) & \(1.1\)M & \(\mathbf{87.8}\pm 0.08\) & \(\mathbf{87.9}\pm 0.01\) \\ \hline \hline \end{tabular} \end{table} Table 2: **Left:** We train 3 ViT models on JFT-4B in two training regimes: 220K and 1.1M steps with a batch size of 4096. The table displays their ImageNet 25-shot accuracies with and without DPN. **Right:** Corresponding full finetuning results on ImageNet-1k. \begin{table} \begin{tabular}{c c c c} \hline \hline Arch & Steps & Base & DPN \\ \hline B/32 & \(220\)K & \(61.9\pm 0.12\) & \(\mathbf{63.0}\pm 0.09\) \\ B/32 & \(1.1\)M & \(67.4\pm 0.07\) & \(\mathbf{68.0}\pm 0.09\) \\ L/16 & \(220\)K & \(75.0\pm 0.11\) & \(\mathbf{75.4}\pm 0.00\) \\ L/16 & \(1.1\)M & \(\mathbf{78.7}\pm 0.05\) & \(\mathbf{78.7}\pm 0.1\) \\ \hline \hline \end{tabular} \end{table} Table 3: Zero-shot ImageNet accuracy in the LiT (Zhai et al., 2022b) contrastive learning setup. ### Visualizing Scale Parameters Note that the first LayerNorm in Eq. 4 is applied directly on patches, that is, to raw pixels. Thus, the learnable parameters (biases and scales) of the first LayerNorm can be visualized directly in pixel space. Fig. 2 shows the scales of our smallest and largest models: Ti/16 trained on ImageNet for 90,000 steps and L/16 trained on JFT for 1.1M steps, respectively. Since the absolute magnitude of the scale parameters varies across the R, G and B channels, we visualize the scales separately for each channel. Interestingly, for both models the scale parameters increase the weight of the pixels in the center of the patch and at the corners. #### Acknowledgments We would like to thank Lucas Beyer for detailed feedback on the draft and help with implementation details. We thank Xiaohua Zhai, Abhishek Kumar, Jonathan Heek, Alexander Kolesnikov and Alexey Gritsenko for fruitful discussions throughout the project. We additionally thank Ross Wightman and Boris Dayma for suggesting the post positional embedding LayerNorm and RMSNorm ablations.
2301.00569
Elias Ideals
Let $(R, \mathfrak m)$ be a one dimensional local Cohen-Macaulay ring. An $\mathfrak m$-primary ideal $I$ of $R$ is Elias if the types of $I$ and of $R/I$ are equal. Canonical and principal ideals are Elias, and Elias ideals are closed under inclusion. We give multiple characterizations of Elias ideals and concrete criteria to identify them. We connect Elias ideals to other well-studied definitions: Ulrich, $\mathfrak m$-full, integrally closed, trace ideals, etc. Applications are given regarding canonical ideals, conductors and the Auslander index.
Hailong Dao
2023-01-02T09:02:57Z
http://arxiv.org/abs/2301.00569v1
# Elias ideals ###### Abstract. Let \((R,\mathfrak{m})\) be a one dimensional local Cohen-Macaulay ring. An \(\mathfrak{m}\)-primary ideal \(I\) of \(R\) is Elias if the types of \(I\) and of \(R/I\) are equal. Canonical and principal ideals are Elias, and Elias ideals are closed under inclusion. We give multiple characterizations of Elias ideals and concrete criteria to identify them. We connect Elias ideals to other well-studied definitions: Ulrich, \(\mathfrak{m}\)-full, integrally closed, trace ideals, etc. Applications are given regarding canonical ideals, conductors and the Auslander index. 2020 Mathematics Subject Classification: Primary: 13D02, 13H10. Secondary: 14B99 ## Introduction Let \((R,\mathfrak{m})\) be a local Cohen-Macaulay ring of dimension one and \(I\) be an \(\mathfrak{m}\)-primary ideal of \(R\). We say that \(I\) is Elias if the Cohen-Macaulay types of \(I\) and \(R/I\) coincide. From standard facts, principal ideals and canonical ideals are Elias, and we will soon see that this property begets a rather rich and interesting theory. Our work is heavily influenced by a nice result in [7], where Elias proves that any ideal \(\omega\) that lies inside a high enough power of \(\mathfrak{m}\) and such that \(R/\omega\) is Gorenstein must be a canonical ideal. Although not stated explicitly there, the proof shows that any ideal that lies in a high enough power of \(\mathfrak{m}\) is Elias, in our sense. Another inspiration for the present work is [2], where De Stefani studies, in our language, powers of \(\mathfrak{m}\) that are Elias in a Gorenstein local ring, and gives a counter-example to a conjecture by Ding (see Section 4 for the precise connection). In this note, we study Elias ideals in depth. They admit many different characterizations, and enjoy rather useful properties. For instance, they are **closed under inclusion**, and principal or canonical ideals are Elias. On the other hand, conductor ideals and regular trace ideals are not Elias. When \(R\) is Gorenstein, they are precisely the ideals \(I\) such that the Auslander \(\delta\)-invariant \(\delta(R/I)\) equals \(1\). We are able to obtain many criteria to check whether an ideal is Elias, using very accessible information such as the minimal number of generators or their valuations. Combining them immediately gives sharp bounds and information on conductor or canonical ideals, which can be tricky to obtain otherwise. There are several obvious ways to extend the present definitions and results to rings of higher dimension or to modules. However, we choose to focus here on ideals in the one-dimensional case, as they are already interesting enough, and also to keep the paper short. We hope to address the more general theory in future works. We now briefly describe the structure and key results of the paper. * In Section 1 we give the formal definition of Elias ideals and prove several key results. Theorem 1.2 contains several equivalent characterizations of Elias ideals. Corollary 1.3 collects important consequences, for instance that Elias ideals are closed under ideal containment. Also, criteria for Elias ideals using colon ideals are given. Next, Proposition 1.4 establishes the fundamental change of rings result that is used frequently in the sequel. * Section 2 connects Elias ideals to several well-studied classes of ideals: Ulrich ideals, \(\mathfrak{m}\)-full ideals, full ideals, integrally closed ideals, etc. 
After some basic observations (2.3, 2.4, 2.5), we give Theorem 2.7 and Proposition 2.14, which contain concrete ways to recognize Elias ideals using basic information such as the number of generators or their valuations. We also derive that conductor ideals and regular trace ideals are not Elias (Corollary 2.13). This indicates one useful application: if we know, for instance, that \(\mathfrak{m}^{2}\) is Elias, then the conductor and any regular trace ideal must contain an element of \(\mathfrak{m}\)-adic order \(1\). * Given the previous section, it is natural to study the Elias index \(\operatorname{eli}(R)\), namely the smallest power of \(\mathfrak{m}\) that is Elias, and we do so in Section 3. The first main result here is Theorem 3.2, connecting this index to the generalized Loewy length and the regularity of the associated graded ring. Next, in Theorem 3.3, we characterize rings with small indices: \(\operatorname{eli}(R)=1\) if and only if \(R\) is regular, while for Gorenstein rings, \(\operatorname{eli}(R)=2\) is equivalent to \(e(R)=2\). We give a large class of non-Gorenstein rings with Elias index \(2\) (3.4). * Lastly, in Section 4 we focus on the special case of Gorenstein rings. In this situation, we observe that Elias ideals are precisely the ones whose quotient has Auslander \(\delta\)-invariant one. This immediately allows us to apply what we have to recover known results about the Auslander invariant and the Auslander index in 4.1 and 4.3. We give a counter-example to a theorem of Ding and also revisit a counter-example to a conjecture of Ding given in [2] (Examples 4.4 and 4.5). **Acknowledgements**: It is a pleasure to thank Juan Elias and Alessandro De Stefani for helpful comments and encouragement. The author is partially supported by the Simons Collaboration Grant FND0077558. ## 1. Elias ideals: definitions and basic results Throughout the paper, let \((R,\mathfrak{m},k)\) be a Cohen-Macaulay local ring of dimension one. For a module \(M\), set \(\operatorname{type}_{R}(M)=\dim_{k}\operatorname{Ext}_{R}^{\dim M}(k,M)\). Set \(Q=Q(R)\) to be the total ring of fractions of \(R\). Set \(e=e(R)\), the Hilbert-Samuel multiplicity of \(R\). For an element \(x\in R\), the \(\mathfrak{m}\)-adic order of \(x\), denoted \(\operatorname{ord}(x)\), is the smallest \(a\) such that \(x\in\mathfrak{m}^{a}\). The order of an ideal \(I\), denoted \(\operatorname{ord}(I)\), is the minimum of the orders of its elements. **Definition 1.1**.: We say that an \(\mathfrak{m}\)-primary ideal \(I\) is an Elias ideal if it satisfies \(\operatorname{type}(I)=\operatorname{type}(R/I)\). **Theorem 1.2**.: _We always have \(\operatorname{type}(I)\geq\operatorname{type}(R/I)\). The following are equivalent._ 1. \(\operatorname{type}(I)=\operatorname{type}(R/I)\)_._ 2. _For any NZD_ \(x\in\mathfrak{m}\)_,_ \(xI:\mathfrak{m}\subseteq(x)\)_._ 3. _For any NZD_ \(x\in\mathfrak{m}\)_,_ \(xI:\mathfrak{m}=x(I:\mathfrak{m})\)_._ 4. _For some NZD_ \(x\in\mathfrak{m}\)_,_ \(xI:\mathfrak{m}\subseteq(x)\)_._ 5. _For some NZD_ \(x\in\mathfrak{m}\)_,_ \(xI:\mathfrak{m}=x(I:\mathfrak{m})\)_._ 6. \(I:_{Q}\mathfrak{m}\subseteq R\)_._ 7. \(K\subseteq\mathfrak{m}(K:_{Q}I)\) _(assuming_ \(R\) _admits a canonical ideal_ \(K\)_)._ Proof.: Let \(x\) be a NZD. 
Then \[\operatorname{type}(I)=\operatorname{type}(I/xI)=\dim_{k}\frac{xI:\mathfrak{ m}}{xI}\geq\dim_{k}\frac{x(I:\mathfrak{m})}{xI}=\dim_{k}\frac{I:\mathfrak{m}}{I}= \operatorname{type}(R/I)\] Thus, \(\operatorname{type}(I)=\operatorname{type}(R/I)\) if and only if \(xI:\mathfrak{m}=x(I:\mathfrak{m})\). Now, \(xI:\mathfrak{m}\subseteq(x)\) is equivalent to \(xI:\mathfrak{m}=xJ\) for some ideal \(J\), as \(x\) is a NZD. Rewriting this as \(xJ\mathfrak{m}\subseteq xI\), which is equivalent to \(J\mathfrak{m}\subseteq I\), we get \(J\subseteq I:\mathfrak{m}\). On the other hand \(x(I:\mathfrak{m})\subseteq xI:\mathfrak{m}\), thus \(J=I:\mathfrak{m}\). That establishes the equivalence of the first five items. Note that for any NZD \(x\in\mathfrak{m}\), \(xI:\mathfrak{m}=x(I:_{Q}\mathfrak{m})\). Thus, (6) is equivalent to (3). Let \(K\) be a canonical ideal. Apply \(\operatorname{Hom}_{R}(-,K)\) to the sequence \(0\to I\to R\to R/I\to 0\); identifying \(\operatorname{Hom}_{R}(I,K)\) with \(K:_{Q}I\), we get \(0\to K\to K:_{Q}I\to\operatorname{Ext}_{R}^{1}(R/I,K)=\omega_{R/I}\to 0\). Since \(\operatorname{type}(I)=\mu(K:_{Q}I)\) and \(\operatorname{type}(R/I)=\mu(\omega_{R/I})\), the equivalence of (7) and (1) follows. **Corollary 1.3**.: _We have:_ 1. _If_ \(I\) _is isomorphic to_ \(R\) _or the canonical module of_ \(R\) _(assuming its existence), then_ \(I\) _is Elias._ 2. _If_ \(I\) _is Elias, then so is_ \(J\) _for any ideal_ \(J\subseteq I\)_. (being Elias is closed under inclusion)_ 3. _Let_ \(K\) _be a canonical ideal of_ \(R\) _and_ \(I\) _be an ideal containing_ \(K\)_. Then_ \(I\) _is Elias if and only if_ \(K\subseteq\mathfrak{m}(K:_{R}I)\)_._ 4. _Let_ \(K\) _be a canonical ideal of_ \(R\) _and_ \(I\) _be an ideal such that_ \(K\subseteq I\)_. Then_ \(K:I\) _is Elias if and only if_ \(K\subseteq\mathfrak{m}I\)_._ 5. _Suppose that_ \(I\) _contains a canonical ideal_ \(K\) _such that_ \(\operatorname{ord}(K)=1\)_. Then_ \(I\) _is Elias if and only if_ \(I=K\)_._ Proof.: For the first claim, \(I:_{Q}\mathfrak{m}\subseteq I:_{Q}I=R\). For the second claim, we have \(J:_{Q}\mathfrak{m}\subseteq I:_{Q}\mathfrak{m}\). For (3), first note that \(K:_{Q}I\subseteq K:_{Q}K=R\), so \(K:_{Q}I=K:_{R}I\), and we can use part (7) of Theorem 1.2. For part (4), note that \(K:(K:I)=I\), hence we can apply part (3). For part (5), we again apply part (3): if \(K\subsetneq I\), then \(\mathfrak{m}(K:_{R}I)\subseteq\mathfrak{m}^{2}\), contradicting \(\operatorname{ord}(K)=1\). The following change of rings result will be used frequently in what follows. **Proposition 1.4**.: _Let \((R,\mathfrak{m})\to(S,\mathfrak{n})\) be a local flat ring extension such that \(\dim S=1\) and \(S\) is Noetherian. Then \(I\) is an Elias ideal of \(R\) if and only if \(IS\) is an Elias ideal of \(S\)._ Proof.: Under these assumptions we have \(\operatorname{type}_{R}(M)\operatorname{type}_{S/\mathfrak{m}S}(S/\mathfrak{ m}S)=\operatorname{type}_{S}(M\otimes_{R}S)\) for any finitely generated \(R\)-module \(M\) (see for instance [11]), thus the result follows. ## 2. Elias ideals and other special ideals **Definition 2.1**.: Let \(I\) be an \(\mathfrak{m}\)-primary ideal. * \(I\) is called Ulrich (as an \(R\)-module) if \(\mu(I)=e(R)\). Assuming \(k\) is infinite, \(I\) is Ulrich if and only if \(xI=\mathfrak{m}I\) for some \(x\in\mathfrak{m}\) (equivalently, for any \(x\in\mathfrak{m}\) such that \(\ell(R/xR)=e(R)\)). * \(I\) is called \(\mathfrak{m}\)-full if \(I\mathfrak{m}:x=I\) for some \(x\in\mathfrak{m}\). 
* \(I\) is called full (or basically full) if \(I\mathfrak{m}:\mathfrak{m}=I\). **Remark 2.2**.: When the definition of special ideals such as Ulrich or \(\mathfrak{m}\)-full ones involves an element \(x\), we say that the property is witnessed by \(x\). Note that being such an \(x\) is a Zariski-open condition (for the image of \(x\) in the vector space \(\mathfrak{m}/\mathfrak{m}^{2}\)). For more on these ideals, see [3, 10, 9, 12]. **Proposition 2.3**.: _Let \(I\) be an \(\mathfrak{m}\)-primary ideal. Let \(e\) be the Hilbert-Samuel multiplicity of \(R\). The following are equivalent._ 1. \(I\) _is Ulrich._ 2. \(\operatorname{type}(I)=e\)_._ Proof.: We can assume \(k\) is infinite by making the flat extension \(R\to R[t]_{(\mathfrak{m},t)}\). Let \(x\in\mathfrak{m}\) be such that \(\ell(R/xR)=e\). Then \(\ell(I/xI)=e\). Note that \(\operatorname{type}(I)=\ell(\operatorname{soc}(I/xI))\leq\ell(I/xI)=e\), and equality happens precisely when \(\mathfrak{m}(I/xI)=0\), in other words, when \(I\) is Ulrich. **Proposition 2.4**.: _Let \(I\) be an \(\mathfrak{m}\)-primary ideal._ 1. _Suppose_ \(k\) _is infinite. If_ \(I\) _is Ulrich, then it is_ \(\mathfrak{m}\)_-full._ 2. _Suppose_ \(k\) _is infinite. If_ \(I\) _is integrally closed, then it is_ \(\mathfrak{m}\)_-full._ 3. _If_ \(I\) _is_ \(\mathfrak{m}\)_-full, then it is full._ Proof.: (1): We can find a NZD \(x\) such that \(Ix=I\mathfrak{m}\), so \(I\mathfrak{m}:x=Ix:x=I\). (2): see [8, Theorem 2.4]. (3): We have \(I\subseteq I\mathfrak{m}:\mathfrak{m}\subseteq I\mathfrak{m}:x\), from which the assertion is clear. **Proposition 2.5**.: _If \(I\) is \(\mathfrak{m}\)-full, witnessed by a NZD \(x\in\mathfrak{m}\), then the following are equivalent:_ 1. \(I\) _is Elias._ 2. \(I=xJ\) _for some Ulrich ideal_ \(J\)_._ Proof.: Assume \(I\) is Elias. Since \(I\) is \(\mathfrak{m}\)-full, witnessed by the NZD \(x\), we have \(I\mathfrak{m}:x=I\). We will show that \(I\subseteq(x)\). If not, then \(I\) contains an element \(s\) whose image in \(R/(x)\) is a nonzero socle element. Thus \(s\mathfrak{m}\subset I\mathfrak{m}\cap(x)=x(I\mathfrak{m}:x)=xI\), so \(s\in xI:\mathfrak{m}\subset(x)\), a contradiction. Since \(I\subseteq(x)\), we must have \(I=xJ\) for some ideal \(J\). We have \(Jx=I=I\mathfrak{m}:x=Jx\mathfrak{m}:x=J\mathfrak{m}\), so \(J\) is Ulrich. Assume (2). Then \(I\) is Ulrich and also full by 2.4, so \(xI:\mathfrak{m}=\mathfrak{m}I:\mathfrak{m}=I=xJ\subset(x)\), thus \(I\) is Elias. **Corollary 2.6**.: _If \(e=2\) and \(k\) is infinite, then \(I\) is Elias if and only if \(I\subseteq(x)\) for some NZD \(x\in\mathfrak{m}\)._ Proof.: Since \(e=2\), any ideal is either principal or Ulrich, and 2.4 together with 2.5 give what we want. **Theorem 2.7**.: _The following hold for an \(\mathfrak{m}\)-primary ideal \(I\)._ 1. _If_ \(\mu(I)<e\) _and_ \(\operatorname{type}(R/I)\geq e-1\)_, then_ \(I\) _is Elias._ 2. _Assume_ \(\mu(\mathfrak{m}I)\leq\mu(I)=e-1\)_. Then_ \(I\mathfrak{m}\) _is Elias and_ \(I\mathfrak{m}:\mathfrak{m}=I\)_._ 3. _Furthermore, assume_ \(R=S/(f)\) _is a hypersurface, here_ \(S\) _is a regular local ring of dimension_ \(2\)_. Let_ \(J\) _be an_ \(S\) _ideal minimally generated by_ \(e\) _elements, one of them being_ \(f\)_. Then_ \(JR\) _is Elias._ Proof.: By the inequality \(\operatorname{type}(I)\geq\operatorname{type}(R/I)\), we must have that \(\operatorname{type}(I)\) is \(e\) or \(e-1\). But if \(\operatorname{type}(I)=e\), then \(\mu(I)=e\) by 2.3, a contradiction. 
Next, we have: \[\operatorname{type}(R/I\mathfrak{m})=\dim_{k}\frac{I\mathfrak{m}:\mathfrak{m} }{I\mathfrak{m}}\geq\dim_{k}\frac{I}{I\mathfrak{m}}=\mu(I)\geq e-1\] and \(I\mathfrak{m}\) is not Ulrich by assumption. So \(I\mathfrak{m}\) is Elias and \(\operatorname{type}(I\mathfrak{m})=e-1\), which by the chain above implies that \(I\mathfrak{m}:\mathfrak{m}=I\). For the last part, let \(I=JR\). Then \(\mu_{R}(I)=e-1\) and \(\operatorname{type}(R/I)=\operatorname{type}(S/J)=e-1\), and we can apply the first part. **Example 2.8**.: Let \(R=k[[t^{4},t^{5},t^{11}]]\cong k[[a,b,c]]/(a^{4}-bc,b^{3}-ac,c^{2}-a^{3}b^{2})\). Then \(\mathfrak{m}^{2}\) is Elias: one can check this directly, or note that \(\mu(\mathfrak{m})=\mu(\mathfrak{m}^{2})=3=e(R)-1\) and use 2.7. But \(\mathfrak{m}^{2}\) is not contained in \((x)\) for any \(x\). **Example 2.9**.: Let \(R=k[[t^{6},t^{7},t^{15}]]\cong k[[a,b,c]]/(a^{5}-c^{2},b^{3}-ac)\). Then the Hilbert function is \(\{1,3,4,5,5,6,\dots\}\), thus \(\mathfrak{m}^{4}\) is Elias. In this case, \(\mathfrak{m}^{4}\subseteq(a)\), so \(\mathfrak{m}^{4}\) is trivially Elias. Let \(R\subset S\) be a finite birational extension. We recall that the conductor of \(S\) in \(R\), denoted \(c_{R}(S)\), is \(R:_{Q(R)}S\). **Proposition 2.10**.: _Let \(R\subset S\) be a finite birational extension. If \(IS=I\) (i.e., \(I\) is an \(S\)-module) and \(I\) is Elias, then \(I:\mathfrak{m}\subseteq c_{R}(S)\)._ Proof.: Let \(Q=Q(R)\). We have \(R\supset I:_{Q}\mathfrak{m}=IS:_{Q}\mathfrak{m}S\supset(I:\mathfrak{m})S\), so \(I:\mathfrak{m}\subseteq R:_{Q}S=c_{R}(S)\) as desired. Note that if \(IS=I\), then \(\operatorname{trace}(I)\subseteq c_{R}(S)\). So naturally, one can ask to extend 2.10 as follows: **Question 2.11**.: If \(I\) is Elias, do we have \(I:\mathfrak{m}\subseteq\operatorname{trace}(I)\)? The answer is no. In the setting of Example 2.8 above, with \(R=k[[t^{4},t^{5},t^{11}]]\cong k[[a,b,c]]/(a^{4}-bc,b^{3}-ac,c^{2}-a^{3}b^{2})\), one can check that \(\operatorname{trace}(\mathfrak{m}^{2})=(a^{2},ab,b^{2},c)\) while \(\mathfrak{m}^{2}:\mathfrak{m}=\mathfrak{m}\). **Corollary 2.12**.: _Suppose \(\mathfrak{m}^{2}\) is Elias (e.g., if \(R\) has minimal multiplicity) and is integrally closed. If \(\mathfrak{m}^{2}\subseteq c_{R}(\overline{R})\) then \(\mathfrak{m}\subseteq c_{R}(\overline{R})\)._ Proof.: Apply 2.10 to \(I=\mathfrak{m}^{2}\). **Corollary 2.13**.: _Assume that the integral closure \(\overline{R}\) is finite. Then the conductor of \(\overline{R}\) in \(R\) is not Elias. A regular trace ideal is not Elias._ Proof.: Let \(\mathfrak{c}=c_{R}(\overline{R})\). Then \(\mathfrak{c}\) is an \(\overline{R}\)-module, so if it were Elias we would have \(\mathfrak{c}:\mathfrak{m}\subseteq\mathfrak{c}\), absurd! Any regular trace ideal must contain \(\mathfrak{c}\), see for instance [3], so it cannot be Elias either, by 1.3. The following is simple but quite useful for constructing Elias ideals from minimal generators of Ulrich ideals. See the examples that follow. **Proposition 2.14**.: _Let \(I\subset J\) be regular ideals with \(J\) Ulrich. Let \(x\in\mathfrak{m}\) be a minimal reduction of \(\mathfrak{m}\). Assume that \(\mathfrak{m}y\not\subseteq xI\) for any minimal generator \(y\) of \(J\). Then \(I\) is Elias._ Proof.: The assumption implies that \(xI:\mathfrak{m}\subseteq\mathfrak{m}J=xJ\subset(x)\). **Example 2.15**.: Let \(R=k[[a_{1},\dots,a_{n}]]/(a_{i}a_{j})_{1\leq i<j\leq n}\). Apply 2.14 with \(J=\mathfrak{m},x=a_{1}+a_{2}+\dots+a_{n}\). 
Note that each element \(f\in\mathfrak{m}\) has the form \(f=\sum\alpha_{i}a_{i}^{s_{i}}\), where the \(\alpha_{i}\) are units or \(0\). Then \(a_{i}f=\alpha_{i}a_{i}^{s_{i}+1}\) and \(xf=\sum\alpha_{i}a_{i}^{s_{i}+1}\). It then follows easily that the condition \(\mathfrak{m}y\not\subseteq xI\) for any minimal generator \(y\) of \(\mathfrak{m}\) is equivalent to \(a_{i}^{2}\notin xI\) for each \(i\), which is equivalent to \(a_{i}\notin I\) for each \(i\). For instance, if \(R=\mathbb{Q}[[a,b,c]]/(ab,bc,ca)\), then \(I=(a-b,b-c)\) is Elias. Since \(R/I=\mathbb{Q}[[a]]/(a^{2})\) is Gorenstein, \(I\) is a canonical ideal.

One can use valuations to construct Elias ideals from part of a minimal generating set of some Ulrich ideal.

**Example 2.16**.: Let \(R=k[[t^{n},t^{n+1},\ldots,t^{2n-1}]]\). Let \(I=(t^{n},\ldots,t^{2n-2})\). Apply 2.14 with \(J=\mathfrak{m}\) and \(x=t^{n}\). Let \(\nu\) be the \(t\)-adic valuation on \(R\). Note that for any minimal generator \(y\) of \(J=\mathfrak{m}\), we have \(3n-2\in\nu(y\mathfrak{m})\). On the other hand, \(3n-2\notin\nu(xI)\), so \(y\mathfrak{m}\not\subseteq xI\). It follows that \(I\), and any ideal contained in \(I\), is Elias. Note that again, since \(R/I\) is Gorenstein, \(I\) is actually a canonical ideal.

## 3. Elias index

**Definition 3.1**.: One defines the following:

* Let the Elias index of \(R\), denoted by \(\operatorname{eli}(R)\), be the smallest \(s\) such that \(\mathfrak{m}^{s}\) is Elias.
* Let the generalized Loewy length of \(R\), denoted by \(\operatorname{gll}(R)\), be the infimum of the \(s\) such that \(\mathfrak{m}^{s}\subseteq(x)\) for some \(x\in\mathfrak{m}\).
* Let the Ulrich index of \(R\), denoted by \(\operatorname{ulr}(R)\), be the smallest \(s\) such that \(\mathfrak{m}^{s}\) is Ulrich, that is, \(\mu(\mathfrak{m}^{s})=e\).

**Theorem 3.2**.: _We have:_

1. \(\operatorname{eli}(R)\leq\operatorname{gll}(R)\)_._
2. \(\operatorname{gll}(R)\leq\operatorname{ulr}(R)+1\)_, if the residue field_ \(k\) _is infinite._
3. _Suppose that the associated graded ring_ \(\operatorname{gr}_{\mathfrak{m}}(R)\) _is Cohen-Macaulay and the residue field_ \(k\) _is infinite. Then_ \(\operatorname{eli}(R)=\operatorname{gll}(R)=\operatorname{ulr}(R)+1\)_._

Proof.: If \(\mathfrak{m}^{s}\subseteq(x)\), then \(x\) must be a NZD, and thus \(\mathfrak{m}^{s}\) is Elias by 1.3. The second statement follows from the definitions. The condition that \(\operatorname{gr}_{\mathfrak{m}}(R)\) is Cohen-Macaulay implies that \(\mathfrak{m}^{s}\) is \(\mathfrak{m}\)-full for all \(s>0\), so the last assertion follows from 2.5.

**Theorem 3.3**.: _We have:_

1. \(\operatorname{eli}(R)=1\) _if and only if_ \(R\) _is regular._
2. _Assume_ \(R\) _is Gorenstein; then_ \(\operatorname{eli}(R)=2\) _if and only if_ \(e(R)=2\)_._
3. _Let_ \((A,\mathfrak{n})\) _be a Gorenstein local ring of dimension one. Suppose that_ \(R=\mathfrak{n}:_{Q(A)}\mathfrak{n}\) _is local. Then_ \(\operatorname{eli}(R)\leq 2\)_._

Proof.: (1): Assume \(\mathfrak{m}\) is Elias. To show that \(R\) is regular, we can make the extension \(R\to R[t]_{(\mathfrak{m},t)}\) and assume \(k\) is infinite. Choose a NZD \(x\in\mathfrak{m}-\mathfrak{m}^{2}\); we have \(\mathfrak{m}^{2}:x=\mathfrak{m}\), that is, \(\mathfrak{m}\) is \(\mathfrak{m}\)-full, witnessed by \(x\). Then 2.5 shows that \(\mathfrak{m}\subset(x)\), thus \(\mathfrak{m}\) is principal.

(2): We can assume again by 1.4 that \(k\) is infinite. If \(e=2\), then \(\mathfrak{m}^{2}\subset(x)\) for a minimal reduction \(x\) of \(\mathfrak{m}\), thus \(\mathfrak{m}^{2}\) is Elias.
Now, suppose \(\mathfrak{m}^{2}\) is Elias and \(e\geq 3\); we need a contradiction. We first claim that any Ulrich ideal \(I\) of \(R\) must lie in \(\mathfrak{m}^{2}\). Take any minimal reduction \(x\) of \(\mathfrak{m}\). Then \(I\mathfrak{m}=xI\subseteq(x)\), so \(I\subset(x):\mathfrak{m}\subseteq(x)+\mathfrak{m}^{2}\) (otherwise the socle of \(R^{\prime}=R/xR\) would contain an element of order \(1\), which is impossible as \(R^{\prime}\) is Gorenstein of length at least \(3\)). As \(x\) is general, working inside the vector space \(\mathfrak{m}/\mathfrak{m}^{2}\), we see that \(I\subseteq\mathfrak{m}^{2}\).

The set of \(\mathfrak{m}\)-primary Ulrich ideals in \(R\) is not empty, as it contains high enough powers of \(\mathfrak{m}\). Thus, we can pick an element \(I\) of this set that is maximal with respect to inclusion. By the claim above, \(I\subseteq\mathfrak{m}^{2}\), and hence \(I\) is also Elias by 1.3. Now 2.4 and 2.5 imply that \(I=xJ\) for some NZD \(x\in\mathfrak{m}\), so \(J\) is an Ulrich ideal strictly containing \(I\), and that is the contradiction we need.

(3): If \(R=A\), then \(\mathfrak{n}\) is Elias by 1.2, hence \(A\) is regular by part (1). Thus \(R\) is also regular, and \(\operatorname{eli}(R)=1\). If \(R\) strictly contains \(A\), then \(c_{A}(R)=A:_{Q(A)}R=\mathfrak{n}\), hence \(\mathfrak{n}\cong\operatorname{Hom}_{A}(R,A)\cong\omega_{R}\). So \(\mathfrak{n}\) is a canonical ideal of \(R\). On the other hand, as \(A\) is not regular, \(\mu_{A}(R)=2\) (dualize the exact sequence \(0\to\mathfrak{n}\to A\to A/\mathfrak{n}\to 0\) and identify \(R\) with \(\mathfrak{n}^{*}=\operatorname{Hom}_{A}(\mathfrak{n},A)\)). Thus \(\ell_{A}(R/\mathfrak{n})=2\), so \(\ell_{R}(R/\mathfrak{n})\leq 2\), which forces \(\mathfrak{m}^{2}\subset\mathfrak{n}\); since \(\mathfrak{n}\) is Elias by 1.2, so is \(\mathfrak{m}^{2}\) by 1.3.

**Example 3.4**.: We give some examples of item (3) in the previous theorem. First, let \(A=\mathbb{R}[[t,it]]\) with \(i^{2}=-1\). Then \(R=\mathbb{C}[[t]]\). Next, let \(H=\langle a_{1},\dots,a_{n}\rangle\) be any symmetric semigroup and let \(b\) be the Frobenius number of \(H\). Let \(A=k[[H]]\) be the complete Gorenstein numerical semigroup ring of \(H\). Then \(R=k[[\langle a_{1},\dots,a_{n},b\rangle]]\) has Elias index \(2\), unless \(H=\langle 2,3\rangle\), in which case \(\operatorname{eli}(R)=1\). Examples are \(R=k[[t^{e},t^{e+1},t^{e^{2}-e-1}]]\) for \(e\geq 3\). For such a ring we have \(\operatorname{type}(R)=2\), \(e(R)=e\), \(\operatorname{gll}(R)=e-1\), \(\operatorname{ulr}(R)=e-1\), yet \(\operatorname{eli}(R)=2\). These examples show that one cannot hope to get upper bounds for \(\operatorname{gll}(R)\) or \(\operatorname{ulr}(R)\) just using \(\operatorname{eli}(R)\).

## 4. Elias ideals in Gorenstein rings and Auslander index

In this section we focus on Gorenstein rings. Throughout this section, let \((R,\mathfrak{m},k)\) be a local Gorenstein ring of dimension one and \(I\subset R\) an \(\mathfrak{m}\)-primary ideal. Recall that for a finitely generated module \(M\), the Auslander \(\delta\)-invariant \(\delta(M)\) is defined to be the smallest number \(s\) such that there is a surjection \(R^{s}\oplus N\to M\), where \(N\) is a maximal Cohen-Macaulay module with no free direct summands. The smallest \(s\) such that \(\delta(R/\mathfrak{m}^{s})=1\) is called the Auslander index of \(R\), denoted \(\operatorname{index}(R)\).

It turns out that Elias ideals are precisely those whose quotient has Auslander invariant one. We collect here this fact and a few others. They are mostly known or can be deduced easily from results in previous sections, or both.
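Before the precise statements, it may help to see all of these invariants at once in the simplest singular example; the following is a routine verification that the reader can check directly. Take \(R=k[[t^{2},t^{3}]]\), so that \(e(R)=2\), \(\mathfrak{m}=(t^{2},t^{3})\), and \(x=t^{2}\) is a minimal reduction of \(\mathfrak{m}\). Then

\[\mathfrak{m}^{2}=(t^{4},t^{5})\subseteq(t^{2})\qquad\text{and}\qquad\mu(\mathfrak{m})=2=e(R),\]

so \(\mathfrak{m}^{2}\) is Elias and \(\mathfrak{m}\) is Ulrich, while \(\mathfrak{m}\) itself is not Elias because \(R\) is not regular (3.3). Hence \(\operatorname{eli}(R)=\operatorname{gll}(R)=2\) and \(\operatorname{ulr}(R)=1\); correspondingly, by 4.1(1) below, \(\delta(R/\mathfrak{m})=0\) and \(\delta(R/\mathfrak{m}^{2})=1\), so \(\operatorname{index}(R)=2\). This agrees with 3.2, as \(\operatorname{gr}_{\mathfrak{m}}(R)\cong k[X,Y]/(Y^{2})\) is Cohen-Macaulay.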
**Proposition 4.1**.: _Let \((R,\mathfrak{m},k)\) be a local Gorenstein ring of dimension one and \(I\subset R\) an \(\mathfrak{m}\)-primary ideal. We have:_

1. \(\delta(R/I)=1\) _if and only if_ \(I\) _is Elias._
2. \(I\) _is Elias if and only if for each NZD_ \(x\in I\) _we have_ \(x\in\mathfrak{m}(x:I)\)_._
3. _For a NZD_ \(x\in I\)_,_ \(x:I\) _is Elias if and only if_ \(x\in\mathfrak{m}I\)_. In particular, if_ \(x\in\mathfrak{m}^{2}\)_, then_ \(x:\mathfrak{m}\) _is Elias._
4. \(I\) _is Elias if and only if_ \(1\in\mathfrak{m}I^{-1}\)_, where_ \(I^{-1}=R:_{Q}I\)_. If_ \(I\) _is Elias, then_ \(I\subseteq\mathfrak{m}\operatorname{trace}(I)\)_._

Proof.: Part (1) is a special case of a result of Ding [6, Proposition 1.2] together with our definition of Elias ideals. Parts (2) and (3) are special cases of (3) and (4) of 1.3, as in this case \((x)\) is isomorphic to the canonical module. Part (4) is [6, 2.4, 2.5], and also follows easily from the results above: the first assertion is just a rewriting of (2), and for the second assertion it follows from the first that \(I\subseteq\mathfrak{m}II^{-1}=\mathfrak{m}\operatorname{trace}(I)\).

There has been considerable interest in the following question:

**Question 4.2**.: Given an ideal \(I\) with \(\delta(R/I)=1\), when can one say that \(I\subset(x)\) for some NZD \(x\in\mathfrak{m}\)?

For instance, a conjecture of Ding asks whether \(\operatorname{index}(R)=\operatorname{gll}(R)\) always holds. From our point of view, this is of course just a question about Elias ideals and the Elias index. Thus, one immediately obtains the following.

**Corollary 4.3**.: _Let \((R,\mathfrak{m},k)\) be a local Gorenstein ring of dimension one and \(I\subset R\) an \(\mathfrak{m}\)-primary ideal._

1. _If_ \(I\) _contains a NZD_ \(x\) _of order_ \(1\)_, then_ \(I\) _is Elias if and only if_ \(I=(x)\)_._
2. \(\operatorname{index}(R)=\operatorname{eli}(R)\)_._
3. \(\operatorname{index}(R)=\operatorname{gll}(R)=\operatorname{ulr}(R)+1\) _if_ \(k\) _is infinite and_ \(\operatorname{gr}_{\mathfrak{m}}(R)\) _is Cohen-Macaulay (this happens for instance if_ \(R\) _is standard graded or if_ \(R\) _is a hypersurface)._

Proof.: For part (1), we apply (5) of Corollary 1.3. Part (2) is immediate from part (1) of 4.1. Part (3) is [5, Theorem 2.1] and [2, Corollary 2.11], and is also a consequence of 3.2.

**Example 4.4**.: (Counter-examples to a result of Ding) In this example, we construct homogeneous Elias ideals that are not contained in any principal ideal. Let \(S=k[[x_{1},\dots,x_{n}]]\), and let \(J\) be a homogeneous ideal such that \(R=S/J\) is Gorenstein. Let \(f\in S\) be an irreducible element of degree at least \(2\) but lower than the initial degree of \(J\), and such that the image of \(f\) in \(R\) is a NZD. Then \(I=fR:\mathfrak{m}\) is Elias by part (3) of 4.1, but \(I\) is not contained in any proper principal ideal: if \(I\subseteq(g)\), then since \(f\in I\) is irreducible we must have \((g)=(f)\), forcing \(fR:\mathfrak{m}=(f)\), which is absurd since \(fR:\mathfrak{m}\) strictly contains \((f)\). This class of examples contradicts Theorem 3.1 in [6], which claims that for a homogeneous ideal \(I\) in a graded Gorenstein \(R\), \(\delta(R/I)=1\) (equivalently, \(I\) is Elias) if and only if \(I\subseteq(x)\) for some \(x\in\mathfrak{m}\). For concrete examples, one can take \(S=\mathbb{Q}[[a,b]]\), \(J=(a^{3}-b^{3})\), and \(f=a^{2}+b^{2}\). If one wants an algebraically closed field, one can take \(S=\mathbb{C}[[a,b,c]]\), \(J\) a complete intersection of two general cubics, and \(f=a^{2}+b^{2}+c^{2}\). The mistake in [6, Theorem 3.1] is as follows.
First, one derives that \(1=\sum\frac{z_{i}y_{i}}{x_{i}}\) with \(z_{i}\in\mathfrak{m}\) and \(\frac{y_{i}}{x_{i}}\in I^{-1}\), and hence that there is an \(i\) such that \(\deg(z_{i}y_{i})=\deg(x_{i})\), which is correct. Then Ding claimed that there is \(u\in k\) such that \(z_{i}y_{i}=ux_{i}\). But this is not true: in the first example above we have \(z_{1}=y_{1}=a\), \(z_{2}=y_{2}=b\), \(x_{1}=x_{2}=a^{2}+b^{2}\).

**Example 4.5**.: (De Stefani's counter-example to a conjecture of Ding, revisited) As mentioned above, Ding conjectured that \(\operatorname{index}(R)=\operatorname{gll}(R)\) always holds when \(R\) is Gorenstein. De Stefani gave a clever counter-example in [2]. Let \(S=k[x,y,z]_{(x,y,z)}\) and \(R=S/(x^{2}-y^{5},xy^{2}+yz^{3}-z^{5})\). Then \(\operatorname{index}(R)=5\) but \(\operatorname{gll}(R)=6\). We now show how some parts of the proof in [2], which is quite involved, can be shortened using our results. We note that since the Hilbert function of \(R\) is \((1,3,5,6,7,7,8,8,\dots)\) and \(e(R)=8\), we get that \(\mathfrak{m}^{5}\) is Elias by Theorem 2.7. To conclude, we need to show that \(\mathfrak{m}^{5}\) is not contained in \((w)\) for any NZD \(w\in\mathfrak{m}\). Note that \(\mathfrak{m}^{6}\) is Ulrich by the Hilbert function. We first show that one can assume \(\operatorname{ord}(w)=1\). Assume \(\mathfrak{m}^{5}\subset(w)\) and write \(\mathfrak{m}^{5}=wI\); then \(\mathfrak{m}^{5}\cong I\). If \(\operatorname{ord}(w)\geq 2\), then \(w\mathfrak{m}^{3}\subset\mathfrak{m}^{5}=wI\), so \(\mathfrak{m}^{3}\subset I\). But as \(\mathfrak{m}I\cong\mathfrak{m}^{6}\) is Ulrich, we get \(\mathfrak{m}^{2}I\subset(x)\) for some minimal reduction \(x\) of \(\mathfrak{m}\), and thus \(\mathfrak{m}^{5}\subset\mathfrak{m}^{2}I\subset(x)\). For the rest, one can follow [2].
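For the reader's convenience, the numerical check behind the appeal to Theorem 2.7 in this example can be spelled out from the Hilbert function: taking \(I=\mathfrak{m}^{4}\) in part (2) of that theorem,

\[\mu(\mathfrak{m}I)=\mu(\mathfrak{m}^{5})=7\leq\mu(\mathfrak{m}^{4})=7=e(R)-1,\]

so \(I\mathfrak{m}=\mathfrak{m}^{5}\) is Elias; likewise \(\mu(\mathfrak{m}^{6})=8=e(R)\), which is precisely the statement that \(\mathfrak{m}^{6}\) is Ulrich.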
2307.04481
Digital Modeling for Everyone: Exploring How Novices Approach Voice-Based 3D Modeling
Manufacturing tools like 3D printers have become accessible to the wider society, making the promise of digital fabrication for everyone seemingly reachable. While the actual manufacturing process is largely automated today, users still require knowledge of complex design applications to produce ready-designed objects and adapt them to their needs or design new objects from scratch. To lower the barrier to the design and customization of personalized 3D models, we explored novice mental models in voice-based 3D modeling by conducting a high-fidelity Wizard of Oz study with 22 participants. We performed a thematic analysis of the collected data to understand how the mental model of novices translates into voice-based 3D modeling. We conclude with design implications for voice assistants. For example, they have to: deal with vague, incomplete and wrong commands; provide a set of straightforward commands to shape simple and composite objects; and offer different strategies to select 3D objects.
Giuseppe Desolda, Andrea Esposito, Florian Müller, Sebastian Feger
2023-07-10T11:03:32Z
http://arxiv.org/abs/2307.04481v2
# Digital Modeling for Everyone: Exploring How Novices Approach Voice-Based 3D Modeling

###### Abstract

Manufacturing tools like 3D printers have become accessible to the wider society, making the promise of digital fabrication for everyone seemingly reachable. While the actual manufacturing process is largely automated today, users still require knowledge of complex design applications to produce ready-designed objects and adapt them to their needs or design new objects from scratch. To lower the barrier to the design and customization of personalized 3D models, we explored novice mental models in voice-based 3D modeling by conducting a high-fidelity Wizard of Oz study with 22 participants. We performed a thematic analysis of the collected data to understand how the mental model of novices translates into voice-based 3D modeling. We conclude with design implications for voice assistants. For example, they have to: deal with vague, incomplete and wrong commands; provide a set of straightforward commands to shape simple and composite objects; and offer different strategies to select 3D objects.

Keywords: Digital Fabrication, 3D Design, Voice Interaction, Wizard of Oz Study.

## 1 Introduction

The digital fabrication revolution aims to democratize the way people create tangible objects [13]. With the widespread availability of 3D printing together with many other digital fabrication technologies such as laser cutters or Computer Numerical Control (CNC) routers, end users are moving from passive consumers to active producers. While the actual manufacturing process is largely automated today, users are still required to have a profound knowledge of complex 3D modeling applications when they adapt models to their needs or even design new objects from scratch [53]. Thus, even if the introduction of technologies such as 3D printers has revolutionized the hobbyist community, lowering the barrier of entry to manufacturing even for novices (who can now take part in the process of creating artifacts without relying on third parties), we argue that the design of the 3D objects to be manufactured still requires a high level of knowledge and expertise.

These limitations have pushed researchers to investigate natural interaction techniques to simplify 3D modeling tools [36]. For example, research explored gestures [46, 50], virtual/augmented reality [45, 10], eye tracking [20, 54], brain-computer interfaces [17, 44] and their combination [33, 12, 22, 21] as a multimodal approach. However, their adoption is largely reserved for technical users, and it is strongly limited by hardware costs and by excessive size/weight that can quickly fatigue users [36]. As another possible solution, voice-based interaction has been explored, either to complement the traditional Graphical User Interface (GUI) (e.g., to enable shortcuts via voice commands [47, 53]) or as the primary interaction paradigm (e.g., see [52, 24, 38]). Although voice-based interaction requires only a microphone, it does not yet provide adequate digital modeling support for everyone: existing solutions either do not consider end users at all [53, 52] or only target 3D experts [24, 51, 21, 38], and novices are not considered potential target beneficiaries of the proposed innovations. To lower the barrier to the design and customization of personalized 3D models by exploiting the potential of voice-based interaction, this study aims to understand how the mental model of novices translates into voice-based 3D modeling.
We conducted a high-fidelity Wizard of Oz (WoZ) study to elicit novices' mental model, for example, their expectations, beliefs, needs, and abilities. We recruited a total of 22 participants without skills in 3D modeling, who performed 14 tasks revolving around some basic concepts of 3D modeling like the creation of objects, the manipulation of objects (e.g., scaling, rotating, and/or moving objects), and the creation of composite objects. All the WoZ sessions' recordings were analyzed through thematic analysis. The findings of the study have been distilled in the form of lessons learned. For example, we found that voice assistants must: manage the corrections novices make during and after a command; deal with vague and incomplete commands; consider novices' prior knowledge; provide only a simplified set of operations for creating simple and composite 3D objects; support a workflow similar to what novices would follow if they were building real objects; understand chained commands; and understand commands that are relative to the user's point of view.

The contribution of this paper is two-fold. First, we report the results of our WoZ study, presenting the themes that emerged from the thematic analysis. Second, based on these results, we provide a set of design implications for the future design of voice-based interaction paradigms for 3D modeling for novices.

## 2 Background and Related Work

This study revolves around the concept of voice-based 3D modeling as a key factor for enabling the democratization of digital fabrication. This section starts by illustrating some of the existing solutions based on natural interaction that try to address the complexity of 3D modeling (Section 2.1). Next, we provide an overview of the requirements for interacting with voice assistants (Section 2.2). Finally, we provide a brief summary of the motivation of this study and introduce the research question that guided our work (Section 2.3).

### Addressing the Complexity of 3D Modeling

To mitigate the issues of traditional GUI-based Computer-Aided Design (CAD) systems, researchers explored natural interaction paradigms like eye tracking [20, 54], brain-computer interfaces [17, 44], gestures [46, 50], virtual/augmented reality [45, 10] and their combination [12, 22, 21] as a multimodal approach for 3D modeling. The goal of natural interaction with CAD systems is to increase their _usability_ for both expert users and, especially, novice users. Specifically, they aim to: i) reduce the learning curve of the system; ii) allow a more intuitive interaction process; iii) enhance the design abilities of the designers [36]. An example of a multimodal system is "3D Palette" by Billinghurst et al., where a mix of tablet and pen input, electromagnetic sensors and voice commands is used to support the digital design process [1]. Similarly, Nanjundaswamy et al. explored a mix of gesture-based interaction, speech recognition, and brain-computer interfaces to reduce the initial learning curve of the design system [33]. A complete overview of the multimodal solutions for CAD is reported by Niu et al. [36]. Despite these potential benefits, such multimodal techniques require the adoption of specialized hardware (e.g., depth-sensing cameras for gesture recognition, headsets to recognize brain signals), whose use can be limited by price, size, weight, and complexity of use [33]. Thus, it is still hard for novice users to really adopt them in real and daily contexts [36].
To overcome these limitations, researchers also investigated voice-based interaction because of its intuitive nature and the simplicity of the required hardware, i.e., a microphone, which nowadays is embedded in any laptop, tablet, or webcam [41]. Furthermore, considering the ubiquity of smartphones and the rise of AR and VR glasses, voice-based interaction can be generalized to technologies where other interaction modalities are not available options. Attempts to integrate voice-based interaction into CAD systems date back to 1985 [40]. More recent work suggests the use of voice commands to allow users either to quickly search commands by simply stating their intention [53, 47] or to annotate 3D models [38]. Systems where the entire modeling process is carried out by voice commands have also been explored. An example is the solution presented by Kou and Tan, where voice commands related to a CAD-specific lexicon and grammar are understood by a context-aware algorithm [23]. A similar example was proposed by Xue et al., which improves the previous solution by allowing free-form sentences [52]. Another example of a fully-working system is the one presented by Grigor et al.: it follows the same ideas as the previous ones but uses Artificial Intelligence (AI) to understand the users' inputs, thus allowing for more freedom in the commands [14]. Similarly, Kou et al. proposed a flexible voice-enabled CAD system, where users are no longer constrained by predefined commands, by exploiting a knowledge-guided approach to infer the semantics of voice input [24].

In all the previous examples, it must be highlighted that the interaction paradigm was designed without any involvement of the end users [40, 53, 47, 23], or by solely involving experts in the final testing phase [14]. For example, the study by Nanjundaswamy et al. evaluates a multimodal system using gestures, speech and a brain-computer interface by involving a group of five skilled people [33]. Similarly, Khan et al. involve a total of 41 skilled users from an architecture or engineering background to elicit the requirements of a CAD system based on gestures and speech commands [21]. As another example, Vyas et al. test the usability of a speech-based CAD system involving 6 students with backgrounds in engineering, architecture and visualization [51]. The work proposed by Cuadra et al. investigated how novices use voice assistants to design 3D objects [5]. They performed a WoZ study to compare voice assistants with and without the use of a video channel showing the design in progress, investigating how the two approaches impact users' accuracy and satisfaction. Cuadra et al. validate the idea of using voice assistants, as participants are more satisfied with their objects and suffer less from cognitive overload when the design process is supported by video, but their study does not provide any insight into the mental model of novices approaching the digital modeling task [5].

### Interacting with Voice Assistants

The first voice-interaction solution implementing speech recognition dates back to 1952, when Davis et al. proposed a prototype able to recognize digits [7]. In recent years, the evolution of machine learning and AI fostered the spread of powerful commercial voice assistants, often based on deep neural networks trained on vast amounts of data.
However, such powerful speech recognition models alone are not sufficient to build an effective voice assistant, since the way users interact with such systems must be considered in the design of the whole system [30]. This need, together with the growing availability of commercial voice assistants, has fostered a sharp uptick in studies on user interaction with voice assistants [41]. Aspects like the cues that drive the conversation [49], the properties that a voice assistant should have [48], the user's mental model [15], the emotions felt during the conversation [19], and conversational design patterns [30] have been investigated. In addition, solutions to design and evaluate interaction with voice assistants are beginning to be proposed (see, for example, [30, 25, 31, 37, 32, 18, 48]). Careful consideration of these design aspects gains importance when voice assistants aim to simplify challenging or technical operations (e.g., see [3]). Since 3D modeling represents such a demanding task for novices, the elicitation of the novices' mental model is crucial to lower the barrier for 3D modeling.

### Summary and Research Question

The analysis of the literature highlights that existing solutions to simplify 3D modeling are often based on multimodal techniques such as gestures, eye tracking, or brain-computer interfaces; however, their adoption in real contexts is strongly limited by the need for specialized hardware and, overall, they target technical users. Voice interaction seems a promising paradigm that can overcome the limitations of multimodal solutions, but the existing voice-based solutions are still lacking for three important reasons: i) users are often not considered throughout the design phase, or they are only involved too late, in testing phases; ii) to the best of our knowledge, novices are never considered as target users; iii) the voice-based interaction is built on top of existing CAD systems (and their complexity), instead of designing the voice paradigm and the whole system from scratch. Considering these limitations, to really democratize digital fabrication for novices, users should be able to access 3D modeling tools even without special skills. All these motivations pushed us to explore novices' mental model in voice-based 3D modeling, in order to reduce the cost of their entry into the digital fabrication era. This is an aspect that has never been explored before and that deserves attention to really democratize digital fabrication. Therefore, our work addresses the following research question: **How does the mental model of novices translate into voice-based 3D modeling?**

## 3 Method

To answer our research question, we performed a high-fidelity Wizard of Oz (WoZ) study [42], because it has been proven successful in eliciting the user's mental model for voice-based interaction (e.g., see [11, 49, 28, 5]). Then, we carried out an inductive thematic analysis [4] on the qualitative data, i.e., the transcriptions of the WoZ sessions and the answers of the participants to the open questions.

### Participants

A total of 22 participants (F=15, M=7) were recruited through convenience sampling [8] from the social circles of the authors of this article. This number of participants is in line with other similar studies (e.g., see [49, 26]). Half of the participants were Italians while the other half were Germans. Their mean age was 24.1 years (\(\sigma\) = 3.7, min = 21, max = 34).
The entire study was performed in English, so that the results would not be tied to a specific language, an aspect that is out of the scope of this study. To ensure that the collected data is not biased toward knowledgeable users, we only recruited participants without any kind of experience with 3D modeling. Regarding the participants' level of education, around 45.45% already have a High School Diploma or a German A-level, 36.36% have a Bachelor's Degree, 13.64% have a Master's Degree, and only one participant (representing the remaining 4.55%) has not provided any information. Most participants (15 out of 22) do not have a STEM education, while 6 of the remaining 7 do not have any computational thinking skills, as they studied or worked in non-IT scientific fields (e.g., pharmaceutical and nutrition sciences). Regarding the participants' skills, they had an average level of IT knowledge (\(\bar{x}\) = 6.5/10; \(\sigma\) = 2.1), a medium-low level of knowledge of voice assistants (\(\bar{x}\) = 3.1/10; \(\sigma\) = 2.0) and very low knowledge of 3D modeling (\(\bar{x}\) = 1.6/10; \(\sigma\) = 1.1).

### Tasks

A total of 14 tasks were designed by two authors of this paper, both experts in 3D modeling, taking into account the most common and useful activities that are required to create simple and composite 3D objects. The resulting tasks revolve around basic concepts of 3D modeling, like the creation of simple objects, the manipulation of objects (e.g., scaling, rotating, and/or moving objects), and the creation of composite geometries. The details of the tasks are reported in the task table in the attached appendix (the list of all the graphical tasks is available in the sub-folder _tasks_). To reduce the impact of the priming effect [8] that providing a textual description of a task would have on the participants, we chose to provide the participants with graphical tasks: each task is composed of a brief prompt and a diagram showing the participants a 3D object or a 3D transformation that should be recreated (an example of graphical tasks is provided in Fig. 1). The representations chosen for each task were validated during a pilot study with 3 novices who were not included in the final WoZ study.

### Apparatus

We carried out the WoZ study remotely by using Zoom3. Four researchers were involved: two Italian researchers acted respectively as conductor and wizard for the Italian participants, while two German researchers acted as conductor and wizard for the German participants. In both groups, the researchers switched roles to minimize the risk of bias introduced when conducting the test.

Footnote 3: [https://zoom.us](https://zoom.us)

To create the illusion for participants that they are interacting with a real voice-based system for 3D modeling, we decided to use Blender4, explaining to participants that they could interact with it through voice commands. Blender has been selected since it is free and open-source software that, among other features like sculpting or rendering, allows one to design and visualize 3D objects.
One of the main features that made Blender the perfect choice for our WoZ study is the availability of APIs for the Python language5 that can be used inside a shell-like environment: this allows the Wizard to immediately create and modify the objects programmatically when the participants provide voice commands, thus preventing the participants from noticing anything odd and increasing the speed at which the Wizard is capable of satisfying the participants' requests. Taking advantage of this feature, we pre-defined a set of functions in a Python module to simplify the use of Blender's APIs for the purpose of this study (the module is available in the supplementary materials, sub-folder _python module_).

Figure 1: Examples of graphical tasks: a brief prompt is reported on top of each task and below a diagram shows the participants the 3D object to create (a, c) or the transformation to be performed (b).

To show the participants the task they had to complete, we overlaid the graphical tasks on the bottom-right side of the Blender window. To this aim, we used Open Broadcaster Software (or, more commonly, OBS)6, free and open-source software for video recording and live streaming. Using OBS, it was also possible to define animations and transitions to show when users are moving to the next task and to signal to the participants that the "voice assistant" (i.e., the Wizard) is listening to the user's command or is actually performing it. In particular, for each task, both the Blender window and the graphical task are visible (see Fig. 2a). When the participants activate the Blender voice assistant by saying "Hey Blender", the "I'm listening" label indicates that participants can provide the command to solve the task (see Fig. 2b). Then, when the voice command has been issued, a rotating icon indicates that the voice assistant is analyzing it, creating the illusion that there is a real voice assistant (see Fig. 2c). During the loading, the Wizard writes the Python statements related to the user commands, and the result is finally shown in Blender (see Fig. 2d).

Footnote 6: [https://obsproject.com](https://obsproject.com)

### Procedure

For each participant, when the Zoom session started, both the conductor and the Wizard were connected on Zoom, but the latter never appeared or interacted with the participant. While the conductor introduced the participant to the study, the Wizard shared his screen, in particular the window created by using OBS. The sessions were recorded using Zoom's built-in recorder. Before starting the recordings, participants were asked to accept a privacy policy (either in digital or in verbal form). It is worth mentioning that our universities require approval by an ethics committee only in the case of medical and clinical studies. For other studies like ours, they require that test participants give consent in a written or digital form; thus, we informed participants about all the details of the study and asked them to agree before starting the study. All of them agreed. As soon as the participant agreed to attend the study, the conductor invited the participant to complete a set of tasks. The webcam of the conductor was turned off during task execution to avoid disturbing the participant. To reduce the variability between sessions and between the Italian and German participants, the same introductory script was defined (available in the attached appendix, sub-folder "introductory script").
In summary, the conductor explains that the goal of the study is to validate a new voice assistant called Blender, which we created to assist novices in 3D modeling. Then, the conductor asks the participant to complete a set of tasks and explains that, for each of them, a graphical representation appears on the bottom-right side of the screen. The conductor also specifies that the participant has to first activate the voice assistant by saying "Hey Blender" and then, once the "I'm listening" label appears, the participant can provide the sequence of voice commands that, in their opinion, is the best to solve the task (for example, "create a cube"). No examples of voice commands were provided, to avoid introducing bias. At the end of each task, the participants had to communicate with the conductor to move on to the next task. At the end of the session, each participant filled in a questionnaire that includes questions on demographics, as well as some usability-related questions to evaluate the effectiveness of the Blender voice assistant. Furthermore, since (to the best of our knowledge) there were no previous examples of graphical tasks for a Wizard of Oz study, we have also chosen to add some questions to evaluate how easy it was for the user to understand the tasks (available in the attached appendix, sub-folder _questionnaire_). The entire procedure lasted around 30 minutes for each participant. A graphical synthesis of the entire procedure and the data collected is shown in Fig. 3.

Figure 2: The graphical task is overlaid on the bottom-right side of the Blender window from the beginning of the task (a); when the participants activate the voice assistant by saying "Hey Blender", the "I'm listening" label indicates that they can provide the command to solve the task (b); a rotating icon indicates that the voice assistant is elaborating the user commands (c); the result is shown after the command elaboration (d).

### Data Analysis

The first analysis regarded the questionnaire answers that evaluate the choice of providing the tasks in graphical format. Specifically, we included a question that asked "_How easy it was to understand the graphical tasks?_", with answers ranging from 1 (not simple at all) to 10 (very simple). Both the median and average scores are 8.2/10, with a standard deviation of 1.0. These results seem to validate the idea of presenting the tasks graphically, but they also highlight that for some tasks (the ones with an ambiguous representation) the conductor of the study must be able to guide the participants to the right interpretation (without the use of words that may introduce a priming effect [8]). In our study, this issue impacted only the 11th task for four participants, and it was solved by turning the webcam on and mimicking the action depicted in the task whenever the user showed difficulties in understanding it or explicitly requested help.

After ensuring the quality of the graphical tasks, we analyzed the qualitative data collected during the study, which helped us answer the research question, i.e., video transcriptions, questionnaire responses and participants' comments. All the video recordings (a total of about 11 hours) were first transcribed and expanded by including the annotations that identify pauses, the start and the end of the processing by the WoZ, and any errors or over-corrections by the WoZ.
This dataset was completed by reporting the participants' comments and the answers to the three open questions we included in the questionnaire: i) What did you like the most about the system used and the interaction with it? ii) What did you like less about the system and the interaction with it? and iii) Would you use a system like Blender to model in 3D? Please motivate your answer. This data was analyzed in a systematic qualitative interpretation using Inductive Thematic Analysis [4]. The initial coding was conducted independently by four researchers, who are co-authors of this article and are experienced in qualitative data analysis: two of them analyzed the Italian results, while the other two analyzed the German results. The two pairs of researchers began with open coding independently. Once all the data was coded, the set of initial codes was further refined by merging the different codes. This first filtering phase allowed us to obtain a set of code groups that capture meaning at a higher level. The identified code groups were then used by each group to extract the main themes. At the end, the codes and the themes of the two groups were compared to identify similarities and differences. With the exception of some minor differences related to their naming, the codes and the themes identified by the two pairs of researchers were identical in meaning. The final themes that will be presented here derive from a joint naming session carried out by all four researchers. Only a few small differences were identified, and they will be discussed as part of the design implications. The final codes and themes, with the relationships among them, are available in the attached appendix, sub-folder _Codes and Themes_.

Figure 3: Phases of the study and data collected at each phase

## 4 Results

The thematic analysis resulted in the description of five themes, reported in the following sub-sections. For each theme, significant participant quotes are reported. For the sake of conciseness, we will refer to participants as "P" followed by the participant number, and to the WoZ system as simply "system".

### Basic Operations

This theme frames the strategies of interaction that novices adopt when they approach the 3D modeling activities of creation and manipulation.

#### 4.1.1 Creation.

Novices tend to provide simple commands in the form "<verb> a <shape>", where the used verbs are typically "create", "draw", "build", and examples of shape names are "cube", "box", or "cylinder". This behavior has been observed in tasks that required the creation of simple or composite objects. Strictly related to this is object duplication. Novices usually keep the requests simple by asking to duplicate a precise object, as P4 did in task 12 when he said "duplicate the cube". When the novices, instead, have to face the creation of multiple identical objects without using duplication requests (for example, because there was no previous copy in the scene), they simply use a basic creation request that also provides the number of copies: this is clearly exemplified by P5 in task 14 with "create four cylinders".

#### 4.1.2 Manipulation.

The manipulation operations used by novices during the study are _translation_, _rotation_, and _scaling_. It is worth mentioning that manipulation operations require some kind of reference frame to be performed; to this aim, novices often use relative references (for more details, see the theme The Gulf of Execution, where the references used by the novices are discussed).
In more complex cases, novices provided commands containing both a creation request and an implicit manipulation request, where the manipulation is often expressed as a set of constraints on the final object. As an example, in task 14, P8 asked the system to "create four cylinders on the corners of the lower rectangle": in this example, the multiple creation request is clearly visible, and it is put alongside a relative positioning request. Finally, one of the most interesting identified open codes is the one that relates to moving objects with respect to _implicit construction shapes_. As an example, P4 during the last task asked "place the four cylinders at the four corners of a square." In this example, the participant did not have a square in the scene but implicitly requested the system to create a square, place the cylinders at its corners, and delete the square once the operation was completed. This kind of operation was pretty common throughout the last task: around 45% of the participants provided a command that used a construction shape like the one previously cited.

### Selection of Objects

This theme covers the strategies adopted to identify and select objects: _absolute_ selection, _relative_ selection, or _implicit_ selection. In the case of absolute selection, most participants explicitly refer to the entire scene, or to a single object in a scene by using its name (the one shown in the "inspector" view in Blender, as P11 asked during task 14 by saying "should I call it Box 0001 if I want to move it?") or by its shape (as P1 did during task 6 by saying "move the cube 20 cm downwards"). A specialization of the latter case is the reference to a shape using a 2D approximation. One example is given by P8 during task 14: "Hey blender, move the upper rectangle on the side of the lower one". Here, the user referred to two 3D boxes by their 2D approximation (rectangles). The relative selection resulted in four commonly used strategies to select objects, namely:

* their relative time of creation (e.g., P3 in task 14: "Blender, place the second box under the first");
* their relative position (e.g., P8 in task 14: "Hey Blender, create four cylinders in the corners of the lower rectangle");
* their dimensions (e.g., P11 in task 14: "Hey Blender, move the tallest box attaching it to the side of the other box");
* by inverting the current selection, possibly applying additional filters (e.g., P3 in task 14: "Blender, place the other two cylinders like you placed the previous ones").

Finally, users also often performed implicit selections of the objects in the scene, for example, by referring to a single object in the scene or by referring to the last edited object, either explicitly or implicitly (e.g., P1 in task 8 implicitly referred to the last edited object by saying "increase the volume by three times"). It is worth remarking that novices do not differentiate between, nor have preferences among, the various methods, and actually often mix them to make sure that the selection is clear and precise (e.g., in the previously shown example by P8 in task 14, "Hey blender, move the upper rectangle on the side of the lower one", the user performs the selection by using both an absolute reference to the 2D approximation of the shape of an object and a relative reference to the position of another object).
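To make these observations concrete, the following minimal sketch shows how the basic creation, duplication and manipulation requests of the Basic Operations theme, and relative selections such as "the tallest box", could map onto Blender's `bpy` API. This is our illustration, not the pre-defined module mentioned in the Apparatus section: it is meant to run inside Blender's embedded Python, and the function names, the synonym table and the name-prefix filtering are illustrative assumptions.

```python
import bpy
import mathutils

# Map novices' shape words ("3D rectangle", "dice") to Blender primitives;
# the table is distilled from quotes in this section and is illustrative only.
SHAPE_SYNONYMS = {"cube": "cube", "dice": "cube", "box": "box",
                  "rectangle": "box", "3d rectangle": "box",
                  "cylinder": "cylinder"}

def create_box(width, depth, height, location=(0.0, 0.0, 0.0)):
    """'Create a box 30 cm high, 20 cm deep, 10 cm long': scale a unit cube."""
    bpy.ops.mesh.primitive_cube_add(size=1, location=location)
    obj = bpy.context.active_object
    obj.scale = (width, depth, height)
    return obj

def duplicate(obj, offset=(0.0, 0.0, 0.0)):
    """'Duplicate the cube': copy object and mesh data, link the copy to the scene."""
    dup = obj.copy()
    dup.data = obj.data.copy()
    dup.location = obj.location + mathutils.Vector(offset)
    bpy.context.collection.objects.link(dup)
    return dup

def move(obj, dx=0.0, dy=0.0, dz=0.0):
    """'Move the cube 20 cm downwards': translate along the global axes."""
    obj.location += mathutils.Vector((dx, dy, dz))

def tallest(objects):
    """Relative selection such as 'the tallest box': compare bounding-box heights."""
    return max(objects, key=lambda o: o.dimensions.z)

def others(objects, selected):
    """'The other two cylinders': invert the current selection."""
    return [o for o in objects if o not in selected]

# e.g. resolve "the tallest box" among box-like objects in the scene
boxes = [o for o in bpy.context.scene.objects if o.name.startswith("Cube")]
```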
### Errors

Due to their lack of geometry knowledge and/or 3D modeling expertise, novices often commit _errors of which they are aware_ and _errors of which they are not aware_. In the first case, they try to prevent or correct the errors; for this reason, we named this sub-theme "error correction". In the second case, when a user is either not aware of an error or does not care about trying to fix it, the error simply represents a mistake made during the task execution; for this reason, we named this sub-theme "execution errors". We analyze the details of each in the following paragraphs.

**Error correction.** Different behaviors for correcting errors have been observed, specifically _during_ and _after_ the command. Regarding the error corrections made during the command, some novices try to prevent their own errors when they recognize one while stating the command, by providing a correction in the same command. For example, P9 during the chair construction task says "Hey blender, create a rectangle over the quadrilateral of length - I mean, height 30 centimeters, depth 5 and side 20-22...". This command contains multiple corrections, starting with the correction of the name of the dimension that the user wants to set to 30 centimeters, and then correcting the actual size of the side of the rectangle to 22 centimeters.

Regarding the corrections made after the commands, most of the participants expected some utility commands that are typically available in GUI-based software, like the "undo" and "redo" functions. As an example, P3 during task 14 provided both the command "Blender, undo the last operation" and "place the other two cylinders as you've placed the previous ones." This highlights how, although novices may not be familiar with the task of 3D modeling or with voice-based interaction, they were able to transfer the knowledge of other software they may have used in the past, expecting that their previous experience would be applicable to the new, unknown system.

**Execution errors.** Some of the mistakes committed by the novices are strictly related to _slips_, _lack of knowledge_, or _system shortcomings_. In the case of slips, some participants referred to shapes and objects using the wrong name (e.g., P10 was trying to refer to a box by calling it "cylinder" during task 14). In the case of lack of knowledge, errors range from wrong names used for dimensions and primitives to being unaware of the direction of the axes, perhaps by referring to previous knowledge obtained in school. For example, the Y axis in a 2D plane is usually the vertical one, thus some novices expect the Y axis to be the vertical one also in 3D. Finally, we identified system shortcomings, i.e., errors made by the wizard during the execution of the commands: all of these errors can be traced back to the incomprehension of the command, often due to its intrinsic vagueness (see the theme The Gulf of Execution).

### The Gulf of Execution

This theme represents the way novices translate their goals into commands. Throughout the sessions, before providing specific commands, we immediately noticed that novices often think aloud to understand what they have to do and how they can translate it into commands, as P16 did during task 14 by saying "so, the picture has a different point of view. I should move it a little bit. Ok. Hey Blender, make the cylinder bigger."
Then, by analyzing their commands, we identified three main aspects of the commands where the gulf of execution becomes critical, specifically: i) relativity; ii) vagueness; iii) abstraction.

**Relativity.** Here we summarize how novices think about positions, scale, rotation, and selection relative to other parts of the scene. Two main overall frames of reference are used by the novices: the axes and other objects. To select an axis, novices adopt three approaches, namely: i) _axis relative direction_: a common way of selecting axes is through their relative direction (depending on the user's point of view), as done by P9 during task 11, by saying "move the geometric shape 20 cm to the right"; ii) _axis color_: as an example, during the execution of the last task (the one of creating a chair), P2 referred to the Y axis by its color, stating "turn of 180 degrees the box on the green axis"; iii) _axis name_: some novices also refer to axes by their actual name, as P19 did during the 12th task by asking the system to "move the right cube 10 centimeters along the X axis."

When referring to objects' dimensions, novices adopted two main approaches for selection. A first approach consists of using the dimension's name, as P3 did in the chair creation task by saying _"move along the y axis of a length equal to the base of the second box the last cylinder"_. A second approach used a relative comparison to other dimensions; for example, P3 during task 14 selected an object by stating _"move the third cylinder under the highest box [...]"_.

**Vagueness.** This sub-theme captures a lack of information in the commands provided to reach the goals. In general, the lack of information is caused by:

* _chaining of multiple commands_ to describe at a high level a composite shape, as shown by P22 during the chair creation task, by asking "create four cylinders with the same distance to each other.";
* _missing data_ that the system needs to execute the requests; as an example, novices forget to provide some or all dimensions of a shape (e.g., P1 in task 1 stated "create a cube" without providing any dimension), or they forget to specify a parameter for a transformation (e.g., P7 in task 10 asked to "rotate of 30 degrees the figure" without specifying a direction).

**Abstraction.** We noticed two behaviors related to the abstraction of the commands. The first one relates to a general abstraction over the process to reach the desired goal, as exemplified by P2, who tried to solve task 14 by saying "create a chair using two boxes and four cylinders". The second one refers to how novices translate the desired 3D shapes into words. For example, shapes are created by providing a general description (e.g., P10 in task 4, by saying "create a 3D rectangle 30 cm high, 20 cm deep, and long 10 cm", referred to a box as a "3D rectangle", thus simply describing the shape) or by approximating the desired shape with a similar 2D shape (e.g., P8 during task 4 used "rectangle" instead of "box" by saying "create a rectangle of height 30, width 20, depth 10"). Furthermore, novices (especially German participants) also referred to 3D shapes by using similar real-world objects (e.g., P17 during task 3 stated "create a dice with an edge length of 30 centimeters", using "dice" instead of "cube").

### Users' Requests

We collected requests and suggestions provided by the participants, which provide useful insights into novices' mental model.
Among the most common requests, participants often asked to rotate the camera and change their point of view. As an example, P11, during the last task of creating a chair, asked "can I see it from below?" and "can I see it from above?" to perform some minor adjustments and corrections to the positions of the 3D objects. This behavior underlines the need to provide a way to allow novices to rotate their point of view. This functional requirement is strictly related to the theme of Selection of Objects, as it may benefit from different interaction modalities that could be explored (e.g., using Augmented Reality). Another common request is related to the actual dimensions: when novices explicitly set a size in the command (for example, in the third task), they want to check that the system created an object of the right size. This is exemplified by P10, who explicitly asked "can I ask it to check the dimensions?" in the third task. This suggestion does not translate into an additional requirement for the AI model that recognizes users' commands, but rather provides some insights into the requirements of the whole 3D modeling tool. Other minor suggestions regarded the customization of the axes: some participants expected the Y axis to be the "vertical" one, as usually happens in 2D drawings, rather than the Z axis, as happens in 3D modeling tools like Blender. Providing such a customization option would surely reduce the error rate in a final system, as novices could adapt it to their own knowledge.

## 5 Discussion and Implications

Based on the findings of the WoZ study, in the following we present design implications for the development of future voice-based 3D modeling tools for novice designers and relate them to the wider research literature around voice assistants and general user experience principles.

#### 5.0.1 Understand user corrections and adapt to them.

This requirement stems from the errors the users are aware of (see theme Errors). It poses requirements that impact two different facets of future voice-based digital modeling tools: the Natural Language Understanding (NLU) layer and the conversation flow. Regarding the NLU layer, systems must be able to intercept user corrections and aborted commands. Based on our findings, we note that _recognizing uncertainty, hesitation, doubt, and error awareness early on is particularly crucial in the digital modeling context_, as users displayed them frequently due to their unfamiliarity with 3D modeling [2]. Regarding the conversation flow, after intercepting the error correction, it is important to design a dialog that helps users understand the error and recover from it [18]. Moore and Arar [30] provide valuable pointers through their _Natural Conversation Framework_, which proposes a set of conversational patterns. Some of these patterns relate to _user corrections_ and can be applied to voice-based digital modeling. An example inspired by this framework, relating to errors that users correct while they issue a 3D modeling command, might be:

_User:_ Hey blender, increase of 10 centimeters -no- of 20 centimeters the sides of the geometric figure

_Agent:_ I'm sorry, I didn't understand. Do you mean an increase of 10 or 20 centimeters?

_User:_ 20 centimeters.

_Agent:_ Ok, I'm increasing of 20 centimeters the sides of the geometric figure.

#### 5.0.2 Deal with vague and incomplete commands.

We have identified numerous errors, caused by lack of knowledge and by system shortcomings, of which users were unaware.
These errors are related to incomprehension due to the vagueness and abstraction of some commands. Self-repair strategies should be introduced to improve the interaction [6]. To this aim, we identified two possible solutions. The first one consists of _sensible defaults_: in case of a vague command, the voice assistant fixes it by _selecting a relevant parameter from a list of alternatives_. For example, if the user says "create a cylinder on top of the cube", the cylinder diameter is not specified. In this case, the system can assume that the diameter is equal to the side of the cube. This solution can also benefit from the dialog context: as suggested by Jain et al., _resolving and maintaining the dialog context_ can help select the most appropriate sensible default from a list of alternatives [18]. For example, if other cylinders have been previously created with a given diameter on top of cubes, the same can be applied to the new ones in case of vague commands. This allows the system to be proactive, anticipating the users' requests as suggested by Völkel et al. [48]. The second solution consists of _interactively guiding the user_ to provide the missing information. With reference to the previous command of the box and cylinder, instead of using defaults, the voice assistant can explicitly ask the user for the desired radius. The strategy adopted by the voice assistant is informed by the degree of system autonomy or desired user control. A hybrid solution can also benefit from both approaches: the selected sensible default can be used by the voice assistant to ask the user if the default is right; for example, with reference to the previous case, the voice assistant can reply: "OK, I'm creating a cylinder with a diameter equal to the side of the cube. Is it OK?"

#### 5.0.3 Translate interaction conventions to voice-based digital modeling.

Users commonly apply their experience with software applications to other applications or even different domains. As an example, some participants expected to execute "undo" or "redo" commands, which are common across applications and domains. This is in line with the traditional Nielsen heuristics of "user control and freedom" and "consistency and standards" [35]. The latter states that "users should not have to wonder whether different words, situations, or actions mean the same thing", thus the system should "follow platform and industry conventions" (from Nielsen [34]). For this reason, a voice-based 3D modeling system should provide such common operations, like the aforementioned "undo" and "redo" commands. Further exploration may be required to clearly define and match the set of expected commands to voice-based digital modeling.

#### 5.0.4 Adopt simple operations even for the creation of composite 3D models.

Based on the theme Basic Operations, we note that most users follow similar and simple approaches even in complex tasks. For example, by analyzing task 13 (which consisted of creating a figure having a cylinder on top of a cube), multiple approaches might be adopted, but novices used only basic operations (creation and translation) to create both a simple cube and a cylinder and then move the latter on top of the former.
This highlights that, although many technical operations may be implemented in voice assistants for digital modeling, it is important to provide novices with simple operations to create and compose 3D objects, rather than prescribing more complex operations like "extrusion" and "inserting", which are most adequate for skilled users [33].

#### 5.0.5 Match digital modeling workflows with novices' expectations and experiences from building physical objects.

Related to the Basic Operations theme, but focusing on the last task (which consisted of the creation of a chair), we noticed that the majority of the users started by creating the base cylinders (almost all users started with a phrase like _"create four cylinders"_). This provides an interesting insight into how people approach the creation of composite 3D objects. By creating the base cylinders first, users are basically following an approach that starts from the bottom and proceeds upwards. This is no different from the approach that users would follow if they were composing physical shapes: by starting from the bottom, they are able to stack the various shapes without the risk of their composition "falling down". This indication can be useful if wizard procedures are introduced to guide the creation of composite 3D objects; for example, the voice assistant can start the interaction by asking which shape, with its features, must be placed at the bottom, then go on guiding the user to create other shapes on top of the previous ones.

#### 5.0.6 Provide alternatives for the selection of 3D objects.

By reflecting on the theme of Selection of Objects, we argue that it is among the most critical ones: most of 3D modeling revolves around the selection of objects to be composed. We found that several different techniques were adopted by the novices. For example, a common solution is represented by commands that select an object by referring to the entire scene, in other words in an absolute way. We also documented commands that use relative references, for example, the objects' relative time of creation, their relative position, their dimensions, or inverting the current selection. The last approach is represented by the implicit selection of the objects in the scene. These strategies represent different solutions the users can adopt to select a 3D object, and thus the voice assistant should accommodate all of them. To simplify the interaction, future voice assistants can be complemented with additional interaction modalities like gestures or eye tracking, where users could simply point [12, 22, 21] or gaze [27] at the object or surface they want to select.

#### 5.0.7 Understand commands that are relative to the user's point of view.

As described in the themes The Gulf of Execution and Selection of Objects, users often execute commands that are related to their point of view, in particular, to change the camera perspective, to select an axis, and to select a 3D object. In other words, we found that a common way for novices to issue commands is through the "screen" coordinate system [43], as provided by some professional 3D modeling systems7, by using common words such as "left" and "right", as P9 did during task 11 with the command "move the geometric shape 20 cm to the right". Furthermore, novices often provided commands relative to both their point of view and other objects (as P10 did during task 13: "insert a cylinder on top of the cube").
This implies that future voice assistants must be equipped with some way of understanding the 3D context in which the command is provided, and they must take into account the user's point of view during the intent-matching process.

Footnote 7: [https://shorturl.at/fGLRZ](https://shorturl.at/fGLRZ)

#### 5.0.8 Grant multiple ways to refer to the axes.

Users referred to the axes of the 3D scene by adopting different approaches: by indicating the axis color, by referring to the user's relative direction, or by using the axis name (see theme The Gulf of Execution); some users also preferred to switch the Y and Z axes as the "vertical" axis (see theme Users' Requests). This ambiguity is also found in professional systems, as some of them use the Z axis as vertical while others use the Y axis instead [16]. This behavior should be considered in the design of voice assistants for 3D modeling, since this is a core activity that, if not adequately supported, might lead to ineffective user interaction.

#### 5.0.9 Design for complex commands.

Multiple chained commands have often been prompted to execute various actions. In our study, it was possible to accommodate multiple user commands thanks to the WoZ setup, but voice assistants are typically restricted to simple standalone commands. Similar to what Fast et al. already proposed for complex tasks [9], voice-based systems for 3D modeling should also address this requirement, which strongly impacts the design of the NLU layer, which must be able to understand and execute multiple chained commands.

#### 5.0.10 Favor explicit trigger words.

Previous work by Vtyurina et al. argued that forcing the use of explicit trigger words would constrain user interactions, suggesting the use of implicit conversation cues for driving the dialog [49]. On the contrary, during our experiments novices used implicit conversational cues while thinking about their workflow and as a natural reaction after a successful command execution (see The Gulf of Execution): this highlights the need for future voice-based systems to provide clear explicit activation cues and trigger words, to avoid any unintentional activation that would disrupt users' workflow.

#### 5.0.11 Embrace diversity in naming approaches.

As novices usually have little to no knowledge of the 3D modeling domain, they often have to resort to different naming approaches when dealing with shapes for which they do not recall the "right" name. As already highlighted in The Gulf of Execution, novices can refer to shapes by providing high-level descriptions (e.g., "3D rectangle" instead of "box"), 2D approximations ("rectangle" instead of "box"), or by associating them with a real-world object (e.g., "dice" instead of "cube"). For this reason, future systems must be able to understand both analogies and descriptions of shapes. A concrete solution might be the adoption of a lexical ontology like WordNet [29] to infer the shape name related to the real object.

## 6 Limitations of the Study

Our study is an initial step toward understanding how novices approach voice-based 3D modeling. We have identified some limitations of our work. First, the novices' languages deserve a wider exploration: our study highlights very small differences between Germans and Italians due to their culture; however, a similar study where participants use their native languages might be useful to understand how language might impact the resulting mental model. Similarly, this study does not focus on how aspects like ethnicity, socio-economic status, and age might impact the novice's mental model.
Another limitation regards the tasks: the ones used in the study are representative of the most common operations to design 3D models, but digital fabrication often implies the design of objects that are more complex than a chair. In addition, the set of proposed tasks does not cover all possible operations (e.g., selecting textures and making holes). Future work may also study differences between the mental model of lay users (the target of this study) and novices in 3D modeling who are domain experts (e.g., they have expertise in sculpting or 3D world composition, but do not know how to model). Similarly, the proposed voice-based interaction approach may be compared with alternative solutions based on mouse and keyboard or multi-modal approaches, to explore the pros and cons of each solution. Finally, Blender has been selected as the 3D modeling tool because of the advantages reported in Section 3.3; however, its UI is designed for WIMP interaction, and thus it presents commands, buttons, functions, etc., that might bias or confuse novices. Despite our carefully hiding all the unnecessary parts of the Blender UI, a system purposely designed to better fit voice interaction might be adopted to elicit the mental model.

## 7 Conclusion

Voice interaction is emerging as a promising paradigm that can simplify 3D modeling for digital fabrication. However, the novices' mental model is never considered when designing voice-based 3D modeling systems. In addition, voice interaction is usually built on top of WIMP systems instead of designing the voice paradigm and the whole system from scratch. This study addresses these limitations by investigating the novices' mental model in 3D modeling and contributes to the state-of-the-art by identifying a set of design implications that support the definition of voice-based interaction paradigms for the design and customization of personalized 3D models. This contribution aims to lower the barrier to 3D modeling, thus supporting the wider democratization of digital fabrication. As future work, we are now addressing the limitations reported in the previous section. We are also working on the development of a prototype of a voice assistant integrated into Blender: it is currently being developed in DialogFlow [39] and it has been designed considering the design implications proposed in this study. The aim is to study novices' behavior when interacting with real systems, also exploring if and how the design indications suggested in this study accommodate the design of more complex objects in more realistic situations, for example, by proposing scenarios instead of tasks.

#### Acknowledgements

This work has been funded by the European Union's Horizon 2020 research and innovation program under grant agreement No. 952026 ([https://www.humane-ai.eu/](https://www.humane-ai.eu/)). The research of Andrea Esposito is funded by a Ph.D. fellowship within the framework of the Italian "D.M. n. 352, April 9, 2022" - under the National Recovery and Resilience Plan, Mission 4, Component 2, Investment 3.3 - Ph.D. Project "Human-Centered Artificial Intelligence (HCAI) techniques for supporting end users interacting with AI systems", co-supported by "Eusoft S.r.l." (CUP H91I22000410007).
2304.03866
Conservative objective models are a special kind of contrastive divergence-based energy model
In this work we theoretically show that conservative objective models (COMs) for offline model-based optimisation (MBO) are a special kind of contrastive divergence-based energy model, one where the energy function represents both the unconditional probability of the input and the conditional probability of the reward variable. While the initial formulation only samples modes from its learned distribution, we propose a simple fix that replaces its gradient ascent sampler with a Langevin MCMC sampler. This gives rise to a special probabilistic model where the probability of sampling an input is proportional to its predicted reward. Lastly, we show that better samples can be obtained if the model is decoupled so that the unconditional and conditional probabilities are modelled separately.
Christopher Beckham, Christopher Pal
2023-04-07T23:37:50Z
http://arxiv.org/abs/2304.03866v1
# Conservative objective models are a special kind of contrastive divergence-based energy model

###### Abstract

In this work we theoretically show that conservative objective models (COMs) for offline model-based optimisation (MBO) are a special kind of contrastive divergence-based energy model, one where the energy function represents both the unconditional probability of the input and the conditional probability of the reward variable. While the initial formulation only samples modes from its learned distribution, we propose a simple fix that replaces its gradient ascent sampler with a Langevin MCMC sampler. This gives rise to a special probabilistic model where the probability of sampling an input is proportional to its predicted reward. Lastly, we show that better samples can be obtained if the model is decoupled so that the unconditional and conditional probabilities are modelled separately. Machine Learning, ICML

## 1 Introduction

Model-based optimisation (MBO) is concerned with the use of generative models for design problems, where the input \(\mathbf{x}\) specifies the design and the desirability of any design (i.e. the reward) is a black box function \(y=f(\mathbf{x})\) called the _ground truth oracle_, which is prohibitively expensive to evaluate. For instance, if we are dealing with designing drugs to target disease then the oracle is a real world process that involves synthesising and testing the drug in a wet lab, which is expensive. Because each evaluation of the 'real world' \(f(\mathbf{x})\) is expensive, we would like to use machine learning to construct a reliable proxy of the oracle \(f_{\theta}(\mathbf{x})\) and exploit that instead. (This is one discernible difference to more traditional derivative-free black box optimisation, which assumes that \(f\) can be queried at will.) In addition, we are also interested in _extrapolation_: we would like to find designs \(\mathbf{x}\) that have as high of a reward as possible, possibly even higher than what has been observed so far. Generally speaking, we would like to generate a candidate set \(\mathcal{S}=\{\mathbf{x}_{j}\}_{j=1}^{M}\) such that: \[\mathcal{S}=\text{argmax}_{\mathbf{x}_{1},\dots,\mathbf{x}_{M}}\ \sum_{j=1}^{M}f(\mathbf{x}_{j}). \tag{1}\] Since we do not have access to \(f\) during training, we must resort to training an approximation of it, which we denote \(f_{\theta}(\mathbf{x})\). This is usually called an _approximate_ or _surrogate_ model or 'oracle'. We note that because \(f_{\theta}(\mathbf{x})\) is approximate and is a discriminative model, it is vulnerable to over-scoring inputs or even assigning rewards greater than zero to implausible inputs,1 and these are commonly referred to as _adversarial examples_. In the context of our aforementioned drug example, an implausible input would be a drug whose chemical configuration (that is, its configuration of atoms) is physically impossible. How these problems are addressed depends on whether one approaches MBO from a discriminative modelling point of view (Fu and Levine, 2021; Trabucco et al., 2021) or a generative modelling one (Brookes et al., 2019; Fannjiang and Listgarten, 2020; Kumar and Levine, 2020; Beckham et al., 2022). In this work we will exclusively discuss _conservative objective models_ (Trabucco et al., 2021), whose work comes from the discriminative perspective.
However, we will show that this model essentially falls under a particular class of generative model called an _energy-based model_, and in this work we perform a theoretical and empirical analysis of that model from this perspective. Lastly, for the sake of clarification, we note that MBO methods can be categorised into whether they are online or offline. In the online case, we assume that the ground truth oracle can be queried during training to obtain additional labels, which essentially becomes active learning. In the offline case, we only assume a dataset \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\) and must make do with this data to train the best possible proxy model \(f_{\theta}(\mathbf{x})\).2 For the remainder of this paper we will only consider offline MBO, and simply refer to it as MBO unless otherwise stated. Footnote 2: The difference between offline and online MBO pertains to just the training of the generative model. Even with ‘offline’ MBO, in a real world setting that model still has to be validated against the ground truth oracle by generating novel inputs and scoring them. We lay out our contributions in this work as follows: * We theoretically show that conservative objective models (COMs) are extremely similar to an energy-based model (EBM) that is trained via contrastive divergence, albeit with a modified MCMC sampler that can only sample from the _modes_ of its distribution (Section 2.1). This special form of EBM is parameterised such that the _negative energy_ of an input \(-E_{\theta}(\mathbf{x})\) is equivalent to the predicted reward \(f_{\theta}(\mathbf{x})\) of that same input. In other words, the energy is trained to be both predictive of the _likelihood_ of an example, as well as its _reward_, with a training hyperparameter \(\alpha\) introduced to balance the trade-off between how much model capacity should be allocated between the two. These two components can be seen as inducing a _joint density_\(p_{\theta}(\mathbf{x},y;\alpha)\propto p_{\theta}(y|\mathbf{x})p_{\theta}(\mathbf{x})^{\alpha}\) over the data.3 Footnote 3: Code for this paper will be made available here: [https://github.com/christopher-beckham/coms-are-energy-models](https://github.com/christopher-beckham/coms-are-energy-models) * COMs uses gradient ascent for its MCMC sampler, which can only sample modes from its distribution. If it is modified to properly sample from the distribution, then the model becomes a special instance of a contrastive divergence EBM, and we call these _Stochastic COMs_ (Section 2.2). _Stochastic COMs_ have the special property that the probability of sampling an input is proportional to its predicted reward, i.e. \(p_{\theta}(\mathbf{x})\propto f_{\theta}(\mathbf{x})\). We illustrate the effect of \(\alpha\) on a toy spiral dataset in Section 3 as well as visualise generated samples between the both COMs variants. * We show that COMs fail to generate desirable samples on a simple toy spiral dataset because the same network is being used to parameterise both the likelihood of an example and its score, and this subsequently degrades sample quality. To alleviate this, we propose _decoupled COMs_, where a separate classifier is trained and its gradients are used at sampling time (Section 2.3). _Decoupled COMs_ can be thought of as a contrastive divergence-based EBM which leverages an external classifier (regression model) as a form of conditional guidance. 
### Energy-based generative models

In EBMs we wish to learn a probability distribution without any specific modelling assumptions. This is done by defining the unnormalised probability of an input as the _negative_ of an energy \(E_{\theta}\) that is parameterised with a neural network: \[p_{\theta}(\mathbf{x})=\frac{\exp(-E_{\theta}(\mathbf{x}))}{Z_{\theta}}, \tag{2}\] where \(Z_{\theta}\) is the (usually intractable) normalising constant, which can be seen as a function of the energy model parameters \(\theta\). Ignoring the intractability issue for a brief moment, the log likelihood for one example \(\mathbf{x}\) can be expressed as: \[\log p_{\theta}(\mathbf{x})=-E_{\theta}(\mathbf{x})-\log\underbrace{\int_{\mathbf{x}}\exp(-E_{\theta}(\mathbf{x}))d\mathbf{x}}_{Z_{\theta}}, \tag{3}\] where we have re-written \(Z_{\theta}\) as an integral. While this seems virtually impossible to handle, an interesting identity from Song and Kingma (2021) says that the gradient of \(\log Z_{\theta}\) is equal to: \[\nabla_{\theta}\log Z_{\theta}=\mathbb{E}_{\mathbf{x}\sim p_{\theta}(\mathbf{x})}[-\nabla_{\theta}E_{\theta}(\mathbf{x})]. \tag{4}\] In other words, this gradient can be approximated via Monte Carlo by simply averaging the gradient of the energy over samples drawn from the model. This means that we can define a loss \(\mathcal{L}_{\theta}(\mathbf{x})\) such that, when we take the gradient of it, it becomes equivalent to \(\nabla_{\theta}\log p_{\theta}(\mathbf{x})\): \[\mathcal{L}_{\theta}(\mathbf{x})=-E_{\theta}(\mathbf{x})+\mathbb{E}_{\mathbf{x}^{\prime}\sim p_{\theta}(\mathbf{x}^{\prime})}E_{\theta}(\mathbf{x}^{\prime})\implies\nabla_{\theta}\mathcal{L}_{\theta}(\mathbf{x})=\nabla_{\theta}\log p_{\theta}(\mathbf{x})=\nabla_{\theta}[-E_{\theta}(\mathbf{x})]+\underbrace{\mathbb{E}_{\mathbf{x}^{\prime}\sim p_{\theta}(\mathbf{x}^{\prime})}[\nabla_{\theta}E_{\theta}(\mathbf{x}^{\prime})]}_{-\nabla_{\theta}\log Z_{\theta}\text{ (Eq. 4)}}. \tag{5}\] It is expensive to approximate the \(Z_{\theta}\) term because it requires us to draw samples from the generative model \(p_{\theta}(\mathbf{x})\), which is a costly process. For example, we would have to run Langevin MCMC (Neal et al., 2011; Welling and Teh, 2011; Song and Kingma, 2021) by drawing an initial \(\mathbf{x}_{0}\) from some simple prior distribution and running the Markov chain for a sufficiently long number of time steps \(T\) such that \(\mathbf{x}_{T}\) is approximately distributed as \(p_{\theta}(\mathbf{x})\): \[\mathbf{x}_{t+1}:=\mathbf{x}_{t}+\frac{\epsilon_{t}^{2}}{2}\nabla_{\mathbf{x}_{t}}\log p_{\theta}(\mathbf{x}_{t})+\epsilon_{t}\mathbf{z}_{t}\boxed{=\mathbf{x}_{t}+\frac{\epsilon_{t}^{2}}{2}\nabla_{\mathbf{x}_{t}}[-E_{\theta}(\mathbf{x}_{t})]+\epsilon_{t}\mathbf{z}_{t},} \tag{6}\] where \(\mathbf{z}_{t}\sim\mathcal{N}(0,\mathbf{I})\), and \(\epsilon_{t}\to 0\) as \(t\to T\). For reasons that will become clear shortly, we prefer to define \(p_{\theta}(\mathbf{x})\) more explicitly such that it is obvious that sampling involves an \(\mathbf{x}_{0}\) that is drawn from a simple prior distribution (e.g. a Gaussian or uniform distribution), which we will call \(p_{\pi}(\mathbf{x})\). That is, in order to sample from our generative model we first sample \(\mathbf{x}_{0}\sim p_{\pi}(\mathbf{x}_{0})\) and then run Langevin MCMC on \(\mathbf{x}_{0}\), which we can write simply as a sample from the conditional distribution \(\mathbf{x}\sim p_{\theta}(\mathbf{x}|\mathbf{x}_{0})\).
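To make Equation 6 concrete, here is a minimal sketch of such a sampler in PyTorch; the `energy` network, the step-size schedule, and the prior draw `x0` are illustrative placeholders rather than the exact configuration used in this paper.

```python
import math
import torch

def langevin_sample(energy, x0, n_steps=100, eps_start=0.02, eps_end=0.001):
    """Approximately sample from p(x) proportional to exp(-energy(x)), per Eq. 6.

    `energy` maps a batch of inputs to a batch of scalar energies; `x0` is a
    batch drawn from the prior p_pi. The step sizes decay geometrically,
    mirroring the kind of schedule described in Section 3.
    """
    eps = torch.logspace(math.log10(eps_start), math.log10(eps_end), n_steps)
    x = x0.clone()
    for t in range(n_steps):
        x = x.detach().requires_grad_(True)
        # grad_x log p(x) = -grad_x E(x); autograd gives us grad_x E(x)
        grad_E = torch.autograd.grad(energy(x).sum(), x)[0]
        with torch.no_grad():
            x = x - 0.5 * eps[t] ** 2 * grad_E + eps[t] * torch.randn_like(x)
    return x.detach()
```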
Both of these distributions define a joint distribution \(p_{\theta,\pi}(\mathbf{x},\mathbf{x}_{0})=p_{\theta}(\mathbf{x}|\mathbf{x}_{0})p_{\pi}(\mathbf{x}_{0})\), and therefore the marginal over \(\mathbf{x}\) itself can simply be written as: \[p_{\theta,\pi}(\mathbf{x})=\int_{\mathbf{x}_{0}}p_{\theta}(\mathbf{x}|\mathbf{x}_{0})p_{\pi}(\mathbf{x}_{0})d\mathbf{x}_{0}. \tag{7}\] Therefore, we can write a more explicit form of Equation 3 that uses \(p_{\theta,\pi}\) instead: \[\mathcal{L}_{\theta,\pi}(\mathbf{x})=\log p_{\theta,\pi}(\mathbf{x})=-E_{\theta}(\mathbf{x})+\mathbb{E}_{\mathbf{x}^{\prime}\sim p_{\theta,\pi}(\mathbf{x})}E_{\theta}(\mathbf{x}^{\prime})=-E_{\theta}(\mathbf{x})+\mathbb{E}_{\mathbf{x}^{\prime}\sim p_{\theta}(\mathbf{x}|\mathbf{x}_{0}),\mathbf{x}_{0}\sim p_{\pi}(\mathbf{x}_{0})}E_{\theta}(\mathbf{x}^{\prime}). \tag{8}\] We now explain the reason for this reformulation: a widely known algorithm used to train these models is called _contrastive divergence_ (Hinton, 2002), a modification of the Langevin MCMC procedure. Contrastive divergence proposes two modifications to make it more computationally viable: run MCMC for \(k\) iterations instead (where \(k\) is extremely small, such as a few steps), and let \(p_{\pi}\) be the _actual data distribution_, so the chain is initialised from a real data point. (To keep notation simple, whenever \(p_{\theta}(\mathbf{x})\) is used, we really mean \(p_{\theta,\pi}(\mathbf{x})\) where \(p_{\pi}(\mathbf{x})=p(\mathbf{x})\), the real data distribution.) While running the sampling chain for \(k\) iterations introduces some bias into the gradient, it appears to work well in practice (Bengio and Delalleau, 2009). Concretely, if we use contrastive divergence for \(k\) iterations then we will write our objective as the following: \[\mathcal{L}_{\theta}^{CD-k}(\mathbf{x}):=-E_{\theta}(\mathbf{x})+\mathbb{E}_{\mathbf{x}^{\prime}\sim p_{\theta}^{k}(\mathbf{x}^{\prime}|\mathbf{x}_{0})p(\mathbf{x}_{0})}E_{\theta}(\mathbf{x}^{\prime})\approx\mathcal{L}_{\theta}(\mathbf{x}). \tag{9}\] We will denote this style of energy-based model as a _contrastive divergence-based EBM_, or simply _CD-EBM_.

## 2 Conservative objective models

Before continuing, we make an important distinction between the approximate oracle \(f_{\theta}(\mathbf{x})\) itself and its _statistical_ interpretation, \(p_{\theta}(y|\mathbf{x})\). The approximate oracle \(f_{\theta}(\mathbf{x})\) is a regression model trained to predict \(y\) from \(\mathbf{x}\), but the precise loss function used imbues a specific probabilistic interpretation relating to that model. For instance, if the _mean squared error loss_ is used during training, then \(p_{\theta}(y|\mathbf{x})\) has the interpretation of being a Gaussian distribution whose mean is parameterised by \(f_{\theta}(\mathbf{x})\), with \(\sigma^{2}=1\). While the choice of probabilistic model is up to the user, we will assume a Gaussian model here as it is the most commonly used for regression tasks and is the probabilistic model used in the paper. Given some training pair \((\mathbf{x},y)\in\mathcal{D}\) we can write out its conditional likelihood of \(y\) given \(\mathbf{x}\) as follows: \[\log p_{\theta}(y|\mathbf{x})=\log\mathcal{N}(y;f_{\theta}(\mathbf{x}),\sigma^{2})=-\frac{1}{2\sigma^{2}}(y-f_{\theta}(\mathbf{x}))^{2}-\log(\sigma\sqrt{2\pi}), \tag{10}\] and since we assumed \(\sigma^{2}=1\) we get \(-\frac{1}{2}(y-f_{\theta}(\mathbf{x}))^{2}\) plus a constant term.
Since the mean squared error loss is typically minimised, the negative sign disappears. Conservative objective models (COMs) are a recently proposed method (Trabucco et al., 2021) for MBO. Conceptually, the method can be thought of as simply training an approximate oracle \(f_{\theta}(\mathbf{x})\), but with the model subjected to an extra regularisation term that penalises predictions for samples that have been generated with \(f_{\theta}\), which are assumed to be adversarial examples. In order to mitigate the issue of adversarial examples and over-scoring, the authors propose a regularisation term that penalises the magnitude of predictions on samples that have been generated in the vicinity of \(\mathbf{x}\): \[\mathcal{L}_{\theta}^{\text{sup}}(\mathbf{x},y;\alpha):=\log p_{\theta}(y|\mathbf{x})+\underbrace{\alpha\big[-\mathbb{E}_{\mathbf{x}^{\prime}\approx p_{\theta}(\mathbf{x}^{\prime}|\mathbf{x}_{0}),\mathbf{x}_{0}\sim p(\mathbf{x})}f_{\theta}(\mathbf{x}^{\prime})+f_{\theta}(\mathbf{x})\big]}_{\text{COMs regulariser}}. \tag{11}\] The following sampler is used for \(p_{\theta}(\mathbf{x}|\mathbf{x}_{0})\): \[\mathbf{x}_{t+1}:=\mathbf{x}_{t}+\epsilon\nabla_{\mathbf{x}_{t}}[-E_{\theta}(\mathbf{x}_{t})]\boxed{=\mathbf{x}_{t}+\epsilon\nabla_{\mathbf{x}_{t}}f_{\theta}(\mathbf{x}_{t}),} \tag{12}\] where \(\epsilon\) is constant for each time step. What is interesting is that this procedure does not inject any noise; because of this, samples will instead converge to a _maximum a posteriori_ solution, i.e. one of the modes of the distribution \(p_{\theta}(\mathbf{x})\) (Welling and Teh, 2011) (hence the use of the approximate symbol \(\approx\) in the expectation of Equation 11). This can be problematic if there is very little inter-sample diversity amongst generated samples, as they will be less robust as a whole to the ground truth oracle.

### Relationship to EBMs

Furthermore, we note that the regularisation term inside \(\alpha\) in Equation 11 is actually _equivalent_ to Equation 8 if we define \(f_{\theta}(\mathbf{x})=-E_{\theta}(\mathbf{x})\), and this in turn is equivalent to \(\log p_{\theta}(\mathbf{x})\). This, combined with the classification loss \(\log p_{\theta}(y|\mathbf{x})\), defines a _joint distribution_ \(p_{\theta}(\mathbf{x},y)\), which is precisely the loss proposed in the original paper (Trabucco et al., 2021).
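To make this correspondence explicit, the following sketch shows how one evaluation of the objective in Equation 11 can be computed, reading the regulariser as the contrastive divergence term of Equation 9. It assumes a regression network `f_theta` with \(E_{\theta}(\mathbf{x})=-f_{\theta}(\mathbf{x})\) and a routine producing 'negative' samples from the current model (e.g. the Langevin sketch above); all names here are illustrative, not those of the original codebase.

```python
import torch
import torch.nn.functional as F

def coms_objective_loss(f_theta, x, y, x_neg, alpha):
    """Negation of Eq. 11, suitable for minimisation by gradient descent.

    x:     real inputs (MCMC chains are initialised from these, as in CD)
    x_neg: samples generated from the current model
    """
    # -log p(y|x) for a Gaussian with sigma^2 = 1, up to an additive constant
    reg_loss = 0.5 * F.mse_loss(f_theta(x).squeeze(-1), y)
    # log p(x) estimate (Eq. 8): -E(x) + E[E(x_neg)] = f(x) - E[f(x_neg)]
    cd_term = f_theta(x).mean() - f_theta(x_neg).mean()
    # maximise log p(y|x) + alpha * log p(x), i.e. minimise its negation
    return reg_loss - alpha * cd_term
```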
Let us propose a special joint density \(p(\mathbf{x},y;\alpha)\) where \(\alpha\) controls the trade-off between the two likelihood terms: \[p_{\theta}(\mathbf{x},y;\alpha)\propto p_{\theta}(y|\mathbf{x})p_{\theta}(\mathbf{x})^{\alpha}\implies\log p_{\theta}(\mathbf{x},y;\alpha)\propto\log p_{\theta}(y|\mathbf{x})+\alpha\log p_{\theta}(\mathbf{x}) \tag{13}\] \[=\underbrace{-\frac{1}{2}(y-f_{\theta}(\mathbf{x}))^{2}}_{\log p_{\theta}(y|\mathbf{x})}+\alpha\big[-\mathbb{E}_{\mathbf{x}^{\prime}\sim p_{\theta}(\mathbf{x}^{\prime}|\mathbf{x}_{0}),\mathbf{x}_{0}\sim p(\mathbf{x})}f_{\theta}(\mathbf{x}^{\prime})+f_{\theta}(\mathbf{x})\big]=-\frac{1}{2}(y-\underbrace{f_{\theta}(\mathbf{x})}_{-E_{\theta}(\mathbf{x})})^{2}+\alpha\underbrace{\big[\mathbb{E}_{\mathbf{x}^{\prime}\sim p_{\theta}(\mathbf{x})}E_{\theta}(\mathbf{x}^{\prime})-E_{\theta}(\mathbf{x})\big]}_{\log p_{\theta}(\mathbf{x}),\text{ Eqn. 8}}\]

### Decoupled COMs

In Section 2.1 we showed that COM's training objective induces a special joint density \(p_{\theta}(\mathbf{x},y;\alpha)\) where \(\alpha\) controls the trade-off between modelling \(p_{\theta}(\mathbf{x})\) and also the classifier \(p_{\theta}(y|\mathbf{x})\). If \(\alpha\) were to be carefully tuned then we would hope that samples drawn from the model \(\mathbf{x}\sim p_{\theta}(\mathbf{x})\) would not only be plausible (i.e. lie on the data distribution) but also achieve high reward on average. One concern is that since the same model \(E_{\theta}\) is parameterising both distributions, achieving this might be cumbersome. Here we propose an alternative, one that decouples the training of both. Let us denote by \(p_{\theta}(\mathbf{x})\) any learned energy-based model4 on \(\mathbf{x}\), and also introduce an independently trained oracle \(f_{\omega}(\mathbf{x})\) which is a standard regression model trained to predict \(y\) from \(\mathbf{x}\). We propose the following tilted density (Asmussen and Glynn, 2007; O'Donoghue et al., 2020):

Footnote 4: We can even use a COM for which \(\alpha\) is large enough such that most of the model is spent on modelling the data distribution. In fact, we found that the training dynamics of this was more stable than the training of a CD-EBM.

\[p_{\theta,\omega}(\mathbf{x};w)=p_{\theta}(\mathbf{x})\exp(wf_{\omega}(\mathbf{x})-\kappa(1/w)) \tag{16}\] \[\implies\log p_{\theta,\omega}(\mathbf{x};w)=\log p_{\theta}(\mathbf{x})+wf_{\omega}(\mathbf{x})-\text{const.}, \tag{17}\] where \(w\) is a hyperparameter weighting our preference for \(\mathbf{x}\)'s with high reward (with respect to \(f_{\omega}\)) and \(\kappa\) is a normalising constant that does not depend on \(\mathbf{x}\). To sample, we simply use the following Langevin MCMC sampler: \[\mathbf{x}_{t+1}:=\mathbf{x}_{t}+\frac{\epsilon^{2}}{2}\nabla_{\mathbf{x}_{t}}\Big[wf_{\omega}(\mathbf{x}_{t})+\log p_{\theta}(\mathbf{x}_{t})\Big]+\epsilon\mathbf{z}_{t}\boxed{=\mathbf{x}_{t}+\frac{\epsilon^{2}}{2}\Big[w\nabla_{\mathbf{x}_{t}}f_{\omega}(\mathbf{x}_{t})+\nabla_{\mathbf{x}_{t}}f_{\theta}(\mathbf{x}_{t})\Big]+\epsilon\mathbf{z}_{t}.} \tag{18}\]

## 3 Experiments and Discussion

**Dataset.** We consider a simple 2D spiral dataset that has been modified to also introduce a reward variable \(y\). The ground truth function for this reward variable is \(f(\mathbf{x})=\sum_{i=1}^{2}\exp(-(\mathbf{x}_{i}-0)^{2})\), which means that the largest reward is found at the origin \((\mathbf{x}_{1},\mathbf{x}_{2})=(0,0)\). This is illustrated in Figure 1.
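For reference, a small NumPy sketch that generates a spiral dataset of this kind, labelled with the reward above, might look as follows; the radius, number of turns, and noise level are guesses for illustration rather than the exact parameters used here.

```python
import numpy as np

def spiral_dataset(n=1000, noise=0.05, turns=2.0, seed=0):
    """Sample noisy points on a 2D spiral and attach the toy reward."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(0.25, 1.0, size=n)        # position along the spiral
    angle = 2 * np.pi * turns * t
    radius = t                                 # radius grows along the curve
    x = np.stack([radius * np.cos(angle), radius * np.sin(angle)], axis=1)
    x += noise * rng.standard_normal(x.shape)
    # reward is largest at the origin, which does not lie on the spiral
    y = np.exp(-x[:, 0] ** 2) + np.exp(-x[:, 1] ** 2)
    return x, y
```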
In the context of MBO, we would like to learn a generative model which is able to sample _valid_ points that are as close to the center as possible, since points that are closest to the center will have a larger reward. Here, a 'valid' point is one that lies on the spiral, i.e. some \(\mathbf{x}\) for which \(p(\mathbf{x})>0\) for the ground truth distribution \(p(\mathbf{x})\). As we can see in the figure, the point \(\mathbf{x}=(\mathbf{x}_{1},\mathbf{x}_{2})=(0,0)\) which lies at the center would not be valid.

**Training.** The energy model is a one hidden layer MLP of 256 units, and we use \(k\)-contrastive divergence during training for \(k=100\) time steps (see Equation 9). Its associated variance schedule for Langevin MCMC is a geometric sequence from \(0.02\to 0.001\) over those \(k\) intervals.5 At generation time, we run Langevin MCMC for 50k timesteps with prior distribution \(p_{\pi}(\mathbf{x}_{0})=\text{Uniform}(-1.5,2)\) and use a geometric schedule from \(0.1\to 10^{-5}\).

Footnote 5: This can be generated easily in NumPy via numpy.geomspace(0.02, 0.001, num=k)

**Results.** We train each of the three variants of COMs, and their results are shown in Figures 2, 3, and 4, respectively. For the first two, we train two variants: one where \(\alpha=0\) and the model reduces down to just a regression model (classifier), and one where \(\alpha=50\) where the model is heavily weighted to model the data distribution instead. Since the original COM uses gradient ascent as its sampler, samples are heavily biased towards seeking modes and sample diversity suffers as a result. In the stochastic variant this is fixed; however, we found it difficult to choose an \(\alpha\) such that samples were simultaneously concentrated near the center but _on_ the spiral, which would constitute the best samples for this dataset. As we mentioned in Section 2.1, we believe this is because we are using the same energy model to model both \(p_{\theta}(\mathbf{x})\) and \(f_{\theta}(\mathbf{x})\), and therefore neither task is learned sufficiently well. In decoupled COMs however (Figure 4) the energy \(E_{\theta}(\mathbf{x})\) and \(f_{\omega}(\mathbf{x})\) are separate models and the latter is weighted by hyperparameter \(w\). We can see that for modest values of \(w\) we obtain samples that progressively become more heavily concentrated at the center, but are still lying on the spiral.

Figure 1: 2D spiral dataset. Orange points are samples from the ground truth marginal \(p(\mathbf{x})\), and background colours correspond to values of \(y\) for the ground truth oracle \(f(\mathbf{x})=\sum_{i=1}^{2}\exp(-(\mathbf{x}_{i}-0)^{2})\).

Figure 4: Generated samples (shown as black crosses) for _decoupled COMs_ (Sec. 2.3). Like stochastic COMs, Langevin MCMC is also used here, but we also leverage the gradient of an externally trained regression model \(f_{\omega}(\mathbf{x})\) as shown in Equation 18. Here, we achieve the desired behaviour: a modest value of \(w\) gives us samples that mostly lie on the part of the spiral closest to the center.

Figure 3: Generated samples (shown as black crosses) for the _stochastic COMs_ formulation (Sec. 2.2). Orange points are those from the real distribution \(p(\mathbf{x})\), and the colourbar denotes the ground truth \(y\). In 3(a) and 3(b) we show samples for \(\alpha=0\) and \(\alpha=50\) respectively, for two separate training runs (seeds).
For \(\alpha=0\), only the conditional distribution \(p_{\theta}(y|\mathbf{x})\) is modelled, rather than \(\mathbf{x}\) and \(y\) jointly. For the \(\alpha=50\) case, the energy loss is heavily weighted in favour of modelling \(p_{\theta}(\mathbf{x})\). While samples for both \(\alpha\)'s are more diverse, it is difficult to select for the 'good' samples, i.e. those that are close to the center while still lying on the spiral (see Figure S6 for additional enumerations of \(\alpha\)). This is because the same energy function is being used to parameterise both distributions. We resolve this issue with the decoupled COMs variant, which is shown in Figure 4.

Figure 2: Generated samples (shown as black crosses) for the _original COMs_ formulation (Sec. 2). Orange points are those from the real distribution \(p(\mathbf{x})\), and the colourbar denotes the ground truth \(y\). In 2(a) and 2(b) we show samples for \(\alpha=0\) and \(\alpha=50\) respectively, for two separate training runs (seeds). For \(\alpha=0\), only the conditional distribution \(p_{\theta}(y|\mathbf{x})\) is modelled, rather than \(\mathbf{x}\) and \(y\) jointly. For the \(\alpha=50\) case, the energy loss is heavily weighted in favour of modelling \(p_{\theta}(\mathbf{x})\). Because the original COMs formulation uses a gradient ascent MCMC sampler, only modes can be sampled from the distribution, and sample diversity suffers as a consequence. This issue is addressed with _Stochastic COMs_ (Fig. 3), which uses Equation 15 to properly sample from the distribution.

## 4 Related work

Recently, _score-based generative models (SBGMs)_ have been in wide use (Song and Ermon, 2019, 2020), and this also includes the diffusion class of models since they are theoretically very similar (Sohl-Dickstein et al., 2015; Ho et al., 2020). Due to space constraints we defer an extended discussion to Section A.3, though we strongly conjecture that this class of model is significantly more robust than COMs and therefore CD-EBMs. This is for the following reasons: * SBGMs sidestep the issue of having to generate samples from the distribution during training with MCMC, which significantly speeds up training. This is because the training objective used is score matching (matching derivatives), as opposed to contrastive divergence, which requires negative samples to be generated. * SBGMs model the gradient directly: \(s_{\theta}(\mathbf{x})=\nabla_{\mathbf{x}}\log p_{\theta}(\mathbf{x})\). Not only does this bypass the need to compute gradients at generation time with autograd, it also means that the score function can model more information about its input because it is now a mapping from \(\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) (where \(d\) is the input data dimension), as opposed to \(E_{\theta}(\mathbf{x})\) which is a mapping from \(\mathbb{R}^{d}\rightarrow\mathbb{R}\) (Salimans and Ho, 2021). Furthermore, this mapping from \(\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) allows one to use specialised encoder-decoder models such as the U-Net (Ronneberger et al., 2015), which leverages skip connections to combine information at various resolutions of the input. * Modern SBGMs also propose score matching over _many_ different noise scales. Both large and small scales are important, since larger ones make it easier to cover all modes and smaller ones are closer to the score of the actual data distribution. All of these noise scales are learned within the same network \(s_{\theta}(\mathbf{x})\).
At generation time, these noise scales are combined to give rise to an annealed version of Langevin MCMC which iterates from larger noise scales to smaller ones. We note that a modern SBGM can be constructed by simply replacing the contrastive-based formulation of \(p_{\theta}(\mathbf{x})\) in the decoupled COMs variant (Section 2.3) with one that has been trained with score matching as per Song and Ermon (2019). This, combined with an external classifier \(f_{\omega}\), becomes very reminiscent of the 'classifier guidance' style of techniques introduced in Dhariwal and Nichol (2021); Ho and Salimans (2022).

## 5 Conclusion

In this work, we showed that COMs, a highly performant algorithm for offline model-based optimisation, is essentially an energy-based model trained via contrastive divergence. COMs use the same energy model to parameterise both the unconditional and conditional parts of the data distribution (\(p_{\theta}(\mathbf{x})\) and \(p(y|\mathbf{x})\), respectively), and this also means that the model has a special property in which the probability of sampling an input is _proportional_ to its predicted reward. In this work we identified two shortcomings with the original formulation: firstly, a gradient ascent sampler is used, which limits sample diversity; and secondly, the parameterisation of both distributions with the same network hinders conditional sampling quality, as demonstrated on a toy 2D spiral dataset. We address both of these issues with a 'decoupled' variant of COMs which models the conditional and unconditional parts of the joint distribution separately, as well as uses a Langevin MCMC sampler which correctly samples from the learned distribution. Lastly, we contribute a brief discussion comparing the training dynamics of COMs with more recent energy-based models which are trained with score matching.

| Method | Joint density | Training algorithm for \(p_{\theta}(\mathbf{x})\) | Sampling algorithm for \(p_{\theta}(\mathbf{x})\) |
| --- | --- | --- | --- |
| COMs (Sec. 2) | \(p_{\theta}(\mathbf{x},y;\alpha)\propto p_{\theta}(\mathbf{x})^{\alpha}p_{\theta}(y\mid\mathbf{x})\) | Contrastive divergence, approximate\(\dagger\) (Eqn. 11) | Gradient ascent\(\dagger\) (Eqn. 12) |
| Stochastic COMs (Sec. 2.2) | \(p_{\theta}(\mathbf{x},y;\alpha)\propto p_{\theta}(\mathbf{x})^{\alpha}p_{\theta}(y\mid\mathbf{x})\) | Contrastive divergence (Eqn. 11) | Langevin MCMC (Eqn. 15) |
| Decoupled COMs (Sec. 2.3) | \(p_{\theta,\omega}(\mathbf{x},y;w)\propto p_{\theta}(\mathbf{x})\exp(wf_{\omega}(\mathbf{x}))p_{\theta}(y\mid\mathbf{x})\) | Contrastive divergence | Langevin MCMC (Eqn. 18) |

Table 1: Summary of the three variants of COMs in this work. _Stochastic_ (Sec. 2.2) and _decoupled_ (Sec. 2.3) variants are proposed methods that address particular issues in the original formulation.
Note that for all generated samples shown in Figures 2, 3 and 4, either \(p_{\theta}(\mathbf{x})\) is used (original, _stochastic_) or the exponentially tilted variant \(p_{\theta}(\mathbf{x})\exp(wf_{\omega}(\mathbf{x}))\) (_decoupled_). \(\dagger\) = samples converge to a maximum a posteriori solution, so we are not truly sampling from \(p_{\theta}(\mathbf{x})\) (this is addressed in the stochastic variant).
2305.02689
Unit fractions with shifted prime denominators
We prove that any positive rational number is the sum of distinct unit fractions with denominators in $\{p-1 : p\textrm{ prime}\}$. The same conclusion holds for the set $\{p-h : p\textrm{ prime}\}$ for any $h\in\mathbb{Z}\backslash\{0\}$, provided a necessary congruence condition is satisfied. We also prove that this is true for any subset of the primes of relative positive density, provided a necessary congruence condition is satisfied.
Thomas F. Bloom
2023-05-04T10:06:34Z
http://arxiv.org/abs/2305.02689v1
# Unit fractions with shifted prime denominators ###### Abstract. We prove that any positive rational number is the sum of distinct unit fractions with denominators in \(\{p-1:p\text{ prime}\}\). The same conclusion holds for the set \(\{p-h:p\text{ prime}\}\) for any \(h\in\mathbb{Z}\backslash\{0\}\), provided a necessary congruence condition is satisfied. We also prove that this is true for any subset of the primes of relative positive density, provided a necessary congruence condition is satisfied. The study of decompositions of rational numbers into sums of distinct unit fractions (often called 'Egyptian fractions') is one of the oldest topics in number theory (see [2] for further background and many related problems on Egyptian fractions). It is elementary to prove that such a decomposition is always possible, for instance by using a greedy algorithm. In this paper we explore a natural variant that imposes restrictions on the denominators in these decompositions. **Question 1**.: _For which \(A\subseteq\mathbb{N}\) is it true that every positive rational number can be written as \(\sum_{n\in B}\frac{1}{n}\) for some finite \(B\subset A\)?_ A trivial necessary condition is that the set contains multiples of every prime; for example, the set of all odd numbers does not have this property (it cannot represent \(\frac{1}{2}\)). The condition \(\sum_{n\in A}\frac{1}{n}=\infty\) is also clearly necessary, but not sufficient - indeed, it is easy to see that there is no solution to \(1=\sum_{p\in A}\frac{1}{p}\) where \(A\) is any finite set of primes, even though \(\sum_{p\leq N}\frac{1}{p}\sim\log\log N\). For sets \(A\) with no such trivial obstructions it is reasonable to speculate that an Egyptian fraction decomposition with denominators restricted to \(A\) always exists. An early seminal paper on this topic is by Graham [6], who proved a general result that implies, for example, that such a decomposition always exists when \(A\) is the set of all primes _and_ squares. Motivated by a conjecture of Sun [12, Conjecture 4.1], Eppstein [4] developed an alternative elementary method, which implies such a decomposition always exists when \(A\) is the set of 'practical numbers' (those \(n\) such that all \(m\leq n\) can be written as the sum of distinct divisors of \(n\)). A variant of Question 1 can be asked even when there are trivial obstructions. For example, Graham [6] has shown that every rational number \(x\) can be written as the sum of distinct unit fractions with square denominators, subject to the obvious necessary condition that \(x\in[0,\frac{\pi^{2}}{6}-1)\cup[1,\frac{\pi^{2}}{6})\), and every rational number \(x\) with square-free denominator can be written as the sum of distinct unit fractions with square-free denominators. A natural candidate of number theoretic interest, for which there exist no obvious obstructions to any rational decomposition, and for which the methods of [6] and [4] are not applicable, is the set of shifted primes \(\{p-1:p\text{ prime}\}\). That such a restricted Egyptian fraction decomposition always exists was conjectured by Sun [12, Conjecture 4.1] (see also [13, Conjecture 8.17] and [11] for some numerical data). In this paper we use the method of [1] to prove this conjecture: any positive rational \(x>0\) has a solution (indeed, infinitely many) to \[x=\frac{1}{p_{1}-1}+\cdots+\frac{1}{p_{k}-1}\] where \(p_{1}<\cdots<p_{k}\) are distinct primes. 
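As a concrete instance with \(h=1\): taking \(p\in\{3,5,7,13\}\) gives \(1=\frac{1}{2}+\frac{1}{4}+\frac{1}{6}+\frac{1}{12}\). Small decompositions of this kind can be found by brute force, as in the following illustrative sketch (which, of course, proves nothing about the general case):

```python
from fractions import Fraction
from itertools import combinations
from sympy import primerange

def decompositions(x, h=1, prime_bound=50, max_terms=5):
    """Brute-force search for x = 1/(p_1 - h) + ... over distinct primes p_i.

    Denominators p - h <= 1 are excluded (p = 2 gives the trivial 1/1 = 1).
    """
    denoms = [p - h for p in primerange(2, prime_bound) if p - h > 1]
    target = Fraction(x)
    for k in range(1, max_terms + 1):
        for combo in combinations(denoms, k):
            if sum(Fraction(1, d) for d in combo) == target:
                yield combo

# next(decompositions(1)) -> (2, 4, 6, 12), i.e. the primes 3, 5, 7, 13
```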
We also prove a similar result with denominators \(p_{i}-h\) for any (fixed) \(h\neq 0\), although for \(\left|h\right|>1\) there are some trivial congruence obstructions - for example, since no subset of \(\left\{p+2:p\text{ prime}\right\}\) has lowest common multiple divisible by \(8\), the fraction \(\frac{1}{8}\) cannot be represented as the sum of distinct unit fractions of the shape \(\frac{1}{p+2}\). We deduce this existence result from the following more general result, showing that any shifted set of primes, all divisible by \(q\), of 'positive upper relative logarithmic density' contains a decomposition of \(\frac{1}{q}\). (Recall that \(\sum_{p\leq N}\frac{1}{p}\sim\log\log N\), and so it is natural to consider \(\sum_{\begin{subarray}{c}p\leq N\\ p\in A\end{subarray}}\frac{1}{p}\) divided by \(\log\log N\) as a measure of the size of \(A\).)

**Theorem 1**.: _Let \(h\in\mathbb{Z}\backslash\{0\}\) and \(q\geq 1\) be such that \(\left(\left|h\right|,q\right)=1\). If \(A\) is a set of primes congruent to \(h\pmod{q}\) such that_ \[\limsup_{N\to\infty}\frac{\sum_{p\in A\cap[1,N]}\frac{1}{p}}{\log\log N}>0\] _then there exists a finite \(S\subset A\) such that_ \[\frac{1}{q}=\sum_{p\in S}\frac{1}{p-h}.\]

A simple application of partial summation produces the following version with (relative) upper logarithmic density replaced by (relative) lower density.

**Corollary 1**.: _Let \(h\in\mathbb{Z}\backslash\{0\}\) and \(q\geq 1\) be such that \(\left(\left|h\right|,q\right)=1\). If \(A\) is a set of primes congruent to \(h\pmod{q}\) with positive relative lower density, that is,_ \[\liminf_{N\to\infty}\frac{\left|A\cap[1,N]\right|}{N/\log N}>0,\] _then there exists a finite \(S\subset A\) such that_ \[\frac{1}{q}=\sum_{p\in S}\frac{1}{p-h}.\]

We remark that (unlike the statement for unrestricted sets of integers, see [1, Theorem 2]) the stronger version of Corollary 1 with the \(\liminf_{N\to\infty}\) replaced by \(\limsup_{N\to\infty}\) is false - for example, if \(A[N]\) is the set of primes in \([N/2,N]\) then \(\sum_{p\in A[N]}\frac{1}{p}\ll\frac{1}{\log N}\), and hence if \(A=\cup_{k}A[N_{k}]\) where \(N_{k}=2^{k^{C}}\) for some large absolute constant \(C>0\) then \(\sum_{n\in A}\frac{1}{n}<1\), and hence certainly we cannot find a finite \(S\subset A\) such that \(\sum_{n\in S}\frac{1}{n}=1\), and yet \[\limsup_{N\to\infty}\frac{\left|A\cap[1,N]\right|}{N/\log N}\geq 1/2.\]

We now show how Theorem 1 implies the headline result: any positive rational number (subject to the necessary congruence conditions) can be written as the sum of distinct unit fractions with shifted prime denominators.

**Corollary 2**.: _Let \(h\in\mathbb{Z}\backslash\{0\}\) and \(x=r/q\in\mathbb{Q}_{>0}\) be such that \(\left(\left|h\right|,q\right)=1\). There are distinct primes \(p_{1},\ldots,p_{k}\) such that_ \[x=\frac{1}{p_{1}-h}+\cdots+\frac{1}{p_{k}-h}.\]

Proof.: By Dirichlet's theorem (see for example [10, Corollary 4.12]) if \(A\) is the set of primes congruent to \(h\pmod{q}\), then \[\sum_{n\in A\cap[1,N]}\frac{1}{n}\geq(\tfrac{1}{\phi(q)}+o(1))\log\log N.\] Trivially the same must hold for \(A\backslash B\), for any finite set \(B\). In particular by \(r\) repeated applications of Theorem 1 (first to \(A\), then \(A\backslash S_{1}\), and so on) we can find \(r\) disjoint finite sets \(S_{1},\ldots,S_{r}\subset A\) such that \[\frac{1}{q}=\sum_{p\in S_{i}}\frac{1}{p-h}\] for \(1\leq i\leq r\). It follows that \[x=\frac{r}{q}=\sum_{p\in\bigcup S_{i}}\frac{1}{p-h}\] as required.
We prove Theorem 1 with an application of the author's earlier work [1] (which in turn is a stronger form of an argument of Croot [3]). Loosely speaking, the main result of [1] shows that we can solve \(1=\sum\frac{1}{n_{i}}\) with \(n_{i}\in A\) whenever \(A\) satisfies 1. \(\sum_{n\in A}\frac{1}{n}\to\infty\), 2. every \(n\in A\) is 'friable' (or 'smooth'), in that if a prime power \(q\) divides \(n\) then \(q\leq n^{1-\delta(n)}\) for some \(0<\delta(n)=o(1)\), 3. every \(n\in A\) has 'small divisors', and 4. every \(n\in A\) has \(\approx\log\log n\) many distinct prime divisors. To prove Theorem 1, therefore, it suffices to show that the set \(\{\frac{p-h}{q}:p\in A\}\) has these properties. Fortunately, there has been a great deal of study of the arithmetic properties of shifted primes, and so using classical techniques from analytic number theory we are able to find a large subset of our original set \(A\) satisfying all four properties. For experts in analytic number theory we add that in establishing the necessary number theoretic facts about shifted primes we have followed the simplest path, forgoing many of the more elaborate refinements possible. The main observation of this paper is that the inputs required to the method of [1] are mild enough to be provable for the shifted primes using (a crude form of) existing technology. To minimise technicalities we have proved only a qualitative form of Theorem 1. In principle a (very weak) quantitative version could be proved with the same methods, along similar lines to [1, Theorem 3], but this would complicate the presentation significantly. Finally, the methods and main results of [1] have now been formally verified using the Lean proof assistant, in joint work with Bhavik Mehta.1 This formalisation has not been extended to the present work, but since the proof of Theorem 1 uses the main result of [1] as its primary ingredient (combined with classical number theory) it can be viewed as 'partially formally verified'. In Section 1 we prove Theorem 1 assuming certain number theoretic lemmas. In Section 2 we prove these lemmas.

### Acknowledgements

The author is funded by a Royal Society University Research Fellowship. We would like to thank Greg Martin for a helpful conversation about friable values of shifted primes and remarks on an earlier version of this paper.

## 1. Proof of Theorem 1

Our main tool is the following slight variant of [1, Proposition 1] (which is identical to the below except that the exponent of \(c\) is replaced by \(1/\log\log N\)).

**Proposition 1**.: _Let \(c\in(0,1/4)\) and \(N\) be sufficiently large (depending only on \(c\)). Suppose \(A\subset[N^{1-c},N]\) and \(1\leq y\leq z\leq(\log N)^{1/500}\) are such that_ 1. \(\sum_{n\in A}\frac{1}{n}\geq 2/y+(\log N)^{-1/200}\)_,_ 2. _every_ \(n\in A\) _is divisible by some_ \(d_{1}\) _and_ \(d_{2}\) _where_ \(y\leq d_{1}\) _and_ \(4d_{1}\leq d_{2}\leq z\)_,_ 3. _every prime power_ \(q\) _dividing some_ \(n\in A\) _satisfies_ \(q\leq N^{1-4c}\)_, and_ 4. _every_ \(n\in A\) _satisfies_ \[\tfrac{99}{100}\log\log N\leq\omega(n)\leq 2\log\log N.\] _There is some_ \(S\subseteq A\) _such that_ \(\sum_{n\in S}\frac{1}{n}=1/d\) _for some_ \(d\in[y,z]\)_._

Proof.: The proof is identical to that of [1, Proposition 1], except that in the final part of the proof we choose \(M=N^{1-c}\). Observe that the inputs to that proof, namely [1, Proposition 2, Proposition 3, and Lemma 7], are valid for any \(M\in(N^{3/4},N)\).
It remains to check the 'friable' hypothesis, for which we require that if \(n\in A\) and \(q\) is a prime power with \(q\mid n\) then, for some small absolute constant \(c>0\), \[q\leq c\min\left(\frac{M}{z},\frac{M}{(\log N)^{1/100}},\frac{M^{3}}{N^{2-4/ \log\log N}(\log N)^{2+1/50}}\right).\] For \(N\) sufficiently large (depending only on \(c\)) the right-hand side is \(>N^{1-4c}\), and so hypothesis (3) suffices. It is convenient to recast this in a slightly different form. **Proposition 2**.: _Let \(\delta,\epsilon>0\) and suppose \(y\) is sufficiently large depending on \(\delta\) and \(\epsilon\), and \(y\leq w\leq z\). If \(N\) is sufficiently large (depending on \(\delta,\epsilon,y,w,z\)) and \(A\subset[2,N]\) is such that for all \(n\in A\)_ 1. _if a prime power_ \(q\) _divides_ \(n\) _then_ \(q\leq n^{1-\epsilon}\)_,_ 2. \(|\omega(n)-\log\log n|\leq\log\log n/1000\)_,_ 3. \(n\) _is divisible by some_ \(d_{1}\in[y,w)\)_,_ 4. \(n\) _is divisible by some_ \(d_{2}\in[4w,z)\)_, and_ 5. \(\sum_{n\in A}\frac{1}{n}\geq\delta\log\log N\)_,_ _then there exists \(S\subseteq A\) such that \(\sum_{n\in S}\frac{1}{n}=1/d\) for some \(d\leq z\)._ Proof.: For \(i\geq 0\) let \(N_{i}=N^{(1-\epsilon/4)^{i}}\), and let \(A_{i}=A\cap(N_{i+1},N_{i}]\). Note that \(N_{i}<2\) for \(i\geq C\log\log N\), where \(C\) is some sufficiently large constant depending only on \(\epsilon\). Since \(\sum_{n\leq\log\log N}\frac{1}{n}\ll\log\log\log N\) it follows by the pigeonhole principle that there must exist some \(i\) such that with \(A^{\prime}=A_{i}\) and \(N^{\prime}=N_{i}\gg\log\log N\) we have \[\sum_{n\in A^{\prime}}\frac{1}{n}\gg_{\delta,\epsilon}1\] and \(A^{\prime}\subset((N^{\prime})^{1-\epsilon/4},N^{\prime}]\). It suffices to verify that the assumptions of Proposition 1 are satisfied by \(A^{\prime}\), with \(c=\epsilon/4\). We have already verified the first assumption (assuming \(y\) and \(N\) are sufficiently large; note that since \(N^{\prime}\gg\log\log N\) this ensures that \(N^{\prime}\) is also sufficiently large). The second assumption of Proposition 1 is ensured by conditions (3) and (4). For the third assumption, note that by condition (1) if \(n\in A^{\prime}\) is divisible by a prime power \(q\) then \[q\leq n^{1-\epsilon}\leq(N^{\prime})^{1-\epsilon}\] as required. Finally the fourth assumption follows from condition (2) and noting that for all \(n\in[(N^{\prime})^{1-\epsilon/4},N^{\prime}]\) we have \[\log\log n=\log\log N^{\prime}+O_{\epsilon}(1),\] and the \(O_{\epsilon}(1)\) term is \(\leq\log\log N^{\prime}/500\), say, provided we take \(N\) sufficiently large. To prove Theorem 1 we want to apply Proposition 2 to \(B=\{\frac{p-h}{q}:p\in A\}\). To verify the hypotheses we will require the following number-theoretic lemmas. We were unable to find these exact statements in the literature, so have included proofs in the following section, but the proofs are all elementary and cover well-trodden ground. 
**Lemma 1**.: _For any \(\epsilon>0\) and \(h\in\mathbb{Z}\backslash\{0\}\) the relative density of primes \(p\) such that \(n=p-h\) is divisible by a prime power \(q>n^{1-\epsilon}\) is \(O_{h}(\epsilon)\)._

**Lemma 2**.: _For any \(\delta>0\) and \(h\in\mathbb{Z}\backslash\{0\}\) the relative density of primes \(p\) such that \(n=p-h\) has_ \[|\omega(n)-\log\log(n)|\geq\delta\log\log n\] _is \(0\)._

**Lemma 3**.: _For any \(h\in\mathbb{Z}\backslash\{0\}\), if \(4\leq y<z\) the relative density of primes \(p\) such that \(n=p-h\) is not divisible by any primes \(q\in[y,z]\) is \(O_{h}(\log y/\log z)\)._

We will now show how these lemmas, combined with Proposition 2, imply Theorem 1.

Proof of Theorem 1.: By assumption there is some \(\delta>0\) and infinitely many \(N\) such that \[\sum_{p\in A\cap[1,N]}\frac{1}{p}\geq 4\delta\log\log N.\] Let \(B=\{\frac{p-h}{q}:p\in A\}\subset\mathbb{N}\), so that there must exist infinitely many \(N\) such that \[\sum_{n\in B\cap[1,N]}\frac{1}{n}\geq 3\delta\log\log N.\] Let \(\epsilon=c\delta\) where \(c>0\) is some small absolute constant to be determined later. Let \(y\) be sufficiently large in terms of \(\delta\) (so that Proposition 2 can apply) and \(w\leq z\) be determined shortly, and let \(B^{\prime}\subseteq B\) be the set of those \(n\in B\) such that 1. if a prime power \(r\) divides \(n\) then \(r\leq n^{1-\epsilon}\), 2. \(|\omega(n)-\log\log n|\leq\log\log n/1000\), 3. \(n\) is divisible by some prime \(p_{1}\in[y,w)\), and 4. \(n\) is divisible by some prime \(p_{2}\in[4w,z)\). If \(X_{1}\) is the set of \(m=p-h\) which are divisible by some prime power \(r>m^{1-2\epsilon}\) then by Lemma 1 we have \[|X_{1}\cap[1,N]|\ll\epsilon\frac{N}{\log N},\] and hence, since for all large primes \(p\) we have \((p-h)^{1-2\epsilon}\leq(\frac{p-h}{q})^{1-\epsilon}\), the set \(B_{1}\) of those \(n\in B\) which fail the first condition satisfies \[|B_{1}\cap[1,N]|\ll\epsilon\frac{N}{\log N},\] whence by partial summation \[\sum_{n\in B_{1}\cap[1,N]}\frac{1}{n}\ll\epsilon\log\log N.\] By a similar argument (recalling that \(q\) is some fixed constant, and so \(\omega(\frac{p-h}{q})=\omega(p-h)+O(1)\) and \(\log\log(\frac{p-h}{q})=\log\log(p-h)+O(1)\)), Lemma 2 implies that the sum of reciprocals from those \(n\in B\cap[1,N]\) which fail the second condition is \(o(\log\log N)\). Similarly, by Lemma 3 we can choose \(w\) and \(z\) (depending only on \(\delta\)) such that for all large \(N\) the sum of reciprocals from those \(n\in B\cap[1,N]\) which fail either condition (3) or (4) is \(\leq\delta\log\log N\). Therefore, there exist infinitely many \(N\) such that (provided \(\epsilon\) is a small enough multiple of \(\delta\)) \[\sum_{n\in B^{\prime}\cap[1,N]}\frac{1}{n}\geq 2\delta\log\log N.\] Fix such an \(N\) and let \(B^{\prime\prime}=B^{\prime}\cap[1,N]\). All of the conditions from Proposition 2 are now satisfied for \(B^{\prime\prime}\), and hence there exists some \(S_{1}\subseteq B^{\prime\prime}\) and \(d_{1}\leq z\) such that \(\sum_{n\in S_{1}}\frac{1}{n}=\frac{1}{d_{1}}\). We now apply Proposition 2 again to \(B^{\prime\prime}\backslash S_{1}\), and continue this process \(k=\lceil z\rceil^{2}\) many times, producing some disjoint \(S_{1},\ldots,S_{k}\) and associated \(d_{1},\ldots,d_{k}\leq z\) where \(\sum_{n\in S_{i}}\frac{1}{n}=\frac{1}{d_{i}}\) for \(1\leq i\leq k\).
Notice that the conditions of Proposition 2 remain satisfied for each \(B^{\prime\prime}\backslash\cup_{i\leq j}S_{i}\) for \(j\leq k\), since

\[\sum_{n\in\cup_{i\leq j}S_{i}}\frac{1}{n}\leq k\ll z^{2}<\delta\log\log N,\]

assuming \(N\) is sufficiently large, since \(z\) depends on \(\delta\) only. By the pigeonhole principle there must exist some \(d\leq z\) and \(i_{1},\ldots,i_{d}\) such that \(d_{i_{j}}=d\) for \(1\leq j\leq d\), and hence \(S=\cup_{1\leq j\leq d}S_{i_{j}}\) satisfies

\[\sum_{n\in S}\frac{1}{n}=d\cdot\frac{1}{d}=1\]

as required.

## 2. Number theoretic ingredients

It remains to prove Lemmas 1, 2, and 3, which we will do in turn.

### Friability of shifted primes

There has been a great deal of work on shifted primes with only small prime divisors. Often the focus is on an existence result, finding the smallest possible \(\delta>0\) such that there exist infinitely many shifted primes \(p-1\) with no prime divisors \(>p^{\delta}\). We refer to [9] for recent progress on this and references to earlier work. Our focus is a little different: we are content with a very high friability threshold, but we need to show that almost all shifted primes are this friable. For the regime of friability that we are interested in, even the original elementary methods of Erdős [5] suffice.

Proof of Lemma 1.: This is only a slight generalisation of [5, Lemma 4]. It suffices to show that, for all \(\epsilon>0\) and large \(N\), the number of \(p\leq N\) such that \(p-h\) is divisible by some prime power \(q\) with \(q>N^{1-\epsilon}\) is

\[\ll_{h}\epsilon\frac{N}{\log N}.\]

We first note that trivially for any \(q\) the number of \(p\leq N\) such that \(p-h\) is divisible by \(q\) is certainly \(O_{h}(N/q)\), and hence the count of those \(p-h\) divisible by some non-prime prime power \(q>N^{1-\epsilon}\) is

\[\ll_{h}N\sum_{k\geq 2}\sum_{N^{1-\epsilon}\leq m^{k}\leq N}\frac{1}{m^{k}}\ll N^{\epsilon}\ll\epsilon\frac{N}{\log N}\]

for all large \(N\). It remains to bound the count of those \(p\leq N\) such that \(p-h\) is divisible by some prime \(q>N^{1-\epsilon}\). Such \(p-h\) we can write uniquely (assuming \(N\) is large enough depending on \(h\)) as \(p-h=qa\) for some \(a\leq 2N^{\epsilon}\) and \(q>N^{1-\epsilon}\) prime. A simple application of Selberg's sieve (for example [8, Theorem 3.12]) yields that, for any fixed \(a\geq 1\) and \(h\neq 0\), the number of primes \(q\leq x\) such that \(aq+h\) is also prime is

\[\ll_{h}\frac{a}{\phi(a)}\frac{x}{(\log x)^{2}}.\]

Since \(q\leq N/a+O_{h}(1)\), the number of \(p\leq N\) such that \(p-h=qa\) is

\[\ll\frac{1}{\phi(a)}\frac{N}{(\log N)^{2}}.\]

Summing over all \(a\leq 2N^{\epsilon}\) the total count is

\[\ll_{h}\frac{N}{(\log N)^{2}}\sum_{a\leq 2N^{\epsilon}}\frac{1}{\phi(a)}\ll\epsilon\frac{N}{\log N}\]

as required, using the fact that \(\sum_{a\leq M}\frac{1}{\phi(a)}\ll\log M\).

### Number of prime divisors of shifted primes

We need to know that \(\omega(n)\sim\log\log n\) for almost all \(n\in\{p-h:p\text{ prime}\}\). This is in fact the typical behaviour of \(\omega(n)\) for a generic integer \(n\), and we expect the same behaviour when restricting \(n\) to the random-like sequence of shifted primes. Indeed, just like \(\omega(n)\) itself, \(\omega(p-h)\) satisfies an Erdős-Kac theorem; that is, \(\omega(p-h)\) behaves like a normal random variable with mean \(\log\log(p-h)\) and standard deviation \(\sqrt{\log\log(p-h)}\). This was established by Halberstam [7], although a simple variance bound suffices for our application here.
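Before turning to the proof, a brief numerical aside (ours, not part of the argument): the normal order of \(\omega(p-h)\) is easy to probe by brute force. The following Python sketch sieves \(\omega\) up to a small cutoff and reports the empirical mean and variance of \(\omega(p-h)\) over primes \(p\), next to \(\log\log N\); the cutoff and shift are illustrative choices.

```python
import math

def sieves(N):
    """Primality table and omega[n] = number of distinct prime factors of n."""
    is_prime = [True] * (N + 1)
    is_prime[0] = is_prime[1] = False
    omega = [0] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(2 * p, N + 1, p):
                is_prime[m] = False
            for m in range(p, N + 1, p):
                omega[m] += 1
    return is_prime, omega

N, h = 10**6, 1
is_prime, omega = sieves(N)
vals = [omega[p - h] for p in range(h + 2, N + 1) if is_prime[p]]
mean = sum(vals) / len(vals)
var = sum((v - mean) ** 2 for v in vals) / len(vals)
# the mean tracks log log N up to O(1), and the variance is of the same
# order, consistent with the variance bound quoted in the proof below
print(mean, var, math.log(math.log(N)))
```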
Proof of Lemma 2.: It suffices to show that, for all \(\delta>0\) and large \(N\), if \(A\) is the set of \(p\leq N\) such that \(\left|\omega(p-h)-\log\log(p-h)\right|>\delta\log\log(p-h)\), then

\[\left|A\right|\ll\frac{N}{(\log N)(\log\log N)}.\]

Let \(A_{1}=A\cap[1,N^{1/2}]\) and \(A_{2}=A\backslash A_{1}\). We can trivially bound \(\left|A_{1}\right|\ll N^{1/2}\), and for \(p\in A_{2}\) we have \(\log\log(p-h)=\log\log N+O(1)\), whence for large enough \(N\) if \(p\in A_{2}\) we have

\[\left|\omega(p-h)-\log\log N\right|>\tfrac{\delta}{2}\log\log N.\]

By [7, Theorem 3], however, we have

\[\sum_{p\leq N}\left|\omega(p-h)-\log\log N\right|^{2}\ll\pi(N)\log\log N,\]

and hence

\[\left|A_{2}\right|(\log\log N)^{2}\ll_{\delta}\pi(N)\log\log N,\]

and the result now follows from Chebyshev's estimate \(\pi(N)\ll N/\log N\).

### Shifted primes with small divisors

For Lemma 3 we need to show that there are few shifted primes remaining after we remove all multiples of primes \(p\in[y,z]\), which is a classic upper bound sieve problem. Since the information we require is very weak even the simplest sieve suffices: the following is proved as [8, Theorem 1.1].

**Lemma 4** (Sieve of Eratosthenes-Legendre).: _Let \(A\) be a finite set of integers and \(\mathcal{P}\) a finite set of primes. Let \(z\geq 2\) and \(P(z)=\prod_{\begin{subarray}{c}p\in\mathcal{P}\\ p<z\end{subarray}}p\). Suppose that \(f(d)\) is a multiplicative function and \(X>1\) is such that for all \(d\mid P(z)\) we have_

\[\left|A_{d}\right|=f(d)X+R_{d},\]

_where \(A_{d}=\{n\in A:d\mid n\}\). Then_

\[\#\{n\in A:(n,P(z))=1\}\ll X\prod_{\begin{subarray}{c}p\in\mathcal{P}\\ p<z\end{subarray}}\left(1-f(p)\right)+\sum_{d\mid P(z)}\left|R_{d}\right|.\]

For the required sieve input we will use the following classic result on the distribution of primes within arithmetic progressions (which is proved, for example, as [10, Corollary 11.21]). Recall that \(\pi(N;d,h)\) is the number of primes \(p\leq N\) such that \(p\equiv h\pmod{d}\).

**Theorem 2** (Siegel-Walfisz).: _There is a constant \(c>0\) such that for all \(h\in\mathbb{Z}\) and \(1\leq d\leq\log N\) with \((\left|h\right|,d)=1\) we have_

\[\pi(N;d,h)=\frac{\mathrm{li}(N)}{\phi(d)}+O(N\exp(-c\sqrt{\log N})).\]

Proof of Lemma 3.: Fix \(4\leq y<z\) and let \(P=\prod_{\begin{subarray}{c}y\leq q\leq z\\ q\nmid h\end{subarray}}q\) (where \(q\) is restricted to primes). It suffices to show that, for all large \(N\),

\[\#\{p-h\leq N:(p-h,P)=1\}\ll_{h}\frac{\log y}{\log z}\mathrm{li}(N).\]

We will apply Lemma 4 with \(A=\{p-h:p\leq N\}\),

\[\mathcal{P}=\{p\in[y,z]:p\nmid h\},\]

\(f(d)=1/\phi(d)\), and \(X=\operatorname{li}(N)\), noting that by Theorem 2 whenever \((d,h)=1\) and \(d\leq\log N\)

\[|A_{d}|=\pi(N;d,h)=\frac{\operatorname{li}(N)}{\phi(d)}+O(N\exp(-c\sqrt{\log N})).\]

It follows that

\[\#\{p-h\leq N:(p-h,P)=1\}\ll\operatorname{li}(N)\prod_{\begin{subarray}{c}y\leq q\leq z\\ q\nmid h\end{subarray}}\left(1-\frac{1}{q-1}\right)+2^{z}N\exp(-c\sqrt{\log N}).\]

The conclusion now follows provided we choose \(N\) large enough so that \(z\ll\sqrt{\log\log N}\), say, and using Mertens' estimate that

\[\prod_{p\leq w}(1-1/p)\asymp\frac{1}{\log w}.\]
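As a final aside (again ours, purely illustrative): Lemma 3 can be probed directly by brute force at small scale. The sketch below computes the proportion of primes \(p\leq N\) for which \(p-h\) avoids every prime in \([y,z]\), alongside \(\log y/\log z\); the parameter values are arbitrary assumptions for the experiment.

```python
import math

def prime_sieve(N):
    is_prime = [True] * (N + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(N**0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, N + 1, p):
                is_prime[m] = False
    return is_prime

N, h, y, z = 10**5, 1, 10, 1000
is_prime = prime_sieve(N)
mid = [q for q in range(y, z + 1) if is_prime[q] and h % q != 0]
avoiding = total = 0
for p in range(2, N + 1):
    if is_prime[p] and p - h > 1:
        total += 1
        if all((p - h) % q for q in mid):  # p - h avoids all primes in [y, z]
            avoiding += 1
# Lemma 3 predicts the first ratio is O_h(log y / log z)
print(avoiding / total, math.log(y) / math.log(z))
```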
2310.06659
On the average number of cycles in conjugacy class products
We show that for the product of two fixed point free conjugacy classes, the average number of cycles is always very similar. Specifically, our main result is that for a randomly chosen pair of fixed point free permutations of cycle types $\alpha$ and $\beta$, the average number of cycles in their product is between $H_n-3$ and $H_n+1$, where $H_n$ is the harmonic number.
Jesse Campion Loth, Amarpreet Rattan
2023-10-10T14:34:30Z
http://arxiv.org/abs/2310.06659v1
# On the average number of cycles in conjugacy class products

###### Abstract.

We show that for the product of two fixed point free conjugacy classes, the average number of cycles is always very similar. Specifically, our main result is that for a randomly chosen pair of fixed point free permutations of cycle types \(\alpha\) and \(\beta\), the average number of cycles in their product is between \(H_{n}-3\) and \(H_{n}+1\), where \(H_{n}\) is the harmonic number.

## 1. Introduction

Let \(n\) be a positive integer. For any finite set \(A\), let \(\mathfrak{S}_{A}\) be the symmetric group on \(A\). We also use the notation \([n]:=\{1,\ldots,n\}\), and \(\mathfrak{S}_{n}:=\mathfrak{S}_{[n]}\). A _partition of \(n\)_ is a list \(\alpha=(\alpha_{1},\ldots,\alpha_{k})\) of positive integers such that \(\alpha_{i}\geq\alpha_{i+1}\) and \(\sum_{i}\alpha_{i}=n\). Each \(\alpha_{i}\) is called a _part_, and the _length_ of \(\alpha\), denoted \(\ell(\alpha)\), is the number of parts (thus \(\ell(\alpha)=k\) above). For each \(0\leq j\leq\ell(\alpha)\), we define \(\alpha^{\prime}_{j}=\sum_{i=1}^{j}\alpha_{i}\), with \(\alpha^{\prime}_{0}=0\). We let \(C_{\alpha}\) be the conjugacy class of \(\mathfrak{S}_{n}\) indexed by \(\alpha\). We call the permutation \((1\ \cdots\ \alpha^{\prime}_{1})\cdots(\alpha^{\prime}_{\ell(\alpha)-1}+1\ \cdots\ n)\) the _canonical permutation of type \(\alpha\)_. We further let \(H_{n}:=\sum_{i=1}^{n}\frac{1}{i}\) be the \(n^{\text{th}}\) _harmonic number_.

We start by defining the main statistic of study. For two partitions \(\alpha,\beta\vdash n\), let \(C_{\alpha,\beta}\) be the random variable for the number of cycles in \(\sigma\cdot\omega\), where \(\sigma\) and \(\omega\) are chosen uniformly at random from the conjugacy classes \(C_{\alpha}\) and \(C_{\beta}\), respectively. The goal of this manuscript is to estimate the expected value of \(C_{\alpha,\beta}\) when \(\alpha\) and \(\beta\) are partitions without parts equal to \(1\).

We study this statistic through maps. Let \(\alpha\) and \(\beta\) be partitions without parts of size \(1\) of the same positive integer \(n\), and let \(\sigma_{0}\) and \(\omega_{0}\) be the canonical permutations of type \(\alpha\) and \(\beta\), respectively. For a permutation \(\pi\in\mathfrak{S}_{n}\), we define a _map_ \(m_{\pi}=(D,R,E(\pi))\), where \(D\) is a set and \(R\) and \(E(\pi)\) are permutations given as follows. The set \(D=S\cup T\) is the set of _darts_, where \(S=\{s_{1},\ldots,s_{n}\}\) and \(T=\{t_{1},\ldots,t_{n}\}\). The set of darts can be thought of as a set of half-edges in a bipartite graph whose left side (right side) has \(\ell(\alpha)\ (\ell(\beta))\) vertices, and the \(i^{\text{th}}\) vertex on the left side (right side) has the darts \(s_{\alpha^{\prime}_{i}+1},\ldots,s_{\alpha^{\prime}_{i+1}}\ (t_{\beta^{\prime}_{i}+1},\ldots,t_{\beta^{\prime}_{i+1}})\) at it. The permutation \(E(\pi)\) is a fixed point free involution on the set \(D\) given by

\[E(\pi):=(s_{1}\,t_{\pi(1)})(s_{2}\,t_{\pi(2)})\ldots(s_{n}\,t_{\pi(n)}).\]

Thus \(E(\pi)\) is in \(\mathfrak{S}_{D}\) and gives a perfect matching between the sets \(S\) and \(T\); the involution \(E(\pi)\) indicates that an edge is placed between the darts \(s_{i}\) and \(t_{\pi(i)}\) for each \(i\), giving a bipartite graph \(G\).
The permutation \(R\) is also in \(\mathfrak{S}_{D}\) and sends

\[s_{i}\to s_{\sigma_{0}(i)}\text{ and }t_{i}\to t_{\omega_{0}(i)}; \tag{1}\]

that is,

\[R:=(s_{1}\,s_{2}\ldots s_{\alpha^{\prime}_{1}})(s_{\alpha^{\prime}_{1}+1},\ldots s_{\alpha^{\prime}_{2}})\ldots(s_{\alpha^{\prime}_{\ell(\alpha)-1}+1},\ldots,s_{n})\cdot\\ (t_{1}\,t_{2}\ldots t_{\beta^{\prime}_{1}})(t_{\beta^{\prime}_{1}+1},\ldots t_{\beta^{\prime}_{2}})\ldots(t_{\beta^{\prime}_{\ell(\beta)-1}+1},\ldots,t_{n}),\]

and its cycle type is \(\alpha\cup\beta\), where \(\alpha\cup\beta\) is the partition of \(2n\) with parts in \(\alpha\) or \(\beta\). The permutation \(R\), also called the _rotation scheme_, encodes an embedding of the bipartite graph \(G\); when drawn on an orientable surface, the cycles of \(R\) give the local clockwise order in which the darts appear at each vertex. Together \(m_{\pi}=(D,R,E(\pi))\) can be visualized as a 2-cell embedding of a graph on an orientable surface1; see Figure 1.

Footnote 1: A _2-cell embedding_ of a graph on an orientable surface is an embedding of a graph on a surface such that edges do not cross, and the removal of the graph from the surface leaves connected components each homeomorphic to a disc.

It is well known (see §4.2 of [10] and references therein) that these maps are in correspondence with products of permutations of type \(\alpha\) and \(\beta\). Faces in the map \(m_{\pi}\) are given by the cycles of \(R\cdot E(\pi)\), and they naturally correspond to the cycles in \(\sigma_{0}\pi\omega_{0}\pi^{-1}\): if \(\sigma_{0}\pi\omega_{0}\pi^{-1}\) has cycle type \(\lambda\), then \(R\cdot E(\pi)\) has cycle type \(2\lambda:=(2\lambda_{1},2\lambda_{2},\ldots)\), where the \(\lambda_{i}\) are the parts of \(\lambda\). The permutation \(\sigma_{0}\pi\omega_{0}\pi^{-1}\) can be recovered from \(R\cdot E(\pi)\) by simply ignoring the symbols that correspond to darts in \(T\) and interpreting the resulting permutation on \(S\) as acting on the symbols of \([n]\).2 Example 1 illustrates this correspondence.

Footnote 2: The claim about the connection between the cycles of \(\sigma_{0}\pi\omega_{0}\pi^{-1}\) and \(R\cdot E(\pi)\) remains true for any other fixed permutations \(\sigma\) and \(\omega\) in \(C_{\alpha}\) and \(C_{\beta}\), respectively, as long as the permutation \(R\) has cycles compatible with \(\sigma\) and \(\omega\) in the sense of (1).

**Example 1** (Permutation/map correspondence).: Let \(\alpha=(4,3)\) and \(\beta=(3,2,2)\). A map \(m_{\pi}=(D,R,E(\pi))\) is pictured in Figure 1 with \(\pi=(1)(2\,3\,5)(4\,7\,6)\in\mathfrak{S}_{7}\). Here \(D=S\cup T\) and

\[R =(s_{1}\,s_{2}\,s_{3}\,s_{4})(s_{5}\,s_{6}\,s_{7})\cdot(t_{1}\,t_{2}\,t_{3})(t_{4}\,t_{5})(t_{6}\,t_{7}),\]
\[E(\pi) =(s_{1}\,t_{1})(s_{2}\,t_{3})(s_{3}\,t_{5})(s_{4}\,t_{7})(s_{5}\,t_{2})(s_{6}\,t_{4})(s_{7}\,t_{6}).\]

Whence

\[R\cdot E(\pi)=(s_{1}\,t_{3})(s_{2}\,t_{5}\,s_{6}\,t_{6}\,s_{4}\,t_{1}\,s_{5}\,t_{4}\,s_{3}\,t_{7}\,s_{7}\,t_{2}).\]

Each cycle of \(R\cdot E(\pi)\) traces a face of \(m_{\pi}\) in Figure 1 via a walk in the following way. Pick a symbol, say \(t_{5}\), and begin a walk around the map while keeping the map to the right; begin at the vertex containing the dart \(t_{5}\) to get to \(t_{4}\); then walk along the edge connecting \(t_{4}\) to \(s_{6}\). The walk then, by definition, goes from \(t_{5}\) to \(s_{6}\). Continue by walking along the vertex containing the dart \(s_{6}\) to \(s_{7}\), and then walk along the edge connecting \(s_{7}\) to \(t_{6}\).
The walk then goes from \(s_{6}\) to \(t_{6}\). Hence the walk traces \(\cdots t_{5}\to s_{6}\to t_{6}\cdots\). Note the permutation \(R\cdot E(\pi)\) acts on those symbols in the same way. The canonical permutations \(\sigma_{0}\) and \(\omega_{0}\) of type \(\alpha\) and \(\beta\), respectively, are given by

\[\sigma_{0} =(1\,2\,3\,4)(5\,6\,7)\in C_{\alpha},\]
\[\omega_{0} =(1\,2\,3)(4\,5)(6\,7)\in C_{\beta}.\]

We compute

\[\sigma_{0}\pi\omega_{0}\pi^{-1} =(1\,2\,3\,4)(5\,6\,7)\cdot(1)(2\,3\,5)(4\,7\,6)\cdot(1\,2\,3)(4\,5)(6\,7)\cdot[(1)(2\,3\,5)(4\,7\,6)]^{-1}\]
\[=(1)(2\,6\,4\,5\,3\,7).\]

Note that \(\sigma_{0}\pi\omega_{0}\pi^{-1}\) can be obtained from \(R\cdot E(\pi)\) by removing the symbols in \(T\).

Thus studying the number of faces of bipartite maps is equivalent to studying \(C_{\alpha,\beta}\). We use this correspondence to estimate \(\mathbb{E}[C_{\alpha,\beta}]\) because it is easier to define our structures and substructures in maps. We formalise this now. We denote by \(M_{\alpha,\beta}\) the set of all such maps:

\[M_{\alpha,\beta}:=\{m_{\pi}:\pi\in\mathfrak{S}_{n}\}.\]

Let \(U\) be the uniform probability distribution on this set. Then \((M_{\alpha,\beta},U)\) is a probability space. We use \(F(\cdot)\) to denote the number of faces of a map. From above, we see that \(F(m_{\pi})=c(R\cdot E(\pi))\); that is, the number of faces in \(m_{\pi}\) is the number of cycles in the product \(R\cdot E(\pi)\). Define \(F_{\alpha,\beta}:=F\) as the random variable on \((M_{\alpha,\beta},U)\) for the number of faces in a map in \(M_{\alpha,\beta}\).

**Lemma 2**.: _For any positive integer \(n\) and partitions \(\alpha\) and \(\beta\) of \(n\),_

\[\mathbb{E}[F_{\alpha,\beta}]=\mathbb{E}[C_{\alpha,\beta}].\]

Proof.: Note that

\[\mathbb{E}[F_{\alpha,\beta}]=\frac{d}{dq}\Big{|}_{q=1}\frac{1}{n!}\sum_{\pi\in\mathfrak{S}_{n}}q^{c(R\cdot E(\pi))},\]

while

\[\mathbb{E}[C_{\alpha,\beta}]=\frac{d}{dq}\Big{|}_{q=1}\frac{1}{|C_{\alpha}||C_{\beta}|}\sum_{\begin{subarray}{c}\sigma\in C_{\alpha}\\ \omega\in C_{\beta}\end{subarray}}q^{c(\sigma\omega)}.\]

With \(\sigma_{0}\) and \(\omega_{0}\) as the canonical permutations of type \(\alpha\) and \(\beta\), respectively, we have

\[\frac{1}{n!}\sum_{m_{\pi}\in M_{\alpha,\beta}}q^{F(m_{\pi})} =\frac{1}{n!}\sum_{\pi\in\mathfrak{S}_{n}}q^{c(R\cdot E(\pi))}\]
\[=\frac{1}{n!}\sum_{\pi\in\mathfrak{S}_{n}}q^{c(\sigma_{0}\pi\omega_{0}\pi^{-1})}\]
\[=\frac{z_{\beta}}{n!}\sum_{\omega\in C_{\beta}}q^{c(\sigma_{0}\omega)}\]
\[=\frac{1}{|C_{\alpha}||C_{\beta}|}\sum_{\begin{subarray}{c}\sigma\in C_{\alpha}\\ \omega\in C_{\beta}\end{subarray}}q^{c(\sigma\omega)},\]

and the result follows.

The statistic \(C_{\alpha,\beta}\) has been studied previously in some special cases. The case when \(\alpha=2^{n/2}\), the partition with all parts equal to \(2\), models a random surface obtained by gluing together polygonal disks and has been studied with motivation from physics. Pippenger and Schleich [11] show that \(\mathbb{E}[C_{2^{n/2},3^{n/3}}]=H_{n}+O(1)\), and also obtain bounds for the variance. Their results use a combinatorial analysis of a random process. More recently, their result was generalised by Chmutov and Pittel [1] to any \(\beta\) with parts all of size at least \(3\). They show that the whole distribution of permutations obtained is asymptotically uniform up to parity. As a corollary to this, it follows that \(\mathbb{E}[C_{2^{n/2},\beta}]=H_{n}+O(1)\).
Their proof follows a method of Gamburd [1] that uses the Fourier transform on representations of the symmetric group and recent bounds on characters given by Larsen and Shalev [10]. Although it is not explicitly mentioned in their paper, these methods trivially extend to the case when \(\alpha\) has all parts of size at least 3 and \(\beta\) has all parts of size at least 2. The same result holds in this case, giving that \(\mathbb{E}[C_{\alpha,\beta}]=H_{n}+O(1)\). The size of the \(O(1)\) term is unclear, as it relies on a very general, powerful character bound. However, in some special cases more precise bounds are known. The case when \(\alpha=(n)\) has been studied by various authors, using representation theory [12], matrix integrals [11], and more combinatorial methods (Goulden and Jackson [13], Cori _et al._ [14], and Boccara [2]). These results are more refined, and give the exact number of products with a given number of cycles. From this, we can obtain the average number of cycles. The case \(\alpha=\beta=(n)\) is especially nice; we have that \(\mathbb{E}[C_{(n),(n)}]=H_{n-1}+\lceil\frac{n}{2}\rceil^{-1}\). In the more general case when \(\beta\) has all parts of size at least 2, it follows from Stanley's generating function [12, Theorem 3.1] that \(H_{n-1}-\frac{4}{n}\leq\mathbb{E}[C_{(n),\beta}]\leq H_{n-1}+\frac{4}{n}\) (see [10]).

In all these known cases, we see that the average number of cycles is close to \(H_{n}\). Here we determine an estimate in the more general case where \(\alpha\) and \(\beta\) can be any partitions with no parts of size one. We state our main theorem.

**Theorem 3**.: _For any partitions \(\alpha\) and \(\beta\) with no part of size 1, we have \(H_{n}-3<\mathbb{E}[C_{\alpha,\beta}]\leq H_{n}+1\)._

Our theorem eliminates the asymptotic notation present in the more general results previously mentioned. The method of proof will also be more combinatorial, using random processes on maps.

## 2. Basic terminology and random processes

We give some definitions and terminology that we shall make extensive use of throughout the paper. Fix \(\alpha\) and \(\beta\) to be partitions of \(n\) with no parts of size 1. A partial map is a triple \(m=(D,R,E)\), where \(D\) and \(R\) are defined as they were for maps, but where \(E\in\mathfrak{S}_{D}\) is an involution (not necessarily fixed point free). The involution can be represented by an injection \(\pi:X\to[n]\), where \(X\subseteq[n]\). Then we can set

\[E(\pi):=\prod_{i\in X}(s_{i}\,t_{\pi(i)})\]

and define the partial map \(m_{\pi}=(D,R,E(\pi))\). Notice that a map is itself a partial map. In this case \(E\) will be a fixed point free involution, and the associated function \(\pi\) will be a permutation on \(\{1,2,\ldots,n\}\). Also when \(E\) is the identity (the associated function \(\pi\) has the empty set as its domain) the partial map is just a set of darts at vertices with no faces or edges. These are the two extreme cases.

Fix an injection \(\pi:X\to[n]\) for some subset \(X\subseteq[n]\), and let \(m=m_{\pi}=(D,R,E(\pi))\) be a partial map, and write \(E=E(\pi)\). Figure 2 contains some examples of the definitions given below.

* _Paired/unpaired darts_: Each dart in \(m\) is one of the symbols in \(S\) or \(T\). We have two types of dart:
  * Paired dart: A dart in a partial map that is part of an edge, _i.e._ a dart that is in a 2-cycle in \(E\).
  * Unpaired dart: A dart in a partial map that is not part of an edge, _i.e._ a dart that is a fixed point in \(E\).
Let \(S^{u},T^{u}\) be the set of unpaired darts in \(S,T\) respectively.

* _Completed face_: A cycle in \(R\cdot E\) only containing paired darts. Let \(F(m)\) denote the number of completed faces in the partial map.
* _Unpaired permutation/partial face/length_: For \(m\), the induced permutation of \(R\cdot E\) on \(S^{u}\cup T^{u}\) is called the _unpaired permutation_, which we denote \(u_{m}\). That is, we remove all paired darts, and any resulting empty cycles, from \(R\cdot E\) to obtain \(u_{m}\). We write \(u=u_{m}\) when \(m\) is clear from the context. A _partial face_ is a cycle in \(u_{m}\). The _length_ of a partial face is its length as a cycle.
* _Bad dart_: An unpaired dart contained in a cycle of length one in \(u_{m}\).
* _Mixed partial face/bad partial map_: A partial face in \(m\) is _mixed_ if it contains darts in both \(S^{u}\) and \(T^{u}\). The partial map \(m\) is called _bad_ if it does not contain any mixed partial faces.

Figure 2. In the partial map \(m\) on the left, the paired darts are \(\{s_{1},s_{2},s_{3},s_{4},s_{7},t_{2},t_{3},t_{4},t_{5},t_{7}\}\). The unpaired darts are all of the remaining darts, so \(S^{u}=\{s_{5},s_{6}\}\) and \(T^{u}=\{t_{1},t_{6}\}\). It has two completed faces, given by the cycles \((s_{2}\,t_{2})\) and \((s_{4}\,t_{5})\). The unpaired permutation is \(u_{m}=(t_{1})(s_{5}\,s_{6}\,t_{6})\). It has one bad dart, \(t_{1}\), and it has one mixed partial face, so it is not a bad map. The partial map \(m^{\prime}\) on the right has \(S^{u}=\{s_{5},s_{6},s_{7}\}\) and \(T^{u}=\{t_{1},t_{6},t_{7}\}\), and it has the same bad dart, \(t_{1}\), as \(m\). Furthermore, we observe \(u_{m^{\prime}}=(t_{1})(s_{5}\,s_{6}\,s_{7}\,s_{8})(t_{6}\,t_{7})\). There are no mixed partial faces, so \(m^{\prime}\) is a bad map.

We shall use a different random process to prove each of the upper and lower bounds for \(\mathbb{E}[F_{\alpha,\beta}]\).

**Random Process A (RPA)**

1. Initialize \(S^{u}=S\), \(T^{u}=T\), \(X=\emptyset\), \(\pi_{0}:X\to[n]\).
2. Call \(m_{\pi_{k-1}}\) the partial map at the start of the \(k^{\text{th}}\) step. Pick the _active dart_ \(d\) from \(S^{u}\cup T^{u}\) with respect to the following order of preference.
   * The bad dart in \(S^{u}\) with smallest index.
   * The bad dart in \(T^{u}\) with smallest index.
   * The dart with the smallest index in \(S^{u}\).
3. If \(d=s_{i}\in S^{u}\), then pick \(t_{j}\in T^{u}\) uniformly at random and call \(t_{j}\) the _pairing dart_ at this step. If \(d=t_{j}\in T^{u}\), then pick \(s_{i}\in S^{u}\) uniformly at random and call \(s_{i}\) the _pairing dart_. Set \(X=X\cup\{i\}\), and define \(\pi_{k}:X\to[n]\) as \(\pi_{k-1}\) from the previous step, but also with \(\pi_{k}(i)=j\).
4. Then \(S^{u}=S^{u}-\{s_{i}\}\) and \(T^{u}=T^{u}-\{t_{j}\}\). If \(S^{u}\neq\emptyset\) then return to step 2.
5. Output the map \(m_{\pi_{n}}\).

**Random Process B (RPB)**

1. Initialize \(S^{u}=S\) and \(T^{u}=T\), \(X=\emptyset\), \(\pi_{0}:X\to[n]\).
2. Call \(m_{\pi_{k-1}}\) the partial map at the start of the \(k^{\text{th}}\) step. Pick the _active dart_ as \(s_{k}\).
3. Pick \(t_{j}\in T^{u}\) uniformly at random and call \(t_{j}\) the _pairing dart_ at this step. Set \(X=X\cup\{k\}\), and define \(\pi_{k}:X\to[n]\) as \(\pi_{k-1}\) from the previous step, but also with \(\pi_{k}(k)=j\).
4. Then \(S^{u}=S^{u}-\{s_{k}\}\) and \(T^{u}=T^{u}-\{t_{j}\}\). If \(S^{u}\neq\emptyset\) then return to step 2.
5. Output the map \(m_{\pi_{n}}\).

After \(n\) steps of either process, the output is a map in \(M_{\alpha,\beta}\).

**Lemma 4**.: _RPA and RPB both output a uniform at random map from \(M_{\alpha,\beta}\)._

Proof.: At the \(k^{\text{th}}\) iteration of step 3, there are \(n-k+1\) choices of pairing dart. There are therefore \(n!\) possible outcomes of this random process, each with equal probability. There are \(n!\) elements in \(M_{\alpha,\beta}\), and every map is clearly output by each of the random processes. The result follows.

We give some basic observations, which are illustrated in Figures 3 and 4.

**Observation 5**.: Suppose that we are at some step 2 of RPA or RPB. Let \(m\) be the partial map at the beginning of this step, and suppose that it has active dart \(d\). Step 3 then pairs \(d\) with another dart. Let \(u=u_{m}\). Observe the following.

1. Suppose that \(d\) is a bad dart. A completed face is added at this step if and only if the pairing dart is also bad (_i.e._ in the unpaired permutation \(u\), both \(d\) and its pairing dart are fixed points). When a completed face is added the resulting map has two fewer bad darts. A new bad dart is added if and only if the pairing dart is in a partial face of length 2.
2. Suppose that \(d\) is not a bad dart. A completed face is added if and only if the pairing dart is \(u(d)\) or \(u^{-1}(d)\). If the pairing dart is \(u(d)\) and \(u(d)=u^{-1}(d)\) then two completed faces are added at this step. Similarly, a bad dart is added if and only if the pairing dart is \(u^{2}(d)\) or \(u^{-2}(d)\). If \(u^{2}(d)=u^{-2}(d)\), then two bad darts are added.

Figure 3. Illustrating Observation 5.1, we have four maps: the left contains two partial maps \(m_{1}\) and \(m_{2}\) and the right contains possible outputs \(m_{1}^{\prime}\) and \(m_{2}^{\prime}\), respectively, of an application of one of the random processes. On the left the active dart is in green. The map \(m_{1}\) has two bad darts \(s_{4}\) (the active dart) and \(t_{4}\). The map \(m_{1}^{\prime}\) was obtained by pairing \(s_{4}\) with \(t_{4}\), completing a face (given by the cycle \((s_{1}\,t_{3}\,s_{4}\,t_{1}\,s_{3}\,t_{4})\)), as claimed in Observation 5.1. The map \(m_{2}\) has \(s_{3}\) as the active dart, and has a partial face of length \(2\). If \(s_{3}\) is paired with \(t_{5}\), a bad dart, \(t_{8}\), is created in \(m_{2}^{\prime}\).

Figure 4. Illustrating Observation 5.2, we have four maps whose relationship with each other is the same as in Figure 3. The green dart is again the active dart. We find \(u_{m_{1}}=(s_{5}\,s_{6}\,s_{7}\,s_{8})(t_{5}\,t_{6})(t_{7}\,t_{8})(s_{4}\,t_{4})\). The partial map \(m_{1}^{\prime}\) is obtained by pairing \(s_{4}\) with \(t_{4}\); by Observation 5.2 this creates two completed cycles, one is given by \((s_{1}\,t_{1}\,s_{3}\,t_{4}\,s_{2}\,t_{2})\) and the other by \((t_{3}\,s_{4})\). The map \(m_{2}\) has \(u_{m_{2}}=(s_{5}\,s_{6}\,t_{2}\,t_{5})\). The partial map \(m_{2}^{\prime}\) is obtained from \(m_{2}\) by pairing the active dart \(s_{5}\) with \(t_{2}\). Since \(u_{m_{2}}^{2}(s_{5})=u_{m_{2}}^{-2}(s_{5})=t_{2}\), this creates two bad darts in \(m_{2}^{\prime}\), the darts \(s_{6}\) and \(t_{5}\).

### Proving the upper bound with RPA

Again, throughout this section \(\alpha\) and \(\beta\) are partitions of some positive integer \(n\) with no parts of size 1. Suppose some partial map \(m\) appears at the start of some step of RPA. Then the random process will pair the active dart with a randomly chosen pairing dart. This will add some number (possibly zero) of completed faces to the partial map. Let \(F_{m}\) be the random variable for the number of completed faces that are added when the random process adds a new edge to \(m\).

**Lemma 6**.: _Suppose the random process outputs the map \(m_{\pi}\). Let \(m_{\pi}^{(k)}\) denote the partial map at the \(k^{\rm th}\) step of the process that outputted \(m_{\pi}\). Then we have_

\[\mathbb{E}[F_{\alpha,\beta}]\leq\max_{m_{\pi}\in M_{\alpha,\beta}}\left\{\sum_{k=1}^{n}\mathbb{E}[F_{m_{\pi}^{(k-1)}}]\right\}.\]

Proof.: Each face in \(m_{\pi}\) is added as a completed face at some step of RPA. Once it is added, it cannot be removed at a later step. At the \(k^{\rm th}\) iteration of step 3, let \(m_{k}\) be the random variable for the partial map at the start of this step, and let \(A_{k}\) be the random variable for the number of completed faces added at the \(k^{\rm th}\) step. Using total probability, we have

\[\mathbb{E}[A_{k}]=\sum_{m}Pr[m_{k}=m]\mathbb{E}[F_{m}],\]

where the sum is over all possible partial maps. Using linearity of expectation gives:

\[\mathbb{E}[F_{\alpha,\beta}] =\sum_{k=1}^{n}\mathbb{E}[A_{k}]\]
\[=\sum_{k=1}^{n}\sum_{m}Pr[m_{k}=m]\mathbb{E}[F_{m}]\]
\[=\frac{1}{n!}\sum_{m_{\pi}\in M_{\alpha,\beta}}\sum_{k=1}^{n}\mathbb{E}[F_{m_{\pi}^{(k-1)}}]\]
\[\leq\max_{m_{\pi}\in M_{\alpha,\beta}}\left\{\sum_{k=1}^{n}\mathbb{E}[F_{m_{\pi}^{(k-1)}}]\right\}\qed\]

**Lemma 7**.: _Any partial map \(m\) appearing at the start of some step of RPA has at most two bad darts, and at most one mixed partial face. This mixed partial face is always of the form \((s_{i_{1}}\,\dots\,s_{i_{a}}\,t_{j_{1}}\,\dots\,t_{j_{b}})\), and if the active dart \(d\) is in this partial face it must equal \(s_{i_{1}}\)._

Proof.: Suppose the partial map \(m\) has no bad darts, and suppose \(d\) is the active dart at this step. If the pairing dart is \(u^{2}(d)\) then \(u(d)\) will become a bad dart at the next step. If the pairing dart is \(u^{-2}(d)\) then \(u^{-1}(d)\) will become a bad dart at the next step. If \(u^{2}(d)=u^{-2}(d)\) then these choices coincide and we add two bad darts. The above are merely Observation 5 restated. In either case, we add at most two bad darts. Now suppose \(m\) has one or two bad darts. Then \(d\) is one of these bad darts, since step 2 of RPA prioritises choosing bad darts for the active dart. See Figure 3 for an example of RPA processing a bad dart. We make a new bad dart if and only if the pairing dart \(d^{\prime}\) is in a partial face of length 2. In the new map \(d\) will no longer be a bad dart. Since we removed a bad dart, and added at most one bad dart, at the next step we will have at most two bad darts. This covers both cases, so the number of bad darts is always at most two.

We now prove the claims about the mixed partial faces. The partial map at the start of an application of RPA has no paired darts, so it has no mixed partial faces. We therefore proceed by induction. Let \(m\) be the partial map at the beginning of some iteration of RPA, and suppose that \(m\) has at most one mixed partial face. If the map is bad then it has no mixed partial faces, and we can add at most one mixed partial face at this step. Otherwise assume that the map has one mixed partial face. We have two cases: the active dart \(d\) is bad or not. If \(d\) is bad, then processing it cannot add any more mixed partial faces. If \(d\) is not bad, then it is in \(S^{u}\) by our choice of active dart in step 2.
Let \(v\) be the vertex incident with \(d\). We first aim to show that \(u_{m}^{-1}(d)\in T^{u}\); for that purpose, we assume the opposite: that \(u_{m}^{-1}(d)\in S^{u}\). First, all the darts in \(S\) with smaller index than \(d\) have already been processed. Second, observe that all the darts in \(S^{u}\) not incident with \(v\) with larger index must be in partial faces only containing darts in \(S^{u}\). This is because any paired dart in \(S\) with larger index than \(d\) (at \(v\) or a later vertex) must have been paired with an active bad dart in \(T\) (by the choice of active dart at step 2), and pairing an active bad dart cannot create a new mixed partial face. This has two consequences. First, all the darts in \(S^{u}\) that are contained in the mixed partial face must be incident with \(v\). Moreover, they appear contiguously in the mixed partial face from the dart with lowest index to the dart with highest index. Since \(d\) is the dart with lowest index among the unpaired darts at \(v\), if \(u_{m}^{-1}(d)\in S^{u}\), then \(u_{m}^{-1}(d)\) has the highest index amongst unpaired darts at \(v\). Therefore the mixed partial face equals \((d\cdots u_{m}^{-1}(d))\), so only contains darts incident with \(v\). Thus this partial face is not mixed, giving a contradiction. Therefore \(u_{m}^{-1}(d)\in T^{u}\), so the mixed partial face is of the form \((d\,s_{i_{1}}\,\cdots\,s_{i_{a}}\,t_{j_{1}}\,\cdots\,t_{j_{b}})\) with \(a\geq 0\) and \(b\geq 1\). Pairing \(d\) with any dart, whether inside or outside of this mixed partial face, ensures that there is still at most one mixed partial face, and that it is of this form.

We now estimate when completed faces are added to a partial map.

Proof of the upper bound in Theorem 3.: Fix some map \(m_{\pi}\in M_{\alpha,\beta}\). Using Lemma 6 it will be sufficient to estimate \(\sum_{k=1}^{n}\mathbb{E}[F_{m_{\pi}^{(k-1)}}]\). Fix some value of \(k\) and let \(d\) be the active dart chosen in the \(k^{\text{th}}\) iteration of step 2.

Case 1: The dart \(d\) is bad. By Observation 5, a face will be added if and only if the pairing dart is also bad. By Lemma 7, there is at most one other bad dart, so there is at most one other choice that adds a face. This gives \(\mathbb{E}[F_{m_{\pi}^{(k-1)}}]\leq\frac{1}{n-k+1}\).

Case 2: The dart \(d\) is not bad and \(u(d)\in S^{u}\). Then the pairing dart cannot be chosen to be \(u(d)\), so \(u^{-1}(d)\) is the only possible choice of pairing dart that adds a completed face. In this case \(\mathbb{E}[F_{m_{\pi}^{(k-1)}}]\leq\frac{1}{n-k+1}\).

Case 3: The dart \(d\) is not a bad dart and \(u(d)\in T^{u}\). Then choosing the pairing dart to be \(u(d)\) or \(u^{-1}(d)\) adds a completed face, and both of these choices are possible. If \(u(d)=u^{-1}(d)\) then choosing the pairing dart to be \(u(d)\) adds 2 completed faces. We therefore have \(\mathbb{E}[F_{m_{\pi}^{(k-1)}}]\leq\frac{2}{n-k+1}\). Since \(u(d)\in T^{u}\), we see that \(d\) is in a mixed partial face at the start of this step. By Lemma 7 we have that \(u^{-1}(d)\in T^{u}\), so the mixed partial face is of the form \((d\,t_{i_{1}}\,\ldots\,t_{i_{a}})\). Therefore once \(d\) is paired, this face is no longer mixed. By Lemma 7 this was the only mixed partial face at the start of this step, so the map at the start of the next step is bad. Let \(d^{\prime}\) be the active dart at the next step, which is step \(k+1\). If the active dart \(d^{\prime}\) is bad, then we are in case 1 and we may add more faces.
In this case, the partial map at the start of step \(k+2\) will also be bad. This will continue until we eventually arrive at some step \(j\), with \(j>k\), where the partial map at the start of this step is bad and the active dart is not bad, or the whole map will be completed before we reach such a step \(j\). If this step \(j\) exists, and \(\hat{d}\) is the active dart, then because the map is bad, we have \(u_{m_{\pi}^{(j-1)}}(\hat{d})\in S^{u}\) and \(u_{m_{\pi}^{(j-1)}}^{-1}(\hat{d})\in S^{u}\), so there will be no choice of pairing dart that adds a face. Therefore \(\mathbb{E}[F_{m_{\pi}^{(j-1)}}]=0\). Grouping together these terms we obtain \(\mathbb{E}[F_{m_{\pi}^{(k-1)}}]+\mathbb{E}[F_{m_{\pi}^{(j-1)}}]\leq\frac{2}{n-k+1}<\frac{1}{n-k+1}+\frac{1}{n-j+1}\), with the intermediate terms \(\mathbb{E}[F_{m_{\pi}^{(i-1)}}]\leq\frac{1}{n-i+1}\). Thus

\[\sum_{i=k}^{j}\mathbb{E}[F_{m_{\pi}^{(i-1)}}]\leq\sum_{i=k}^{j}\frac{1}{n-i+1}. \tag{2}\]

If the index \(j\) exists, we call the above described run between \(k\) and \(j\) of the algorithm a _closed run_, and an _open run_ otherwise. The algorithm may have several closed runs, and potentially one open run. The expectations for added faces for closed runs all satisfy (2). Then, by definition, all remaining iterations of the algorithm satisfy Cases 1 or 2, in which case

\[\sum_{k=1}^{n}\mathbb{E}[F_{m_{\pi}^{(k-1)}}]\leq\sum_{k=1}^{n}\frac{1}{n-k+1}=H_{n}<H_{n}+1.\]

If there is an open run beginning at the \(k^{\text{th}}\) iteration, then

\[\sum_{i=k}^{n}\mathbb{E}[F_{m_{\pi}^{(i-1)}}]\leq\frac{2}{n-k+1}+\sum_{i=k+1}^{n}\frac{1}{n-i+1}\leq\left(\sum_{i=k}^{n}\frac{1}{n-i+1}\right)+1,\]

in which case we also obtain

\[\sum_{k=1}^{n}\mathbb{E}[F_{m_{\pi}^{(k-1)}}]\leq\left(\sum_{k=1}^{n}\frac{1}{n-k+1}\right)+1\leq H_{n}+1.\]

In either case, the result is proved.

### Proving the lower bound with RPB

Again, throughout this section \(\alpha=(\alpha_{1},\ldots)\) and \(\beta=(\beta_{1},\ldots)\) are partitions of some positive integer \(n\) with no parts of size 1. Recall that for each \(0\leq j\leq\ell(\alpha)\), \(\alpha_{j}^{\prime}=\sum_{i=1}^{j}\alpha_{i}\), with \(\alpha_{0}^{\prime}=0\). We prove the lower bound by analysing RPB. In this case the active dart at the beginning of step \(k\) is always \(s_{k}\). Let \(B_{k}\) be the random variable for the number of completed faces added to the partial map at step \(k\). Thus \(B_{k}\) is similar to \(A_{k}\) except it applies to RPB. As for \(A_{k}\), we clearly have

\[\mathbb{E}[F_{\alpha,\beta}]=\sum_{k=1}^{n}\mathbb{E}[B_{k}]. \tag{3}\]

We will need some additional variables on RPB. Let \(m\) be the partial map at the start of step \(k\) of RPB. Note that \(m\) has \(k-1\) edges and \(2(n-k+1)\) unpaired darts.

* Let \(O_{k}\) be the random variable for the number of bad darts in \(T^{u}\) in \(m\).
* Let \(b_{k}\) be the event that \(m\) is bad.

Thus the variables \(O_{k}\) and \(b_{k}\) count their relevant quantities at the beginning of step \(k\) of the algorithm. Before we find estimates on these quantities, we make an observation about RPB, based on the simple choice of active dart in step 2.

**Observation 8**.: Let \(m\) be a map at the beginning of step \(k\). Suppose the active dart \(s_{k}\) is at the \(j^{\text{th}}\) vertex, so \(k=\alpha_{j}^{\prime}+r\) for some \(0\leq r\leq\alpha_{j+1}-1\). Then all the darts at the \(j^{\text{th}}\) vertex after \(s_{k}\) are unpaired, while all the darts before are paired.
Furthermore, all the darts at the \(i^{\text{th}}\) vertex on the left hand side are paired if \(i<j\) and unpaired if \(i>j\). It follows that a cycle of \(u_{m}\) is \((s_{k}\cdots s_{\alpha_{j+1}^{\prime}}t_{i_{1}}\cdots t_{i_{p}})\) for some set of darts \(M=\{t_{i_{1}},\ldots,t_{i_{p}}\}\subseteq T\). This is the only potentially mixed partial face of \(m\), and \(m\) is bad if and only if \(M\) is empty. In the special case where \(r=1\), the set \(M\) is necessarily empty, and the above cycle of \(u_{m}\) has the form \((s_{\alpha^{\prime}_{j}+1}\cdots s_{\alpha^{\prime}_{j+1}})\). We now give estimates on \(O_{k}\) and \(b_{k}\). **Lemma 9**.: _If \(k\neq\alpha^{\prime}_{j}+1\) for all \(j\), then \(\mathbb{E}[O_{k}]\leq 3\) and \(Pr[b_{k}]\leq\frac{4}{n-k+2}\). If \(k=\alpha^{\prime}_{j}+1\) for some \(j\) or \(k=1\), then \(\mathbb{E}[O_{k}]\leq 3+\frac{3}{n-k+2}\) and \(Pr[b_{k}]=1\)._ Proof.: Our proof is by induction on \(k\), the steps of the algorithm. Since \(\alpha\) and \(\beta\) have no parts of size one, we have \(\mathbb{E}[O_{1}]=\mathbb{E}[O_{2}]=0\), \(P[b_{1}]=1\) and \(Pr[b_{2}]=0\). Now suppose the conditions of the lemma hold for \(O_{j}\) and \(b_{j}\) for all \(j\leq k\). We prove the result holds after the \(k^{\text{th}}\) step is completed (the beginning of the \((k+1)^{\text{st}}\) step). Denote the map at the beginning of the \(k^{\text{th}}\) step \(m\); the active dart in \(m\) is \(s_{k}\) and there are \(n-k+1\) unpaired darts in \(T^{u}\) that \(s_{k}\) could be paired with at this step. We set \(u=u_{m}\) to be the unpaired permutation. Case 1: \(k\neq\alpha^{\prime}_{j}\) and \(k\neq\alpha^{\prime}_{j}+1\) for all \(j\). From Observation 5, we add a bad dart at this step if and only if the pairing dart is \(u^{2}(s_{k})\) or \(u^{-2}(s_{k})\). However, by our choice of \(k\), we have \(u(s_{k})\in S^{u}\), so pairing \(s_{k}\) with \(u^{2}(s_{k})\) would add a bad dart in \(S^{u}\) in the resulting map. Therefore there is only one choice of pairing dart for \(s_{k}\) that could add a bad dart in \(T^{u}\). Separately, if the pairing dart is a bad dart, then the pairing dart is no longer bad in the resulting partial map. Since \(k\neq\alpha^{\prime}_{j}+1\), we have \(\mathbb{E}[O_{k}]\leq 3\), so \[\mathbb{E}[O_{k+1}] \leq\mathbb{E}[O_{k}]-\frac{\mathbb{E}[O_{k}]}{n-k+1}+\frac{1}{n- k+1}\] \[\leq 3\left(1-\frac{1}{n-k+1}\right)+\frac{1}{n-k+1}<3.\] If the partial map \(m\) at the beginning of the \(k^{\text{th}}\) step is bad, then the partial map at the \((k+1)^{\text{st}}\) step will be bad if and only if the pairing dart is bad. If \(m\) is not bad, then by Observation 8 the only mixed partial face is the one containing the active dart \(s_{k}\), and the only pairing dart that results in a bad map is \(t_{i_{1}}\) (in the notation of Observation 8). Both cases of \(m\) being bad and not bad are illustrated in Figure 5. Since \(k\neq\alpha^{\prime}_{j}+1\), we can use the inductive assumption and total probability to obtain \[Pr[b_{k+1}] =Pr[b_{k+1}\mid b_{k}]Pr[b_{k}]+Pr[b_{k+1}\mid b_{k}^{c}]Pr[b_{k}^ {c}]\] \[\leq Pr[b_{k}]\frac{\mathbb{E}[O_{k}\mid b_{k}]}{n-k+1}+Pr[b_{k}^ {c}]\frac{1}{n-k+1}\] \[\leq Pr[b_{k}]\frac{\mathbb{E}[O_{k}\mid b_{k}]}{n-k+1}+Pr[b_{k}^ {c}]\frac{1+\mathbb{E}[O_{k}\mid b_{k}^{c}]}{n-k+1}\] \[\leq\frac{1+\mathbb{E}[O_{k}]}{n-k+1}\leq\frac{4}{n-k+1}.\] Case 2: \(k=\alpha^{\prime}_{j}\) for some \(j\). In this case the active dart \(s_{k}\) is the last unpaired dart at its vertex. 
If the partial map at this step is not bad, then by Observation 5 both \(u^{2}(s_{k})\) and \(u^{-2}(s_{k})\) are valid choices of pairing dart that add a bad dart to the resulting map. Therefore we expect to add \(\frac{2}{n-k+1}\) bad darts in this case.

If the partial map at this step is bad, then \(s_{k}\) itself is a bad dart. In this case we add one new bad dart if and only if the pairing dart is in a partial face containing two unpaired darts. If we are at an early step and \(\beta=2^{n/2}\), then almost every dart could satisfy this. Therefore we use a rough estimate and assume any choice of pairing dart will add a bad dart.

Regardless of whether the partial map at this step is bad or not, pairing \(s_{k}\) with any bad dart will result in a map with one fewer bad dart (the pairing dart will no longer be bad). This gives

\[\mathbb{E}[O_{k+1}] \leq\mathbb{E}[O_{k+1}\mid b_{k}]Pr[b_{k}]+\mathbb{E}[O_{k+1}\mid b_{k}^{c}]Pr[b_{k}^{c}]\]
\[\leq(\mathbb{E}[O_{k}\mid b_{k}]+1)Pr[b_{k}]+\left(\mathbb{E}[O_{k}\mid b_{k}^{c}]+\frac{2}{n-k+1}\right)Pr[b_{k}^{c}]-\frac{\mathbb{E}[O_{k}]}{n-k+1}\]
\[=\mathbb{E}[O_{k}]\left(1-\frac{1}{n-k+1}\right)+Pr[b_{k}]\left(1-\frac{2}{n-k+1}\right)+\frac{2}{n-k+1}\]
\[\leq 3\left(1-\frac{1}{n-k+1}\right)+\frac{4}{n-k+2}\left(1-\frac{2}{n-k+1}\right)+\frac{2}{n-k+1}\]
\[=3\left(1-\frac{1}{n-k+1}\right)+\frac{6}{n-k+1}-\frac{12}{(n-k+1)(n-k+2)}\]
\[\leq 3+\frac{3}{n-k+1}.\]

At the next step, the active dart is \(s_{k+1}\), and \(k+1=\alpha_{j}^{\prime}+1\). Then by Observation 8 all of the darts at the vertex incident with \(s_{k+1}\) are unpaired. Therefore the partial map is necessarily bad. This means \(Pr[b_{k+1}]=1\).

Figure 5. On the left are possible maps at the beginning of the \(6^{\text{th}}\) iteration of RPB, while the primed maps on the right are their output after the \(6^{\text{th}}\) step is executed. In both cases \(s_{6}\) (green) is the active dart. At top left, we have \(u_{m_{1}}=(s_{6}\,s_{7}\,s_{8})(t_{1})(t_{7}\,t_{8})\), so the map is bad. The dart \(s_{6}\) is paired with the bad dart \(t_{1}\) to obtain \(m^{\prime}_{1}\), which is bad. The reader can confirm that pairing \(s_{6}\) with either dart \(t_{7}\) or \(t_{8}\) (neither dart is bad) instead of \(t_{1}\) does not produce a bad map. At bottom left, we have \(u_{m_{2}}=(s_{6}\,s_{7}\,s_{8}\,t_{3}\,t_{8})(t_{1})\), so \(m_{2}\) has a mixed partial face and is not bad. Note the mixed partial face contains the active dart. Furthermore, the dart \(t_{3}\) plays the role of \(t_{i_{1}}\) in Observation 8, and if \(s_{6}\) is paired with \(t_{3}\), we get the bad map \(m^{\prime}_{2}\). The reader can check that pairing \(s_{6}\) with \(t_{1}\) or \(t_{8}\) instead of \(t_{3}\) does not give a bad map.

Case 3: We have \(k=\alpha_{j}^{\prime}+1\) for some \(j\). In this case, again, the active dart \(s_{k}\) is at a vertex with all unpaired darts by Observation 8 and the map is bad. Therefore there are no choices of pairing dart that add a bad dart in \(T^{u}\). However, choosing a pairing dart that is bad will remove this bad dart from the resulting map. Therefore using induction we have

\[\mathbb{E}[O_{k+1}] \leq\left(1-\frac{1}{n-k+1}\right)\mathbb{E}[O_{k}]\]
\[\leq\left(1-\frac{1}{n-k+1}\right)\left(3+\frac{3}{n-k+2}\right)\]
\[<3\left(\frac{n-k}{n-k+1}\right)\left(\frac{n-k+1}{n-k}\right)=3.\]

Since the partial map at the start of this step is bad, the partial map at the next step will be bad if and only if the pairing dart is bad.
Therefore we obtain \[Pr[b_{k+1}]\leq\frac{\mathbb{E}[O_{k}]}{n-k+1}\leq\frac{3+\frac{3}{n-k+2}}{n- k+1}\leq\frac{4}{n-k+1}.\] Note that when \(n-3\leq k\leq n\), the right hand side of the previous equation is greater than or equal to 1, so it provides no interesting bound for \(Pr[b_{k+1}]\). This covers all the cases and hence completes the induction. We are now ready to give the lower bound. Proof of the lower bound in Theorem 3.: We estimate \(\mathbb{E}[B_{k}]\) for each \(k\) and then use (3). First observe that \(\mathbb{E}[B_{n}]\geq 1\) and \(\mathbb{E}[B_{1}]=0\). Second, for the special cases when \(k=\alpha_{j}^{\prime}+1\) for some \(j\), we find by Observations 5 and 8 that the map at the beginning of the \(k^{\text{th}}\) iteration is bad and that \(\mathbb{E}[B_{k}]=0\). Next, using total probability, we have \[\mathbb{E}[B_{k}]=\mathbb{E}[B_{k}|b_{k}]Pr[b_{k}]+\mathbb{E}[B_{k}|b_{k}^{c} ]Pr[b_{k}^{c}]\geq\mathbb{E}[B_{k}|b_{k}^{c}](1-Pr[b_{k}]). \tag{4}\] To estimate the right hand side of (4), we use the estimate on \(Pr[b_{k}]\) from Lemma 9, but note that estimate is only useful when \(k\leq n-3\) (otherwise the stated estimate on \(Pr[b_{k}]\) is greater than or equal to 1). Thus when \(k=n-2\) or \(n-1\), we use the coarse lower bound of 0 for \(\mathbb{E}[B_{k}]\). Thus we have so far \[\mathbb{E}[B_{1}],\mathbb{E}[B_{n-2}]\text{ and }\mathbb{E}[B_{n-1}] \geq 0, \tag{5}\] \[\mathbb{E}[B_{n}] \geq 1,\] \[\text{and }\mathbb{E}[B_{k}] =0\text{ when }k=\alpha_{j}^{\prime}+1\text{ for some }j.\] For the remaining values of \(k\), we estimate (4) in two cases: when \(k\neq\alpha_{j}^{\prime}\) for all \(j\) and when \(k=\alpha_{j}^{\prime}\) for some \(j\). At the beginning of the \(k^{\text{th}}\) step, we let \(m\) be the partial map and \(v\) the vertex with the active dart \(s_{k}\). Since we are estimating \(\mathbb{E}[B_{k}|b_{k}^{c}]\), we assume \(m\) is not bad. Case 1: \(k\neq\alpha_{j}^{\prime}\) for any \(j\). Then since \(m\) is not bad, by Observation 8 we have that the active dart \(s_{k}\) is in the lone mixed partial face, so \(u^{-1}(s_{k})\in T^{u}\), while \(u(s_{k})\in S^{u}\), from which it follows that \(s_{k}\) is not bad. But then from Observation 5 it follows that the lone choice of pairing dart that adds a face is \(u^{-1}(s_{k})\). So \[\mathbb{E}[B_{k}|b_{k}^{c}]\geq\frac{1}{n-k+1},\] whence \[\mathbb{E}[B_{k}|b_{k}^{c}](1-Pr[b_{k}]) \geq\frac{1}{n-k+1}\left(1-\frac{4}{n-k+2}\right)\] \[=\frac{1}{n-k+1}-\frac{4}{(n-k+1)(n-k+2)}.\] Case 2: \(k=\alpha_{j}^{\prime}\) for some \(j\). Since \(m\) is not bad, by Observation 8 the active dart \(s_{k}\) is in a mixed face, so \(s_{k}\) is not bad. By Observation 5 there are two choices of pairing dart that add a completed face: \(u(s_{k})\) and \(u^{-1}(s_{k})\). Both of these may be possible as choices of pairing dart, since \(s_{k}\) being the last unpaired dart at \(v\) implies \(u(s_{k}),u^{-1}(s_{k})\in T^{u}\). As noted in Observation 5, if \(u(s_{k})=u^{-1}(s_{k})\), then two faces are added. 
Hence \[\mathbb{E}[B_{k}|b_{k}^{c}]\geq\frac{2}{n-k+1}=\frac{1}{n-k}+\frac{1}{n-k+1}- \frac{1}{(n-k)(n-k+1)}.\] This gives \[\mathbb{E}[B_{k}|b_{k}^{c}](1-Pr[b_{k}]) \geq\left(\frac{1}{n-k}+\frac{1}{n-k+1}-\frac{1}{(n-k)(n-k+1)} \right)\left(1-\frac{4}{n-k+2}\right)\] \[>\frac{1}{n-k+1}+\frac{1}{n-k}-\frac{4}{(n-k+1)(n-k+2)}\] \[\quad-\frac{4}{(n-k)(n-k+2)}-\frac{1}{(n-k)(n-k+1)}\] \[>\frac{1}{n-k+1}-\frac{4}{(n-k+1)(n-k+2)}\] \[\quad+\frac{1}{n-k}-\frac{5}{(n-k)(n-k+1)}.\] Let \[C =\{\alpha_{j}^{\prime}:j=1,\ldots,\ell(\alpha)-1\text{ and } \alpha_{j}^{\prime}\leq n-3\}\] \[\text{and }D =\{\alpha_{j}^{\prime}+1:j=1,\ldots,\ell(\alpha)-1\text{ and } \alpha_{j}^{\prime}\leq n-4\},\] and recall the identity \[\sum_{k=i}^{m}\frac{1}{k(k+1)}=\frac{1}{i}-\frac{1}{m+1}.\] Then, using (3), (4) and (5) and summing over \(k\) gives: \[\mathbb{E}[F_{\alpha,\beta}] =\sum_{k=1}^{n}\mathbb{E}[B_{k}]=1+\sum_{k=2}^{n-3}\mathbb{E}[B_{k} |b_{k}^{c}](1-Pr[b_{k}])\] \[\geq 1+\sum_{\genfrac{}{}{0.0pt}{}{k=2}{k\notin C,D}}^{n-3}\left( \frac{1}{n-k+1}-\frac{4}{(n-k+1)(n-k+2)}\right)\] \[\qquad+\sum_{k\in C}\left(\frac{1}{n-k+1}-\frac{4}{(n-k+1)(n-k+2) }+\frac{1}{n-k}-\frac{5}{(n-k)(n-k+1)}\right)\] \[=1+\sum_{\genfrac{}{}{0.0pt}{}{k=2}{k\notin C,D}}^{n-3}\left( \frac{1}{n-k+1}-\frac{4}{(n-k+1)(n-k+2)}\right)\] \[\qquad+\sum_{k\in C,D}\left(\frac{1}{n-k+1}-\frac{5}{(n-k+1)(n-k +2)}\right)\] \[>1+\sum_{k=2}^{n-3}\frac{1}{n-k+1}-5\sum_{k=2}^{n-3}\frac{1}{(n- k+1)(n-k+2)}\] \[=H_{n}-\frac{1}{2}-\frac{1}{3}-\frac{1}{n}-5\left(\frac{1}{4}- \frac{1}{n}\right)=H_{n}-\frac{25}{12}+\frac{4}{n}>H_{n}-3.\qed\]
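Theorem 3 is also easy to check by simulation. The following Python sketch (ours, for illustration only) samples \(c(\sigma_{0}\pi\omega_{0}\pi^{-1})\) for uniform \(\pi\), which by the discussion in Section 1 has the same distribution as \(C_{\alpha,\beta}\), and compares the empirical mean with \(H_{n}\); the cycle types chosen are arbitrary fixed point free examples.

```python
import random

def cycles(perm):
    """Number of cycles of a permutation of {0,...,n-1} given as a list."""
    seen, c = [False] * len(perm), 0
    for i in range(len(perm)):
        if not seen[i]:
            c += 1
            j = i
            while not seen[j]:
                seen[j], j = True, perm[j]
    return c

def canonical(parts):
    """Canonical permutation of cycle type `parts`, acting on 0,...,n-1."""
    perm, start = [], 0
    for a in parts:
        block = list(range(start, start + a))
        perm.extend(block[1:] + block[:1])  # the cycle (start ... start+a-1)
        start += a
    return perm

def avg_cycles(alpha, beta, trials=20000):
    n = sum(alpha)
    s0, w0 = canonical(alpha), canonical(beta)
    total = 0
    for _ in range(trials):
        pi = list(range(n))
        random.shuffle(pi)
        inv = [0] * n
        for i, v in enumerate(pi):
            inv[v] = i
        # apply sigma_0, then pi, then omega_0, then pi^{-1}
        total += cycles([inv[w0[pi[s0[x]]]] for x in range(n)])
    return total / trials

H = lambda n: sum(1 / i for i in range(1, n + 1))
alpha, beta = [2] * 10, [4] * 5  # two fixed point free types of n = 20
print(avg_cycles(alpha, beta), H(20))  # mean should lie in (H_n - 3, H_n + 1]
```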
2302.06448
Joint Span Segmentation and Rhetorical Role Labeling with Data Augmentation for Legal Documents
Segmentation and Rhetorical Role Labeling of legal judgements play a crucial role in retrieval and adjacent tasks, including case summarization, semantic search, argument mining etc. Previous approaches have formulated this task either as independent classification or sequence labeling of sentences. In this work, we reformulate the task at span level as identifying spans of multiple consecutive sentences that share the same rhetorical role label to be assigned via classification. We employ semi-Markov Conditional Random Fields (CRF) to jointly learn span segmentation and span label assignment. We further explore three data augmentation strategies to mitigate the data scarcity in the specialized domain of law where individual documents tend to be very long and annotation cost is high. Our experiments demonstrate improvement of span-level prediction metrics with a semi-Markov CRF model over a CRF baseline. This benefit is contingent on the presence of multi sentence spans in the document.
T. Y. S. S. Santosh, Philipp Bock, Matthias Grabmair
2023-02-13T15:28:02Z
http://arxiv.org/abs/2302.06448v1
# Joint Span Segmentation and Rhetorical Role Labeling with Data Augmentation for Legal Documents

###### Abstract

Segmentation and Rhetorical Role Labeling of legal judgements play a crucial role in retrieval and adjacent tasks, including case summarization, semantic search, argument mining etc. Previous approaches have formulated this task either as independent classification or sequence labeling of sentences. In this work, we reformulate the task at span level as identifying spans of multiple consecutive sentences that share the same rhetorical role label to be assigned via classification. We employ semi-Markov Conditional Random Fields (CRF) to jointly learn span segmentation and span label assignment. We further explore three data augmentation strategies to mitigate the data scarcity in the specialized domain of law where individual documents tend to be very long and annotation cost is high. Our experiments demonstrate improvement of span-level prediction metrics with a semi-Markov CRF model over a CRF baseline. This benefit is contingent on the presence of multi sentence spans in the document.

Keywords: Rhetorical Role Labeling · semi-Markov CRF · Data Augmentation

## 1 Introduction

Rhetorical Role Labeling (RRL) of legal documents involves segmenting a document into semantically coherent chunks and assigning a label to the chunk that reflects its function in the legal discourse (e.g., preamble, fact, evidence, reasoning). RRL for long legal case documents is a precursor task to several downstream tasks, such as case summarization [9, 22, 12, 5], fact-based semantic case search [21], argument mining [25] and judgement prediction [12]. Prior works in RRL on legal judgements have regarded the task either as straightforward classification of sentences without modeling any contextual dependency between them [1, 25] or as sequence labeling [27, 3, 8, 12]. Initial works [22, 5, 9] performed RRL using hand-crafted features as part of a summarization pipeline. Savelka et al. [24] employed a CRF on hand-crafted features to segment US court decisions into functional and issue specific parts. Similarly, Walker et al. [25] used engineered features for RRL on US Board of Veterans' Appeals (BVA) decisions. With the rise of deep learning, Yamada et al. [27], Ghosh et al. [8], Paheli et al. [3] and Ahmad et al. [1] employed deep learning based BiLSTM-CRF models for RRL on Japanese civil rights judgements, Indian Supreme Court opinions, UK supreme court judgements and the US BVA corpus respectively. More recently, Kalamkar et al. [12] benchmark RRL on Indian legal documents using a Hierarchical Sequential Labeling Network model (HSLN). The corpus they use is claimed to be the largest available corpus of legal documents annotated with rhetorical sentence roles.

In this work we approach RRL on legal documents with the observation that the texts of judgements are not only very long, but also often contain large sections of the same sentence type (e.g. explanations of case facts). We hence build models that segment the document into thematically coherent sets of contiguous sequences of sentences (which we refer to as _spans_) and assign them labels. We also hypothesize that modeling documents at the span level can help to capture certain types of context that are spread across long sequences of sentences, as these can be collapsed into a much smaller number of thematically coherent spans.
For example, when case documents are to be retrieved according to certain types of information, aggregating that content from a small number of topical blocks across a long document is intuitive. At the same time, we explore how this assumption of topical continuity in the law can help RRL models learn better from small amounts of training data.

To tackle this problem as sequential span classification, we apply semi-Markov Conditional Random Fields (CRFs) [23], which have been proposed to jointly handle span segmentation and labeling. Semi-Markov CRFs have been used in various tasks such as Chinese word segmentation [17, 16], named entity recognition [31, 32, 2], character-level parts of speech labelling [13], phone recognition [19], chord recognition [20], biomedical abstract segmentation [28] and piano transcription [29]. Most previous works dealt with shorter input sequences and thus contained smaller span lengths, which allowed for a convenient upper bound on the maximum length of a span. In this work, we assess the performance of semi-Markov CRFs on legal judgements, which are usually very long and also possess a potentially large range of labels, making this setup even more challenging.

Obtaining sufficiently large amounts of annotated data for deep learning models in specialized domains like the law is very expensive as it requires expert annotators. To mitigate this data scarcity, we explore three strategies of data augmentation (DA): random deletion of words, back translation and swapping of sentences within a span. DA techniques, which are common in the computer vision field, have witnessed growing interest in NLP tasks due to the twin challenges of neural networks requiring large amounts of annotated data and of expensive data annotation in low-resource domains [6].

In sum, this paper contributes the casting of RRL of legal judgments as a sequential span classification task and associated experiments with semi-Markov CRFs on existing public datasets. We also explore three data augmentation strategies to assess their impact on the task. Our experiments demonstrate that our semi-Markov CRF model performs better than a CRF baseline on documents characterized by multi-sentence spans.1

Footnote 1: Our code is available at https://github.com/TUMLegalTech/Span-RRL-ECIR23

## 2 Method

Our hierarchical semi-Markov CRF model takes the judgement document \(x=\{x_{1},x_{2},\ldots,x_{m}\}\) as input, where \(x_{i}=\{x_{i1},x_{i2},\ldots,x_{in}\}\), and outputs the rhetorical role label sequence \(l=\{l_{1},l_{2},\ldots,l_{m}\}\) with \(l_{i}\in L\). \(x_{i}\) and \(x_{jp}\) denote the \(i^{\text{th}}\) sentence and the \(p^{\text{th}}\) token of the \(j^{\text{th}}\) sentence, respectively. \(m\) denotes the number of sentences in the document and \(n\) the number of tokens in a sentence. \(l_{i}\) is the rhetorical role corresponding to sentence \(x_{i}\) and \(L\) denotes the set of pre-defined rhetorical role labels.

### Hierarchical semi-Markov CRF model

Our model contains a semi-Markov CRF component [23] built on top of a Hierarchical Sequential Labeling Network model [11] with the following layers:

**Encoding layers:** Following [12], we encode each sentence with BERT-BASE [14] to obtain token level representations \(z_{i}=\{z_{i1},z_{i2},\ldots,z_{in}\}\). These are passed through a Bi-LSTM layer [10] followed by an attention pooling layer [30] to obtain sentence representations \(s=\{s_{1},s_{2},\ldots,s_{m}\}\).
\[u_{it}=\tanh(W_{w}z_{it}+b_{w})\ \ \&\ \ \alpha_{it}=\frac{\exp(u_{it}^{T}u_{w})}{\sum_{s}\exp(u_{is}^{T}u_{w})}\ \ \&\ \ s_{i}=\sum_{t=1}^{n}\alpha_{it}u_{it} \tag{1}\] where \(W_{w}\), \(b_{w}\), \(u_{w}\) are trainable parameters. **Context enrichment layer:** The sentence representations \(s\) are passed through a Bi-LSTM to obtain contextualized sentence representations \(c=\{c_{1},c_{2},\ldots,c_{m}\}\), which encode contextual information from the surrounding sentences. **Classification layer:** A semi-Markov CRF takes the sequence of sentence representations \(c\) and segments it into labeled spans \(k=\{k_{1},\ldots,k_{|s|}\}\) with \(k_{j}=(a_{j},b_{j},y_{j})\), where \(a_{j}\) and \(b_{j}\) are the starting and ending positions of the sentences in the \(j^{\text{th}}\) span, and \(y_{j}\) is the corresponding rhetorical role label of the \(j^{\text{th}}\) span. \(|s|\) denotes the total number of spans, where \(\sum_{j=1}^{|s|}(b_{j}-a_{j}+1)=m\). We model the conditional probability through a semi-Markov CRF which jointly tackles the span segmentation and label assignment for a span as follows: \[p(y|c)=\frac{1}{Z(c)}\exp\Big{(}\sum_{j=1}^{|s|}F(k_{j},c)+A(y_{j-1},y_{j})\Big{)} \tag{2}\] \[\text{where}\ \ \ Z(c)=\sum_{k^{\prime}\in K}\exp\Big{(}\sum_{j}F(k^{\prime}_{j},c)+A(y^{\prime}_{j-1},y^{\prime}_{j})\Big{)} \tag{3}\] where \(F(k_{j},c)\) is the score assigned to span \(k_{j}\) (i.e., to the interval \([a_{j},b_{j}]\) belonging to label \(y_{j}\), based on the span input \(c\)) and \(A(y_{j-1},y_{j})\) is the transition score of the labels of two adjacent spans. \(Z(c)\) denotes the normalization factor, computed as the sum over the set \(K\) of all possible labeled segmentations of \(c\). The score \(F(k_{j},c)\) is computed using a learnable weight and bias matrix: \[F(k_{j},c)=W^{T}\cdot f(k_{j},c)+b \tag{4}\] where \(W\) and \(b\) denote trainable parameters and \(f(k_{j},c)\) represents the span representation of the \(j^{\text{th}}\) span derived from \(c\). To obtain the span representations \(f(k_{j},c)\), we first pass the sentence-level representations \(c\) of the sentences in the given span \(k_{j}\) through a BiLSTM layer to capture the context of the span. Then we obtain the span representation \(f(k_{j},c)\) as the concatenation of the first two and final two sentence vectors, and the mean of the sentence vectors in the span. In the case of shorter spans, we repeat the same sentence to match the dimension. We maximize the above-defined conditional log-likelihood to estimate the parameters and train the model end-to-end. We perform inference using the Viterbi decoding algorithm [7] to obtain the best possible span sequence along with its label assignment (a minimal decoding sketch is given below). These computations are done in logarithmic space to avoid numerical instability. In traditional semi-Markov CRFs, which were applied to relatively short sequences in previous works, it is assumed that there is no transition between adjacent spans carrying the same rhetorical label. However, due to the long input data and the larger range of potential label spans, we relax this assumption: computational constraints force us to cap the maximum span length (decoding cost grows quadratically with it), so longer same-label passages must be allowed to split into adjacent spans sharing a label.
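To make the decoding step concrete, the following is a minimal NumPy sketch of semi-Markov Viterbi decoding with a capped maximum span length. It is our own illustration under stated assumptions (precomputed span and transition scores as dense arrays, a hypothetical function name, toy dimensions), not the authors' implementation, and it omits the batching and training-time machinery a real system would need.

```python
import numpy as np

def semi_markov_viterbi(span_score, trans, max_len):
    """Find the highest-scoring labeled segmentation of m sentences.

    span_score[a, b, y] scores labeling sentences a..b (inclusive) with y;
    trans[y_prev, y] scores the transition between adjacent span labels.
    Returns a list of (start, end, label) covering positions 0..m-1.
    """
    m, _, n_labels = span_score.shape
    # best[t, y]: best score of a segmentation of x[0:t] whose last span has label y
    best = np.full((m + 1, n_labels), -np.inf)
    back = np.zeros((m + 1, n_labels, 2), dtype=int)  # (span start, previous label)
    for t in range(1, m + 1):
        for d in range(1, min(max_len, t) + 1):
            a = t - d  # candidate span covers sentences a .. t-1
            for y in range(n_labels):
                if a == 0:  # first span: no incoming transition
                    score, y_prev = span_score[a, t - 1, y], -1
                else:
                    cand = best[a] + trans[:, y] + span_score[a, t - 1, y]
                    y_prev = int(np.argmax(cand))
                    score = cand[y_prev]
                if score > best[t, y]:
                    best[t, y] = score
                    back[t, y] = (a, y_prev)
    # trace back from the best final label
    spans, t, y = [], m, int(np.argmax(best[m]))
    while t > 0:
        a, y_prev = back[t, y]
        spans.append((a, t - 1, y))
        t, y = a, y_prev
    return spans[::-1]

# toy run: 8 sentences, 3 labels, spans of at most 4 sentences
rng = np.random.default_rng(0)
print(semi_markov_viterbi(rng.normal(size=(8, 8, 3)),
                          rng.normal(size=(3, 3)), max_len=4))
```

The triple loop over end position, span length and label pair makes the cap on the span length essential for long judgement documents.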
### Data Augmentation The main goal of data augmentation in low-resource settings is to increase the diversity of the training data, which in turn helps the model to generalize better to test data. In this regard, we implement the following three data augmentation techniques as a preliminary analysis and leave the exploration of more advanced techniques for future work. **Word deletion** [26] is a noise-based method that deletes words within a sentence at random. The augmented data differs from the original without affecting the rhetorical role of the sentence, as the role can still be derived from the remaining words. This helps the model to develop a better contextual understanding of the sentence rather than relying on word-level surface features. In **back-translation** [18], we translate the original text at the sentence level into other languages and then back into the original language to obtain augmented data. Unlike word-level methods, this method does not directly deal with individual words but rewrites the whole sentence. This makes the model robust to spuriously correlated features of writing style and encourages it to learn the semantic information conveyed by the text. **Sentence swapping** [4] is based on the notion that a minor change in the order of sentences is still readable for humans. We restrict the swapping of sentences to those within a single span, which preserves the overall discourse flow of the document. While some discontinuities are introduced, the text remains content-complete and the rhetorical roles do not change. This helps the model to learn the discourse flow of the document and to overcome the limitation of transitions between same-label spans described in the previous subsection. (Two of these operations are sketched below.)
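As a rough illustration of how two of these operations might look, here is a minimal sketch; the function names and the toy example are our own assumptions, not the paper's code, and back-translation is omitted since it requires an external translation system.

```python
import random

def delete_words(sentence, rate=0.2, rng=random):
    """Drop each token independently with probability `rate` (the paper caps
    the deletion rate at 20%); keep the sentence if every token would drop."""
    kept = [tok for tok in sentence.split() if rng.random() > rate]
    return " ".join(kept) if kept else sentence

def swap_within_span(sentences, labels, rng=random):
    """Swap two adjacent sentences inside a randomly chosen multi-sentence
    span; spans are maximal runs of identical per-sentence labels, so the
    rhetorical roles and the document-level discourse flow are preserved."""
    spans, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            spans.append((start, i - 1))
            start = i
    multi = [(a, b) for a, b in spans if b > a]
    if not multi:
        return sentences  # only single-sentence spans: nothing to swap
    a, b = rng.choice(multi)
    i = rng.randrange(a, b)  # positions i and i+1 both lie inside the span
    out = list(sentences)
    out[i], out[i + 1] = out[i + 1], out[i]
    return out

doc = ["The appellant filed a claim.", "The claim was denied.",
       "He appealed the decision.", "We remand for further proceedings."]
labs = ["Fact", "Fact", "Fact", "Ruling"]
print(swap_within_span(doc, labs, random.Random(1)))
print(delete_words(doc[0], 0.2, random.Random(2)))
```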
## 3 Experiments & Discussion **Datasets:** We experiment on two datasets: (i) the BUILDNyAI dataset [12], consisting of judgement documents from the Indian supreme court, high courts and district courts. It consists of publicly available train and validation splits with 184 and 30 documents, respectively, annotated with 12 different rhetorical role labels along with 'None'. As the test set is not publicly available, we split the training set and use it for both training and validation, and test on the official validation partition; (ii) the BVA PTSD dataset [25], consisting of 25 decisions 2 by the U.S. Board of Veterans' Appeals (BVA) from appealed disability claims by veterans for service-related post-traumatic stress disorder (PTSD). We use 19 documents for training and validation, and 6 as test. They are annotated with 5 rhetorical roles along with 'None'. Footnote 2: The dataset actually contains 75 decisions, out of which only 25 documents have an annotation label for every sentence **Baselines:** We compare our method, _HSLN-spanCRF+DA (data augmentation)_, against the following variants: _HSLN-CRF_ (normal CRF, no DA), _HSLN-spanCRF_ (spanCRF, no DA) and _HSLN-CRF+DA_ (normal CRF with DA). **Metrics:** We use both span-macro-F1 and span-micro-F1, which are computed based on span-by-span label matches 3 (i.e., they reward both the segmentation into exact spans and their labeling). We also report span-segmentation-F1, which evaluates only the segmentation of spans, ignoring the label. We further evaluate at the sentence level using micro-F1 and macro-F1, following previous works [12]. Footnote 3: We post-process and merge identical consecutive labels to obtain the span labels. **Implementation Details:** We use the hyperparameters of [12] for the HSLN model. For the semi-Markov CRF, we tune the maximum segment length on the validation set and set it to 30 and 4 for the BUILDNyAI and BVA datasets, respectively. We used a batch size of 1 and trained our model end-to-end using the Adam [15] optimizer with a learning rate of 1e-5. For data augmentation, we employed a maximum word deletion rate of 20%. For back-translation, we used English, German and Spanish as the sequence of languages. We augmented the dataset once using each DA technique; models with the DA component were thus trained on four times the size of the original training dataset. **Performance Evaluation:** Table 1 reports the performance of our model and its variants on the two datasets. On BUILDNyAI, we observe that spanCRF performs better than the normal CRF on span-level metrics (statistically significant, p \(\leq\) 0.05, McNemar's test), with a drop at the sentence level. With the addition of data augmentation (DA), the performance of both CRF and spanCRF improves. However, the increase is larger for spanCRF's sentence-level metrics (statistically significant, p \(\leq\) 0.05, McNemar's test). This can be attributed to spanCRF having to compute the optimal segmentation path over all possible paths, which requires enough data to learn and generalize well. On the BVA PTSD dataset, on the other hand, spanCRF did not show a significant improvement over the normal CRF. This is because 73.8% of the spans in the BVA dataset (BUILDNyAI: 31%) have length 1 and the mean span length is 1.85 (BUILDNyAI: 6.81), which does not allow spanCRF to show its potential. However, the trend towards a beneficial effect of data augmentation persists. **Effect of Maximum Span Length:** We create variants of spanCRF by varying the maximum span length. The first section of Table 2 shows that increasing the span length improves the performance on span-level metrics with a marginal drop at the sentence level. We choose 30 as the maximum span length due to computational resource constraints and our very long judgment documents. **Effect of Span Representation:** We experiment with various span representations such as _grConv_ [13] (Gated Recursive Convolutional Neural Networks) and _simple_ [28], which concatenates the first and last sentence representations of a span. We also create a variant of our proposed span representation by removing the BiLSTM (_ours w/o BiLSTM_). From the second section of Table 2, we observe a performance drop without the BiLSTM layer (both at span and sentence level), indicating the importance of capturing context specifically at the span level to obtain good representations. We notice less improvement with _grConv_, which can be attributed to its high number of parameters under our low-data condition. Though _simple_ achieves an improvement on span-level metrics, it shows a large drop in sentence-level performance. \begin{table} \begin{tabular}{|l||c|c|c||c|c||c|c|c||c|c|} \hline & \multicolumn{5}{c||}{**BUILDNyAI**} & \multicolumn{5}{c|}{**BVA PTSD**} \\ \hline & \multicolumn{3}{c||}{**Span**} & \multicolumn{2}{c||}{**Sentence**} & \multicolumn{3}{c||}{**Span**} & \multicolumn{2}{c|}{**Sentence**} \\ \hline **Model** & **s-mic.** & **s-mac.** & **s-seg** & **mic.** & **mac.** & **s-mic.** & **s-mac.** & **s-seg** & **mic.** & **mac.** \\ \hline CRF & 0.31 & 0.28 & 0.33 & 0.80 & 0.60 & 0.67 & 0.58 & 0.71 & 0.81 & 0.74 \\ spanCRF & 0.38 & 0.35 & 0.39 & 0.76 & 0.56 & 0.67 & 0.56 & 0.69 & 0.78 & 0.72 \\ CRF + DA & 0.32 & 0.32 & 0.34 & **0.82** & **0.63** & 0.72 & 0.64 & **0.75** & **0.85** & **0.81** \\ spanCRF + DA & **0.40** & **0.36** & **0.41** & 0.81 & 0.58 & **0.73** & **0.65** & **0.75** & 0.83 & 0.80 \\ \hline \end{tabular} \end{table} Table 1: Model performance on BUILDNyAI and BVA datasets **Ablation on Data Augmentation Strategies:** We observe the effect of each data augmentation strategy in isolation.
From Table 3, we observe that, in the case of CRF, each of the augmentation strategies boosts performance at the sentence level by a considerable margin. With all three augmentation strategies combined, CRF witnesses a considerable jump, indicating complementarity between the strategies. Similarly, we observe an improvement with each data augmentation strategy in the case of spanCRF, with the greatest increase when all three strategies are combined. ## 4 Conclusion Our experiments demonstrate that while semi-Markov CRFs help to boost predictions at the span level, data augmentation strategies can mitigate data scarcity and improve performance at both the sentence and span levels, albeit conditioned on the documents exhibiting patterns of longer passages of the same rhetorical type. While this is typical for legal judgments, it is not universal. We hence would like to combine the complementary sentence- and span-level methods in the future. We would also like to explore further data augmentation strategies to alleviate the bottleneck of limited annotated data and expensive data annotation, especially in these specialized domains. \begin{table} \begin{tabular}{|l||c|c|c||c|c|} \hline & \multicolumn{3}{c||}{**Span**} & \multicolumn{2}{c|}{**Sentence**} \\ \hline **Model** & **s-mic.** & **s-mac.** & **s-seg** & **mic.** & **mac.** \\ \hline CRF (len = 1) & 0.31 & 0.28 & 0.33 & **0.80** & **0.60** \\ spanCRF (len=5) & 0.33 & 0.30 & 0.34 & 0.68 & 0.45 \\ spanCRF (len=10) & 0.34 & 0.32 & 0.36 & 0.71 & 0.48 \\ spanCRF (len=20) & 0.36 & 0.33 & 0.37 & 0.73 & 0.52 \\ spanCRF (len=30) & **0.38** & **0.35** & **0.39** & 0.76 & 0.56 \\ \hline CRF (no span) & 0.31 & 0.28 & 0.33 & **0.80** & **0.60** \\ Span CRF (ours) & **0.38** & **0.35** & **0.39** & 0.76 & 0.56 \\ Span CRF (ours w/o BiLSTM) & 0.36 & 0.32 & 0.37 & 0.75 & 0.55 \\ Span CRF (grConv) & 0.32 & 0.30 & 0.34 & 0.74 & 0.51 \\ Span CRF (simple) & 0.34 & 0.33 & 0.36 & 0.72 & 0.52 \\ \hline \end{tabular} \end{table} Table 2: The first and second sections indicate the effect of the maximum span length (w/o DA) and of different span feature representations (w/o DA) on BUILDNyAI \begin{table} \begin{tabular}{|l||c|c|c|c|c|c||c|c|c|c|} \hline & \multicolumn{6}{c||}{**Span**} & \multicolumn{4}{c|}{**Sentence**} \\ \hline & \multicolumn{2}{c|}{**s-mic.**} & \multicolumn{2}{c|}{**s-mac.**} & \multicolumn{2}{c||}{**s-seg**} & \multicolumn{2}{c|}{**mic.**} & \multicolumn{2}{c|}{**mac.**} \\ \hline **Model** & CRF & sp.CRF & CRF & sp.CRF & CRF & sp.CRF & CRF & sp.CRF & CRF & sp.CRF \\ \hline No Augmentation & 0.31 & 0.38 & 0.28 & 0.35 & 0.33 & 0.39 & 0.80 & 0.76 & 0.60 & 0.56 \\ + Swapping & **0.32** & 0.39 & 0.30 & **0.36** & **0.34** & 0.40 & **0.82** & 0.80 & 0.62 & **0.58** \\ + Deletion & **0.32** & 0.39 & 0.30 & **0.36** & **0.34** & 0.40 & 0.81 & 0.78 & 0.61 & **0.58** \\ + Back translation & **0.32** & **0.40** & 0.31 & **0.36** & **0.34** & 0.40 & 0.81 & 0.77 & 0.62 & 0.57 \\ + All three DA & **0.32** & **0.40** & **0.32** & **0.36** & **0.34** & **0.41** & **0.82** & **0.81** & **0.63** & **0.58** \\ \hline \end{tabular} \end{table} Table 3: Different data augmentations on CRF and spanCRF on BUILDNyAI
2308.13778
Large-scale gradient-based training of Mixtures of Factor Analyzers
Gaussian Mixture Models (GMMs) are a standard tool in data analysis. However, they face problems when applied to high-dimensional data (e.g., images) due to the size of the required full covariance matrices (CMs), whereas the use of diagonal or spherical CMs often imposes restrictions that are too severe. The Mixture of Factor Analyzers (MFA) model is an important extension of GMMs, which allows to smoothly interpolate between diagonal and full CMs based on the number of \textit{factor loadings} $l$. MFA has successfully been applied for modeling high-dimensional image data. This article contributes both a theoretical analysis as well as a new method for efficient high-dimensional MFA training by stochastic gradient descent, starting from random centroid initializations. This greatly simplifies the training and initialization process, and avoids problems of batch-type algorithms such as Expectation-Maximization (EM) when training with huge amounts of data. In addition, by exploiting the properties of the matrix determinant lemma, we prove that MFA training and inference/sampling can be performed based on precision matrices, which does not require matrix inversions after training is completed. At training time, the method requires the inversion of $l\times l$ matrices only. Besides the theoretical analysis and proofs, we apply MFA to typical image datasets such as SVHN and MNIST, and demonstrate the ability to perform sample generation and outlier detection.
Alexander Gepperth
2023-08-26T06:12:33Z
http://arxiv.org/abs/2308.13778v1
# Large-scale gradient-based training of Mixtures of Factor Analyzers ###### Abstract Gaussian Mixture Models (GMMs) are a standard tool in data analysis. However, they face problems when applied to high-dimensional data (e.g., images) due to the size of the required full covariance matrices (CMs), whereas the use of diagonal or spherical CMs often imposes restrictions that are too severe. The Mixture of Factor Analyzers (MFA) model is an important extension of GMMs, which allows to smoothly interpolate between diagonal and full CMs based on the number of _factor loadings_ \(l\). MFA has successfully been applied for modeling high-dimensional image data [1]. This article contributes both a theoretical analysis as well as a new method for efficient high-dimensional MFA training by stochastic gradient descent, starting from random centroid initializations. This greatly simplifies the training and initialization process, and avoids problems of batch-type algorithms such as Expectation-Maximization (EM) when training with huge amounts of data. In addition, by exploiting the properties of the matrix determinant lemma, we prove that MFA training and inference/sampling can be performed based on precision matrices, which does not require matrix inversions after training is completed. At training time, the method requires the inversion of \(l\times l\) matrices only. Besides the theoretical analysis and proofs, we apply MFA to typical image datasets such as SVHN and MNIST, and demonstrate the ability to perform sample generation and outlier detection. Gaussian Mixture Models, Stochastic Gradient Descent, Mixture of Factor Analyzers, Mixture Models ## I Introduction This contribution focuses on generative machine learning models applied to images. In the original sense of the term (see [2]), this implies that a learning algorithm aims at explicitly modeling the distribution of the data. This includes sampling from this distribution, but is by no means limited to this function. Most prominently, the direct modeling of the distribution itself allows outlier detection and inference. Mixture models figure prominently among generative models, since their approach is to represent the overall data distribution as a linear combination of elementary distributions \(\Psi_{k}(\mathbf{x})\), each of which has parameters \(\mathbf{\xi}_{k}\): \[p(\mathbf{x})=\sum_{k}\pi_{k}\Psi(\mathbf{x};\mathbf{\xi}_{k})\equiv\sum_{k}\pi_{k}\Psi_{k}(\mathbf{x}) \tag{1}\] Mixture modeling allows the derivation of important analytical results, e.g., if the \(\Psi_{k}\) are mathematically well understood. A particular case is the use of multi-variate normal distributions: \[\Psi_{k}(\mathbf{x})=\mathcal{N}(\mathbf{x};\mathbf{\mu}_{k},\mathbf{\Sigma}_{k})\equiv\mathcal{N}_{k}(\mathbf{x}), \tag{2}\] which gives rise to so-called _Gaussian Mixture Models_ (GMMs). MFA represents a special case of GMMs: for describing \(d\)-dimensional variables, MFA assumes the existence of an \(l\)-dimensional _latent space_ with \(l\!\ll\!d\) for each mixture component \(k\). In the latent space, variables \(\mathbf{z}_{k}\in\mathbb{R}^{l}\) follow a simple normal distribution: \(\mathbf{z}_{k}\!\sim\!\mathcal{N}(0,\mathbf{I}\in\mathbb{R}^{l\times l})\).
The relation between latent and full space is given by a simple generative model: \[\mathbb{R}^{d}\ni\mathbf{x}_{k}=\mathbf{\mu}_{k}+\mathbf{\Lambda}_{k}\mathbf{z}_{k}+\mathbf{\epsilon}_{k} \tag{3}\] with \(\mathbf{\epsilon}_{k}\!\sim\!\mathcal{N}(0,\mathbf{D}_{k})\), \(\mathbf{D}_{k}\in\mathbb{R}^{d\times d}\) diagonal and \(\mathbf{\Lambda}_{k}\in\mathbb{R}^{d\times l}\). From this, it directly follows that \(\mathbf{x}_{k}\!\sim\!\mathcal{N}(\mathbf{\mu}_{k},\mathbf{\Sigma}_{k})\) with \[\mathbf{\Sigma}_{k}\equiv\mathbf{D}_{k}+\mathbf{\Lambda}_{k}\mathbf{\Lambda}_{k}^{T}. \tag{4}\] Inversely, the log-likelihood of a sample \(\mathbf{x}\) under mixture component \(k\) is given as: \[\ln p_{k}(\mathbf{x})=-\frac{1}{2}\Big{\{}d\log(2\pi)+\log\det\mathbf{\Sigma}_{k}+\tilde{\mathbf{x}}_{k}^{T}\mathbf{\Sigma}_{k}^{-1}\tilde{\mathbf{x}}_{k}\Big{\}}. \tag{5}\] We will use the convenient shorthand \(\tilde{\mathbf{x}}_{k}\equiv\mathbf{x}-\mathbf{\mu}_{k}\) throughout this article. The goal of MFA training is to find the _loading matrices_ \(\mathbf{\Lambda}_{k}\), the component noise matrices \(\mathbf{D}_{k}\), the component means \(\mathbf{\mu}_{k}\) and the component weights \(\pi_{k}\). This is achieved by maximizing the MFA log-likelihood: \[\mathcal{L}=\sum_{n}\log\sum_{k}p_{k}(\mathbf{x}_{n}) \tag{6}\] Optimization of \(\mathcal{L}\) is typically done using the Expectation-Maximization (EM) algorithm, which is applicable to many latent-variable models. It relies on the iterative optimization of a lower bound to the model log-likelihood, which can be shown to be tight. Since EM is not conceptually based on gradient descent, it does not require learning rates or step sizes to be set and is thus very easy to handle. ### _Motivation_ While EM training is feasible for small amounts of low-dimensional data, SGD is a preferable alternative in other cases: **Efficient training for large-scale problems** When the number of training samples is high, EM as a batch-type algorithm becomes infeasible since it requires a pass through the whole dataset for each iteration. Stochastic generalizations of EM to mini-batch type learning exist [3], but involve several new hard-to-tune hyper-parameters. Here, SGD offers a principled alternative. Being based on mini-batches and having a solid theoretical foundation in the Robbins-Monro procedure [4], it can be applied to arbitrarily large datasets. **Training from random initial conditions** EM convergence is strongly dependent on initialization, and thus initializing EM by a clustering algorithm such as k-means is common. This, too, becomes problematic for large-scale problems since k-means is a batch-type algorithm as well. We therefore propose training MFA by SGD from random initial conditions, which greatly simplifies the training procedure and removes the necessity to process the whole dataset for initialization. **Efficiency of MFA for high-dimensional data** MFA training involves the inversion of large matrices for high latent-space dimensionalities, since the log-likelihood of Eq. (5) contains a determinant. Moreover, in Eq. (5), covariance matrices must be inverted even when no training is performed. Formulating MFA in terms of precision matrices (i.e., inverses of covariance matrices) will remove this requirement. ### _Related Work_ The MFA model was introduced in [5], together with the basic idea of using the Woodbury matrix inversion theorem and the matrix determinant lemma to avoid inverting large matrices during MFA training.
Application of MFA to high-dimensional (image) data was proposed by [6, 7], using the same mathematical ideas to ensure efficiency even for high data dimensions. In addition to treating high-dimensional image data, [1] proposes training MFA on large-scale datasets like the CelebA face database and demonstrates excellent image generation performance. In that work, stochastic gradient descent is used instead of EM due to the large number of samples, although the model still needs to be initialized by k-means. Extending MFA to a deep model was first proposed by [8], where one MFA instance was trained on the latent variables extracted from the data by another one. Advantages of this approach when training on high-dimensional data are described, although it is mentioned that more than two layers are rarely useful. This idea is expanded upon by [9], which introduces deeper MFA models based on a similar principle, although application to image data is not described. Both [8] and [9] exclusively use batch-type EM for training. All described works on MFA employ the variance-based description of MFA, meaning that the quantities to be optimized are covariance matrices as opposed to precisions. ### _Goals and contributions_ The goals of this article are to establish MFA training by SGD as a simple and scalable alternative to EM and sEM in streaming scenarios with potentially high-dimensional data. To do so, we build upon previous work on the optimization of GMMs by SGD [10]. The main novel contributions are: * Mathematical analysis to prove that optimization of the MFA log-likelihood by SGD is feasible * Proof that MFA can be formulated in terms of precision matrices * A procedure to train MFA by SGD from random initial conditions * Demonstration of MFA sampling and outlier detection on typical image datasets (MNIST, SVHN) Additionally, we provide a TensorFlow implementation.1 Footnote 1: [https://github.com/anon-scientist/mfa](https://github.com/anon-scientist/mfa) ## II Data We use the following datasets: **MNIST** [11] is a common benchmark for computer vision systems and classification problems. It consists of \(60\,000\) \(28\times 28\) gray-scale images of handwritten digits (0-9). **FashionMNIST** [12] consists of images of clothes in 10 categories and is structured like MNIST. It is intended to be more challenging than the MNIST dataset. ## III Mathematical Analysis ### _Efficient formulation of MFA_ As shown in [1, 5], the component log-likelihoods can be expressed without having to store and invert \(d\)-dimensional matrices by exploiting the Woodbury matrix inversion lemma and the matrix determinant lemma: \[\log\det\boldsymbol{\Sigma}_{k} =\log\det\boldsymbol{L}_{k}+\log\det\boldsymbol{D}_{k} \tag{7}\] \[\boldsymbol{\Sigma}_{k}^{-1} =(\boldsymbol{\Lambda}_{k}\boldsymbol{\Lambda}_{k}^{T}+\boldsymbol{D}_{k})^{-1}\] \[=\boldsymbol{D}_{k}^{-1}-\boldsymbol{D}_{k}^{-1}\boldsymbol{\Lambda}_{k}\boldsymbol{L}_{k}^{-1}\boldsymbol{\Lambda}_{k}^{T}\boldsymbol{D}_{k}^{-1}\] where \(\boldsymbol{L}_{k}\!\equiv\!\boldsymbol{I}\!+\!\boldsymbol{\Lambda}_{k}^{T}\boldsymbol{D}_{k}^{-1}\boldsymbol{\Lambda}_{k}\) is an \(l\!\times\!l\) matrix and thus very efficient to compute and process. ### _Proof that MFA can be performed based on precisions_ Eq. (5) could be directly used for gradient descent, which is however difficult in practice due to the explicit matrix inversions. Although the computational load of the matrix inversions in Eq. (7) is reduced w.r.t.
inverting full covariance matrices, it is still significant and grows as \(\mathcal{O}(l^{3})\). Worse still, matrix inversion may be numerically problematic in SGD for initial solutions that are far from the optimum (unlike [1], which performs SGD starting from a k-means initialization). Lastly, Eq. (5) requires matrix inversions not only for training, but even when executing a trained model. We therefore resort to a more efficient and numerically robust strategy which avoids explicit matrix inversions. To this effect, we parameterize multi-variate normal densities by precision matrices \(\boldsymbol{P}_{k}\equiv\boldsymbol{\Sigma}_{k}^{-1}\). While the parameters of the generative model Eq. (3) have a simple relation to the learned covariance matrices, the same is not generally true for precision matrices. The question is thus how the parameters \(\boldsymbol{\Lambda}_{k}\), \(\boldsymbol{D}_{k}\) of the generative model Eq. (3) can be recovered from a trained precision matrix. To achieve this, we parameterize precision matrices differently from Eq. (4), using diagonal \(d\times d\) matrices \(\mathbf{E}_{k}\) and \(d\times l\) matrices \(\mathbf{\Gamma}_{k}\) as \[\mathbf{P}_{k}=\mathbf{\Sigma}_{k}^{-1}=\mathbf{E}_{k}-\mathbf{\Gamma}_{k}\mathbf{\Gamma}_{k}^{T}. \tag{8}\] With this definition, the log-likelihoods can be computed in a more efficient fashion than would be possible when using covariance matrices, compare Eq. (5): \[\log p_{k}(\mathbf{x})=-\frac{1}{2}\Big{\{}d\log(2\pi)-\log\det\mathbf{P}_{k}+\tilde{\mathbf{x}}^{T}\mathbf{E}_{k}\tilde{\mathbf{x}}-\big{\|}\mathbf{\Gamma}_{k}^{T}\tilde{\mathbf{x}}\big{\|}^{2}\Big{\}} \tag{9}\] By the matrix determinant lemma, we again have \[\log\det\mathbf{P}_{k}=\log\det\mathbf{M}_{k}+\log\det\mathbf{E}_{k} \tag{10}\] where \(\mathbf{M}_{k}\!\equiv\!\mathbf{I}\!-\!\mathbf{\Gamma}_{k}^{T}\mathbf{E}_{k}^{-1}\mathbf{\Gamma}_{k}\) is again an \(l\times l\) matrix. The minus sign in Eq. (8) ensures that, by the Woodbury matrix inversion lemma, we can express the covariance matrices \(\mathbf{\Sigma}_{k}\) as a function of \(\mathbf{E}_{k}\) and \(\mathbf{\Gamma}_{k}\), which in turn allows the determination of the generative model parameters: \[\mathbf{\Sigma}_{k}=\mathbf{P}_{k}^{-1}=\left(\mathbf{E}_{k}-\mathbf{\Gamma}_{k}\mathbf{\Gamma}_{k}^{T}\right)^{-1}=\mathbf{E}_{k}^{-1}+\mathbf{E}_{k}^{-1}\mathbf{\Gamma}_{k}\mathbf{M}_{k}^{-1}\mathbf{\Gamma}_{k}^{T}\mathbf{E}_{k}^{-1} \tag{11}\] Examining Eq. (11) and remembering that, as a consequence of the formulation of MFA as a generative model, we have \(\mathbf{\Sigma}_{k}=\mathbf{D}_{k}+\mathbf{\Lambda}_{k}\mathbf{\Lambda}_{k}^{T}\), we can compare terms and arrive at the following identifications: \[\mathbf{D}_{k} \equiv\mathbf{E}_{k}^{-1}\] \[\mathbf{\Lambda}_{k} \equiv\mathbf{E}_{k}^{-1}\mathbf{\Gamma}_{k}\boldsymbol{\mathcal{M}}_{k}^{-0.5}\mathbf{O}_{k}\] \[\mathbf{O}_{k}\boldsymbol{\mathcal{M}}_{k}^{-1}\mathbf{O}_{k}^{T} =\mathbf{M}_{k}^{-1} \tag{12}\] The "square root" of the inverse of \(\mathbf{M}_{k}\) is computed by first diagonalizing \(\mathbf{M}_{k}\) as \(\mathbf{M}_{k}=\mathbf{O}_{k}^{T}\boldsymbol{\mathcal{M}}_{k}\mathbf{O}_{k}\), with \(\mathbf{O}_{k}\) orthogonal and \(\boldsymbol{\mathcal{M}}_{k}\) diagonal. The inverse of the diagonal matrix \(\boldsymbol{\mathcal{M}}_{k}\) is trivial, and we obtain \(\mathbf{M}_{k}^{-1}=\mathbf{O}_{k}^{T}\boldsymbol{\mathcal{M}}_{k}^{-1/2}\boldsymbol{\mathcal{M}}_{k}^{-1/2}\mathbf{O}_{k}\).
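The following is a minimal NumPy sketch of the precision-based component log-likelihood of Eqs. (8)-(10), checked against the dense computation; the function and variable names are our own illustration, not the paper's TensorFlow implementation. Note that only the \(l\times l\) matrix \(\mathbf{M}_{k}\) is ever decomposed.

```python
import numpy as np

def mfa_component_loglik(x, mu, e_diag, Gamma):
    """Log-density of x under one MFA component, parameterized by its
    precision P_k = diag(e_diag) - Gamma @ Gamma.T, following Eqs. (8)-(10).
    Only the l x l matrix M_k = I - Gamma.T @ E^{-1} @ Gamma is formed."""
    d, l = Gamma.shape
    xt = x - mu
    M = np.eye(l) - Gamma.T @ (Gamma / e_diag[:, None])
    sign, logdet_M = np.linalg.slogdet(M)
    assert sign > 0, "M_k must stay positive-definite during training"
    logdet_P = logdet_M + np.sum(np.log(e_diag))      # Eq. (10)
    quad = xt @ (e_diag * xt) - np.sum((Gamma.T @ xt) ** 2)
    return -0.5 * (d * np.log(2 * np.pi) - logdet_P + quad)

# toy check against the dense d x d computation
rng = np.random.default_rng(0)
d, l = 50, 4
e = rng.uniform(1.0, 2.0, size=d)
G = 0.05 * rng.normal(size=(d, l))   # small enough that P stays PD
P = np.diag(e) - G @ G.T
x, mu = rng.normal(size=d), np.zeros(d)
dense = -0.5 * (d * np.log(2 * np.pi) - np.linalg.slogdet(P)[1]
                + (x - mu) @ P @ (x - mu))
print(np.allclose(mfa_component_loglik(x, mu, e, G), dense))  # True
```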
The orthogonal matrix \(\mathbf{O}_{k}\) can actually be omitted from \(\mathbf{\Lambda}_{k}\) here because an orthogonal transformation, when applied to the random latent vector, will produce another random vector of unit variance and zero mean. This defines all required parameters of the generative model in terms of the learned precision matrices, which allows us to sample efficiently in the low-dimensional space. ### _Properties of \(\mathbf{M}_{k}\)_ The following properties of the \(l\times l\) matrices \(\mathbf{M}_{k}\) are relevant to SGD optimization. Here, we will just present some proofs and comments on why certain properties are desirable: **Symmetry**: \(\mathbf{M}_{k}^{T}=\mathbf{M}_{k}\). This automatically follows from the definition of \(\mathbf{M}_{k}\): \[\mathbf{M}_{k}^{T}=(\mathbf{I}-\mathbf{\Gamma}_{k}^{T}\mathbf{E}_{k}^{-1}\mathbf{\Gamma}_{k})^{T}=\mathbf{I}-\mathbf{\Gamma}_{k}^{T}(\mathbf{E}_{k}^{-1})^{T}\mathbf{\Gamma}_{k}=\mathbf{I}-\mathbf{\Gamma}_{k}^{T}\mathbf{E}_{k}^{-1}\mathbf{\Gamma}_{k}=\mathbf{M}_{k}, \tag{13}\] since the inverse of the diagonal matrix \(\mathbf{E}_{k}\) is diagonal as well and thus symmetric. Symmetry is important since the eigenvalues of a symmetric real matrix are real and, by positive-definiteness, must be strictly positive. As a consequence, the definiteness of the \(\mathbf{M}_{k}\) can be monitored via their eigenvalues. These are cheap to compute since \(l\) is usually small. **Diagonality** This is not strictly required but facilitates the computation of eigenvalues and, above all, ensures linear independence of the columns of the loading matrices \(\mathbf{\Gamma}_{k}\), see Sec. IV. This can be proven as follows: for any two columns \(\mathbf{\Gamma}_{k:i}\), \(\mathbf{\Gamma}_{k:j}\) of the loading matrices, we know that \(\mathbf{M}_{ij}=\mathbf{I}_{ij}-\mathbf{\Gamma}_{k:i}^{T}\mathbf{E}_{k}^{-1}\mathbf{\Gamma}_{k:j}\). Diagonality of \(\mathbf{M}_{k}\) implies that \(\mathbf{M}_{ij}=0\) for \(i\neq j\). If, on the other hand, \(\mathbf{\Gamma}_{k:i}\) and \(\mathbf{\Gamma}_{k:j}\), \(i\neq j\), were linearly dependent (\(\mathbf{\Gamma}_{k:i}=c\,\mathbf{\Gamma}_{k:j}\) for some \(c\in\mathbb{R}\), \(c\neq 0\)), we would have \(\mathbf{M}_{ij}=\mathbf{I}_{ij}-c\,\mathbf{\Gamma}_{k:j}^{T}\mathbf{E}_{k}^{-1}\mathbf{\Gamma}_{k:j}\), which cannot be zero due to the positive-definiteness of \(\mathbf{E}_{k}\). Thus, it is shown that diagonalizing the \(\mathbf{M}_{k}\) leads to linear independence of the columns of the loading matrices. **Positive-definiteness** The matrices \(\mathbf{M}_{k}=\mathbf{I}-\mathbf{\Gamma}_{k}^{T}\mathbf{E}_{k}^{-1}\mathbf{\Gamma}_{k}\) are not positive-definite by construction, although this property is a requirement of the model. Since we showed in the previous paragraph that diagonality of the \(\mathbf{M}_{k}\) goes along with linearly independent columns of the \(\mathbf{\Gamma}_{k}\), all we need to show is that there exist choices of particular \(\mathbf{\Gamma}_{k}\) and \(\mathbf{E}_{k}\) for which one diagonal element of \(\mathbf{M}_{k}\) is negative. Such a choice is, e.g., \(\mathbf{\Gamma}_{k:1}=[2,0,\ldots]^{T}\) and \(\mathbf{E}_{k}=\mathbf{I}\). In this case, we have \(M_{11}=1-4=-3\), which proves our proposition. The remaining columns of \(\mathbf{\Gamma}_{k}\) are arbitrary as long as they are independent from the first one.
In order to maintain positive-definiteness, the eigenvalues of the diagonal \(\mathbf{M}_{k}\) must therefore be monitored and the appropriate columns of \(\mathbf{\Gamma}_{k}\) modified in case eigenvalues drop below 0. ### _Sampling from a trained MFA model_ Sampling is now a straightforward procedure conducted in two steps: initially, a component \(k^{*}\) is selected by drawing from a multinomial distribution parameterized by the component weights \(\pi_{k}\): \(k^{*}\sim M(\pi_{1},\ldots,\pi_{K})\). For the selected component \(k^{*}\), a realization of the latent variable \(\mathbf{z}\!\sim\!\mathcal{N}(0,\mathbf{I})\) is drawn and then transformed to the high-dimensional space using the generative model of Eq. (3): \[\mathbf{x}_{k^{*}} =\mathbf{\mu}_{k^{*}}+\mathbf{\Lambda}_{k^{*}}\mathbf{z}+\mathbf{\epsilon}_{k^{*}}\] \[\mathbf{\epsilon}_{k^{*}} \sim\mathcal{N}(0,\mathbf{D}_{k^{*}}). \tag{14}\] Here, we compute the loading matrices \(\mathbf{\Lambda}_{k^{*}}\) and the diagonal covariance matrices \(\mathbf{D}_{k^{*}}\) appearing in the generative model from the results of precision-based training as indicated in Eq. (12). As stated there, we omit the orthogonal matrices from the definition of the loading matrices, and finally use: \[\mathbf{\Lambda}_{k^{*}}\equiv\mathbf{E}_{k^{*}}^{-1}\mathbf{\Gamma}_{k^{*}}\boldsymbol{\mathcal{M}}_{k^{*}}^{-0.5} \tag{15}\] This two-step sampling procedure is equivalent to that of a GMM, except that the full-rank covariance matrices of GMMs are here expressed by loading matrices of lower rank: \(\mathbf{\Sigma}_{k^{*}}=\mathbf{\Lambda}_{k^{*}}\mathbf{\Lambda}_{k^{*}}^{T}+\mathbf{D}_{k^{*}}\). This is a general property of MFA models, irrespective of the way they are trained.
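A minimal sketch of this two-step procedure follows, using the identifications of Eq. (12) with the eigendecomposition convention of numpy.linalg.eigh; the function name, parameter layout and toy dimensions are our own assumptions, not the authors' implementation.

```python
import numpy as np

def mfa_sample(n, pi, mus, e_diags, Gammas, rng):
    """Two-step sampling (Sec. III-D): pick components from the weights pi,
    then push latent z ~ N(0, I_l) through the generative model of Eq. (3),
    with Lambda_k and D_k recovered from the precision parameters."""
    K, (d, l) = len(pi), Gammas[0].shape
    samples = []
    for k in rng.choice(K, size=n, p=pi):
        e, G, mu = e_diags[k], Gammas[k], mus[k]
        M = np.eye(l) - G.T @ (G / e[:, None])
        evals, O = np.linalg.eigh(M)           # M = O diag(evals) O^T
        # Lambda = E^{-1} Gamma O diag(evals)^{-1/2}; the trailing orthogonal
        # factor of Eq. (12) is dropped, as argued in the text
        Lam = (G / e[:, None]) @ O / np.sqrt(evals)
        z = rng.normal(size=l)
        eps = rng.normal(size=d) / np.sqrt(e)  # D_k = E_k^{-1} is diagonal
        samples.append(mu + Lam @ z + eps)
    return np.stack(samples)

rng = np.random.default_rng(0)
d, l, K = 20, 3, 2
e_diags = [rng.uniform(1.0, 2.0, size=d) for _ in range(K)]
Gammas = [0.05 * rng.normal(size=(d, l)) for _ in range(K)]
mus = [rng.normal(size=d) for _ in range(K)]
print(mfa_sample(5, np.array([0.4, 0.6]), mus, e_diags, Gammas, rng).shape)
```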
## IV Proposal for constrained SGD optimization When optimizing the precision-based MFA model by gradient descent, we use the procedure for training GMMs by SGD described in [10]. In addition, several MFA-specific constraints are enforced on top of the usual GMM constraint \(\sum_{k}\pi_{k}=1\): **Linearly independent columns of the loading matrices \(\mathbf{\Gamma}_{k}\)** Although it is not formally required by the model, the columns of the \(\mathbf{\Gamma}_{k}\) should at least be linearly independent. If they were not, the dependent columns would contain no additional information about the data and could be discarded. Indeed, it can be shown that a solution where two columns of the loading matrix are dependent or identical constitutes a local extremal point of the loss (to be avoided for SGD). To address the independence constraints, we first observe that, if the diagonal precision matrices are spherical, \(\mathbf{E}_{k}=c\mathbf{I}\), the diagonalization of \(\mathbf{M}_{k}\) achieves orthogonality of the columns of \(\mathbf{\Gamma}_{k}\). We therefore initialize the diagonal precisions to \(\mathbf{E}_{k}=c\mathbf{I}\) and diagonalize \(\mathbf{M}_{k}\) after each SGD step. This ensures orthogonality of the columns of the loading matrix in the initial phases of SGD. As the \(\mathbf{E}_{k}\) evolve under the influence of SGD, loading matrix columns will no longer be orthogonal but remain linearly independent by virtue of diagonalizing the \(\mathbf{M}_{k}\), see Sec. III-B. **Positive-definiteness of the \(\mathbf{M}_{k}\)** If this were not the case, the logarithm in Sec. III-B would be undefined. The \(\mathbf{E}_{k}\) must be positive-definite in any case, which we achieve by re-parameterizing them as the square of a diagonal matrix: \(\mathbf{E}_{k}=\boldsymbol{\mathcal{E}}_{k}^{2}\). Thus, the matrices \(\mathbf{P}_{k}\) are positive-definite as well. For maintaining positive-definiteness of \(\mathbf{M}_{k}\), we must ensure that \(\det\mathbf{M}_{k}>0\), which is not guaranteed, see Sec. III-C. Since the \(\mathbf{M}_{k}\) are diagonalized after each SGD step (see the previous paragraph), their eigenvalues can be read off their diagonal. Maintaining positive-definiteness then amounts to preventing negative eigenvalues during SGD. Whenever an eigenvalue \((\mathbf{M}_{k})_{ii}\) is below a threshold \(M_{\text{min}}\), we multiply the corresponding column \(\mathbf{\Gamma}_{k:i}\) of the loading matrix \(\mathbf{\Gamma}_{k}\) by a factor that ensures that \((\mathbf{M}_{k})_{ii}=1-\mathbf{\Gamma}_{k:i}^{T}\mathbf{E}_{k}^{-1}\mathbf{\Gamma}_{k:i}=M_{\text{min}}\). Multiplication by a constant factor is the only simple operation that preserves the diagonality of the \(\mathbf{M}_{k}\) and the linear independence of the columns of the loading matrices (a sketch of this projection step is given below).
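A minimal sketch of this post-step projection, assuming the diagonal precisions are stored as a vector; the function name, threshold value and toy check are our own illustration, not the authors' code.

```python
import numpy as np

def enforce_mfa_constraints(Gamma, e_diag, m_min=1e-4):
    """Post-SGD-step projection (Sec. IV): rotate Gamma_k so that M_k is
    diagonal (this leaves P_k unchanged, since (Gamma O)(Gamma O)^T equals
    Gamma Gamma^T for orthogonal O), then rescale any column whose eigenvalue
    (M_k)_ii = 1 - g_i^T E^{-1} g_i falls below m_min."""
    M = np.eye(Gamma.shape[1]) - Gamma.T @ (Gamma / e_diag[:, None])
    evals, O = np.linalg.eigh(M)
    Gamma = Gamma @ O                  # now M is diag(evals)
    for i, lam in enumerate(evals):
        if lam < m_min:
            g = Gamma[:, i]
            q = g @ (g / e_diag)       # g_i^T E^{-1} g_i > 0
            Gamma[:, i] = g * np.sqrt((1.0 - m_min) / q)
    return Gamma

rng = np.random.default_rng(0)
G = rng.normal(size=(10, 3))  # deliberately large: M starts indefinite
e = np.ones(10)
G = enforce_mfa_constraints(G, e)
M = np.eye(3) - G.T @ (G / e[:, None])
print(np.round(np.diag(M), 6), np.allclose(M, np.diag(np.diag(M)), atol=1e-8))
```

Rescaling a column changes only the corresponding diagonal entry of \(\mathbf{M}_{k}\), since the off-diagonal entries are already zero after the rotation.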
## V Experiments We created a publicly available implementation of SGD-based MFA training based on TensorFlow 2.7 [13], in particular its keras package. This implementation is used for all experiments described here, with acceleration provided by GPUs of the type "nVidia GTX 2080 Super". Each experiment for which metrics are presented is repeated 10 times, and we always report the means and standard deviations of these metrics. Unless otherwise stated, we always use \(K=25\) components and a latent dimension of \(l=4\). Otherwise, the default parameters and initialization from [10] are used. In particular, centroids are always initialized to uniform random values between \(-0.1\) and \(0.1\). The diagonal precision matrices \(\boldsymbol{\mathcal{E}}_{k}\) are clipped from above at a value of \(D_{\text{max}}=20\) in order to avoid unbounded precision values for pixels that have no variability. Throughout all experiments, a mini-batch size of 100 is used for SGD. In order to avoid undesirable local optima during early phases of training, it is imperative that the centroids \(\mathbf{\mu}_{k}\) converge before the precision and loading matrices \(\mathbf{E}_{k}\), \(\mathbf{\Gamma}_{k}\). In practice, this can be achieved by giving different weights to the gradients \(\vec{\nabla}_{\mathbf{\mu}_{k}}\mathcal{L}\), \(\vec{\nabla}_{\mathbf{E}_{k}}\mathcal{L}\) and \(\vec{\nabla}_{\mathbf{\Gamma}_{k}}\mathcal{L}\). Sensible values for these weights are \(\lambda_{\mathbf{\Gamma}}=\lambda_{\boldsymbol{\mathcal{E}}}=0.1\) and \(\lambda_{\mathbf{\mu}}=1\), although in this case the precision matrices converge rather slowly. A workaround to artificially accelerate training is to conduct separate training phases: in phase I, only the \(\mathbf{\mu}_{k}\) are adapted, whereas all quantities are adapted together with equal weights in phase II. Unless otherwise stated, we use 15 epochs for phase I and 50 epochs for phase II. ### _Training procedure and basic feasibility_ In order to demonstrate the feasibility of MFA training by constrained SGD, we train the MFA model as in Sec. V on MNIST and FashionMNIST. To avoid cluttered and over-complex results, we restrict training to classes 0, 1 and 2 (although it works just as well with all classes). For both datasets, we report the final centroids, precisions and loading matrices in Fig. 1. As stated in [10], the centroids are initialized to random values between \(-0.1\) and \(0.1\), variances are uniformly initialized to \(D_{\text{max}}=20\) and loading matrices are initialized such that \(\mathbf{M}_{k}=0.0001\,\mathbf{I}\). The component weights are uniformly initialized to \(\pi_{k}=\frac{1}{K}\). When repeating this experiment 10 times, we observe that convergence is always achieved, although of course the precise converged values vary due to the random initial conditions. When inspecting the centroids, a SOM-like self-organization by similarity is apparent, which is an artifact of the training process, see Fig. 1. It is visually apparent from Fig. 1 that the columns of the loading matrices are all distinct (due to diagonalizing \(\mathbf{M}_{k}\)) and capture major directions of variation. We also note that the strength of variations decreases with higher \(l\), reminiscent of principal directions in PCA. ### _MFA sampling_ We use the trained models from Sec. V-A to perform sampling (see Sec. III-D) from mixture components 0, 6 and 15 (MNIST) and 1, 3, 13 (FashionMNIST). As described in Sec. III-D, the mixture component to sample from would normally be chosen randomly according to the component weights, but here we manually select components in order to illustrate how MFA introduces variability into sampling from the same component. Sampling results are presented in Fig. 2. We observe that, despite originating from the same centroid, all samples are subtly different due to the multi-dimensional latent space that captures variations along the directions stored in the factor loadings. ### _Outlier detection and comparison to GMMs_ For demonstrating the outlier detection capacity and thus the validity of precision-based MFA, we train it on MNIST classes 0-8 and record the test log-likelihoods on the remaining class 9. For this experiment, we use \(K=49\) components but leave the experimental procedure and parameters of Sec. V untouched otherwise. Since class 9 has not been used for training, it constitutes an outlier class and should be recognized as such. This is realized by a threshold \(\theta\) applied to each log-likelihood \(\mathcal{L}(\mathbf{x}_{n})\), where \(\mathcal{L}(\mathbf{x}_{n})<\theta\) indicates an outlier since log-likelihoods are maximized. For a given value of \(\theta\), we can compute two values: the percentage \(\alpha\) of true inliers (class 0-8 samples) that are recognized, and the percentage \(\beta\) of true outliers (class 9 samples) that are rejected. Fig. 1: Centroids \(\mathbf{\mu}_{k}\) (left), precisions \(\mathbf{E}_{k}\) (second from left) and loading matrices \(\mathbf{\Gamma}_{k}\) for \(l=0,1,2,3\) (four right-most images) for precision-based MFA performed on MNIST (upper row) and FashionMNIST (lower row). Each tile in the loading matrix images belongs to the centroid at the corresponding tile position. Centroids are scaled to the \([0,1]\) range, loading matrices to the \([-1,1]\) range and precisions between 18 and 20. Fig. 2: Sampling from MNIST (upper row) and FashionMNIST (lower row) from selected mixture components. Selected components (to be compared to the centroids in Fig. 1) are 0, 6, 15 for MNIST and 1, 3, 13 for FashionMNIST. Please enlarge the figure to observe that each sample is distinct in shape from the others. This is most notable for the rightmost MNIST samples of digit class 2, but differences in slant and strokes are also observable for the other classes.
By varying \(\theta\), we obtain ROC-like plots, for which we compute the area-under-the-curve (AUC) measure. AUCs for MNIST and FashionMNIST, both for a GMM (with the same number of components) and for precision-based MFA, are given in Tab. I. We observe that MFA slightly improves upon GMM performance. ## VI Discussion The mathematical and experimental results presented in Sec. III and Sec. V indicate, first of all, that precision-based MFA can be successfully trained by SGD, from random initial conditions, and on large-scale datasets. We showed that MFA models trained by SGD can be successfully used for sampling and outlier detection, two important functionalities of unsupervised learning. Here, we will discuss the wider implications of these results: **Efficiency for streaming data** In conventional machine learning settings where a model is first trained, then deployed/applied without further adaptation, it is a feasible strategy to pre-compute the inverse covariance matrices after training has finished, and thus to avoid matrix inversions. In situations where the model is updated continuously while being applied (evaluate-then-train strategy, see [14]), this is no longer feasible since the covariance matrices change continuously. Here, the precision-based approach is superior since it requires matrix inversion for the training step only. **Simplicity** The proposed SGD approach to MFA has the advantage of being extremely simple. In particular, no complex initialization of centroids by, e.g., k-means is required, and neither is the even more complex initialization of covariance matrices as used, e.g., in [1]. Instead, centroids are initialized to small random values with guaranteed convergence, as shown here and in [10]. **Processing of large and high-dimensional datasets** Due to the "trick" of computing determinants and matrix inverses on the low-dimensional matrices \(\mathbf{M}_{k}\), \(\mathbf{L}_{k}\), MFA can be applied to high-dimensional data, at least as long as \(l\) is small. This has been shown in previous works [5, 6] using EM for optimization. However, MFA training on large datasets or streaming data has been problematic due to the batch-type nature of EM, which causes memory requirements to grow linearly with the number of samples. Stochastic variants of EM only partially fix this problem since they introduce several new and unintuitive hyper-parameters that must be tuned by grid search. Here, SGD offers a principled alternative, since its memory requirements depend only on the chosen mini-batch size, and since the choice of the single learning-rate parameter is well understood. **Independent factor loadings** In contrast to the original formulation of MFA [5], which imposes no constraints on the loading matrices, we demand that factor loadings be independent. We do not observe any problems related to this additional constraint, as convergence was universal in all conducted experiments. Conversely, we did observe a few cases where SGD was stuck in local extremal points with partially identical factor loadings. **Small-\(l\) regime** MFA is strongly related to principal component analysis (PCA), since the factor loadings aim to capture, for each mixture component, the directions that best explain that component's variance. In contrast to PCA, we do not require individual directions to be orthogonal, only independent. A well-known fact is that the number of directions required to explain a large part of the variance is rather small, especially for images.
Thus, running MFA in a "small-\(l\)" regime seems feasible. ## VII Conclusion and outlook This article makes a mathematical as well as a practical contribution, showing how precision-based MFA can be trained by SGD in theory, and then validating the mathematical proofs by experiments using our own keras-based implementation. Future work will include a convolutional generalization of MFA, and the stacking into deep MFA hierarchies for realistic sampling of complex images.
2310.07547
Entropy estimators for Markovian sequences: A comparative analysis
Entropy estimation is a fundamental problem in information theory that has applications in various fields, including physics, biology, and computer science. Estimating the entropy of discrete sequences can be challenging due to limited data and the lack of unbiased estimators. Most existing entropy estimators are designed for sequences of independent events and their performance varies depending on the system being studied and the available data size. In this work we compare different entropy estimators and their performance when applied to Markovian sequences. Specifically, we analyze both binary Markovian sequences and Markovian systems in the undersampled regime. We calculate the bias, standard deviation and mean squared error for some of the most widely employed estimators. We discuss the limitations of entropy estimation as a function of the transition probabilities of the Markov processes and the sample size. Overall, this paper provides a comprehensive comparison of entropy estimators and their performance in estimating entropy for systems with memory, which can be useful for researchers and practitioners in various fields.
Juan De Gregorio, David Sanchez, Raul Toral
2023-10-11T14:50:47Z
http://arxiv.org/abs/2310.07547v2
# Entropy estimators for Markovian sequences: A comparative analysis ###### Abstract Entropy estimation is a fundamental problem in information theory that has applications in various fields, including physics, biology, and computer science. Estimating the entropy of discrete sequences can be challenging due to limited data and the lack of unbiased estimators. Most existing entropy estimators are designed for sequences of independent events and their performance varies depending on the system being studied and the available data size. In this work we compare different entropy estimators and their performance when applied to Markovian sequences. Specifically, we analyze both binary Markovian sequences and Markovian systems in the undersampled regime. We calculate the bias, standard deviation and mean squared error for some of the most widely employed estimators. We discuss the limitations of entropy estimation as a function of the transition probabilities of the Markov processes and the sample size. Overall, this paper provides a comprehensive comparison of entropy estimators and their performance in estimating entropy for systems with memory, which can be useful for researchers and practitioners in various fields. ## I Introduction The entropy associated with a random variable is a measure of its uncertainty or diversity, taking large values for a highly unpredictable random variable (i.e., all outcomes equally probable) and low values for a highly predictable one (i.e., one or few outcomes much more probable than the others). As such, the concept has found multiple applications in a variety of fields including but not limited to nonlinear dynamics, statistical physics, information theory, biology, neuroscience, cryptography and linguistics [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. Due to its mathematical simplicity and clear interpretation, Shannon's definition is the most widely used measure of entropy [12]. For a discrete random variable \(X\) with \(L\) distinct possible outcomes, \(x_{1},\ldots,x_{L}\), the Shannon entropy reads \[H[X]=-\sum_{i=1}^{L}p(x_{i})\ln(p(x_{i})), \tag{1}\] where \(p(x_{i})\) denotes the probability that the random variable \(X\) takes the value \(x_{i}\). It often occurs in practice that the probability distribution of the variable \(X\) is unknown, either due to mathematical difficulties or to the lack of deep knowledge of the details of the underlying experiment described by the random variable \(X\). In those situations, it is not possible to compute the entropy using Eq. (1) directly. In general, our information is restricted to a finite set of ordered data resulting from the observation of the outcomes obtained by repeating the experiment a large number of times, \(N\). Hence, the goal is to estimate \(H\) from the ordered sequence \(S=X_{1},\ldots,X_{N}\), where each \(X_{j}\in\{x_{i}\}_{i=1}^{L}\) with \(j=1,\ldots,N\). A numerical procedure that provides an approximation to the true value of \(H\) based on the sequence \(S\) is called an _entropy estimator_. As the sequence \(S\) is random, it is clear that an entropy estimator is itself a random variable, taking different values for different realizations of the sequence of \(N\) outcomes. It would be highly desirable to have an unbiased entropy estimator, i.e., an estimator whose average value coincides with the true result \(H\) for all values of the sequence length \(N\).
However, it can be proven that such an estimator does not exist [13] and that, besides the unavoidable statistical errors due to the finite number \(N\) of data in the sample (which typically scale as \(N^{-1/2}\)), all estimators present systematic errors which are in general difficult to evaluate properly. Therefore, a large effort has been devoted to the development of entropy estimators that, although necessarily biased, provide a good value for \(H\) with small statistical and systematic errors [14]. The problem of finding a good estimator with small errors becomes more serious when the number of data \(N\) is relatively small. Indeed, when the size of the available data is much larger than the number of possible outcomes (\(N\gg L\)), it is not difficult to estimate \(H\) accurately, and all of the most popular estimators are naturally satisfactory in this regime. The task becomes much harder as the numbers \(L\) and \(N\) come closer to each other. It is particularly difficult in the undersampled regime (\(N\lesssim L\)) [15], where some, or potentially many, possible outcomes may not be observed in the sequence. It is in this regime where the difference in accuracy among the available estimators is most significant. We emphasize that the discussed difficulties already appear for independent identically distributed (i.i.d.) random variables. Indeed, the previous literature has largely dealt with entropy estimators proposed for sequences of i.i.d. random variables [16; 17; 18; 14; 19]. However, it is not clear that real data arising from experimental observation can be described with i.i.d. random variables, due to the ubiquitous presence of data correlations. The minimal correlations in discrete sequences are of Markovian nature. Then, how do the main entropy estimators behave for Markovian sequences? The purpose of this work is to make a detailed comparison of some of the most widely used entropy estimators in systems whose future is conditionally independent of the past (Markovian). In Markovian sequences, correlations stem from the fundamental principle that the probability of a data value appearing at a specific time depends on the value observed in the preceding time step. Markov chains have been used to model systems in a large variety of fields such as statistical physics [20], molecular biology [21], weather forecasting [22], and linguistics [23], just to mention a few. Below, we analyze the strengths and weaknesses of estimators tested on correlated series of numerically generated data. We compare the performance of the estimators that have been shown to give good results for independent sequences [14]. For definiteness, we consider below Markovian sequences of binary data. Furthermore, even in those cases in which the series can be considered genuinely constructed out of i.i.d. variables, the calculation of relevant quantities in information theory, such as the entropy rate and the predictability gain [24], requires estimating the _block entropy_ of a sequence, obtained from the estimation of the entropy associated not with a single result, but with a block of consecutive results. As we will argue in the following sections, the construction of the blocks induces correlations amongst them, even if the original sequence is not correlated. The calculation of the block entropy is also a tool that can be used to estimate the memory of a given sequence [25], which is of utmost importance when dealing with strongly correlated systems [26; 27; 28; 29; 30; 31]. The rest of the paper is organized as follows.
In Sec. II we give a brief overview of the nine entropy estimators considered in this study. In Sec. III we present the results of our comparative analysis of these estimators in two Markovian cases: (A) binary sequences; and (B) the undersampled regime. Section IV contains the conclusions and an outlook. Finally, in Appendix A we provide a new interpretation in terms of geometric distributions of an estimator which is widely used as the starting point to construct others, and in Appendix B we prove the equivalence between a dynamics of block sequences and a Markovian random variable. ## II Entropy estimators In the following we will use the notation \(\hat{a}\) to refer to a numerical estimator of the quantity \(a\). The bias of \(\hat{a}\) is defined as \[B[\hat{a}]=\langle\hat{a}\rangle-a. \tag{2}\] The estimator \(\hat{a}\) is said to be unbiased if \(B[\hat{a}]=0\). The dispersion of \(\hat{a}\) is given by the standard deviation \[\sigma[\hat{a}]=\sqrt{\langle\hat{a}^{2}\rangle-\langle\hat{a}\rangle^{2}}. \tag{3}\] Ideally, \(\hat{a}\) should be as close to the true value \(a\) as possible. Therefore, it is desirable that \(\hat{a}\) has both low bias and low standard deviation. With this in mind, it is natural to consider the mean squared error of an estimator, given by \[\text{MSE}[\hat{a}]=B[\hat{a}]^{2}+\sigma[\hat{a}]^{2}, \tag{4}\] to assess its quality. Hence, when comparing estimators of the same quantity, the one with the lowest mean squared error is preferable. Several entropy estimators were developed under the explicit assumption that the sequences being analyzed are uncorrelated [32; 33]. The main assumption is that the probability of the number of times \(n_{i}\) that the outcome \(x_{i}\) occurs in a sequence of length \(N\) follows a binomial distribution, \[P(n_{i})=\binom{N}{n_{i}}p(x_{i})^{n_{i}}(1-p(x_{i}))^{N-n_{i}}. \tag{5}\] This approach is not valid when dealing with general Markovian sequences because Eq. (5) no longer holds. In fact, there is no known closed form for the probability distribution of \(n_{i}\) in this case. Even for entropy estimators that were not developed directly using Eq. (5), their performance is usually analyzed only for independent sequences [14]. Hence the need to compare and evaluate the different estimators on Markov chains. Even though there exists a plethora of entropy estimators in the literature [34; 35; 36; 37; 38; 39; 40; 41; 42], we here focus on nine of the most commonly employed ones. ### Maximum likelihood estimator The maximum likelihood estimator (MLE), also known as the plug-in estimator, simply consists of replacing the exact probabilities in Eq. (1) by the estimated frequencies, \[\hat{p}(x_{i})=\frac{\hat{n}_{i}}{N}, \tag{6}\] where \(\hat{n}_{i}\) is the number of times that the outcome \(x_{i}\) is present in the sequence. It is well known that Eq. (6) is an unbiased estimator of \(p(x_{i})\), but the MLE estimator, given by \[\hat{H}^{\text{\tiny MLE}}=-\sum_{i=1}^{L}\hat{p}(x_{i})\ln(\hat{p}(x_{i})), \tag{7}\] is negatively biased [13], i.e., \(\langle\hat{H}^{\text{\tiny MLE}}\rangle-H<0\). ### Miller-Madow estimator The idea behind the Miller-Madow estimator (MM) [43] is to correct the bias of \(\hat{H}^{\text{\tiny MLE}}\) up to first order in \(1/N\), resulting in \[\hat{H}^{\text{\tiny MM}}=\hat{H}^{\text{\tiny MLE}}+\frac{N_{0}-1}{2N}, \tag{8}\] where \(N_{0}\) is the number of different elements present in the sequence. Corrections of higher order are not considered because they involve the unknown probabilities \(p(x_{i})\) [44]. (A minimal implementation of these two estimators is sketched below.)
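As an illustration, a minimal implementation of these two estimators could look as follows; this is our own sketch under Eqs. (6)-(8), not the code used for the results of this paper.

```python
import numpy as np
from collections import Counter

def entropy_mle(seq):
    """Plug-in (maximum likelihood) estimator of Eq. (7)."""
    n = len(seq)
    p = np.array([c / n for c in Counter(seq).values()])
    return -np.sum(p * np.log(p))

def entropy_miller_madow(seq):
    """Miller-Madow estimator of Eq. (8): first-order bias correction."""
    n0 = len(set(seq))  # number of distinct observed outcomes, N_0
    return entropy_mle(seq) + (n0 - 1) / (2 * len(seq))

# toy check: i.i.d. uniform sequence over L = 10 outcomes, true H = ln 10
rng = np.random.default_rng(0)
seq = rng.integers(0, 10, size=200).tolist()
print(entropy_mle(seq), entropy_miller_madow(seq), np.log(10))
```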
Corrections of higher order are not considered because they include the unknown probabilities \(p(x_{i})\) [44]. ### Nemenman-Shafee-Bialek estimator A large family of entropy estimators is derived by estimating the probabilities using a Bayesian framework [45; 46; 47; 48; 39; 49; 50]. The Nemenman-Shafee-Bialek estimator (NSB) [49; 51; 50] provides a novel Bayesian approach that, unlike traditional methods, does not rely on strong prior assumptions on the probability distribution. Instead, this method uses a mixture of Dirichlet priors, designed to produce an approximately uniform distribution of the expected entropy value. This ensures that the entropy estimate is not exceedingly biased by prior assumptions. The Python implementation developed in Ref. [52] was used in this paper for the calculations of the NSB estimator. ### Chao-Shen estimator The Chao-Shen estimator (CS) [16] takes into account two corrections to Eq. (7) to reduce its bias: first, a Horvitz-Thompson adjustment [53] to account for missing elements in a finite sequence; second, a correction to the estimated probabilities, \(\hat{p}^{\textsc{cs}}(x_{i})=\hat{C}^{\textsc{cs}}\hat{p}(x_{i})\), leading to \[\hat{C}^{\textsc{cs}}=1-\frac{N_{1}}{N}, \tag{9}\] where \(N_{1}\) is the number of elements that appear only once in the sequence. The Chao-Shen entropy estimator is then \[\hat{H}^{\textsc{cs}}=-\sum_{x_{i}\in S}\frac{\hat{p}^{\textsc{cs}}(x_{i})\ln(\hat{p}^{\textsc{cs}}(x_{i}))}{1-(1-\hat{p}^{\textsc{cs}}(x_{i}))^{N}}. \tag{10}\] ### Grassberger estimator Assuming that all \(p(x_{i})\ll 1\), the probability distribution of each \(n_{i}\) can be approximated by a Poisson distribution. Following this idea, Grassberger (G) derived the estimator presented in Ref. [32] by first considering Rényi entropies of order \(q\) [54]: \[H(q)=\frac{1}{q-1}\ln\sum_{i=1}^{L}p^{q}(x_{i}). \tag{11}\] Taking into account that the Shannon case can be recovered by taking the limit \(q\to 1\), the author proposed a low-bias estimator for the quantity \(p^{q}\), for an arbitrary \(q\). This approach led to the estimator presented in Ref. [55] and then, finally, to the improved estimator given by \[\hat{H}^{\textsc{g}}=\ln(N)-\frac{1}{N}\sum_{i=1}^{L}\hat{n}_{i}G_{\hat{n}_{i}}, \tag{12}\] with \(G_{1}=-\gamma-\ln 2\), \(G_{2}=2-\gamma-\ln 2\), and the different values of \(G_{n_{i}}\) are computed using the recurrence relation \[G_{2n+1}=G_{2n}, \tag{13}\] \[G_{2n+2}=G_{2n}+\frac{2}{2n+1}, \tag{14}\] where \(\gamma=0.57721\dots\) is Euler's constant. ### Bonachela-Hinrichsen-Muñoz estimator The idea behind the Bonachela-Hinrichsen-Muñoz estimator (BHM) [33] is to make use of Eq. (5) to find a balanced estimator of the entropy that, on average, minimizes the mean squared error. The resulting estimator is given by \[\hat{H}^{\textsc{bhm}}=\frac{1}{N+2}\sum_{i=1}^{L}(\hat{n}_{i}+1)\sum_{j=\hat{n}_{i}+2}^{N+2}\frac{1}{j}. \tag{15}\] ### Shrinkage estimator The estimator proposed by Hausser and Strimmer [18] (HS) is a shrinkage-type estimator [56], in which the probabilities are estimated as an average of two models: \[\hat{p}^{\textsc{hs}}(x_{i})=\lambda\frac{1}{L}+(1-\lambda)\hat{p}(x_{i}), \tag{16}\] where the weight \(\lambda\) is chosen so that the resulting estimator \(\hat{p}^{\textsc{hs}}\) has lower mean squared error than \(\hat{p}\) and is calculated by [57] \[\lambda=\min\left(1,\frac{1-\sum_{i=1}^{L}(\hat{p}(x_{i}))^{2}}{(N-1)\sum_{i=1}^{L}(1/L-\hat{p}(x_{i}))^{2}}\right).
\tag{17}\] Hence, the shrinkage estimator is \[\hat{H}^{\textsc{hs}}=-\sum_{i=1}^{L}\hat{p}^{\textsc{hs}}(x_{i})\ln(\hat{p}^{\textsc{hs}}(x_{i})). \tag{18}\] ### Chao-Wang-Jost estimator The Chao-Wang-Jost estimator (CWJ) [58] uses the series expansion of the logarithm function, as well as a correction to account for the missing elements in the sequence. This estimator is given by \[\hat{H}^{\text{\tiny CWJ}}=\sum_{i=1}^{L}\frac{\hat{n}_{i}}{N}(\psi(N)-\psi(\hat{n}_{i}))+\frac{N_{1}}{N}(1-A)^{1-N}\left(-\ln(A)-\sum_{j=1}^{N-1}\frac{1}{j}(1-A)^{j}\right), \tag{19}\] where \(\psi(z)\) is the digamma function and \(A\) is given by \[A=\begin{cases}\frac{2N_{2}}{(N-1)N_{1}+2N_{2}}&\text{if }N_{2}>0,\\ \frac{2}{(N-1)(N_{1}-1)+2}&\text{if }N_{2}=0,N_{1}>0,\\ 1&\text{if }N_{1}=N_{2}=0,\end{cases} \tag{20}\] with \(N_{1}\) and \(N_{2}\) the number of elements that appear once and twice, respectively, in the sequence. In the supplementary material of Ref. [58], it is proven that the first sum in Eq. (19) is the same as the leading term of the Grassberger estimator [32]. It is also used in the estimators developed in Refs. [36; 37]. In Appendix A we show that each term in this sum is also equivalent to an estimator that takes into account the number of observations made prior to the occurrence of the element \(x_{i}\). ### Correlation coverage-adjusted estimator The correlation coverage-adjusted estimator (CC) [25] uses the same ideas that support Eq. (10) but considers a different correction to the probabilities, \(\hat{p}^{\text{\tiny CC}}(x_{i})=\hat{C}^{\text{\tiny CC}}\hat{p}(x_{i})\), where now \(\hat{C}^{\text{\tiny CC}}\) is calculated sequentially taking into account previously observed data, \[\hat{C}^{\text{\tiny CC}}=1-\sum_{j=1}^{N^{\prime}}\frac{1}{N^{\prime}+j}I(X_{N^{\prime}+j}\notin(X_{1},\ldots,X_{N^{\prime}+j-1})), \tag{21}\] where \(N^{\prime}\equiv N/2\) and the function \(I(Z)\) yields 1 if the event \(Z\) is true and 0 otherwise. By construction, this probability estimator considers possible correlations in the sequence. Then, the CC estimator is given by \[\hat{H}^{\text{\tiny CC}}=-\sum_{x_{i}\in S}\frac{\hat{p}^{\text{\tiny CC}}(x_{i})\ln(\hat{p}^{\text{\tiny CC}}(x_{i}))}{1-(1-\hat{p}^{\text{\tiny CC}}(x_{i}))^{N}}. \tag{22}\] ## III Results We now proceed to compare the performance of the different estimators defined in Sec. II. Let us note first that, given a particular sequence, all entropy estimators, with the only exception of the CC estimator, will yield exactly the same value if we arbitrarily permute all the numbers in the sequence. The reason behind this difference is that while the CC estimator takes into account the order in which the different elements appear in the sequence, all other estimators are based solely on the knowledge of the number of times that each possible outcome appears, and this number is invariant under permutations. Certain estimators, such as CS or CC, can be calculated without any prior knowledge of the possible number of outcomes, \(L\). This feature is particularly advantageous in fields like ecology, where the number of species in a given area may not be accurately known. Conversely, estimators like HS and NSB require an accurate estimate of \(L\) for their computation. As mentioned before, when analyzing an estimator there are two important statistics to consider: the bias and the standard deviation. Ideally, we would like an estimator with zero bias and low standard deviation.
For the entropy, we have already argued that such an unbiased estimator does not exist. Hence, in this case, the "best" estimator (if it exists) would be the one that has the best balance between bias and standard deviation, i.e., the one with lowest mean squared error given by Eq. (4). In this section we will analyze and compare these three statistics, bias, standard deviation and mean squared error, for the nine entropy estimators reviewed in Sec. II in two main Markovian cases: A) binary sequences; and B) an undersampled regime. As mentioned previously, a Markovian random variable is one in which the probability of the next event only depends on the current value. In other words, for this type of system the transition probabilities satisfy \[P(X_{s}=x_{j}|X_{s-1}=x_{\ell},\ldots,X_{1}=x_{k})=P(X_{s}=x_{j}|X_{s-1}=x_{\ell}),\quad j,\ell=1,\ldots,L, \tag{23}\] with \(s\) the position in the series. A homogeneous Markov chain is one in which the transition probabilities are independent of the time step \(s\). Therefore, a homogeneous Markov chain is completely specified by the \(L\times L\) matrix of transition probabilities \(p(x_{j}|x_{\ell})=P(X_{s}=x_{j}|X_{s-1}=x_{\ell}),\,j,\ell=1,\ldots,L\). The definition can be generalized to an \(m\)-order Markov chain defined by the transition probabilities \[P(X_{s}=x_{j}|X_{s-1}=x_{\ell},\ldots,X_{1}=x_{k})=P(X_{s}=x_{j}|X_{s-1}=x_{\ell},\ldots,X_{s-m}=x_{u}), \tag{24}\] which depend on the \(m\) previous results of the random variable. ### Binary sequences First, we consider homogeneous Markovian, binary (\(L=2\)) random variables, with possible outcomes \(0,1\). One advantage of discussing this system is that it is uniquely defined by a pair of independent transition probabilities: \(p(0|0)\) and \(p(1|1)\), where \(p(x_{i}|x_{j})\equiv P(X_{s+1}=x_{i}|X_{s}=x_{j})\). Then, \(p(1|0)=1-p(0|0)\) and \(p(0|1)=1-p(1|1)\). To shorten the notation we hereafter write \(p_{00}\) for \(p(0|0)\) and \(p_{11}\) for \(p(1|1)\). It is possible to compute the Shannon entropy of this random variable using the general definition given by Eq. (1): \[H=-p(0)\ln p(0)-p(1)\ln p(1) \tag{25}\] with the stationary values [5]: \[\begin{split} p(0)&=\frac{1-p_{11}}{2-p_{00}-p_{11}},\\ p(1)&=1-p(0).\end{split} \tag{26}\] The average value and standard deviation of the different entropy estimators can be computed using \[\langle\hat{H}^{k}\rangle=\sum_{S}P(S)\hat{H}(S)^{k}, \tag{27}\] for \(k=1,2\). Here the sum runs over all sequences \(S=X_{1},\ldots,X_{N}\) of length \(N\) and \(\hat{H}(S)\) is the value that the estimator takes for this sequence. The probability of the sequence is \[P(S)=p(X_{1})\prod_{i=1}^{N-1}p(X_{i+1}|X_{i}), \tag{28}\] where \(p(X_{1})\) are the stationary values given by Eq. (26) and we have applied Eq. (23) successively. It is crucial to note that for Markovian sequences it is not possible to reduce Eq. (28) to a binomial distribution, as in Eq. (5), in terms of the number of occurrences of the symbols \(0\) and \(1\) in the sequence. Nevertheless, for moderate values of \(N\) it is still possible to perform the calculation of the expected value given by Eq. (27) by generating all \(2^{N}\) possible sequences \(S\) and computing the probability of each one using Eq. (28). We have followed this approach to compute the estimator bias \(B=\langle\hat{H}\rangle-H\) and its standard deviation \(\sigma=\sqrt{\langle\hat{H}^{2}\rangle-\langle\hat{H}\rangle^{2}}\).
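As an illustration of this exhaustive procedure, the following Python sketch (ours, not part of the original computation; the function names are ours, and the plug-in estimator stands in for any \(\hat{H}\)) enumerates all \(2^{N}\) sequences, weights each by Eq. (28), and returns the exact bias and standard deviation:

```python
import numpy as np
from itertools import product

def h_mle(seq):
    # Plug-in entropy of a sequence; any other estimator could be swapped in.
    _, counts = np.unique(seq, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def exact_bias_sigma(p00, p11, N, estimator=h_mle):
    p0 = (1 - p11) / (2 - p00 - p11)                # stationary value, Eq. (26)
    p1 = 1 - p0
    H = -p0 * np.log(p0) - p1 * np.log(p1)          # true entropy, Eq. (25)
    T = {(0, 0): p00, (0, 1): 1 - p00,
         (1, 1): p11, (1, 0): 1 - p11}              # transition probabilities
    m1 = m2 = 0.0
    for S in product((0, 1), repeat=N):             # all 2**N sequences
        P = p0 if S[0] == 0 else p1                 # P(S), Eq. (28)
        for a, b in zip(S, S[1:]):
            P *= T[(a, b)]
        h = estimator(np.array(S))
        m1 += P * h                                 # Eq. (27) with k = 1
        m2 += P * h ** 2                            # Eq. (27) with k = 2
    return m1 - H, np.sqrt(max(m2 - m1 ** 2, 0.0))  # bias B and sigma

print(exact_bias_sigma(0.6, 0.7, N=4))
```

Because the cost grows as \(2^{N}\), this exact evaluation is only feasible for the moderate sequence lengths considered here.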
As an example, we plot the absolute value of the bias for sequences of length \(N=4\) in the color map of Fig. 1, for the nine entropy estimators presented in Sec. II, as a function of the transition probabilities \(p_{00}\) and \(p_{11}\). In Fig. 1 we can see that, for all nine estimators, the bias is larger in the region around the values \(p_{00}\simeq p_{11}\simeq 1\). The reason is that, in this region, the stationary probabilities of \(0\) and \(1\) are very similar, but given these particular values of the transition probabilities, a short sequence will most likely feature only one of these values, which makes it very hard to correctly estimate the entropy in those cases. Apart from this common characteristic, the performance of the estimators when considering only the bias is quite diverse, all of them having different regions where the bias is lowest (darker areas in the panels). In order to quantitatively compare the performance of the different estimators, we have aggregated all values in the \((p_{00},p_{11})\) plane. We define the aggregated bias of an estimator as \[\overline{B}=(\Delta p)^{2}\sum_{p_{00},p_{11}}|B(p_{00},p_{11})|, \tag{29}\] where the sum runs over all values of the transition probabilities used to produce Fig. 1, \(\Delta p=0.02\) is the step value used for the grid of the figure, and \(B(p_{00},p_{11})\) is the bias for the particular values of the transition probabilities. The aggregated bias given by Eq. (29) depends only on the sequence length \(N\). We conduct the previous analysis for different values of \(N\). The resulting plot of the aggregated bias \(\overline{B}\) of the entropy estimator as a function of the sequence length is shown in Fig. 2. In this figure, we can see that the CC estimator gives the overall best performance, except for \(N=2\), where the CWJ estimator has the lowest aggregated bias. As expected, all the estimators yield an aggregated bias that vanishes as \(N\) increases. In the colormaps of Fig. 3 we perform a similar analysis for the standard deviation \(\sigma\). In the figure we find that all nine estimators show a similar structure in the sense that the regions of lowest and highest \(\sigma\) are alike. The smallest deviation is mostly located near the left bottom corner of the colormaps and the largest deviation occurs around the regions (\(0.65\lesssim p_{00}\lesssim 0.9\), \(0\lesssim p_{11}\lesssim 1\)) and (\(0\lesssim p_{00}\lesssim 1,\,0.65\lesssim p_{11}\lesssim 0.9\)) (green areas in the figures). Of course, the values of \(\sigma\) inside these regions vary for each estimator but they all share this similar feature. In this case, by just looking at the colormaps, it is easy to see that the BHM (panel f) and NSB (panel c) estimators are the ones with lowest standard deviation. The aggregated standard deviation \(\overline{\sigma}\), defined in a similar way to the aggregated bias, \[\overline{\sigma}=(\Delta p)^{2}\sum_{p_{00},p_{11}}\sigma(p_{00},p_{11}), \tag{30}\] is plotted in Fig. 4 as a function of the sequence size \(N\). In agreement with the previous visual inspection, the BHM and NSB estimators clearly outperform the rest, even though their advantage becomes less significant as \(N\) increases. Finally, for every particular \(N\), we compute the mean squared error of the entropy estimators, Eq. (4), as a function of \(p_{00}\) and \(p_{11}\). Its aggregated value \[\overline{\mathrm{MSE}}=(\Delta p)^{2}\sum_{p_{00},p_{11}}\mathrm{MSE}(p_{00},p_{11}), \tag{31}\] is plotted as a function of \(N\) in Fig. 5.
Even though the CC estimator outperforms the others when considering only the bias, its large dispersion dominates the mean squared error. Overall, it can be seen that the BHM and NSB estimators surpass the rest when both the bias and standard deviation are considered although, again, their advantage becomes less significant as \(N\) increases. Figure 1: Colormaps representing the bias of the nine entropy estimators reviewed in Sec. II for Markovian binary sequences of length \(N=4\). The values of the transition probabilities \(p(0|0)\) and \(p(1|1)\) vary from \(0.01\) to \(0.99\) with step \(\Delta p=0.02\). a) MLE, b) Miller-Madow, c) Nemenman et al., d) Chao-Shen, e) Grassberger, f) Bonachela et al., g) Shrinkage, h) Chao et al., i) Correlation coverage-adjusted. ### Undersampled regime: block entropy Consider a sequence \(S=X_{1},\ldots,X_{N}\), where each \(X_{i}=0,1\) is a binary variable, with probabilities \(P(X_{i}=1)=p\) and \(P(X_{i}=0)=1-p\). We group the sequence in blocks of size \(n\), such that the \(j\)th block is \(B_{j}=(X_{j},\ldots,X_{j+n-1})\). We denote by \(\{b_{i}\}_{i=1,\ldots,2^{n}}\) the set of all possible blocks. The total number of (overlapping) blocks that can be constructed out of a series of \(N\) elements is \(N_{n}=N-n+1\), while the total number of possible blocks is \(L=2^{n}\). Hence, depending on the values of \(n\) and \(N\), the sequence formed by the \(N_{n}\) blocks, \(S_{n}=B_{1},\ldots,B_{N_{n}}\), will be in an undersampled regime whenever \(N_{n}\ll 2^{n}\). The block entropy \(H_{n}\) is defined by \[H_{n}=-\sum_{i=1}^{2^{n}}p(b_{i})\ln(p(b_{i})), \tag{32}\] where \(p(b_{i})\) is the probability of observing the block \(b_{i}\). The important thing to notice here is that, even if the different outcomes \(X_{1},\ldots,X_{N}\) of the binary variable \(X\) are independent, the block sequence \(B_{1},\ldots,B_{N_{n}}\) obeys a Markov process for \(n\geq 2\). This Markovian property can be easily established by noticing that the block \(B_{j}=(X_{j},\ldots,X_{j+n-1})\) can only be followed by the block \(B_{j+1}=(X_{j+1},\ldots,X_{j+n-1},1)\) with probability \(p\) or by the block \(B_{j+1}=(X_{j+1},\ldots,X_{j+n-1},0)\) with probability \(1-p\). Therefore, the probability of \(B_{j+1}\) depends only on the value of block \(B_{j}\). In Appendix B we show that this dynamics of block sequences in the case that the \(X_{i}\) are i.i.d. is equivalent to that of a new stochastic variable \(Z\) that can take any of \(L=2^{n}\) possible outcomes, \(z_{i}=0,1,\ldots,2^{n}-1\), with the following transition probabilities for each state \(z\): \[p(z_{i+1}|z_{i})=\begin{cases}1-p,&\text{if }z_{i+1}=2z_{i}\,(\text{mod }2^{n}),\\ p,&\text{if }z_{i+1}=2z_{i}\,(\text{mod }2^{n})+1,\\ 0,&\text{otherwise}.\end{cases} \tag{33}\] This type of Markovian system has been related to linguistics and Zipf's law [23]. The previous result can be generalized. If the original sequence \(X_{1},\ldots,X_{N}\) is Markovian of order \(m\geq 1\), then the dynamics of the block sequences \(B_{1},\ldots,B_{N_{n}}\) is also Markovian of order \(1\), for \(n\geq m\). It is well known [5] that the block entropy, when the original sequence \(S\) is constructed out of i.i.d. binary variables, obeys \[H_{n}=nH_{1}, \tag{34}\] where \(H_{1}\) can be calculated using Eq. (25) with \(p(1)=p\) and \(p(0)=1-p\). Therefore, the entropy rate is constant. We now want to compare the performance of the different estimators defined before when computing the block entropy.
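A short sketch (ours, not from the paper) illustrates this construction: it generates an i.i.d. binary sequence, encodes each overlapping block as the integer \(z\) described above (oldest bit most significant), checks the transition rule of Eq. (33), and compares the plug-in block entropy with the i.i.d. prediction of Eq. (34):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, N = 0.3, 6, 200                        # P(X_i = 1) = p, block size n
x = (rng.random(N) < p).astype(int)          # i.i.d. binary sequence

# Encode the N - n + 1 overlapping blocks as integers z in {0, ..., 2**n - 1}.
z = [int("".join(map(str, x[j:j + n])), 2) for j in range(N - n + 1)]

# Eq. (33): z_{i+1} equals 2*z_i (mod 2**n) plus the newly appended bit.
assert all(z[i + 1] - (2 * z[i]) % 2**n in (0, 1) for i in range(len(z) - 1))

# Plug-in block entropy, Eq. (32), vs. the i.i.d. prediction of Eq. (34);
# in this undersampled regime the plug-in value falls well below n*H_1.
_, counts = np.unique(z, return_counts=True)
q = counts / counts.sum()
H1 = -p * np.log(p) - (1 - p) * np.log(1 - p)
print(-np.sum(q * np.log(q)), n * H1)
```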
In this case we cannot use an expression equivalent to Eq. (27), summing over all sequences \(S_{n}\), since the number of possible sequences is \((2^{n})^{N_{n}}\), and it is not possible to enumerate all the sequences even for relatively small values of \(n\) and \(N_{n}\). As an example, we employ in our numerical study \(N_{n}=20\) and \(n=6\), for which the total number of possible sequences is \(2^{120}\). Therefore, we use the sample mean \(\mu_{M}[\hat{H}_{n}]\) and the sample variance \(s_{M}^{2}[\hat{H}_{n}]\) as unbiased estimators of the expected value \(\langle\hat{H}_{n}\rangle\) and the variance \(\sigma^{2}[\hat{H}_{n}]\), respectively. After generating a sample of \(M\) independent sequences \(S_{n}^{i}\), \(i=1,\ldots,M\), and computing the estimator \(\hat{H}_{n}(S_{n}^{i})\) for each of the sequences, these statistics are computed as \[\mu_{M}[\hat{H}_{n}]=\frac{1}{M}\sum_{i=1}^{M}\hat{H}_{n}(S_{n}^{i}), \tag{35}\] \[s_{M}^{2}[\hat{H}_{n}]=\frac{1}{M-1}\sum_{i=1}^{M}(\hat{H}_{n}(S_{n}^{i})-\mu_{M}[\hat{H}_{n}])^{2}.\] Using Eqs. (34) and (35) we can calculate the bias \(B_{n}=\mu_{M}[\hat{H}_{n}]-H_{n}\), the standard deviation \(s_{M}[\hat{H}_{n}]\) and the mean squared error \(s_{M}^{2}[\hat{H}_{n}]+B_{n}^{2}\). In the following we set \(M=10^{4}\) for our simulations. In Fig. 6 we show plots of \(B_{n}\) and \(s_{M}[\hat{H}_{n}]\) as functions of \(p\) ranging from \(0.02\) to \(0.5\) with step \(\Delta p=0.02\), for \(N_{n}=20\). We find that the CC estimator performs remarkably well in terms of bias and we highlight its robustness. Unlike the other estimators, which display significant variations in their bias as \(p\) changes, the CC estimator remains approximately constant at a low value. However, the CC estimator presents a high standard deviation, while the MLE and MM exhibit the lowest standard deviation. For the majority of estimators considered, we observe that the ones with higher bias are the ones with lower deviation. An exception is the HS estimator. Figure 2: Aggregated bias of the entropy estimators for Markovian, binary sequences as a function of the sequence size \(N\). To analyze the changes in the overall performances of the estimators with different values of \(N\), we calculated the aggregated bias as \[\overline{B}_{n}=\Delta p\sum_{p}|B_{n}(p)|. \tag{36}\] Similarly, we calculated the aggregated standard deviation as \[\overline{s}_{n}=\Delta p\sum_{p}s_{M}[\hat{H}_{n}](p), \tag{37}\] and the aggregated mean squared error as \[\overline{\text{MSE}}_{n}=\Delta p\sum_{p}(s_{M}^{2}[\hat{H}_{n}](p)+B_{n}(p)^{2}). \tag{38}\] The resulting plots are shown in Figs. 7, 8 and 9, respectively. It was expected that the total bias of the estimators would decrease with increasing \(N\), and in Fig. 7 it can be seen that this is indeed the case for all estimators except for the BHM estimator. Figure 3: Colormaps representing the standard deviation of the nine entropy estimators reviewed in Sec. II for Markovian binary sequences of length \(N=4\). The values of the transition probabilities \(p(0|0)\) and \(p(1|1)\) vary from 0.01 to 0.99 with step \(\Delta p=0.02\). a) MLE, b) Miller-Madow, c) Nemenman et al., d) Chao-Shen, e) Grassberger, f) Bonachela et al., g) Shrinkage, h) Chao et al., i) Correlation coverage-adjusted. Surprisingly, the bias of this estimator follows a typical pattern of decreasing as the sample size increases, just like the other estimators. However, it takes an unexpected turn starting at \(N=20\), as it begins to increase once more.
A possible reason for this behaviour is that the BHM estimator is designed to minimize the MSE. Similarly to the results obtained for the binary Markovian case, the CC estimator demonstrates in Fig. 7 excellent performance when solely evaluating bias. Even though its performance for a data size of \(N=5\) is not outstanding, it begins to outperform all but the CS, CWJ and HS estimators starting at \(N=10\), and from that point onward, the CC estimator consistently ranks among the top-performing estimators, together with the NSB and CWJ estimators. By comparing Figs. 7 and 8 it can be seen that there is a certain balance: an estimator with a higher bias usually has a lower deviation when compared to others. This is clearly the case for the MLE and MM estimators, as they are the two with worst performances in terms of bias, but they have the lowest aggregated standard deviation for most of the data sizes considered. Figure 4: Aggregated standard deviation of the entropy estimators for Markovian, binary sequences as a function of the sequence size \(N\). Figure 5: Aggregated mean squared error of the entropy estimators for Markovian, binary sequences as a function of the sequence size \(N\). Figure 6: Bias (top) and standard deviation (bottom) of the entropy estimators, when applied to Markovian sequences of length \(N=20\) and \(L=2^{6}\), generated from the transition probabilities given by Eq. (33), as functions of \(p\), which vary from \(0.02\) to \(0.5\) with step \(\Delta p=0.02\). By construction the plot is symmetric around \(p=0.5\). Figure 7: Aggregated bias of the entropy estimators for Markovian sequences in the undersampled regime with \(L=2^{6}\), generated from the transition probabilities given by Eq. (33), as a function of the sequence size \(N\). In this interplay between bias and standard deviation observed for most of the entropy estimators considered here, the NSB estimator is the one that presents the best performance when considering both statistics. From Fig. 9 it is clear that this estimator shows the lowest aggregated mean squared error, although already from \(N=20\) the difference with other estimators, like the CC or the G, becomes vanishingly small. ## IV Conclusions We have made a detailed comparison of nine of the most widely used entropy estimators when applied to Markovian sequences. One crucial difference in the way these estimators are constructed is that only the correlation coverage-adjusted estimator [25] takes into account the order in which the elements appear in the sequence. To calculate this estimator it is necessary to know the entire history of the sequence, while for all other estimators it is sufficient to know the number of times that each element is present in the sequence, independently of the position in which they appear. Remarkably, this novel way of estimating the entropy makes it possible to reduce the bias, even in undersampled regimes. Unfortunately, this estimator presents a large dispersion, which reduces its overall quality. We have found that, when dealing with Markovian sequences, on average, the Nemenman-Shafee-Bialek estimator [49; 50; 51] outperforms the rest when taking into account both the bias and the standard deviation for both analyzed cases, namely, binary sequences and an undersampled regime. Ref. [14] presented a similar analysis but for uniformly distributed sequences of bytes and bits, and concluded that the estimator with lowest mean squared error was the Shrinkage estimator [18].
Hence, when choosing a reliable estimator, it is not only important to consider the amount of data available, but also whether correlations might be present in the sequence. Further analyses should consider Markovian sequences of higher order [59; 60]. Another interesting topic would be systems described with continuous variables [61; 62], where the presence of noise is particularly important. We also stress that there are alternative entropies not considered here [63], for which the existence of accurate estimators is still an open question. Finally, an exciting possibility would be a comparative study of estimators valid for more than one random variable or probability distribution, leading, respectively, to mutual information [64; 65] and relative entropy [66; 42; 67]. ###### Acknowledgements. Partial financial support has been received from the Agencia Estatal de Investigación (AEI, MCI, Spain) MCIN/AEI/10.13039/501100011033 and Fondo Europeo de Desarrollo Regional (FEDER, UE) under Project APASOS (PID2021-122256NB-C21) and the María de Maeztu Program for units of Excellence in R&D, grant CEX2021-001164-M. Figure 8: Aggregated standard deviation of the entropy estimators for Markovian sequences in the undersampled regime with \(L=2^{6}\), generated from the transition probabilities given by Eq. (33), as a function of the sequence size \(N\). Figure 9: Aggregated mean squared error of the entropy estimators for Markovian sequences in the undersampled regime with \(L=2^{6}\), generated from the transition probabilities given by Eq. (33), as a function of the sequence size \(N\).
2302.08465
Towards $3n-4$ in groups of prime order
We show that if $A$ is a subset of a group of prime order $p$ such that $|2A|<2.7652|A|$ and $|A|<1.25\cdot10^{-6}p$, then $A$ is contained in an arithmetic progression with at most $|2A|-|A|+1$ terms, and $2A$ contains an arithmetic progression with the same difference and at least $2|A|-1$ terms. This improves a number of previously known results.
Vsevolod F. Lev, Oriol Serra
2023-02-16T18:18:52Z
http://arxiv.org/abs/2302.08465v1
# Towards \(3n-4\) ###### Abstract. We show that if \(A\) is a subset of a group of prime order \(p\) such that \(|2A|<2.7652|A|\) and \(|A|<1.25\cdot 10^{-6}p\), then \(A\) is contained in an arithmetic progression with at most \(|2A|-|A|+1\) terms, and \(2A\) contains an arithmetic progression with the same difference and at least \(2|A|-1\) terms. This improves a number of previously known results. Key words and phrases: Additive combinatorics, sumsets, small doubling 2020 Mathematics Subject Classification: Primary: 11P70; Secondary: 11B25 Supported by the Spanish Agencia Estatal de Investigación under projects PID2020-113082GBI00 and the Severo Ochoa and María de Maeztu Program for Centers and Units of Excellence in R&D (CEX2020-001084-M). ## 1. Introduction A classical result in additive combinatorics, Freiman's \((3n-4)\)-theorem, says that if \(A\) is a finite set of integers satisfying \(|2A|\leq 3|A|-4\), then \(A\) is contained in an arithmetic progression of length \(|2A|-|A|+1\). It is believed that an analogue of Freiman's theorem holds for the "not-too-large" subsets of the prime-order groups; that is, if \(\mathcal{A}\) is a subset of a group of prime order such that \(|2\mathcal{A}|\leq 3|\mathcal{A}|-4\) then, subject to some mild density restrictions, \(\mathcal{A}\) is contained in an arithmetic progression with at most \(|2\mathcal{A}|-|\mathcal{A}|+1\) terms. The precise form of this (and indeed, somewhat more general) conjecture can be found in [7, Conjecture 19.2]. For an integer \(m\geq 1\), we denote by \(\mathbb{C}_{m}\) the cyclic group of order \(m\). Let \(p\) be a prime. Over sixty years ago, Freiman himself showed [4] that a subset \(\mathcal{A}\subseteq\mathbb{C}_{p}\) is contained in a progression with at most \(|2\mathcal{A}|-|\mathcal{A}|+1\) terms provided that \(|2\mathcal{A}|<2.4|\mathcal{A}|-3\) and \(|\mathcal{A}|<p/35\). Much work has been done to improve Freiman's result in various directions; we list just a few results of this kind. Rødseth [10] showed that the assumption \(|\mathcal{A}|<p/35\) can be relaxed to \(|\mathcal{A}|<p/10.7\). Green and Ruzsa [6] pushed the doubling constant from \(2.4\) up to \(3\), at the cost of a stronger density assumption \(|\mathcal{A}|<p/10^{215}\). In [11], Serra and Zémor obtained a result without any density assumption other than the conjectural one, but at the cost of essentially reducing the doubling coefficient; namely, assuming that \(|2\mathcal{A}|\leq(2+\varepsilon)|\mathcal{A}|\) with \(\varepsilon<0.0001\). An improvement, allowing in particular \(\varepsilon<0.1368\), was obtained by Candela, González-Sánchez, and Grynkiewicz [1]. Further results in this direction were obtained by Candela, Serra, and Spiegel [2]. 3. _There is a proper subgroup_ \(\mathcal{H}<\mathbb{C}_{m}\) _such that_ \(\mathcal{A}\) _meets exactly three_ \(\mathcal{H}\)_-cosets, the cosets are not in an arithmetic progression, and_ \[3|\mathcal{H}|\leq|2\mathcal{A}|-|\mathcal{A}|.\] The following lemma originating from [2] relates the additive dimension of a set with its rectifiability. **Lemma 1**.: _Let \(l\) be a positive integer, and suppose that \(A\) is a set of integers satisfying \(\{0,l\}\subseteq A\subseteq[0,l]\) and \(\gcd(A)=1\).
If there is a proper subgroup \(H<\mathbb{C}_{l}\) such that the image of \(A\) under the composite homomorphism \(\mathbb{Z}\to\mathbb{C}_{l}\to\mathbb{C}_{l}/H\) is rectifiable, then \(\dim(A)\geq 2\)._ Since the proof is just several lines long, we reproduce it here for the convenience of the reader. Proof.: Writing \(m:=l/|H|\), we identify the quotient group \(\mathbb{C}_{l}/H\) with the group \(\mathbb{C}_{m}\), and the map \(\mathbb{Z}\to\mathbb{C}_{l}\to\mathbb{C}_{l}/H\) with \(\varphi_{m}\). Let \(f\colon\varphi_{m}(A)\to\mathbb{Z}\) be Freiman's isomorphism of \(\varphi_{m}(A)\) into the integers. The set \(\{(a,f(\varphi_{m}(a)))\colon a\in A\}\subseteq\mathbb{Z}^{2}\) is easily seen to be isomorphic to \(A\), and to complete the proof we show that this set is not contained in a line. Assuming the opposite, from \(f(\varphi_{m}(0))=f(\varphi_{m}(l))\) we derive that \(f(\varphi_{m}(a))\) attains the same value for all \(a\in A\). The same is then true for \(\varphi_{m}(a)\), showing that \(\varphi_{m}(a)=\varphi_{m}(0)=0\) for any \(a\in A\); that is, all elements of \(A\) are divisible by \(m\), contradicting the assumption \(\gcd(A)=1\), except if \(m=1\) in which case \(H=\mathbb{C}_{l}\). From Theorem 2 and Lemma 1 we deduce the key proposition used in the proof of Theorem 1. **Proposition 1**.: _Let \(A\) be a finite set of integers satisfying \(|2A|<\frac{13}{4}\,|A|-\frac{9}{4}\). If \(\dim(A)=1\), then \(A\) is contained in an arithmetic progression with at most \(2\cdot 10^{5}|A|\) terms._ The proof essentially follows that of [2, Proposition 2.3], with some simplifications, and with Theorem 2 replacing [3, Theorem 1]. Proof of Proposition 1.: Without loss of generality we assume that \(\{0,l\}\subseteq A\subseteq[0,l]\) with an integer \(l>0\), and that \(\gcd(A)=1\). We want to show that \(l<2\cdot 10^{5}|A|\). Aiming at a contradiction, assume that \(l\geq 2\cdot 10^{5}|A|\). Let \(\mathcal{A}:=\varphi_{l}(A)\subseteq\mathbb{C}_{l}\); thus, \(|\mathcal{A}|=|A|-1\). Since \(\varphi_{l}(a)=\varphi_{l}(a+l)\) for any \(a\in A\setminus\{0,l\}\), and \(\varphi_{l}(0)=\varphi_{l}(l)=\varphi_{l}(2l)\), we have \(|2A|\geq|2\mathcal{A}|+|A|\). It follows that \[|2\mathcal{A}|\leq|2A|-|A|<\frac{9}{4}\,|A|-\frac{9}{4}=\frac{9}{4}\,| \mathcal{A}|,\] allowing us to apply Theorem 2. We consider three possible cases corresponding to the three cases in the conclusion of the theorem. Case (i): There is a subgroup \(\mathcal{H}\leq\mathbb{C}_{l}\) such that \(\mathcal{A}\) is contained in an \(\mathcal{H}\)-coset and \(|\mathcal{A}|>C^{-1}|\mathcal{H}|\), where \(C=2\cdot 10^{5}\). Since \(\gcd(A)=1\), the subgroup \(\mathcal{H}\) is not proper. Therefore \(|\mathcal{H}|=l<2\cdot 10^{5}|\mathcal{A}|<2\cdot 10^{5}|A|\), as wanted. Case (ii): There is a proper subgroup \(\mathcal{H}<\mathbb{C}_{l}\) and an arithmetic progression \(\mathcal{P}\subseteq\mathbb{C}_{l}\) of size \(|\mathcal{P}|>1\) such that \(|\mathcal{P}+\mathcal{H}|=|\mathcal{P}||\mathcal{H}|\), \(\mathcal{A}\subseteq\mathcal{P}+\mathcal{H}\), and \((|\mathcal{P}|-1)|\mathcal{H}|\leq|2\mathcal{A}|-|\mathcal{A}|\). 
The image of \(\mathcal{A}\) under the quotient map \(\mathbb{C}_{l}\to\mathbb{C}_{l}/\mathcal{H}\) is contained in an arithmetic progression of size \[|\mathcal{P}|\leq 1+(|2\mathcal{A}|-|\mathcal{A}|)/|\mathcal{H}|\leq 1+\frac{5}{4}\,|\mathcal{A}|/|\mathcal{H}|<\frac{5}{4}\,|A|/|\mathcal{H}|<\frac{1}{2}\,l/|\mathcal{H}|=\frac{1}{2}\,|\mathbb{C}_{l}/\mathcal{H}|.\] The difference of this progression is coprime with \(|\mathbb{C}_{l}/\mathcal{H}|\) in view of the assumption \(\gcd(A)=1\). Hence, the progression is rectifiable, and so is the image of \(A\) contained therein. The result now follows by applying Lemma 1. Case (iii): There is a proper subgroup \(\mathcal{H}<\mathbb{C}_{l}\) such that \(\mathcal{A}\) meets exactly three \(\mathcal{H}\)-cosets, the cosets are not in an arithmetic progression, and \(3|\mathcal{H}|\leq|2\mathcal{A}|-|\mathcal{A}|\). In this case the image of \(A\) in \(\mathbb{C}_{l}/\mathcal{H}\) consists of three elements not in an arithmetic progression; therefore the image is isomorphic, say, to the set \(\{0,1,3\}\subseteq\mathbb{Z}\), and an application of Lemma 1 completes the proof. **Lemma 2** (Freiman [5, Lemma 1.14]).: _For any finite, nonempty set \(A\) of integers, writing \(d:=\dim(A)\), we have_ \[|2A|\geq(d+1)|A|-\binom{d+1}{2}.\] **Lemma 3** (Candela-Serra-Spiegel [2, Corollary 2.6]).: _Let \(A\subseteq\mathbb{Z}\) be a finite set with \(\dim A=2\). If \(|2A|\leq\frac{10}{3}\,|A|-7\), then \(A\) is contained in the union of two arithmetic progressions, \(P_{1}\) and \(P_{2}\), with the same difference, such that \(|P_{1}\cup P_{2}|\leq|2A|-2|A|+3\) and the sumsets \(2P_{1}\), \(P_{1}+P_{2}\) and \(2P_{2}\) are pairwise disjoint._ The following result is, essentially, extracted from [9, Proof of Theorem 3], with a little twist that will help us keep the remainder terms under better control. For a prime \(p\) and a subset \(\mathcal{A}\subseteq\mathbb{C}_{p}\), by \(\widehat{\mathcal{A}}\) we denote the non-normalized Fourier transform of the indicator function of \(\mathcal{A}\): \[\widehat{\mathcal{A}}(\chi)=\sum_{a\in\mathcal{A}}\chi(a);\quad\chi\in\widehat{\mathbb{C}_{p}}.\] The principal character is denoted by \(1\). We let \[\eta_{\mathcal{A}}:=\max\{|\widehat{\mathcal{A}}(\chi)|/|\mathcal{A}|\colon\chi\neq 1\}.\] **Proposition 2**.: _Suppose that \(p\) is a prime, and \(\mathcal{A}\subseteq\mathbb{C}_{p}\) is a nonempty subset of density \(\alpha:=|\mathcal{A}|/p<1/2\). If \(|2\mathcal{A}|=K|\mathcal{A}|\) and \(\mathcal{A}\) is not an arithmetic progression, then_ \[(1-\alpha K)(1-\eta_{\mathcal{A}}^{2})<1-K^{-1}-K^{-2}+(K-(1-2K^{-1})|\mathcal{A}|)/|\mathcal{A}|^{2}.\] Proof.: Let \(\mathcal{S}:=2\mathcal{A}\) and \(\mathcal{D}:=\mathcal{A}-\mathcal{A}\). For a set \(\mathcal{T}\subseteq\mathbb{C}_{p}\) and element \(x\in\mathbb{C}_{p}\), we write \(\mathcal{T}_{x}:=\mathcal{T}\cap(x+\mathcal{T})\); thus, \(|\mathcal{T}_{x}|\) is the number of representations of \(x\) as a difference of two elements of \(\mathcal{T}\), and in particular \(|\mathcal{T}_{0}|=|\mathcal{T}|\). Consider the easily-verified identity \[\frac{1}{p}\,\sum_{\chi\in\widehat{\mathbb{C}_{p}}}|\widehat{\mathcal{A}}(\chi)|^{2}|\widehat{\mathcal{S}}(\chi)|^{2}=\sum_{x\in\mathcal{D}}|\mathcal{A}_{x}||\mathcal{S}_{x}|.
\tag{1}\] For the left-hand side using the Parseval identity we obtain the estimate \[\begin{split}\frac{1}{p}\,\sum_{\chi\in\widehat{\mathbb{C}_{p}}}|\widehat{\mathcal{A}}(\chi)|^{2}|\widehat{\mathcal{S}}(\chi)|^{2}&\leq\frac{1}{p}\,|\mathcal{A}|^{2}|\mathcal{S}|^{2}+\frac{1}{p}\,\eta_{\mathcal{A}}^{2}|\mathcal{A}|^{2}|\mathcal{S}|(p-|\mathcal{S}|)\\ &\leq\alpha K^{2}|\mathcal{A}|^{3}+\eta_{\mathcal{A}}^{2}K|\mathcal{A}|^{3}(1-\alpha K).\end{split} \tag{2}\] To estimate the right-hand side we recall the _Katz-Koester observation_ \(\mathcal{A}+\mathcal{A}_{x}\subseteq\mathcal{S}_{x},\ x\in\mathbb{C}_{p}\). Let \(N\) be the number of elements \(x\in\mathcal{D}\) with \(|\mathcal{A}_{x}|=1\). Notice that \(N\leq|\mathcal{D}|\leq K^{2}|\mathcal{A}|\); here the first estimate is trivial, and the second is the Plünnecke-Ruzsa inequality. From the assumption \(\alpha<1/2\) and the theorems of Cauchy-Davenport and Vosper, we get \[\begin{split}\sum_{x\in\mathcal{D}}|\mathcal{A}_{x}||\mathcal{S}_{x}|&\geq\sum_{x\in\mathcal{D}\setminus\{0\}}|\mathcal{A}_{x}||\mathcal{S}_{x}|+|\mathcal{A}||\mathcal{S}|\\ &\geq\sum_{x\in\mathcal{D}\setminus\{0\}}|\mathcal{A}_{x}||\mathcal{A}+\mathcal{A}_{x}|+|\mathcal{A}||\mathcal{S}|\\ &\geq\sum_{x\in\mathcal{D}\setminus\{0\}}|\mathcal{A}_{x}|(|\mathcal{A}|+|\mathcal{A}_{x}|)-N+|\mathcal{A}||\mathcal{S}|\\ &\geq\sum_{x\in\mathcal{D}}|\mathcal{A}_{x}|(|\mathcal{A}|+|\mathcal{A}_{x}|)-N+|\mathcal{A}||\mathcal{S}|-2|\mathcal{A}|^{2}\\ &\geq|\mathcal{A}|^{3}+\mathsf{E}(\mathcal{A})-K^{2}|\mathcal{A}|+(K-2)|\mathcal{A}|^{2}\end{split} \tag{3}\] where \(\mathsf{E}(\mathcal{A})=\sum_{x\in\mathcal{D}}|\mathcal{A}_{x}|^{2}\) is the additive energy of \(\mathcal{A}\), and where the third estimate follows from Vosper's theorem if \(|\mathcal{A}+\mathcal{A}_{x}|\leq p-2\), and otherwise from \(|\mathcal{A}+\mathcal{A}_{x}|\geq p-1>2\alpha p-1=2|\mathcal{A}|-1\geq|\mathcal{A}|+|\mathcal{A}_{x}|-1\). Combining (1), (2), and (3), and using the basic bound \(\mathsf{E}(\mathcal{A})\geq|\mathcal{A}|^{3}/K\), we get \[\alpha K^{2}|\mathcal{A}|^{3}+\eta_{\mathcal{A}}^{2}K|\mathcal{A}|^{3}(1-\alpha K)\geq(1+K^{-1})|\mathcal{A}|^{3}-(K^{2}-(K-2)|\mathcal{A}|)|\mathcal{A}|\] whence \[\alpha K+\eta_{\mathcal{A}}^{2}(1-\alpha K)\geq K^{-1}+K^{-2}-(K-(1-2K^{-1})|\mathcal{A}|)/|\mathcal{A}|^{2},\] \[(\eta_{\mathcal{A}}^{2}-1)(1-\alpha K)\geq K^{-1}+K^{-2}-1-(K-(1-2K^{-1})|\mathcal{A}|)/|\mathcal{A}|^{2}\] which is equivalent to the inequality sought. **Corollary 1**.: _Let \(\mathcal{A}\), \(\alpha\), and \(K\) be as in Proposition 2. If \(\alpha<10^{-5}\), \(K<2.7652\), and \(|\mathcal{A}|\geq 10\), then \(\eta_{\mathcal{A}}>\frac{8}{13}\,K-1\)._ Proof.: Assuming \(\eta_{\mathcal{A}}\leq\frac{8}{13}\,K-1\) we get \[1-\eta_{\mathcal{A}}^{2}\geq\frac{16}{13}\,K-\frac{64}{169}\,K^{2}=\frac{16}{169}K(13-4K)\] whence \[(1-\alpha K)\frac{16}{169}K(13-4K)<1-K^{-1}-K^{-2}+(K-(1-2K^{-1})|\mathcal{A}|)/|\mathcal{A}|^{2}. \tag{4}\] The left-hand side is decreasing both as a function of \(K\) and as a function of \(\alpha\), while the right-hand side is an increasing function of \(K\). Therefore (4) stays true with \(K\) substituted by \(2.7652\) and \(\alpha\) by \(10^{-5}\); this results in a quadratic inequality in \(|\mathcal{A}|\) which is false for \(|\mathcal{A}|\geq 10\).
There is an arithmetic progression \(\mathcal{P}\subset\mathbb{C}_{p}\) with \(|\mathcal{P}|\leq(p+1)/2\) terms such that_ \[|\mathcal{A}\cap\mathcal{P}|>\frac{1}{2}\,(1+\eta_{\mathcal{A}})|\mathcal{A}|.\] Finally, we need the symmetric case of a version of the \((3n-4)\)-theorem due to Grynkiewicz. **Theorem 3** (Special case of [7, Theorem 7.1]).: _Let \(A\) be a finite set of integers. If \(|2A|\leq 3|A|-4\), then \(A\) is contained in an arithmetic progression with at most \(|2A|-|A|+1\) terms, and \(2A\) contains an arithmetic progression with the same difference and at least \(2|A|-1\) terms._ ## 3. Proof of Theorem 1 Throughout the proof, we identify \(\mathbb{C}_{p}\) with the additive group of the \(p\)-element field; accordingly, the automorphisms of \(\mathbb{C}_{p}\) are identified with the dilates. We write \(d*\mathcal{A}:=\{da\colon a\in\mathcal{A}\}\) where \(d\) is an integer or an element of \(\mathbb{C}_{p}\). For \(u\leq v\), by \([u,v]\) we denote both the set of all integers \(u\leq z\leq v\) and the image of this set in \(\mathbb{C}_{p}\) under the homomorphism \(\varphi_{p}\). We may also occasionally identify integers with their images under \(\varphi_{p}\). For brevity, we write \(p^{\prime}:=(p-1)/2\). Assuming that \(\mathcal{A}\subseteq\mathbb{C}_{p}\) satisfies \(|2\mathcal{A}|\leq K|\mathcal{A}|-3\) with \(K<2.7652\) and \(20\leq|\mathcal{A}|<1.25\cdot 10^{-6}p\), we prove that \(\mathcal{A}\) is contained in an arithmetic progression with at most \((p+1)/2\) terms; equivalently, there is an affine transformation that maps \(\mathcal{A}\) into a subset of an interval of length at most \(p^{\prime}\). This will show that \(\mathcal{A}\) is rectifiable and imply the result in view of Theorem 3. Let \(\mathcal{A}_{0}\) be a subset of \(\mathcal{A}\) of the largest possible size such that \(\mathcal{A}_{0}\) is contained in an arithmetic progression with at most \((p+1)/2\) terms. We observe that, by the maximality of \(|\mathcal{A}_{0}|\), if \(\mathcal{A}_{0}\subseteq[0,l]\) with an integer \(0\leq l\leq p^{\prime}\), then the two intervals of length \(p^{\prime}-l-1\) adjacent to \([0,l]\) "from the left" and "from the right" do not contain any elements of \(\mathcal{A}\); that is, \[[l+p^{\prime}+1,p-1]\cap\mathcal{A}=[l+1,p^{\prime}]\cap\mathcal{A}=\varnothing.\] Therefore \[\mathcal{A}\setminus\mathcal{A}_{0}\subseteq[p^{\prime}+1,p^{\prime}+l]=p^{ \prime}+[1,l]. \tag{5}\] Suppose first that \(\mathcal{A}_{0}\) is contained in an arithmetic progression with at most \(2\cdot 10^{5}|\mathcal{A}_{0}|\) terms. Having applied a suitable affine transformation, we assume that \(\mathcal{A}_{0}\subseteq[0,l]\) with \(l<2\cdot 10^{5}|\mathcal{A}_{0}|\). By (5), we have \[2*\mathcal{A}\subseteq(2*\mathcal{A}_{0})\cup[1,2l-1]\subseteq[0,2l].\] In view of \(2l+1<4\cdot 10^{5}|\mathcal{A}_{0}|\leq p^{\prime}\), this shows that the affine transformation \(z\mapsto 2z\) maps \(\mathcal{A}\) into an interval of length at most \(p^{\prime}\), which is shown above to imply the result. We therefore assume from now on that \(\mathcal{A}_{0}\) is not contained in an arithmetic progression with \(2\cdot 10^{5}|\mathcal{A}_{0}|\) or fewer terms; in particular, the set \(\mathcal{A}_{0}\) itself is not an arithmetic progression. 
By Corollary 1 and Lemma 4, and in view of \(|\mathcal{A}_{0}|\geq\frac{1}{2}|\mathcal{A}|\geq 10\) and \(|\mathcal{A}_{0}|\leq|\mathcal{A}|<1.25\cdot 10^{-6}p<10^{-5}p\), we have \[|\mathcal{A}_{0}|>\frac{4}{13}K|\mathcal{A}|, \tag{6}\] and it follows that \[|2\mathcal{A}_{0}|\leq|2\mathcal{A}|\leq K|\mathcal{A}|-3<\frac{13}{4}|\mathcal{A}_{0}|-\frac{9}{4}. \tag{7}\] Recalling the way the set \(\mathcal{A}_{0}\) has been chosen, we find a set \(A_{0}\subseteq\mathbb{Z}\) such that \(\mathcal{A}_{0}=\varphi_{p}(A_{0})\), \(|A_{0}|=|\mathcal{A}_{0}|\), and \(A_{0}\) is contained in an arithmetic progression with at most \(p^{\prime}+1\) terms; thus, \(A_{0}\) is Freiman-isomorphic to \(\mathcal{A}_{0}\), and as a result, \[|2A_{0}|<\frac{13}{4}\,|A_{0}|-\frac{9}{4}.\] Since \(\mathcal{A}_{0}\) is not contained in an arithmetic progression with \(2\cdot 10^{5}|\mathcal{A}_{0}|\) or fewer terms, neither is \(A_{0}\). (This does not follow from the mere fact that \(A_{0}\) and \(\mathcal{A}_{0}\) are Freiman-isomorphic, but does follow immediately by observing that \(\mathcal{A}_{0}\) is the image of \(A_{0}\) under a group homomorphism.) Consequently, by Proposition 1, we conclude that \(\dim(A_{0})\geq 2\), and then, indeed, \(\dim(A_{0})=2\) by Lemma 2. Applying Lemma 3, we derive that \(A_{0}\) is contained in the union of two arithmetic progressions, say \(P_{1}\) and \(P_{2}\), with the same difference, such that \(|P_{1}\cup P_{2}|\leq|2A_{0}|-2|A_{0}|+3\) and the sumsets \(2P_{1}\), \(P_{1}+P_{2}\) and \(2P_{2}\) are pairwise disjoint. Hence, \(\mathcal{A}_{0}\) is contained in the union of the disjoint progressions \(\mathcal{P}_{1}:=\varphi_{p}(P_{1})\) and \(\mathcal{P}_{2}:=\varphi_{p}(P_{2})\). Let \(\mathcal{A}_{1}=\mathcal{A}_{0}\cap\mathcal{P}_{1}\) and \(\mathcal{A}_{2}=\mathcal{A}_{0}\cap\mathcal{P}_{2}\). Without loss of generality, we assume that \(|\mathcal{A}_{1}|\geq|\mathcal{A}_{0}|/2\). Applying a suitable affine transformation, we can arrange that 1. each of the progressions \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\) has difference \(1\) or is a singleton; 2. there are integers \(0\leq b<c\leq d\) such that \(\mathcal{P}_{1}\subseteq[0,b]\), \(|\mathcal{P}_{1}|=b+1\), and \(\mathcal{P}_{2}\subseteq[c,d]\), \(|\mathcal{P}_{2}|=d-c+1\); 3. the interval \([b,c]\) is at most as long as the interval \([d,p]\): \[c-b\leq p-d. \tag{8}\] Recalling (6), we obtain \[\begin{split}b+d-c=|\mathcal{P}_{1}|+|\mathcal{P}_{2}|-2&\leq|2\mathcal{A}_{0}|-2|\mathcal{A}_{0}|+1\\ &\leq|2\mathcal{A}|-2|\mathcal{A}_{0}|+1<K|\mathcal{A}|-\frac{8}{13}K|\mathcal{A}|=\frac{5}{13}K|\mathcal{A}|<2|\mathcal{A}|,\end{split}\] whence \[b+(d-c)<2|\mathcal{A}|. \tag{9}\] Writing \(n:=|\mathcal{A}|\), we therefore have \[\mathcal{A}_{1}\subseteq[0,b]\subseteq[0,2n],\quad\mathcal{A}_{2}\subseteq c+[0,d-c]\subseteq c+[0,2n], \tag{10}\] and also \[(c-b)+(p-d)=p-(d-c)-b>p-2n.\] Along with (8), the last estimate gives \(p-d\geq p^{\prime}-n+1\) and, consequently, \(d\leq p^{\prime}+n\).
In fact, we have \[4n<d<p^{\prime}-4n; \tag{11}\] here the lower bound follows immediately from the assumption that \(\mathcal{A}_{0}\) is not contained in a progression with \(2\cdot 10^{5}|\mathcal{A}_{0}|\) or fewer terms, and the upper bound follows by observing that if we had \(p^{\prime}-4n\leq d\leq p^{\prime}+n\), in view of (9) this would imply \([c,d]=[d-(d-c),d]\subseteq[d-2n,d]\subseteq p^{\prime}+[-6n,n]\) and, consequently, \(2*\mathcal{A}_{0}\subseteq[0,2b]\cup[-12n-1,2n-1]\subseteq[-12n-1,4n]\), also in contradiction with the same assumption. We have \(2\mathcal{A}_{0}=2\mathcal{A}_{1}\cup(\mathcal{A}_{1}+\mathcal{A}_{2})\cup 2\mathcal{A}_{2}\) where the union is disjoint; therefore, by the Cauchy-Davenport theorem, \[|2\mathcal{A}_{0}|\geq(2|\mathcal{A}_{1}|-1)+(|\mathcal{A}_{1}|+|\mathcal{A}_{2}|-1)+(2|\mathcal{A}_{2}|-1)=3|\mathcal{A}_{0}|-3.\] It follows that for any \(a\in\mathcal{A}\setminus\mathcal{A}_{0}\) we have \((a+\mathcal{A}_{1})\cap(2\mathcal{A}_{0})\neq\varnothing\), as assuming the opposite, \[|2\mathcal{A}|\,\geq\,|2\mathcal{A}_{0}|+|a+\mathcal{A}_{1}|\,\geq\,3|\mathcal{A}_{0}|-3+\frac{1}{2}|\mathcal{A}_{0}|\,>\,\frac{7}{2}\cdot\frac{4}{13}K|\mathcal{A}|-3\,=\,\frac{14}{13}K|\mathcal{A}|-3,\] a contradiction. Therefore, \[\mathcal{A}\setminus\mathcal{A}_{0}\subseteq 2\mathcal{A}_{0}-\mathcal{A}_{1}\subseteq\{0,c,2c\}+[-2n,4n]. \tag{12}\] On the other hand, since \(d<p^{\prime}\), we can apply (5) with \(l=d\) to get \[\mathcal{A}\setminus\mathcal{A}_{0}\subseteq p^{\prime}+[1,d]. \tag{13}\] Comparing (12) and (13), and observing that, in view of (11), both intervals \([-2n,4n]\) and \(c+[-2n,4n]\) are disjoint from the interval \(p^{\prime}+[1,d]\), we conclude that \[\mathcal{A}\setminus\mathcal{A}_{0}\subseteq 2c+[-2n,4n] \tag{14}\] and, consequently, \[\mathcal{A}\subseteq\{0,c,2c\}+[-2n,4n].\] We notice that the set \(2(\mathcal{A}\setminus\mathcal{A}_{0})\) is not disjoint from the set \(2\mathcal{A}_{0}\) as otherwise we would get \[|2\mathcal{A}|\geq|2(\mathcal{A}\setminus\mathcal{A}_{0})|+|2\mathcal{A}_{0}|\geq 2|\mathcal{A}\setminus\mathcal{A}_{0}|-1+3|\mathcal{A}_{0}|-3=2|\mathcal{A}|+|\mathcal{A}_{0}|-4\geq\left(2+\frac{4}{13}K\right)|\mathcal{A}|-4>K|\mathcal{A}|-3.\] Since \(2(\mathcal{A}\setminus\mathcal{A}_{0})\subseteq 4c+[-4n,8n]\) by (14), and \(2\mathcal{A}_{0}\subseteq\{0,c,2c\}+[0,4n]\) in view of (10), we conclude that \(kc\in[-8n,8n]\) for some \(k\in\{2,3,4\}\). Therefore \(k*\mathcal{A}_{0}\subseteq\{0,kc\}+[0,2kn]\subseteq[-8n,(8+2k)n]\). Hence, \(\mathcal{A}_{0}\) is contained in an arithmetic progression with at most \((16+2k)n+1<25n<2\cdot 10^{5}|\mathcal{A}_{0}|\) terms, a contradiction.
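As a side remark, the final step in the proof of Corollary 1 above, namely that inequality (4) fails for all \(|\mathcal{A}|\geq 10\) once \(K=2.7652\) and \(\alpha=10^{-5}\) are substituted, is elementary to verify numerically. The following small Python sketch (our own check, not part of the paper) evaluates both sides of (4) and shows how tight the threshold \(|\mathcal{A}|=10\) is:

```python
# Numerical check of inequality (4) at the extreme values K = 2.7652 and
# alpha = 1e-5: it should hold for n = 9 but fail for every n >= 10.
K, alpha = 2.7652, 1e-5
lhs = (1 - alpha * K) * (16 / 169) * K * (13 - 4 * K)
for n in (9, 10, 100, 10**4):                  # n plays the role of |A|
    rhs = 1 - 1 / K - 1 / K**2 + (K - (1 - 2 / K) * n) / n**2
    print(n, lhs < rhs)                        # True only for n = 9
```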
2305.03597
A Thru-free Multiline Calibration
This paper proposes a modification to the traditional multiline thru-reflect-line (TRL) or line-reflect-line (LRL) calibration method used for vector network analyzers (VNAs). Our proposed method eliminates the need for a thru (or line) standard by using an arbitrary transmissive two-port device in combination with an additional reflect standard. This combination of standards allows us to arbitrarily set the location of the calibration plane using physical artifacts. In contrast to the standard multiline TRL method, the suggested approach avoids a post-processing step to shift the calibration plane if a line standard is used. We demonstrate our proposed method with measurements on a printed circuit board (PCB) and compare it to the multiline TRL method with a perfectly defined thru.
Ziad Hatab, Michael Ernst Gadringer, Wolfgang Bösch
2023-05-05T15:01:16Z
http://arxiv.org/abs/2305.03597v2
# A Thru-free Multiline Calibration ###### Abstract This paper proposes a modification to the traditional multiline thru-reflect-line (TRL) or line-reflect-line (LRL) calibration method used for vector network analyzers (VNAs). Our proposed method eliminates the need for a thru (or line) standard by using an arbitrary transmissive two-port device in combination with an additional reflect standard. This combination of standards allows us to arbitrarily set the location of the calibration plane using physical artifacts. In contrast to the standard multiline TRL method, the suggested approach avoids a post-processing step to shift the calibration plane if a line standard is used. We demonstrate our proposed method with measurements on a printed circuit board (PCB) and compare it to the multiline TRL method with a perfectly defined thru. vector network analyzer, calibration, microwave measurement, millimeter-wave ## I Introduction The precision of measurements taken by a vector network analyzer (VNA) heavily relies on the calibration method's accuracy. Over the years, numerous improvements have been made to VNA calibration methods [1]. Since its inception in 1979 [2], the thru-reflect-line (TRL) calibration method is still regarded as the most precise method for traceable VNA calibration. Although the TRL method is inherently bandlimited, an extension of the method called multiline TRL was proposed, which uses multiple line standards of varying lengths to expand the usable frequency range [3]. For both TRL and multiline TRL, a fully defined thru standard (a zero-length line) is required to determine the location of the calibration plane. However, in some applications, such a thru standard cannot be realized. For example, in on-wafer applications, the calibration plane should be at the tip of the probes [4]. Undesirable effects can occur if the probes are placed too close to each other [5, 6]. In waveguide applications, the calibration plane is typically set at the adapter flanges. Although it is possible to create a thru standard by connecting the flanges directly, this results in a short length of the line standard at very high frequencies, which can be difficult to machine and handle [7, 8, 9]. To avoid using a thru standard, a common solution is to define the calibration plane using a line standard of known length. Like the thru standard, this line must be fully specified. This method is called the line-reflect-line (LRL) method [10]. During the calibration process, the chosen line standard is treated as a thru standard, which places the calibration plane at the center of this line standard. The reference plane is then shifted to the desired location using the propagation constant extracted from the calibration procedure. The main challenge with this technique is the need for an accurate measurement of the propagation constant, which depends on knowledge of the exact length of the line standards. Additionally, the accuracy of the extracted propagation constant also depends on the choice of the length of the line standards. For example, a longer line may be useful in reducing uncertainty in the extracted propagation constant. However, a long line may be impractical due to physical limitations. Another calibration method that does not require a thru standard is the short-open-load-reciprocal (SOLR) method [11]. Unlike the LRL method, SOLR does not require a definition of a thru or line standard; instead, it uses any transmissive reciprocal device. 
With SOLR calibration, the location of the calibration plane is explicitly defined by the SOL standards at each port, which must be fully characterized. Therefore, the SOLR method's accuracy depends on the definition of the SOL standards. Our proposed method eliminates the multiline calibration method's need for a thru standard. Instead, we use an arbitrary transmissive two-port device and an additional reflect standard to replace the thru standard. These standards physically define the location of the calibration plane. Although the suggested approach demands an additional reflect standard, all required standards are partially defined. This is in contrast to the multiline TRL (or LRL) method, where the thru (or line) standard is assumed to be perfectly defined. The remainder of this article is organized as follows. Section II presents the application of the thru standard in multiline TRL calibration. In Section III, we derive the mathematical equations used to perform a thru-free multiline calibration. In Section IV we experimentally compare our method with traditional multiline TRL calibration. Finally, we provide a summary in Section V. ## II The Thru Standard in TRL calibration The error box model of a two-port VNA measuring a line standard is depicted in Fig. 1. The error box model can be simplified into seven terms as follows: \[\mathbf{M}_{i}=\underbrace{k_{a}k_{b}}_{k}\underbrace{\begin{bmatrix}a_{11}&a_{12}\\ a_{21}&1\end{bmatrix}}_{\mathbf{A}}\begin{bmatrix}e^{-\gamma l_{i}}&0\\ 0&e^{\gamma l_{i}}\end{bmatrix}\underbrace{\begin{bmatrix}b_{11}&b_{12}\\ b_{21}&1\end{bmatrix}}_{\mathbf{B}}, \tag{1}\] where \(\mathbf{A}\) and \(\mathbf{B}\) are the one-port error boxes from each port, and \(k\) is the 7th error term that describes the transmission between the two ports. The first step in formulating TRL calibration is to set up the eigenvalue problem. This can be accomplished straightforwardly by taking measurements of two line standards with the same cross-section but different lengths (one of which can be a zero-length line, i.e., a thru). For example, the eigenvalue problem for the forward direction in terms of the matrix \(\mathbf{A}\) is given by \[\mathbf{M}_{i}\mathbf{M}_{j}^{-1}=\mathbf{A}\begin{bmatrix}e^{-\gamma(l_{i}-l_{j})}&0\\ 0&e^{\gamma(l_{i}-l_{j})}\end{bmatrix}\mathbf{A}^{-1}. \tag{2}\] This eigenvalue problem can also be applied in the reverse direction with respect to \(\mathbf{B}\). Furthermore, a generalized weighted eigenvalue problem that combines multiple line standards at once can be derived, as discussed in [12]. In both the TRL and the multiline TRL calibration, the eigenvectors solve for the error boxes. Therefore, we can only solve for the error boxes in a normalized way, since eigenvectors are only unique up to a scalar factor. Specifically, we can obtain the following normalized error boxes from the eigenvectors: \[\widetilde{\mathbf{A}}=\begin{bmatrix}1&a_{12}\\ a_{21}/a_{11}&1\end{bmatrix},\qquad\widetilde{\mathbf{B}}=\begin{bmatrix}1&b_{12}/b_{11}\\ b_{21}&1\end{bmatrix}. \tag{3}\] In order to recover all error terms of the VNA and denormalize the error boxes, we need to measure a thru standard and a symmetric reflect standard, as illustrated in Fig. 2. The thru standard is used to calculate the terms \(k\) and \(a_{11}b_{11}\), while the symmetric reflect standard is used to calculate the term \(a_{11}/b_{11}\). By combining these terms with the normalized error terms, we can accurately recover all error terms.
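As a numerical illustration of Eqs. (1)-(3), the following Python sketch (our own, with made-up error boxes; it is not code from the paper or from the implementation referenced in [12]) synthesizes two line measurements and shows that the eigenvalues of Eq. (2) reveal the line propagation, while the eigenvectors give the columns of \(\mathbf{A}\) up to scale:

```python
import numpy as np

gamma = 20 + 500j                 # assumed propagation constant (1/m)
l1, l2 = 0.0, 3.0e-3              # a thru (l = 0) and a 3 mm line

def line(l):
    # Ideal line standard in T-parameters
    return np.diag([np.exp(-gamma * l), np.exp(gamma * l)])

A = np.array([[1.1 + 0.2j, 0.1 - 0.05j], [0.05 + 0.1j, 1.0]])    # error box A
B = np.array([[0.9 - 0.1j, -0.02 + 0.1j], [0.08 - 0.03j, 1.0]])  # error box B
k = 0.7 + 0.3j                                                   # 7th error term

M1 = k * A @ line(l1) @ B         # Eq. (1) for the two line standards
M2 = k * A @ line(l2) @ B

w, V = np.linalg.eig(M1 @ np.linalg.inv(M2))    # Eq. (2): note that k cancels
print(w)                                        # exp(-+gamma*(l1 - l2))
print(V[:, 0] / V[1, 0], V[:, 1] / V[1, 1])     # columns of A up to scale, Eq. (3)
```

This makes visible why only the normalized error boxes of Eq. (3) are recoverable at this stage: any rescaling of the eigenvectors leaves Eq. (2) unchanged.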
Using the measurement of the thru standard, we can calculate the terms \(k\) and \(a_{11}b_{11}\) directly by applying the normalized error boxes as follows: \[\widetilde{\mathbf{A}}^{-1}\mathbf{M}_{\text{thru}}\widetilde{\mathbf{B}}^{-1}=\begin{bmatrix}ka_{11}b_{11}&0\\ 0&k\end{bmatrix}, \tag{4}\] where \(a_{11}b_{11}\) is calculated by taking the ratio of the diagonal elements as \(a_{11}b_{11}=ka_{11}b_{11}/k\). Using the symmetric reflect measurement, we can derive two equations, one for each port, that describe the input reflection coefficient. The equation for the left port (port \(\mathbf{A}\)) is as follows: \[\Gamma_{a}=\frac{a_{12}+a_{11}\Gamma}{1+a_{21}\Gamma}\quad\implies\quad a_{11}\Gamma=\frac{\Gamma_{a}-a_{12}}{1-(a_{21}/a_{11})\Gamma_{a}}, \tag{5}\] and from the right port (port \(\mathbf{B}\)) we have \[\Gamma_{b}=\frac{b_{11}\Gamma-b_{21}}{1-b_{12}\Gamma}\quad\implies\quad b_{11}\Gamma=\frac{\Gamma_{b}+b_{21}}{1+(b_{12}/b_{11})\Gamma_{b}}, \tag{6}\] where \(\Gamma_{a}\) and \(\Gamma_{b}\) are the raw measurements of the input reflection as seen from each port, and \(\Gamma\) is the reflection coefficient of the symmetric reflect standard, which is not specified during calibration. By combining both (5) and (6), we can cancel the term \(\Gamma\) and solve for \(a_{11}/b_{11}\) as follows: \[\frac{a_{11}\Gamma}{b_{11}\Gamma}=\frac{a_{11}}{b_{11}}=\frac{\Gamma_{a}-a_{12}}{1-(a_{21}/a_{11})\Gamma_{a}}\frac{1+(b_{12}/b_{11})\Gamma_{b}}{\Gamma_{b}+b_{21}}. \tag{7}\] We can solve for \(a_{11}\) and \(b_{11}\) by using the values of \(a_{11}b_{11}\) and \(a_{11}/b_{11}\) as follows: \[a_{11}=\pm\sqrt{\frac{a_{11}}{b_{11}}a_{11}b_{11}};\quad b_{11}=a_{11}\frac{b_{11}}{a_{11}}. \tag{8}\] To resolve the sign ambiguity, we select the answer closest to an estimate of \(\Gamma\). We can apply the smallest Euclidean distance metric between the measured and estimated reflection coefficients to select the correct sign, as summarized in (9): \[a_{11}=\operatorname*{argmin}_{a_{11}}\left\{\left|\frac{\Gamma_{a}-a_{12}}{\pm a_{11}(1-(a_{21}/a_{11})\Gamma_{a})}-\Gamma_{\text{est}}\right|\right\}. \tag{9}\] Finally, we denormalize the error boxes as follows: \[\mathbf{A}=\begin{bmatrix}a_{11}&a_{12}\\ a_{21}&1\end{bmatrix}=\begin{bmatrix}1&a_{12}\\ a_{21}/a_{11}&1\end{bmatrix}\begin{bmatrix}a_{11}&0\\ 0&1\end{bmatrix} \tag{10a}\] \[\mathbf{B}=\begin{bmatrix}b_{11}&b_{12}\\ b_{21}&1\end{bmatrix}=\begin{bmatrix}b_{11}&0\\ 0&1\end{bmatrix}\begin{bmatrix}1&b_{12}/b_{11}\\ b_{21}&1\end{bmatrix}. \tag{10b}\] In summary, if we can compute the terms \(k\) and \(a_{11}b_{11}\) without relying on the availability of a thru standard, we have achieved our goal.

Fig. 1: Two-port VNA error box model that illustrates the measurement of a line standard. All matrices are provided as T-parameters.

Fig. 2: Two-port VNA error box model that illustrates the measurement of a symmetric reflect standard and a thru standard.

## III Derivation of Thru-free Calibration

Instead of explicitly defining a thru standard, we combine a reflect standard with an unspecified two-port network standard. We assume that the eigenvalue problem from the various line standards has already been solved and that the normalized error terms have been derived. To perform the denormalization and determine the error terms \(a_{11}\) and \(b_{11}\), we use the standards shown in Fig. 3. To derive \(a_{11}b_{11}\), it is not necessary that the unknown network be reciprocal. Any transmissive network (i.e., \(|S_{12}|,|S_{21}|>0\)) will suffice. By applying the normalized error boxes to the network's measurement, we obtain the following expression:
\[\widetilde{\mathbf{A}}^{-1}\mathbf{M}_{\mathrm{net}}\widetilde{\mathbf{B}}^{-1}=k\begin{bmatrix}a_{11}&0\\ 0&1\end{bmatrix}\begin{bmatrix}-\frac{\det(\mathbf{S})}{S_{21}}&\frac{S_{11}}{S_{21}}\\ \frac{-S_{22}}{S_{21}}&\frac{1}{S_{21}}\end{bmatrix}\begin{bmatrix}b_{11}&0\\ 0&1\end{bmatrix}, \tag{11}\] where \(\det\left(\mathbf{S}\right)=S_{11}S_{22}-S_{21}S_{12}\). Converting back to S-parameters yields the following result: \[\mathrm{t2s}\left(\widetilde{\mathbf{A}}^{-1}\mathbf{M}_{\mathrm{net}}\widetilde{\mathbf{B}}^{-1}\right)=\begin{bmatrix}a_{11}S_{11}&a_{11}b_{11}S_{12}k\\ S_{21}/k&b_{11}S_{22}\end{bmatrix}. \tag{12}\] From the symmetric reflect measurement, we can derive two equations similar to the TRL calibration as presented in (5) and (6): \[a_{11}\Gamma=\frac{\Gamma_{a}-a_{12}}{1-(a_{21}/a_{11})\Gamma_{a}},\quad b_{11}\Gamma=\frac{\Gamma_{b}+b_{21}}{1+(b_{12}/b_{11})\Gamma_{b}}. \tag{13}\] Finally, we use the last standard, which is the network-reflect standard. For the left configuration (i.e., port \(\mathbf{A}\)), we can derive the input reflection coefficient in a similar way to the previous case, by recognizing that the reflect standard is cascaded with the unknown network. This is given as follows: \[a_{11}\frac{\Gamma S_{11}S_{22}-\Gamma S_{12}S_{21}-S_{11}}{\Gamma S_{22}-1}=\frac{\Gamma_{N,a}-a_{12}}{1-(a_{21}/a_{11})\Gamma_{N,a}}. \tag{14}\] A similar equation can be derived if we consider the measurement from the right port (i.e., port \(\mathbf{B}\)), which is given as follows: \[b_{11}\frac{\Gamma S_{11}S_{22}-\Gamma S_{12}S_{21}-S_{22}}{\Gamma S_{11}-1}=\frac{\Gamma_{N,b}+b_{21}}{1+(b_{12}/b_{11})\Gamma_{N,b}}. \tag{15}\] From (12)-(15), we can summarize the following seven equations relating the model and measurement: \[m_{1}=a_{11}\Gamma, \tag{16a}\] \[m_{2}=a_{11}S_{11}, \tag{16b}\] \[m_{3}=b_{11}\Gamma, \tag{16c}\] \[m_{4}=b_{11}S_{22}, \tag{16d}\] \[m_{5}=a_{11}b_{11}S_{21}S_{12}, \tag{16e}\] \[m_{6}=\frac{a_{11}\left(\Gamma S_{11}S_{22}-\Gamma S_{12}S_{21}-S_{11}\right)}{\Gamma S_{22}-1}, \tag{16f}\] \[m_{7}=\frac{b_{11}\left(\Gamma S_{11}S_{22}-\Gamma S_{12}S_{21}-S_{22}\right)}{\Gamma S_{11}-1}. \tag{16g}\] The value of \(m_{5}\) in (16e) was calculated by multiplying the off-diagonal elements of the S-parameters in (12). We begin the derivation of \(a_{11}b_{11}\) with the measurement of \(m_{6}\) from (16f). First, we distribute \(a_{11}\) over the numerator, \[m_{6}=\frac{a_{11}\Gamma S_{11}S_{22}-a_{11}\Gamma S_{12}S_{21}-S_{11}a_{11}}{\Gamma S_{22}-1}. \tag{17}\] Then, we substitute \(m_{1}=a_{11}\Gamma\) and \(m_{2}=a_{11}S_{11}\), which gives us \[m_{6}=\frac{m_{1}S_{11}S_{22}-m_{1}S_{12}S_{21}-m_{2}}{\Gamma S_{22}-1}. \tag{18}\] Subsequently, we multiply both the numerator and the denominator by \(a_{11}b_{11}\). This gives us the following expression: \[m_{6}=\frac{m_{1}a_{11}b_{11}S_{11}S_{22}-m_{1}a_{11}b_{11}S_{12}S_{21}-m_{2}a_{11}b_{11}}{a_{11}b_{11}\Gamma S_{22}-a_{11}b_{11}}. \tag{19}\] We simplify the above expression by substituting the corresponding values of \(m_{1}\), \(m_{2}\), \(m_{3}\), \(m_{4}\), and \(m_{5}\). This results in the following expression in terms of \(a_{11}b_{11}\): \[m_{6}=\frac{m_{1}m_{2}m_{4}-m_{1}m_{5}-m_{2}a_{11}b_{11}}{m_{1}m_{4}-a_{11}b_{11}}. \tag{20}\]
Lastly, we rearrange the above expression and solve for \(a_{11}b_{11}\) as follows: \[a_{11}b_{11}=\frac{m_{1}m_{2}m_{4}-m_{1}m_{5}-m_{6}m_{1}m_{4}}{m_{2}-m_{6}}. \tag{21}\] The above expression for \(a_{11}b_{11}\) can be further simplified as follows: \[a_{11}b_{11}=m_{1}m_{4}-\frac{m_{1}m_{5}}{m_{2}-m_{6}}. \tag{22}\] It is worth noting that in (22), we did not need the measurement \(m_{7}\) from (16g). However, the same process can be done for \(m_{7}\) without the need for \(m_{6}\). Basically, in the above result, we swap \(m_{6}\leftrightarrow m_{7}\), \(m_{1}\leftrightarrow m_{3}\), and \(m_{2}\leftrightarrow m_{4}\). This results in the alternative solution for \(a_{11}b_{11}\) as follows: \[a_{11}b_{11}=m_{3}m_{2}-\frac{m_{3}m_{5}}{m_{4}-m_{7}}. \tag{23}\] If both \(m_{6}\) and \(m_{7}\) are available, we can establish an average measurement for \(a_{11}b_{11}\) or compare the two results of \(a_{11}b_{11}\) for a calibration consistency check. Once \(a_{11}b_{11}\) has been solved, we can use equations (7)-(9) to solve for \(a_{11}\) and \(b_{11}\), and then denormalize the error boxes using (10).

Fig. 3: Error box model of the required standards for the denormalization of the error terms in the thru-free calibration method.

To complete the calibration, we only need to solve for the transmission error term \(k\). We can use the same method as in SOLR calibration [11] by calculating \(k\) through the determinant of the single-port corrected measurement of a two-port reciprocal device, i.e., \(S_{21}=S_{12}\). For a reciprocal network, like the line standards, the calibrated measurement by the single-port error boxes is given by: \[\boldsymbol{A}^{-1}\boldsymbol{M}_{\mathrm{recip}}\boldsymbol{B}^{-1}=\frac{k}{S_{21}}\begin{bmatrix}S_{21}^{2}-S_{11}S_{22}&S_{11}\\ -S_{22}&1\end{bmatrix}. \tag{24}\] By taking the determinant from both sides, we obtain \[\det\left(\boldsymbol{A}^{-1}\boldsymbol{M}_{\mathrm{recip}}\boldsymbol{B}^{-1}\right)=k^{2}. \tag{25}\] Hence, \(k\) is solved as follows: \[k=\pm\sqrt{\det\left(\boldsymbol{A}^{-1}\boldsymbol{M}_{\mathrm{recip}}\boldsymbol{B}^{-1}\right)}. \tag{26}\] To determine the appropriate sign, we choose the answer closest to a known estimate of the reciprocal network. This estimate could be based on the line standard through the estimated value of the propagation constant or material properties. Furthermore, since all line standards are reciprocal, we can compute \(k^{2}\) from all of them and determine an average value. In Table I, we present a summary comparison of the definition of standards in the multiline TRL calibration and the thru-free calibration.
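The algebra of this section condenses into a short numerical routine. The sketch below (function and variable names are ours, not from any library, operating on complex scalars at a single frequency point) evaluates (22) and (23) for the combined term and then resolves the sign ambiguity of (8) against an estimate of the reflect standard, mirroring (9):

```python
import numpy as np

def solve_a11b11(m1, m2, m3, m4, m5, m6, m7=None):
    """Combined error term a11*b11 from the measurables of (16),
    via the closed form (22) and, when m7 is available, (23)."""
    sol = m1 * m4 - m1 * m5 / (m2 - m6)              # eq. (22)
    if m7 is not None:
        sol_alt = m3 * m2 - m3 * m5 / (m4 - m7)      # eq. (23)
        sol = 0.5 * (sol + sol_alt)  # average; the difference is a consistency check
    return sol

def split_error_terms(a11b11, a11_over_b11, Gamma_a, a12, a21_over_a11, Gamma_est):
    """Separate a11 and b11 as in (8), selecting the sign whose implied
    reflect coefficient is closest to the estimate, as in (9)."""
    candidates = np.array([1.0, -1.0]) * np.sqrt(a11_over_b11 * a11b11 + 0j)
    Gamma_implied = (Gamma_a - a12) / (candidates * (1 - a21_over_a11 * Gamma_a))
    a11 = candidates[np.argmin(np.abs(Gamma_implied - Gamma_est))]
    return a11, a11b11 / a11
```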
## IV Experiment

### _Measurement setup_

In this experiment, we fabricated a set of multiline standards as microstrip lines on a printed circuit board (PCB). The PCB consists of four copper layers, with the top two layers used for the fabricated microstrip lines. The substrate material is Panasonic Megtron 7, with a specified dielectric constant of 3.4 and a loss tangent of 0.002. The multiline TRL kit includes multiple microstrip lines with lengths of \(\{0,0.5,1.5,2,3,5,6.5\}\,\mathrm{mm}\), and a reflect standard implemented as a short using microvias. The microstrip lines' probing pads are implemented using a low-return-loss design of ground-signal-ground (GSG) pads, as discussed in [13]. The microstrip lines have a width of \(0.107\,\mathrm{mm}\) and a substrate thickness of \(0.05\,\mathrm{mm}\), corresponding to an average characteristic impedance of \(50\,\Omega\). We use the same line and reflect standards for the thru-free kit as in the multiline TRL kit. Additionally, we use a network standard implemented as a \(1\,\mathrm{mm}\) line and a network-reflect standard implemented as an offset short, which is implemented using the same microvia, offset by \(1\,\mathrm{mm}\). The network-reflect standard is implemented for both ports to demonstrate that the usage of either port will result in the same solution. In addition to the calibration standards, we included a device under test (DUT) for comparison purposes. The DUT is implemented as a stepped-impedance line with a length of \(5\,\mathrm{mm}\) and a width of \(0.22\,\mathrm{mm}\), corresponding to an average characteristic impedance of \(30\,\Omega\). The instrumentation setup consists of an Anritsu VectorStar VNA with millimeter-wave extensions to support frequencies up to \(150\,\mathrm{GHz}\). The probes used are ACP probes from FormFactor with a GSG-pitch of \(150\,\mathrm{\mu m}\). The measurement was performed on the SUMMIT200 probe station. A photograph of the measurement setup is shown in Fig. 4.

Fig. 4: Measurement setup depicting the ACP probes and the PCB carrying the calibration standards and DUT.

### _Results and discussion_

The raw S-parameter measurements of the calibration standards were collected over multiple frequency sweeps. For each standard, 25 frequency sweeps were collected at an IF-bandwidth of \(100\,\mathrm{Hz}\) and a source power of \(-10\,\mathrm{dBm}\). Each frequency sweep covers the range \(1-150\,\mathrm{GHz}\) with 299 frequency points. The collected data was processed in Python with the help of the package _scikit-rf_ [14], and the multiline TRL algorithm from [12] was used. We also applied the same eigenvalue formulation from [12] for the thru-free multiline calibration. Both methods result in the same normalized error terms, as a thru definition is not required in the formulation of the eigenvalue problem. We denormalized the error terms for the multiline TRL calibration using the reflect standard (short) and the thru standard (\(0\,\mathrm{mm}\) line) to define the location of the calibration plane at the center of the thru standard. Thereafter, we used the same reflect standard (short) for the thru-free calibration, in addition to the network standard implemented as a \(1\,\mathrm{mm}\) line and the network-reflect standard implemented as an offset short, with the offset being identical to the network standard (\(1\,\mathrm{mm}\) line). Furthermore, since we collected multiple sweeps for each standard, we computed the covariance matrix due to instrument noise and linearly propagated its uncertainty through both calibrations using the technique discussed in [15, 16]. In Fig. 5, we show the S-parameters of the calibrated DUT (\(5\,\mathrm{mm}\) long \(30\,\Omega\) stepped-impedance line) using both calibration methods. For the thru-free method, we investigated both cases when using the network-reflect standard from either port. Generally, both the multiline TRL and the thru-free calibration methods show overlapping agreement. However, when we look at the uncertainty bounds, we see that for the calibrated \(S_{11}\), we obtain similar uncertainty bounds for both calibration methods, whereas for the calibrated \(S_{21}\) measurement, we see that the uncertainty in the magnitude is slightly higher for the thru-free method at frequencies above \(110\,\mathrm{GHz}\).
More notably, the uncertainty of the thru-free method is much higher when using the network-reflect standard at port \(\mathbf{A}\). The noise impact on the thru-free calibration becomes noticeable above \(110\,\mathrm{GHz}\), but the calibration algorithm does not cause this. Instead, it is attributed to the VNA itself, specifically its poor performance at port 1 (i.e., port \(\mathbf{A}\)). Measurements taken at this port are always noisier compared to the opposite port, which explains why the uncertainty bounds are much higher when using the network-reflect standard at port \(\mathbf{A}\) than when using the network-reflect standard at port \(\mathbf{B}\). Appendix A provides a more detailed analysis of the noise imbalance between the ports of the Anritsu ME7838D VNA. While the noise sensitivity between different ports is directly related to the VNA, it is still important to analyze the uncertainty contribution from each calibration standard due to VNA noise to the calibrated DUT. To do this, we consider the uncertainty budget due to each standard in the calibrated DUT. Both calibration methods use the same line standards in the exact same way in formulating the eigenvalue problem; therefore, these standards are not included in the budget analysis. Instead, we consider the thru and reflect standards for the multiline TRL calibration and the reflect, network, and network-reflect standards for the thru-free method. In Fig. 6, we show the uncertainty contribution from these standards to the calibrated S-parameters of the DUT. For the magnitude response, we have plotted the uncertainties in linear scale, as it is easier to interpret than the dB scale. Regarding the uncertainties in \(S_{11}\), all standards exhibit similar contributions in terms of magnitude and phase, except for the network-reflect standard at port \(\mathbf{B}\), which is a single-port measurement that inherently has less noise than the other port. It should be noted that the reflect standard for both the multiline TRL and the thru-free method is a two-port measurement; hence the high noise from port \(\mathbf{A}\) is present. As for the uncertainty contribution in \(S_{21}\), we observe that all calibration standards contribute to the uncertainty for the thru-free method. In contrast, for multiline TRL calibration, the reflect standard has no impact at all. This behavior may seem counterintuitive since the reflect standard is part of the calibration. However, this result is not surprising since the reflect standard contributes to deriving the ratio error term \(a_{11}/b_{11}\), which, in turn, allows the separation of the error terms \(a_{11}\) and \(b_{11}\). We can demonstrate that the calibrated \(S_{21}\) can be entirely calculated without the requirement of the reflect standard. This is because only the normalized error terms \(\{a_{12},a_{21}/a_{11},b_{21},b_{12}/b_{11}\}\), the combined error term \(a_{11}b_{11}\), and the transmission error term \(k\) are needed to describe the calibrated \(S_{21}\) response. A derivation of this relationship is presented in Appendix B. As an additional analysis, we selected a line with a non-zero length as the reference in the multiline TRL calibration. In the example mentioned previously, the reference line was a thru standard; thus, post-processing to shift the calibration plane was unnecessary. For the current example, we chose the \(6.5\,\mathrm{mm}\) line as the reference line in multiline TRL calibration. The calibrated DUT result is shown in Fig. 7.
As the plot shows, we need to shift the calibration plane backward using the propagation constant derived from the calibration to establish the reference plane at the desired location. However, for the thru-free method, no changes are made, and the calibration plane is automatically set by the measured network, network-reflect, and reflect standards. Therefore, in the thru-free method, we establish the calibration plane location using physical artifacts, whereas in multiline TRL, if a thru standard is not utilized, we must shift the calibration plane location in post-processing utilizing the derived propagation constant.

Fig. 5: The calibrated measurement of the \(5\,\mathrm{mm}\) long \(30\,\Omega\) stepped-impedance line. The calibrated measurements of \(S_{22}\) and \(S_{12}\) are not shown, as they behave similarly to \(S_{11}\) and \(S_{21}\). The uncertainty bounds correspond to a 95 % coverage of a Gaussian distribution due to noise from the VNA propagated linearly through the calibrations.

Fig. 6: Uncertainty budget of the calibrated stepped-impedance line due to the calibration standards. The uncertainty is represented as 95 % coverage of a Gaussian distribution. The network-reflect standard is from port \(\mathbf{A}\) as the uncertainty from port \(\mathbf{B}\) is similar. The traces have been smoothed for readability using a Savitzky-Golay filter [17] with a window size of 9 and a polynomial order of 2.

## V Conclusion

We presented a modified version of multiline TRL calibration that eliminates the need for explicitly defining a thru standard. The proposed thru-free multiline calibration was compared to multiline TRL using measurements of microstrip lines fabricated on a PCB with a stepped-impedance DUT for verification. We observed excellent agreement between the proposed method and the multiline TRL calibration when a thru standard was used to set the reference plane. In cases where a thru standard is not available, the multiline TRL method requires shifting the calibration plane in post-processing to the desired location. This is in contrast to the proposed method, where the location of the calibration plane is set automatically by the measured artifacts. The advantage of the proposed thru-free method is that it eliminates the requirement to explicitly define a thru standard in multiline TRL calibration, making all calibration standards in the thru-free method partially defined.

## Appendix A Port Uncertainty of Anritsu ME7838D VNA

The purpose of this section is to draw attention to the imbalance in noise uncertainty between the two ports of the Anritsu ME7838D VNA used for the measurements discussed in this paper. The test measurement for evaluating the uncertainty of each port was fairly straightforward. We connected a 0.8 mm coaxial short standard to each port, as shown in Fig. 8. The short standard was measured while the VNA was in an uncalibrated state. The measurement was performed in four configurations, with power levels of -10 dBm and -20 dBm, and IF-bandwidths of 100 Hz and 1 kHz. To evaluate the statistics of the VNA, a frequency sweep between 1 GHz and 150 GHz was conducted, 100 times for the 100 Hz IF-bandwidth and 500 times for the 1 kHz IF-bandwidth. In Fig. 9, we present the mean value of the measured short standard. Across all configurations, there appears to be no difference between the ports.
However, in Fig. 10, we show the standard deviation of the measurements, which clearly indicates a significant noise contribution in port 1 (port \(\mathbf{A}\)), in comparison to port 2 (port \(\mathbf{B}\)). The uncertainty jump in the \(S_{11}\) measurement starts at \(54\,\mathrm{GHz}\), which is where the power level settings of the Anritsu ME7838D VNA split. This VNA has two power level settings, one for frequencies below \(54\,\mathrm{GHz}\) and the other for frequencies above this value. Although Fig. 10 already demonstrates the poor statistical performance of port 1 compared to port 2, we can see a clear difference in the uncertainty of the traces at 110 GHz when the settings are -10 dBm and 100 Hz. Specifically, port 1 yields an expanded uncertainty of 0.00132, while port 2 yields an expanded uncertainty of 0.00011, roughly a factor of 10 difference between the two ports. This difference scales even further during calibration, as demonstrated in the measurements presented in Section IV.

Fig. 7: Calibrated measurement of a \(5\,\mathrm{mm}\) long \(30\,\Omega\) stepped-impedance line. The reference line used in multiline TRL calibration has a length of \(6.5\,\mathrm{mm}\). The uncertainty bounds correspond to a 95 % coverage of a Gaussian distribution due to noise from the VNA propagated linearly through the calibrations.

Fig. 8: Measurement setup depicting the mm-wave extenders with coaxial 0.8 mm short standards connected to them.

Fig. 9: Mean value of the raw measurement of the 0.8 mm coaxial short standard under different VNA configurations.

Fig. 10: Uncertainty of the raw measurement of the 0.8 mm coaxial short standard. The uncertainty is reported as the 95 % coverage of a Gaussian distribution.

## Appendix B Deriving Calibrated S-parameters

The calibrated S-parameters can be computed easily by multiplying the inverse of the error boxes as T-parameters and then converting them to S-parameters. This can be expressed as follows: \[\mathbf{S}_{\mathrm{cal}}=\mathrm{t2s}\left(\frac{1}{k}\mathbf{A}^{-1}\mathrm{s2t}\left(\mathbf{S}_{\mathrm{raw}}\right)\mathbf{B}^{-1}\right), \tag{27}\] where \(\mathbf{S}_{\mathrm{cal}}\) and \(\mathbf{S}_{\mathrm{raw}}\) represent the calibrated and raw measurements of the S-parameters of an arbitrary DUT.
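For reference, (27) amounts to a handful of array operations per frequency point. A minimal sketch in Python/NumPy, with the conversions written in the T-parameter convention of (11) (the helper names mirror the notation above and are not from any library), could look as follows:

```python
import numpy as np

def s2t(S):
    """S- to T-parameter conversion in the convention of (11)."""
    S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    return np.array([[-(S11 * S22 - S12 * S21), S11],
                     [-S22, 1.0]]) / S21

def t2s(T):
    """Inverse conversion, T- back to S-parameters."""
    return np.array([[T[0, 1], np.linalg.det(T)],
                     [1.0, -T[1, 0]]]) / T[1, 1]

def calibrate(S_raw, A, B, k):
    """Apply (27): strip the error boxes from a raw DUT measurement."""
    T_cal = np.linalg.inv(A) @ s2t(S_raw) @ np.linalg.inv(B) / k
    return t2s(T_cal)
```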
Applying the equation above, the calibrated S-parameters can be expressed as follows: \[S_{11}^{\mathrm{cal}}=\frac{b_{12}\left(\det\left(\mathbf{S}_{\mathrm{raw}}\right)-a_{12}S_{22}^{\mathrm{raw}}\right)-b_{11}\left(a_{12}-S_{11}^{\mathrm{raw}}\right)}{b_{11}\left(a_{11}-a_{21}S_{11}^{\mathrm{raw}}\right)+b_{12}\left(a_{11}S_{22}^{\mathrm{raw}}-\det\left(\mathbf{S}_{\mathrm{raw}}\right)a_{21}\right)}, \tag{28a}\] \[S_{21}^{\mathrm{cal}}=\frac{kS_{21}^{\mathrm{raw}}\left(a_{11}-a_{12}a_{21}\right)\left(b_{11}-b_{12}b_{21}\right)}{b_{11}\left(a_{11}-a_{21}S_{11}^{\mathrm{raw}}\right)+b_{12}\left(a_{11}S_{22}^{\mathrm{raw}}-\det\left(\mathbf{S}_{\mathrm{raw}}\right)a_{21}\right)}, \tag{28b}\] \[S_{12}^{\mathrm{cal}}=\frac{S_{12}^{\mathrm{raw}}/k}{b_{11}\left(a_{11}-a_{21}S_{11}^{\mathrm{raw}}\right)+b_{12}\left(a_{11}S_{22}^{\mathrm{raw}}-\det\left(\mathbf{S}_{\mathrm{raw}}\right)a_{21}\right)}, \tag{28c}\] \[S_{22}^{\mathrm{cal}}=\frac{a_{11}\left(b_{21}+S_{22}^{\mathrm{raw}}\right)-a_{21}\left(\det\left(\mathbf{S}_{\mathrm{raw}}\right)+b_{21}S_{11}^{\mathrm{raw}}\right)}{b_{11}\left(a_{11}-a_{21}S_{11}^{\mathrm{raw}}\right)+b_{12}\left(a_{11}S_{22}^{\mathrm{raw}}-\det\left(\mathbf{S}_{\mathrm{raw}}\right)a_{21}\right)}, \tag{28d}\] where \(\det\left(\mathbf{S}_{\mathrm{raw}}\right)=S_{11}^{\mathrm{raw}}S_{22}^{\mathrm{raw}}-S_{12}^{\mathrm{raw}}S_{21}^{\mathrm{raw}}\). The expressions for the calibrated \(S_{21}\) and \(S_{12}\) appear, at first sight, to depend on \(a_{11}\) and \(b_{11}\) individually. However, simplifying the expressions reveals that the calibrated \(S_{21}\) and \(S_{12}\) only depend on the normalized error terms obtained from the eigenvalue formulation \(\{a_{12},a_{21}/a_{11},b_{21},b_{12}/b_{11}\}\), the combined error term \(a_{11}b_{11}\), and the transmission error term \(k\), which are obtained from the thru measurement, as given by (4). The expressions for \(S_{21}^{\mathrm{cal}}\) and \(S_{12}^{\mathrm{cal}}\) can be rewritten and simplified as follows: \[S_{21}^{\mathrm{cal}}=\frac{kS_{21}^{\mathrm{raw}}u}{v},\qquad S_{12}^{\mathrm{cal}}=\frac{S_{12}^{\mathrm{raw}}/k}{v}, \tag{29}\] where the numerator \(u\) and denominator \(v\) are given by \[u=a_{11}b_{11}\left(1-a_{12}\frac{a_{21}}{a_{11}}\right)\left(1-\frac{b_{12}}{b_{11}}b_{21}\right), \tag{30a}\] \[v=a_{11}b_{11}\left[1-\frac{a_{21}}{a_{11}}S_{11}^{\mathrm{raw}}+\frac{b_{12}}{b_{11}}\left(S_{22}^{\mathrm{raw}}-\det\left(\mathbf{S}_{\mathrm{raw}}\right)\frac{a_{21}}{a_{11}}\right)\right]. \tag{30b}\] The expressions for \(u\) and \(v\) show that \(S_{21}^{\mathrm{cal}}\) and \(S_{12}^{\mathrm{cal}}\) indeed depend solely on the normalized error terms \(\{a_{12},a_{21}/a_{11},b_{21},b_{12}/b_{11}\}\), the combined error term \(a_{11}b_{11}\), and the transmission error term \(k\). This means that the terms \(a_{11}\) and \(b_{11}\) never appear separately, which explains why the uncertainty due to the reflect standard is nullified in multiline TRL calibration, as observed in Fig. 6.

## Acknowledgment

The financial support by the Austrian Federal Ministry for Digital and Economic Affairs and the National Foundation for Research, Technology, and Development is gratefully acknowledged. The authors also thank AT&S for manufacturing the PCB and ebsCENTER for lending their equipment for the measurements.
2301.11464
The formation of supermassive black holes from Population III.1 seeds. II. Evolution to the local universe
We present predictions for cosmic evolution of populations of supermassive black holes (SMBHs) forming from Population III.1 seeds, i.e., early, metal-free dark matter minihalos forming far from other sources, parameterized by isolation distance, $d_{\rm{iso}}$. Extending previous work that explored this scenario to $z=10$, we follow evolution of a $(60\:{\rm{Mpc}})^3$ volume to $z=0$. We focus on evolution of SMBH comoving number densities, halo occupation fractions, angular clustering and 3D clustering, exploring a range of $d_{\rm{iso}}$ constrained by observed local number densities of SMBHs. We also compute synthetic projected observational fields, in particular a case comparable to the Hubble Ultra Deep Field. We compare Pop III.1 seeding to a simple halo mass threshold model, commonly adopted in cosmological simulations of galaxy formation. Major predictions of the Pop III.1 model include that all SMBHs form by $z\sim25$, after which their comoving number densities are near-constant, with low merger rates. Occupation fractions evolve to concentrate SMBHs in the most massive halos by $z=0$, but with rare cases in halos down to $\sim10^8\:M_\odot$. The $d_{\rm{iso}}$ scale at epoch of formation, e.g., $100\:$kpc-proper at $z\sim30$, i.e., $\sim3\:$Mpc-comoving, is imprinted in the SMBH two-point angular correlation function, remaining discernible as a low-amplitude feature to $z\sim1$. The SMBH 3D two-point correlation function at $z=0$ also shows lower amplitude compared to equivalently massive halos. We discuss prospects for testing these predictions with observational surveys of SMBH populations.
Jasbir Singh, Pierluigi Monaco, Jonathan C. Tan
2023-01-26T23:35:41Z
http://arxiv.org/abs/2301.11464v2
# The formation of supermassive black holes from Population III.1 seeds. II. Evolution to the local universe

###### Abstract

We present predictions for cosmic evolution of populations of supermassive black holes (SMBHs) forming from Population III.1 seeds, i.e., early, metal-free dark matter minihalos forming far from other sources, parameterized by isolation distance, \(d_{\rm iso}\). Extending previous work that explored this scenario to \(z=10\), we follow evolution of a \((60\ {\rm Mpc})^{3}\) volume to \(z=0\). We focus on evolution of SMBH comoving number densities, halo occupation fractions, angular clustering and 3D clustering, exploring a range of \(d_{\rm iso}\) constrained by observed local number densities of SMBHs. We also compute synthetic projected observational fields, in particular a case comparable to the Hubble Ultra Deep Field. We compare Pop III.1 seeding to a simple halo mass threshold model, commonly adopted in cosmological simulations of galaxy formation. Major predictions of the Pop III.1 model include that all SMBHs form by \(z\sim 25\), after which their comoving number densities are near-constant, with low merger rates. Occupation fractions evolve to concentrate SMBHs in the most massive halos by \(z=0\), but with rare cases in halos down to \(\sim 10^{8}\,M_{\odot}\). The \(d_{\rm iso}\) scale at epoch of formation, e.g., \(100\,{\rm kpc}\)-proper at \(z\sim 30\), i.e., \(\sim 3\,{\rm Mpc}\)-comoving, is imprinted in the SMBH two-point angular correlation function, remaining discernible as a low-amplitude feature to \(z\sim 1\). The SMBH 3D two-point correlation function at \(z=0\) also shows lower amplitude compared to equivalently massive halos. We discuss prospects for testing these predictions with observational surveys of SMBH populations.

keywords: black holes - formation - early universe

## 1 Introduction

The formation of stellar mass black holes is relatively well understood, but the same is not true for supermassive black holes (SMBHs). These black holes have masses \(\geq 10^{5}M_{\odot}\) and are found at the center of most large galaxies. The biggest mystery regarding their formation is explaining their high masses in the early universe. A stellar-mass BH, formed at very high redshift from the collapse of a massive primordial star, can grow by accreting gas as long as accretion can be sustained for a long time. However, this accretion is believed to be Eddington-limited by radiation pressure, so when gas inflow is abundant the growth of BH mass is expected to be exponential, with an e-fold time of \(\sim 4\times 10^{7}\ {\rm yr}\). Recent discoveries of high redshift quasars, for example J0313-1806 at \(z=7.642\) (farthest observed to date, Wang et al. 2021) and J1007+2115 at \(z=7.515\) (Yang et al. 2020), both hosting a SMBH more massive than \(10^{9}M_{\odot}\), put stringent constraints on any SMBH formation scenario. The existence of these quasars implies that these black holes grew to such high masses by the time the universe was only \(\sim 700\) million years old. Even assuming a very early formation at \(z\sim 30\), the BH seed should be at least as massive as \(500\ M_{\odot}\) to grow to the desired mass by the observation redshift, and later formation would imply higher seed masses.
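For orientation, this lower bound follows from a simple e-folding estimate. Taking rough cosmic ages of \(\sim 0.1\) Gyr at \(z\approx 30\) and \(\sim 0.67\) Gyr at \(z=7.64\) (our approximate values for a standard cosmology), continuous Eddington-limited accretion allows \(\Delta t/t_{e}\approx 570\,{\rm Myr}/40\,{\rm Myr}\approx 14\) e-folds, so that \[M_{\rm seed}\gtrsim M_{\rm BH}\,e^{-\Delta t/t_{e}}\approx 10^{9}\,M_{\odot}\times e^{-14}\approx 8\times 10^{2}\,M_{\odot},\] of the same order as the \(\sim 500\ M_{\odot}\) quoted above.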
A variety of theories have been proposed to explain the formation of SMBHs, with different degrees of complexity, rooted in small-scale physics that is typically unresolved in cosmological simulations. As a consequence, simplified assumptions are typically used in these simulations to create black holes in a given dark matter halo or the galaxy contained in it, based on the properties of the parent halo or the galaxy, often using sub-grid physics. One of the simplest and most widely used models is the halo mass threshold (HMT) seeding scheme based on the methods developed by Sijacki et al. (2007) and Di Matteo et al. (2008), in which a seed black hole is assumed to form in a halo crossing a certain mass threshold. The Illustris project (Vogelsberger et al., 2014) uses this mechanism to add SMBHs of mass \(1.4\times 10^{5}M_{\odot}\) in each halo which crosses a mass threshold of \(m_{\rm th}=7.1\times 10^{10}M_{\odot}\). A similar approach is used in the Evolution and Assembly of GaLaxies and their Environments (EAGLE) simulation (Barber et al., 2016). There have been many attempts to explain the formation of SMBHs via more physical mechanisms, dating back to the last century (e.g., Rees, 1978). One of the most popular mechanisms is _direct collapse_, which involves the collapse of a large primordial-composition gas cloud in a halo of mass \(\sim 10^{8}M_{\odot}\) into a single supermassive star of \(10^{4}-10^{6}M_{\odot}\) that then collapses to form a SMBH at the centre of the halo (Bromm & Loeb, 2003; Begelman et al., 2006; Lodato & Natarajan, 2006; Shang et al., 2010; Montero et al., 2012). Although the number density of black holes emerging from direct collapse would be enough to explain the currently known population of high redshift quasars, the conditions required for this scenario are not thought to be common enough to explain the total observed population of SMBHs at \(z=0\) (Chon et al., 2016; Wise et al., 2019). Furthermore, recent simulations have shown that the supermassive stars forming via this mechanism might not be as massive as initially predicted, reaching only \(\lesssim 10^{4}M_{\odot}\), due to the turbulent environment present in the initial stages of galaxy formation, which disrupts the accretion flow (Regan et al., 2020). Another mechanism to form intermediate-mass, or even supermassive, black holes is through runaway stellar mergers in young and dense clusters to create massive star seeds of order \(\sim 200-10^{3}M_{\odot}\) (e.g., Portegies Zwart et al., 2004). This mass can be reached through repeated collisions if the massive stars can reach the cluster core, drastically increasing the collision rate (Ebisuzaki, 2003), before they explode as supernovae. However, predicting whether such conditions arise in galaxies and at what rate is very challenging given the need to resolve the formation and evolution of individual stars, so predictions for the cosmological population of such systems are highly uncertain (see, e.g., Boekholt et al., 2018; Chon & Omukai, 2020; Tagawa et al., 2020). Some methods take into consideration more local properties of the host galaxy, such as the ones used in the Horizon-AGN simulation (Volonteri et al., 2016), in which the gas and stellar densities and the stellar velocity dispersion are required to exceed a certain threshold for the galaxy to be seeded with a black hole. In addition to this, all the forming black holes must be separated by at least 50 comoving kpc, and the formation is limited until \(z=1.5\). If all these conditions are met, the halo is seeded with a \(10^{5}M_{\odot}\) black hole.
Adopting similar criteria, the more recent Obelisk simulation (Trebitsch et al., 2021) also applies the condition of gas and stellar density exceeding a threshold, and an isolation of 50 kpc to avoid multiple black holes forming in the same galaxy. Furthermore, they also require the gas to be Jeans unstable. If all these conditions are satisfied, then a black hole of \(3\times 10^{4}M_{\odot}\) is assigned to the galaxy. In another approach that also uses the local properties of the galaxy to assign a seed, the Romulus simulation (Tremmel et al., 2017) employs criteria based on a metallicity limit, a gas density threshold, and a temperature range. Once all these conditions are satisfied, the mass of the seed black hole is set to be \(10^{6}M_{\odot}\). In this work, we focus on a formation scenario which invokes the Population III.1 stars formed in the early universe as the progenitors of SMBHs. Pop III.1 stars are defined to be Pop III (i.e., metal-free) stars forming in first dark matter minihalos that are isolated from other stellar or SMBH feedback sources (McKee & Tan, 2008). It is assumed that in the absence of any significant radiative (or mechanical) feedback, a single dominant protostar forms at the center of the minihalo and has its structure affected by the energy input from Weakly Interacting Massive Particle (WIMP) dark matter self-annihilation inside the protostar (Spolyar et al., 2008; Natarajan et al., 2009; Freese et al., 2010; Rindler-Daller et al., 2015). Such protostars maintain relatively cool outer layers, which allows efficient accretion of the baryonic content of the minihalo, i.e., \(\sim 10^{5}\ M_{\odot}\), to form a supermassive star, which subsequently collapses efficiently to a SMBH after a few Myr. This Pop III.1 seeding mechanism, which is based on locating isolated minihalos, was applied to a cosmological simulation in Banik et al. (2019) (hereafter Paper I). The evolution was followed from high redshifts down to \(z=10\). The main free parameter in the model is the _isolation distance_ (\(d_{\rm iso}\)), i.e., how far a newly forming minihalo needs to be from previously formed halos in order to be a Pop III.1 source. For a fiducial value of \(d_{\rm iso}=100\) kpc (proper distance), the model yields co-moving number densities of SMBHs that match the estimated level of the known \(z=0\) SMBH population. Note that in this case (and all other reasonable cases) most minihalos do not form Pop III.1 sources. Rather, most are Pop III.2 sources, which are metal free, but having been disturbed by radiative feedback undergo significant fragmentation to form only lower-mass (e.g., \(\sim 10\ M_{\odot}\)) stars (Greif & Bromm, 2006). In this paper, we take this Pop III.1 seeding mechanism and extend the results down to the local universe, \(z=0\). In §2, we briefly describe our seeding algorithm and the tools used to apply it. Then we present our results in §3, starting with the evolution of number density of seeded halos down to \(z=0\). We compare these results with the HMT scheme, and also discuss the SMBH occupation fraction and clustering properties of seeded halos. Finally, we create synthetic Hubble Ultra Deep Fields (HUDFs) to demonstrate the possibility of using the HUDF to differentiate among different seeding mechanisms. We then present our conclusions in §4.
## 2 Methods

### pinocchio simulations

As in Paper I, to test our Pop III.1 seeding mechanism, we used the Pinocchio code (Monaco et al., 2002; Munari et al., 2017) to generate a cosmological box of 59.7 Mpc (40 \(h^{-1}\) Mpc for \(h=0.67\)) with standard Planck cosmology (Planck Collaboration, 2020) and study the formation of DM (mini-)halos in that box. Pinocchio uses Lagrangian Perturbation Theory (LPT, e.g., Moutarde et al., 1991) to approximate the evolution of cosmological perturbations in a \(\Lambda\)CDM universe. For a given set of initial conditions, the code generates outputs in the form of catalogs at different redshifts, which contain mass, position and velocity of the DM halos, and complete information on the merger histories of all the halos, with continuous time sampling. This code was written for applications in cosmology, where huge volumes with moderate mass resolution are required, and its performance heavily depends on the mass resolution adopted. To resolve minihalos of \(\sim 10^{6}M_{\odot}\) it is necessary to sample a 59.7 Mpc box with \(4096^{3}\) particles; this results in a particle mass of \(1.23\times 10^{5}M_{\odot}\), and we adopted a minimum halo size of 10 particles (that would be unacceptable for an N-body simulation, but it is acceptable for a semi-analytic code like Pinocchio), resulting in a minimum minihalo mass of \(1.23\times 10^{6}M_{\odot}\). Such a large simulation can only be run on a supercomputer, distributing the computation on a large number of nodes. Since the fragmentation of collapsed particles into halos is done in Lagrangian space and the domain distributed to a task is not much larger than the dimension of the largest halo, massive halos will not be reconstructed correctly. As a result, with V4 of Pinocchio (Munari et al., 2017) used in Paper I, we were only able to push the simulation down to \(z=10\). We use here the novel V5 of the code, which implements a number of numerical techniques to improve memory efficiency. This code will be presented elsewhere; the strategy to perform halo construction at high resolution is the following: a first step of halo construction is performed using sub-boxes; then the domain is augmented with all particles that lie within \(N_{\rm Lag}\) times the Lagrangian size of the constructed halos; and then halo construction is performed again. Memory occupation depends on \(N_{\rm Lag}\), so we were forced to use \(N_{\rm Lag}=2\), while a value of 3 is a better guarantee of convergence in halo construction. The 59.7 Mpc box with full \(4096^{3}\) resolution was run to \(z=0\) on 800 MPI tasks over 100 computing nodes (each with 256 GB of RAM), so the domain was divided into \(6\times 6\times 7.5\) Mpc sub-volumes for halo construction. The resulting halo mass function showed two problems that are presented in greater detail in an Appendix. We discuss here their nature and their implications. As a consequence of the difficulty of calibrating the formation of halos with a very steep power spectrum, the mass of the first halos is underestimated by a factor of \(\sim 2\) at \(z\sim 30\), decreasing to a negligible value at \(z\sim 10\). This is a known trend in Pinocchio, visible, e.g., in Figure 1 of Munari et al. (2017), where the \(z=3\) halo MF is slightly underestimated in those tests.
We are working to improve this prediction, but we do not consider this a showstopper, for several reasons: our seed BHs are already predicted to form very early, so this underestimation only causes us to be slightly conservative in their formation redshift, i.e., in fact they would already have formed at slightly higher \(z\). In our simple modeling we are assuming here immediate formation of the protostar and then the SMBH, whereas in reality this might take several Myr or even tens of Myr. The time span separating \(z=32\) from \(z=29\) is only \(\sim 14\) Myr, so neglecting astrophysical timescales leads to an overestimation of the formation redshift, which compensates for the underestimation problem. Finally, the minihalo threshold mass can be considered a second free parameter of the modeling (although one that has physical motivation to be close to \(10^{6}\,M_{\odot}\)), so one can simply consider our predictions to be valid for minihalo masses of \(2.5\times 10^{6}\,M_{\odot}\). We add to these arguments the fact that inaccuracies in halo masses do not propagate as inaccuracies in halo positions, which are crucial outcomes of our seeding scheme. A more serious problem is connected to the inaccurate reconstruction of halos more massive than \(10^{12}M_{\odot}\). Indeed, the small size of the sub-box domain for constructing halos results in a poor reconstruction of massive halos. This problem makes predictions at \(z=0\) unreliable. We thus produced the same box at a lower resolution, sampled with \(1024^{3}\) particles, on a single MPI task on a 256 GB node. Again, this was possible thanks to V5 of the code. In this case halo construction is as good as it can be. However, the identification of halos that contain seed SMBHs has been performed in the high-resolution box, and though the simulations share the same large-scale structure, matching massive halos in the two boxes is not a clean procedure. We then resorted to this algorithm: starting from the fact that one low-resolution particle contains 64 high-resolution ones, we calculated which particle in the lower-resolution box includes the seeded minihalo, and assigned the seed to the halo that contains that specific low-resolution particle. We checked that results at \(z=0\) produced with the low- and high-resolution simulations were consistent, apart from a significant difference in the clustering of halos more massive than a certain threshold, which is an expected consequence of the inaccurate mass reconstruction and the known relation of halo bias with halo mass. In the following we will present results at \(z=0\) based on the low-resolution box, unless mentioned otherwise.

### Seeding scheme

To determine which halos are seeded with a Pop III.1 star and thence SMBH, consider the scenario depicted in Fig. 1, unfolding in the early universe. The figure shows three stars A, B and C in different halos, where only A and C become Pop III.1 stars whereas B is a Pop III.2 star, depending on the separation and formation order. Star A formed first, which then influenced its environment within a sphere of radius equal to \(d_{\rm feedback}\), expected to be primarily radiative feedback. Since this star is in a pristine primordial gas without the influence of any feedback from nearby stars, it is defined to be a Pop III.1 star. Star B, which subsequently forms at a distance less than \(d_{\rm feedback}\) from star A, is affected by the feedback and hence is a Pop III.2 star (or even a Pop II star if it has been chemically polluted).
Finally, star C forms outside the sphere of influence of both A and B, and is thus also assigned to be a Pop III.1 star and thus a SMBH. For the model considered here, the feedback distance is set equal to the isolation distance \(d_{\rm iso}\). So effectively, the condition for a star to be regarded as a Pop III.1 star is that, when it is forming, there should be no previously formed halos present in the sphere of radius \(d_{\rm iso}\). We consider \(d_{\rm iso}\) as a free parameter in our theory and vary it to match the observed number density of the SMBHs in the local Universe.

Figure 1: A schematic illustration of the Pop III.1 SMBH seeding scenario depicting the conditions for a star to be isolated enough to be considered as a Pop III.1 star (see text).

### Seed identification in the dark matter catalogs

To perform the seed identification analysis from the dark matter catalogs generated by Pinocchio, we first divided the entire redshift range (from \(z=0\) to the redshift when the first minihalo forms, \(z\approx 40\)) into small bins of widths ranging from \(\Delta z=1\), 2 or 3, depending on the output catalogs available, which in turn depends on the relative change in positions of (mini)halos. The bins are wider at high redshifts, but smaller at lower redshifts. Then for each redshift interval \((z_{l},z_{h}]\) (where \(z_{h}>z_{l}\)), we utilised a \(k\)-d tree data structure to create a three-dimensional map in position space of all the halos existing between \(z_{h}\) and \(z_{l}\). The positions used to create the tree are taken from the output catalog of Pinocchio at the lower redshift of the interval (\(z_{l}\)). Since the positions are not updated once the tree is constructed, we account for the change in the positions within this redshift interval by finding the maximum change (\(\delta\)) of position among all the halos existing in the entire redshift range. Then for each minihalo crossing the mass threshold of \(10^{6}M_{\odot}\) (or, in the nomenclature of Pinocchio, "appearing") at a redshift \(z_{\rm app}\in(z_{l},z_{h}]\), we perform a ball search using the \(k\)-d tree to find all the halos around the appearing minihalo within a sphere of radius \(d_{\rm iso}-2\delta\). If there exists even a single halo at the redshift \(z_{\rm app}\) within this sphere, then this minihalo is flagged as a halo containing a non-Pop III.1 star at its center. If there are no halos existing at this redshift, then the ball search is performed again with the same minihalo at the center, but this time within a sphere of radius \(d_{\rm iso}+2\delta\). Then for all the halos existing at redshift \(z_{\rm app}\) within the shell of radius \(d_{\rm iso}\pm 2\delta\), we find the exact distance between the minihalo at the center and all these halos using the exact positions at \(z_{\rm app}\). If this distance is greater than \(d_{\rm iso}\) for all the halos within the shell, then the minihalo at the center is flagged as a Pop III.1 source, i.e., an SMBH-seeded halo. This process is repeated for each minihalo crossing the threshold mass within the two redshifts, and then this whole procedure is performed again for all the redshift intervals, until the whole redshift range is covered. In this way we are able to check the isolation condition for each minihalo appearing in the cosmological box and find all the seeded minihalos.
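The core of this procedure can be sketched compactly. The snippet below (Python with SciPy; the array names are hypothetical and the actual pipeline certainly differs in its bookkeeping) performs the isolation test for one appearing minihalo, with the conservative inner ball and the exactly-checked shell described above; building the tree with cKDTree(..., boxsize=L) would additionally make the search periodic, matching the boundary conditions adopted in this work:

```python
import numpy as np
from scipy.spatial import cKDTree

def is_isolated(tree, pos_at_zapp, exists_at_zapp, center, d_iso, delta):
    """Isolation test for one minihalo appearing at z_app.

    tree           : k-d tree of halo positions at the bin edge z_l
    pos_at_zapp    : exact halo positions at z_app
    exists_at_zapp : boolean mask of halos already formed at z_app
    delta          : maximum halo displacement within the redshift bin
    """
    # Any existing halo within d_iso - 2*delta certainly breaks isolation;
    # max(..., 0) covers the low-z regime where this radius turns negative.
    inner = tree.query_ball_point(center, max(d_iso - 2.0 * delta, 0.0))
    if any(exists_at_zapp[i] for i in inner):
        return False
    # Halos in the uncertain shell are re-checked with exact positions.
    for i in tree.query_ball_point(center, d_iso + 2.0 * delta):
        if exists_at_zapp[i] and np.linalg.norm(pos_at_zapp[i] - center) < d_iso:
            return False
    return True
```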
At smaller redshifts, the change in positions of the halos (\(\delta\)) within the redshift intervals becomes comparable to the isolation distance. This implies that the quantity \(d_{\rm iso}-2\delta\) can become negative (in our simulation box, this happens at around \(z\approx 15\) for \(d_{\rm iso}=50\) kpc). In this case, the ball search is directly performed in a sphere of radius \(d_{\rm iso}+2\delta\), and then the exact distances between the minihalo at the center and all the other halos existing at \(z_{\rm app}\) are calculated. This division of the entire redshift range into intervals, with the \(k\)-d tree created only at specific redshifts, is adopted to avoid reconstructing the tree with up-to-date positions at every instance a new minihalo appears. Since the number of minihalos is very large, it would be computationally very expensive to reconstruct the tree with updated positions each time a new minihalo appears.

## 3 Results

### Number density evolution

As explained in the last section and in detail in Paper I, we identify SMBH-seeded halos by the condition that the isolation sphere of radius \(d_{\rm iso}\) around a newly forming minihalo is not populated by any other existing halo (of mass greater than our minihalo threshold mass). The obtained results for the evolution of number density for different values of \(d_{\rm iso}\) (in proper distance units) are shown in Fig. 2. The estimate for the observed number density of SMBHs in the local Universe, \(n_{\rm SMBH}\,(z=0)\) (black square in the figure), is calculated by assuming that each galaxy with luminosity greater than \(0.33L_{*}\) hosts a SMBH (see Paper I). Here \(L_{*}\) is the characteristic luminosity corresponding to \(M_{\rm B}=-19.7+5\log h=-20.55\) (e.g., Norberg et al., 2002). The colored dotted lines show the number density evolution of the total number of SMBHs, whereas the colored solid lines show the number density of seeded halos (which can be slightly smaller due to mergers). These results are from the highest-resolution simulation with \(4096^{3}\) particles. Compared to the number densities in Figure 1 of Paper I, the values obtained here are slightly lower (by a factor of \(\sim 1.45\) for 100 kpc, and \(\sim 1.65\) for 50 kpc) because we have considered periodic boundary conditions when identifying the seeds, which was not done in Paper I. From Fig. 2, it can be clearly seen that as the isolation distance is reduced, the number of formed SMBHs increases. This is expected because a smaller \(d_{\rm iso}\) results in more halos satisfying the isolation criteria for hosting SMBH seeds within our simulation volume. We can also conclude that for a certain range of \(d_{\rm iso}\) (\(\approx 90\) kpc to 170 kpc), the number density obtained is in reasonable agreement with the \(z=0\) estimate. A key feature of the fiducial Pop III.1 SMBH seeding model, i.e., with \(d_{\rm iso}=100\) kpc, is that _all SMBHs have formed very early in the Universe: the process is essentially complete by \(z\simeq 25\)_. We compare this prediction to a halo mass threshold model (HMT scheme; shown by the green dashed line in the figure) in which each halo more massive than \(m_{\rm th}=7.1\times 10^{10}M_{\odot}\) is seeded (e.g., the Illustris project: Vogelsberger et al., 2014; Sijacki et al., 2015); note that this seeding scheme is driven by the mass resolution of the simulation, i.e., halos are seeded as soon as they are resolved with a sufficient number of particles. Our model predicts that all SMBHs formed much earlier in the universe.
While a comparison with other physical models of seeding is planned for future papers, this figure shows the potential of distinguishing models by searching for AGNs at high redshift. We find that only a small number of mergers between seeded halos occur. Table 1 shows the total number of SMBHs that formed (\(N_{\rm SMBH,form}\)) and the number of halos containing them at \(z=0\) (\(N_{\rm SMBH}(z=0)\)). Assuming efficient merging of SMBHs that are in the same halo, the number of mergers is \(\Delta N_{\rm SMBH}=N_{\rm SMBH,form}-N_{\rm SMBH}(z=0)\). A feature of the Pop III.1 seeding mechanism is that SMBHs are initially spread out from each other, so that there are relatively few binary SMBHs and few mergers. A detailed analysis of the mergers, including the binary (and higher-order multiple) AGN number densities, and the gravitational wave background emanating from these mergers, will be discussed in a future paper in this series. A caveat of our seeding model is that at low redshifts (\(z\lesssim 6\)) the isolation distance in comoving units becomes so small that many minihalos that appear after this redshift start satisfying the isolation criteria. This effect would result in an increase in number density by around 2 orders of magnitude by \(z=0\) from the converged values around \(z\approx 20\), for all cases of \(d_{\rm iso}\). However, since reionization has completed by \(z\approx 8\) (Planck Collaboration, 2020), we assume that the formation of Pop III.1 sources is also not possible below this redshift. Hence, in our analysis, we allow seed formation only down to \(z=8\). For most cases of the isolation distances we considered (\(\geq 75\) kpc), the number density is already converged at redshifts greater than \(z=20\). However, for the case of 50 kpc, new seeds still keep appearing until \(z=8\) (although below \(z=15\) the total number only increases by about 1%). In Figure 3, we show a visual representation of the seeded halos in the box at different redshifts, for all the isolation distances considered in Fig. 2. As discussed, the 50 kpc case is the most crowded, with the highest number of seeded halos at every epoch shown. Initially all the seeds emerge in a relatively unclustered manner, but eventually the clustering increases as lower-mass seeded halos migrate towards more massive halos and merge with them in overdense regions. We perform a more detailed analysis of clustering in §3.3.

\begin{table} \begin{tabular}{c c c c c} \hline \hline \(d_{\rm iso}\) [kpc] & \(N_{\rm SMBH,form}\) & \(N_{\rm SMBH}(z=0)\) & \(\Delta N_{\rm SMBH}\) & \(f_{\rm merger}\) [\%] \\ \hline 50 & 15470 & 14499 & 971 & 6.28 \\ 75 & 3394 & 3303 & 91 & 2.68 \\ 100 & 1234 & 1222 & 12 & 0.97 \\ 150 & 306 & 306 & 0 & 0 \\ 200 & 121 & 121 & 0 & 0 \\ \hline \hline \end{tabular} \end{table} Table 1: Total number of formed SMBHs (\(N_{\rm SMBH,form}\)), total number of SMBHs remaining at \(z=0\) assuming efficient mergers (\(N_{\rm SMBH}(z=0)\)), the difference between these (\(\Delta N_{\rm SMBH}=N_{\rm SMBH,form}-N_{\rm SMBH}(z=0)\)), which is equivalent to the number of mergers, and the percentage of original SMBHs that are destroyed by mergers (\(f_{\rm merger}=\Delta N_{\rm SMBH}/N_{\rm SMBH,form}\)).
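The bookkeeping behind Fig. 2 and Table 1 is simple to reproduce. As an illustration (the input arrays are hypothetical stand-ins for our seed catalogs, not the actual pipeline), the following lines compute both quantities:

```python
import numpy as np

def number_density_history(z_form, box_mpc=59.7):
    """Comoving SMBH number density vs. redshift (cf. Fig. 2): seeds are
    never destroyed, so n(z) is the cumulative count of formation events
    at redshifts above z, divided by the comoving box volume."""
    z = np.sort(np.asarray(z_form))[::-1]        # formation redshifts, descending
    n = np.arange(1, z.size + 1) / box_mpc**3    # n sampled at each event
    return z, n

def merger_percentage(n_formed, n_hosts_z0):
    """f_merger of Table 1: percentage of seeds destroyed by mergers,
    assuming efficient merging of SMBHs that share a halo at z=0."""
    return 100.0 * (n_formed - n_hosts_z0) / n_formed

# e.g., merger_percentage(15470, 14499) returns 6.28, the 50 kpc entry of Table 1
```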
### Occupation fraction of seeded halos

From observations of local galaxies, it appears that almost all massive galaxies contain a nuclear SMBH. This implies that the SMBH occupation fraction of halos should approach unity as halo mass rises. Figure 4 shows the evolution of the occupation fraction from one realization of our 59.7 Mpc box, through 4 different redshifts, for halos ranging from \([10^{6},10^{14}]M_{\odot}\) (the upper limit of the mass range is chosen to include the most massive halo at \(z=0\) in our \(1024^{3}\) resolution simulation box, measuring \(7.8\times 10^{13}M_{\odot}\)). As expected, with the decrease in the isolation distance, more and more halos are seeded and hence the occupation fraction is higher compared to the same mass range for larger \(d_{\rm iso}\). All the fractions at \(z=0\) approach unity for the most massive halos, independent of the isolation distance. Interestingly, the most massive halos are not always occupied by a SMBH throughout the redshift evolution in our simulations. For example, at \(z=4\) there can be significant fractions of the most massive halos, i.e., \(\sim 10^{12}\)\(M_{\odot}\), that are not seeded. Figure 5 shows the evolution of the cumulative occupation fraction, i.e., for all halos more massive than \(\{10^{8},10^{9},10^{10},10^{11},10^{12},10^{13}\}M_{\odot}\), for three different cases of isolation distance. If we consider only the most massive halos (\(>10^{13}M_{\odot}\)), the fraction is close to one (as also evident from Fig. 4). At a given redshift, as we consider less massive halos, the occupation fraction decreases. At a given mass threshold, as we move out to higher redshift the occupation generally rises, since these halos become relatively more extreme members of the global halo population. Interestingly, the occupation fractions for all halos more massive than \(10^{8}\) and \(10^{9}M_{\odot}\) (\(10^{10}M_{\odot}\) as well, although to a lower degree) at \(z=0\) differ by factors of approximately 10 among the three cases of isolation distances considered, reflecting the same differences in the global number densities at \(z=0\) (see Fig. 2).

### Clustering

We perform a clustering analysis using the corrfunc library (Sinha & Garrison, 2020) for Python, and the results are shown in Fig. 6. Sampling \(r\) in 20 logarithmic bins from \(r_{\rm min}=0.5\) Mpc/h to \(r_{\rm max}=13.3\) Mpc/h, we evaluate the 3D 2-point correlation function (2pcf) \(\xi_{\rm hh}(r)\) for all halos more massive than \(10^{10}M_{\odot}\) at \(z=0\). Since Pinocchio only evolves dark matter halos, information on substructures such as subhalos within halos is not stored or tracked. This implies that only radial scales greater than the size of a typical dark matter halo (3 to 4 Mpc at \(z=0\)) are relevant for consideration. In other words, the correlation function presented here does not include the one-halo term. From the figure, we observe that the clustering of the SMBH-seeded halos (blue points) is always lower compared to the other cases. This is expected because of the nature of our model, which results in larger distances between SMBHs and hence a smaller clustering amplitude. The plots for \(d_{\rm iso}=50\) and 100 kpc clearly depict this, while the case of 200 kpc suffers from low-number statistics. The red points, which represent the clustering of random halos with the same number and mass distribution as the seeded halos, are generally more than \(1\sigma\) higher than the blue points, except at the largest scales. This can be clearly seen for the fiducial case of 100 kpc.
Figure 2: The comoving number density evolution of SMBHs for different cases of the isolation distance (in proper distance). The dotted colored lines show the total number of SMBHs, whereas the solid colored lines show the number of halos containing the black holes. The dashed green line indicates the number density obtained from the HMT scheme, in which each halo with mass higher than \(m_{\rm th}=7.1\times 10^{10}M_{\odot}\) is seeded (see text). The green shaded region represents the change in number density obtained by lowering and raising \(m_{\rm th}\) by a factor of 2. The black solid square indicates the estimate for the number density of SMBHs at \(z=0\) obtained by assuming each galaxy with luminosity higher than \(L_{\rm min}=0.33L_{*}\) contains one SMBH. The black line denotes the range in \(n_{\rm SMBH}(z=0)\) obtained by varying \(L_{\rm min}\) from \(0.1L_{*}\) to \(L_{*}\). Figure 3: Projection of the positions of seeded halos (_red_) and non-seeded halos (_blue_) along the XY plane of the box for different isolation distances. The redshift is shown in the top right corner of each panel (same for each row). Only the 30,000 most massive non-seeded halos within each panel are shown for ease of visualisation. We also show the clustering for the fiducial case of the HMT scheme with \(m_{\rm th}=7.1\times 10^{10}M_{\odot}\) (Sijacki et al., 2015), depicted by green points. This model also generally shows higher clustering than our Pop III.1 seeding model. Thus a clustering analysis of a local Universe (\(z=0\)) census of all (or a significant fraction) of SMBHs has the potential to distinguish between these SMBH seeding mechanisms. In Figure 7, we show the evolution of the projected correlation function for the \(d_{\rm iso}=\)50 and 100 kpc cases (blue lines), compared to halos with the same mass and number distribution as the respective seeded halos (red lines). As seen in the 3D 2pcf, the clustering of the seeded halos is always lower than that of the randomly selected halos, and this trend is observed even at higher redshifts. Furthermore, there is a significant drop in the clustering amplitude of the seeded halos at scales smaller than \(d_{\rm iso}(\bar{z}_{\rm form})\) (vertical grey band), a signature of feedback-cleared bubbles, first discussed in Paper I for \(z\geq 10\). Here we see that this signature of suppressed clustering persists to lower redshift, although it is gradually diminished as the Universe evolves to a more clustered state. We emphasise that comparing our clustering predictions at redshifts greater than 1 or 2 is not feasible with currently available observational data. The measurements from a range of AGN luminosities at these redshifts imply minimum halo masses of \(\sim 5\times 10^{11}h^{-1}M_{\odot}\) at \(z\sim 3\) (Allevato et al., 2014) to more than \(10^{12}h^{-1}M_{\odot}\) at \(z\sim 4\) (He et al., 2018). For our 59.7 Mpc box, the number of seeded halos above these thresholds is quite low. For instance, for the \(d_{\rm iso}=\)100 kpc case, only around 6% of sources are above this threshold at \(z=3\) and only 0.7% of sources are more massive than \(10^{12}h^{-1}M_{\odot}\) at \(z=4\). If we apply these halo mass cuts on our seeded halos, then the clustering signal is too noisy to make any meaningful comparison with the observational data. Moreover, at high halo masses the occupation fraction approaches unity, so for the measured clustering of bright AGNs, hosted in relatively massive halos, we expect that they may cluster as their host halos, with no appreciable difference with respect to currently used models. Figure 4: Evolution of SMBH occupation fraction of halos for different cases of \(d_{\rm iso}\).
Top row depicts the fraction in log scale, while the bottom row shows the same data in linear scale. The mass bins are divided into equal bins of width 0.2 dex. Figure 5: Cumulative occupation fractions of halos having masses greater than a given value (see legend). The shaded region represents the \(\pm 1\sigma\) error due to counting statistics. Figure 6: The 3D 2-point correlation function for the seeded halos more massive than \(10^{10}M_{\odot}\), at \(z=0\) for different isolation distances. The blue points show the correlation function for only the halos containing SMBHs, while the orange points show the correlation for all the halos, with or without a SMBH. For the red points, we randomly select halos from the pool of all the halos, but with the same number and mass distribution as the seeded halos. The error bars indicate \(1\sigma\) deviations from the mean value from randomly sampling 50 times. The green points show the correlation for halos seeded according to the halo mass threshold (HMT) scheme, in which all the halos greater than \(m_{\rm th}=7.1\times 10^{10}M_{\odot}\) are seeded. Figure 7: Evolution of the projected correlation function for the \(d_{\rm iso}=50\) kpc (top row) and 100 kpc (bottom row) cases. The blue line is the average after computing the correlation of the seeds from 3 orthogonal sides of the box, and the shaded region represents the \(1\sigma\) spread. The control sample is the correlation of halos selected randomly but with the same mass and number distribution as the seeded halos at that redshift. The red line refers to the average after randomly sampling 10 times, and the shaded region refers to \(1\sigma\) deviations from the mean. The vertical grey line refers to the size of the isolation radius at the mean formation redshift (\(d_{\rm iso}(\bar{z}_{\rm form})\)) of the seeded halos, and the grey region represents the \(1\sigma\) deviation from the mean. For 100 kpc, \(\bar{z}_{\rm form}=32.08\), and for 50 kpc, \(\bar{z}_{\rm form}=27.14\). The angular axis on top of each panel corresponds to the angular scale of \(r_{p}\) projected on the sky at the respective redshift. More data on AGN, especially those that are present in lower-mass halos/galaxies, are needed to test the models. As a crude comparison, in Figure 8 we include the clustering measurements from Zehavi et al. (2011), who performed a projected clustering analysis of a volume-limited sample of 570,000 galaxies from the Seventh Data Release (Abazajian et al., 2009) of the Sloan Digital Sky Survey (SDSS, York et al., 2000). The galaxies used in their data extend out to \(z=0.25\), with a median redshift of \(z\sim 0.1\). We compare our results at \(z=0\) for \(d_{\rm iso}=\)50 and 100 kpc, along with the HMT scheme, with their galaxy luminosity threshold cut result for \(M_{r}<-19.0\). We computed the relation between DM halo mass and \(r\)-band absolute magnitude by comparing the clustering amplitude of pinocchio DM halos with Zehavi et al.'s measurements, minimising the \(\chi^{2}\) of the clustering amplitude only for \(r_{P}>3h^{-1}\) Mpc (to avoid the one-halo clustering scales); for \(M_{r}<-19.0\) we find a clustering-matched halo mass of \(M_{\rm PB}^{-19.0}=1.91\times 10^{12}h^{-1}M_{\odot}\), higher than the value suggested in that paper (\(M_{\rm PB}^{-19.0}=2.55\times 10^{11}h^{-1}M_{\odot}\)); this is not surprising, given the different cosmology assumed in 2011.
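In code, this clustering-matching step can be sketched as follows: select pinocchio halos above a trial mass threshold, compute their projected correlation function, and minimise \(\chi^{2}\) against the observed points on scales \(r_{P}>3h^{-1}\) Mpc. The sketch below is illustrative only; the catalogue arrays and the observed data points are assumed inputs, and the Corrfunc call should be checked against the installed version.

```python
import numpy as np
from Corrfunc.theory.wp import wp  # projected 2pcf in a periodic box

def chi2_for_mass_cut(m_min, x, y, z, mass, boxsize, pimax,
                      rp_bins, rp_obs, wp_obs, wp_err):
    """Chi^2 of the model w_p against observed points, restricted to
    rp > 3 Mpc/h to avoid one-halo scales (as in the text)."""
    sel = mass > m_min
    res = wp(boxsize, pimax, 4, rp_bins, x[sel], y[sel], z[sel])
    rp_mid = 0.5 * (rp_bins[:-1] + rp_bins[1:])
    wp_model = np.interp(rp_obs, rp_mid, res['wp'])
    use = rp_obs > 3.0
    return np.sum(((wp_model[use] - wp_obs[use]) / wp_err[use]) ** 2)

# Scan halo-mass thresholds and keep the best match; the catalogue
# (x, y, z, mass) and the observed (rp_obs, wp_obs, wp_err) points
# are assumed to be loaded beforehand.
thresholds = np.logspace(11.0, 13.0, 41)  # Msun/h, illustrative range
chi2 = [chi2_for_mass_cut(m, x, y, z, mass, boxsize, pimax,
                          rp_bins, rp_obs, wp_obs, wp_err)
        for m in thresholds]
m_best = thresholds[int(np.argmin(chi2))]
```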
We then applied this halo mass cut to our \(d_{\rm iso}=50\) and 100 kpc sources, as well as to the HMT scheme, and compared the projected correlation functions with that of the \(M_{r}<-19.0\) threshold galaxies in Figure 8. For the region of interest, the clustering of the seeded halos shows good agreement, within the errors, with the observations. The \(d_{\rm iso}=50\) kpc correlation completely overlaps the HMT one because all the sources more massive than \(M_{\rm PB}^{-19.0}\) are seeded in this model. Also, at this high-mass cut, most of the \(d_{\rm iso}=50\) kpc sources are also seeded in the \(d_{\rm iso}=100\) kpc model, and hence their clustering follows similar trends. This is due to the fact that the occupation fraction approaches unity for the most massive halos (see §3.2) for all the isolation distances, and since the mass cut is high, this means that most, if not all, of the halos are seeded, regardless of the isolation distance. ### Ultra Deep Field One potential way to compare our model with observational data is to count the number of SMBHs (i.e., appearing as AGN) present in projected deep fields of the Universe, such as the Hubble Ultra Deep Field (HUDF). We thus create a synthetic ultra deep field (UDF) populated with the SMBHs that have formed in our simulations. To achieve this, we use snapshots of halos at different redshifts in the 59.7 Mpc cosmological box, using the highest resolution run. We pierce the box orthogonally from random positions (avoiding repetitions) and then stack the fields in redshift space to generate a light cone with a 2.4 arcminute side length (i.e., the same as the HUDF). Figure 9 shows our constructed HUDF, for \(d_{\rm iso}=50\) kpc and 100 kpc. The fields shown are for the redshift range \(z\in[4,16]\), with the number of halos in the field equal to 9352 and 764 for \(d_{\rm iso}=50\) kpc and 100 kpc, respectively. As expected, the field for the 50 kpc case is much more densely populated with seeded halos than that for 100 kpc. Figure 10 shows the distribution of SMBHs within the redshift range \(z=5-10\) in our synthetic HUDF, where we also display the number of sources in redshift bins of \(\Delta z=1\). The total number of sources in the field (_last column_) for the fiducial \(d_{\rm iso}=\)100 kpc model is five times higher than for the fiducial HMT scheme. Thus a census of AGNs at high redshifts (\(z\gtrsim 7\)) can distinguish between these models. Since the number density of sources in the HMT scheme is quite low (effectively 0 for redshifts \(\gtrsim 8\) or 9), finding even a handful of sources at these redshifts can put stringent constraints on this seeding scheme. In Table 2, we show the number of seeds in the field for an extended redshift range, obtained by averaging over multiple random realisations of the light cone, and by integrating the number density over the field volume. Almost all the averages in the redshift bins from the light cone are within \(1\sigma\) of the analytically calculated value from the number density. The analytic numbers also show the drastic difference in the number of sources in the different seeding schemes at high redshifts. ## 4 Conclusions We have explored the implications of the Pop III.1 seeding model for cosmological distributions of SMBHs. This is a model that forms all SMBHs via a single mechanism based on the change of protostellar structure in some Pop III stars due to WIMP dark matter particle self-annihilation.
This leads to reduced ionizing feedback from the protostar and efficient accretion of the baryonic content of the minihalo, thus naturally leading to a characteristic seed mass of \(\sim 10^{5}\,M_{\odot}\). The model requires the Pop III.1 minihalo to form in relative isolation from other sources. Thus the Pop III.1 seeding model involves all SMBHs forming very early in the Universe, i.e., by \(z\sim 25\), and with a relatively unclustered initial distribution. Indeed, compared to all other astrophysical models for SMBH formation, the Pop III.1 model involves the earliest and least clustered distribution of seeds. This implies that in the Pop III.1 model, black holes have plenty of time to grow via accretion to explain the known high redshift quasars, without the need for sustained super-Eddington accretion. The Pop III.1 model, while being a physical model for the formation of the whole SMBH population, is relatively simple, i.e., with only one free parameter, the isolation distance \(d_{\rm iso}\). This means that the model can be easily explored in cosmological volume simulations that resolve minihalos, as was done first in Paper I. The constraint of matching an estimate for the local comoving number density of SMBHs gives a quite tight constraint of \(d_{\rm iso}\simeq 100\) kpc (proper distance). This implies most SMBHs formed at \(z\approx 30\), when the isolation distance corresponded to a comoving scale of \(\sim 3\) Mpc. Figure 8: Comparison of the results for the projected correlation function \(w_{P}(r_{P})\) obtained from our simulations for \(d_{\rm iso}=\)50 kpc, 100 kpc and the HMT scheme at \(z=0\) with the observational data from Zehavi et al. (2011) for a \(M_{r}<-19.0\) magnitude cut. The shaded region shows scales smaller than the size of a typical halo at \(z=0\), i.e., \(r_{P}<3h^{-1}\)Mpc, which are not of interest for our comparison due to limitations of our model (lack of sub-halos). The HMT scheme and 50 kpc models overlap, as all halos above the threshold are seeded for that value of \(d_{\rm iso}\). Following on from Paper I, we have explored the implications of the Pop III.1 SMBH seeding model down to low redshifts, i.e., all the way to \(z=0\), which is important to allow connection to observations, including the HUDF and local galaxy and SMBH populations. We have also compared this model with another simple seeding scheme, i.e., the halo mass threshold (HMT) model, that is commonly implemented in cosmological volume simulations. As presented before, all SMBHs form very early in the Universe, and their number density then remains approximately constant after a redshift of \(\sim 25\). Only a small fraction of the seeded halos merge with each other by \(z=0\). The evolution of the occupation fraction of seeded halos shows a rise to unity for the most massive halos by \(z=0\). However, at intermediate redshifts there can be significant fractions of the most massive halos that are unseeded. Our clustering analysis found that, at all redshifts, Pop III.1 seeded halos show lower levels of clustering compared to random halos with the same mass and number distribution as the seeded halos. However, to connect this result to observations of AGN (e.g., Allevato et al., 2014; He et al., 2018) requires development of a SMBH growth model, which is planned for a future paper in this series.
We also noticed a dip in the clustering of the seeded halos at scales smaller than the isolation distance at the mean formation redshift, which is due to the feedback suppression of the isolation bubbles. This was first discussed at \(z=10\) in Paper I, and we have shown that this suppression persists even at lower redshift, discernible down to \(z\approx 1-2\). Figure 9: Synthetic Hubble Ultra Deep Field (HUDF) consisting of only the seeded halos for the \(d_{\rm iso}=50\) kpc and 100 kpc cases over a redshift range from 4 to 16. Figure 10: The distribution of SMBHs in redshift intervals in the range \(z=5-10\) in a synthetic HUDF, where the last column shows all the sources. The first row shows the case for \(d_{\rm iso}=\)50 kpc. The second row shows the case for \(d_{\rm iso}=\)100 kpc. The third row shows the distribution from the fiducial HMT scheme with \(m_{\rm th}=7.1\times 10^{10}M_{\odot}\). The total number of SMBHs in each panel is indicated in the top right corner of each panel. To compare the clustering of our seeded halos with observational data of galaxies, we turned to the galaxy clustering results from Zehavi et al. (2011). We were able to conclude that the clustering of the seeded halos for the 50 and 100 kpc isolation distances is in agreement with the observations, after applying appropriate mass cuts on the halo masses. The properties of binary AGN and the resulting mergers, i.e., the extreme end of the clustering signal, will be considered in detail in a forthcoming paper in this series. Finally, we discussed the potential of using high redshift AGN number counts in the HUDF (or other deep fields) to differentiate among seeding mechanisms and to constrain the value of the isolation distance. Detection of just a small number of SMBHs at \(z\gtrsim 8\) would begin to discriminate between the fiducial HMT scheme and the Pop III.1 model. ## Acknowledgements We thank Nilanjan Banik for helpful comments and useful discussions. JS thanks Vieri Cammelli and Jacopo Salvalaggio for numerous discussions regarding the simulations and the support of the computing centre of INAF-Osservatorio Astronomico di Trieste, under the coordination of the CHIPP project (Bertocco et al., 2020; Taffoni et al., 2020). JCT acknowledges support from ERC Advanced Grant MSTAR. ## Data Availability The data underlying this article will be shared on reasonable request to the corresponding author.
2310.08674
Pay Attention to How You Drive: Safe and Adaptive Model-Based Reinforcement Learning for Off-Road Driving
Autonomous off-road driving is challenging as risky actions taken by the robot may lead to catastrophic damage. As such, developing controllers in simulation is often desirable as it provides a safer and more economical alternative. However, accurately modeling robot dynamics is difficult due to the complex robot dynamics and terrain interactions in unstructured environments. Domain randomization addresses this problem by randomizing simulation dynamics parameters, however this approach sacrifices performance for robustness leading to policies that are sub-optimal for any target dynamics. We introduce a novel model-based reinforcement learning approach that aims to balance robustness with adaptability. Our approach trains a System Identification Transformer (SIT) and an Adaptive Dynamics Model (ADM) under a variety of simulated dynamics. The SIT uses attention mechanisms to distill state-transition observations from the target system into a context vector, which provides an abstraction for its target dynamics. Conditioned on this, the ADM probabilistically models the system's dynamics. Online, we use a Risk-Aware Model Predictive Path Integral controller (MPPI) to safely control the robot under its current understanding of the dynamics. We demonstrate in simulation as well as in multiple real-world environments that this approach enables safer behaviors upon initialization and becomes less conservative (i.e. faster) as its understanding of the target system dynamics improves with more observations. In particular, our approach results in an approximately 41% improvement in lap-time over the non-adaptive baseline while remaining safe across different environments.
Sean J. Wang, Honghao Zhu, Aaron M. Johnson
2023-10-12T19:20:32Z
http://arxiv.org/abs/2310.08674v1
Pay Attention to How You Drive: Safe and Adaptive Model-Based Reinforcement Learning for Off-Road Driving ###### Abstract Autonomous off-road driving is challenging as risky actions taken by the robot may lead to catastrophic damage. As such, developing controllers in simulation is often desirable as it provides a safer and more economical alternative. However, accurately modeling robot dynamics is difficult due to the complex robot dynamics and terrain interactions in unstructured environments. Domain randomization addresses this problem by randomizing simulation dynamics parameters, however this approach sacrifices performance for robustness leading to policies that are sub-optimal for any target dynamics. We introduce a novel model-based reinforcement learning approach that aims to balance robustness with adaptability. Our approach trains a System Identification Transformer (SIT) and an Adaptive Dynamics Model (ADM) under a variety of simulated dynamics. The SIT uses attention mechanisms to distill state-transition observations from the target system into a context vector, which provides an abstraction for its target dynamics. Conditioned on this, the ADM probabilistically models the system's dynamics. Online, we use a Risk-Aware Model Predictive Path Integral controller (MPPI) to safely control the robot under its current understanding of the dynamics. We demonstrate in simulation as well as in multiple real-world environments that this approach enables safer behaviors upon initialization and becomes less conservative (i.e. faster) as its understanding of the target system dynamics improves with more observations. In particular, our approach results in an approximately 41% improvement in lap-time over the non-adaptive baseline while remaining safe across different environments. model-based reinforcement learning, robust control, adaptive control, sim2real ## I Introduction Autonomous off-road driving has the potential to revolutionize applications such as environmental monitoring, planetary exploration, and agricultural automation by enabling robots to reach remote and challenging terrains [1, 2, 3, 4]. However, developing autonomous controllers for off-road driving can be challenging due to the dangerous nature of driving over uneven, unpredictable, and unstructured terrains. Inappropriate or misjudged actions can cause substantial damage to the robot, requiring expensive and time-intensive recovery and repair efforts. Consequently, simulation has become instrumental in the development and validation of off-road driving algorithms. Beyond offering a risk-free environment for testing, simulations can operate faster than real-time, benefit from parallelization, and conduct trials autonomously. Simulation has been especially crucial in the development of model-free reinforcement learning algorithms [5, 6, 7], which aim to directly optimize a policy over many trials. However, the performance of policies trained and validated in simulation does not always transfer to the real world. This discrepancy arises from the "reality gap" - the inevitable differences between the simulated environment and the real world. Addressing this challenge requires effectively translating simulation-trained policies to the real world, known as the "sim2real" transfer problem. While some methods aim to minimize the reality gap [8, 9, 10, 11, 12], accurately modeling the intricate dynamics of a robot interacting with a diverse range of unstructured terrains remains challenging.
Robot dynamics are not only affected by robot properties, such as weight distribution, tire friction coefficient, and motor models, but also by unknown terrain properties including soil cohesion, dampness, or the presence of debris. Some approaches aim to train a policy that is effective on a wide range of dynamics, ideally including the dynamics of the real world system. In domain randomization [13, 10, 14, 15], simulation parameters are randomized during policy training to make the policy robust against variations in system dynamics. However, this robustness comes at the expense of conservative performance, as the policy is not specifically tailored towards any particular system but generalized to all possible systems. Fig. 1: Method Overview: The System Identification Transformer (SIT) and Adaptive Dynamics Model (ADM) are trained with randomized simulation dynamics to gain a probabilistic understanding of any target system's dynamics. The SIT leverages an attention mechanism to condense state-transition observations from the target system into a compact context vector. The ADM predicts state transition distributions conditioned on robot state, action, and context vector. Online, Risk Aware MPPI chooses safe actions according to the ADM's probabilistic predictions. Alternatively, some approaches train a latent-vector-conditioned policy that can be adapted to some particular dynamics simply by identifying a suitable latent vector. In [16, 17, 18], suitable latent vectors were found through optimization techniques such as CMA-ES [19]. Although these approaches can tailor the policy towards the particular system, they still require trial and error to refine the policy, and may be unsafe while the policy is being refined. In [8, 20], an auxiliary neural network is used to rapidly identify a suitable latent vector given a short fixed-length horizon of prior states and actions. While this allows for faster adaptation, the fixed-horizon input only utilizes recent observations for latent vector inference. Furthermore, the model-free nature of these methods precludes any interpretability with respect to the adaptation process or the resultant policy. We propose a novel framework for sim2real transfer that balances robustness with adaptability. Our method follows the model-based reinforcement learning (MBRL) paradigm, where a probabilistic predictive dynamics model is first trained and then used for decision making. We train the model in simulation with varying simulation parameters to make our model robust across a variety of system dynamics. Similar to prior methods [8, 20], we train a neural network to extract a latent context vector to help adapt the policy to the target system's particular dynamics. In our approach, this neural network, called the System Identification Transformer (SIT), uses attention mechanisms to distill state-transition observations from the particular system into a context vector understanding of its particular dynamics. Unlike other approaches that condition a policy on this context vector, our approach instead conditions a dynamics model on the context vector. Given this context vector, the Adaptive Dynamics Model (ADM) probabilistically models the system's dynamics, capturing uncertainty both from the system's inherent stochasticity and from ambiguities due to insufficient state-transition observations.
Online, we use a Risk-Aware Model Predictive Path Integral (RA-MPPI) controller [21] to safely control the robot under its current understanding of the dynamics. The remainder of this paper aims to validate the following hypotheses: 1. Our proposed approach enables safer control in terms of the number of constraint violations, even when there is insufficient historical observation data (e.g. upon initialization). 2. Leveraging the attention mechanism to extract context allows for continual improvement of the adapted policy (i.e. better lap times) as the number of state-transition observations increases. 3. Using a risk-aware MPPI controller reduces the number of constraint violations compared to a risk-unaware controller with the same SIT and ADM models. ## II Probabilistic Predictive Dynamics Model We formulate the autonomous off-road driving problem as a distribution of Markov decision processes (MDPs), where each real world environment is represented by a single MDP. For a given environment \(i\), the problem is defined as \((\mathcal{S},\mathcal{A},\mathcal{P}_{i},\mathcal{C}_{i})\), where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action space, \(\mathcal{P}_{i}(s_{t+1}|s_{t},a_{t})\) is the stochastic discrete-time transition dynamics from \(s_{t}\in\mathcal{S}\) to \(s_{t+1}\in\mathcal{S}\) under action \(a_{t}\in\mathcal{A}\), and \(\mathcal{C}_{i}(s,a)\) is the cost function for a given state-action pair. In this formulation, the state and action spaces are shared between environments, but the transition dynamics and cost function are unique to each environment. The function \(\mathcal{P}_{i}\) is a member of the function space \(\mathcal{F}\) that comprises all possible stochastic transition functions. We define \(\mathcal{W}\) as the distribution over the function space \(\mathcal{F}\) which encompasses all potential dynamics functions the robot might encounter in the real world. Note that it is impossible to perfectly simulate the unknown dynamics \(\mathcal{P}_{i}\) for any given real world environment \(i\), let alone the distribution \(\mathcal{W}\) of all real world dynamics. We instead define a proxy distribution of dynamics in simulation, \(\hat{\mathcal{W}}\), such that \(\text{supp}(\mathcal{W})\subseteq\text{supp}(\hat{\mathcal{W}})\). That is, all of the true dynamics in \(\mathcal{W}\) lie within the range of dynamics functions represented in \(\hat{\mathcal{W}}\). Using many cheap simulations sampled from \(\hat{\mathcal{W}}\), we train a policy that can safely adapt to the particular system dynamics within \(\text{supp}(\hat{\mathcal{W}})\), which includes all real world systems lying in \(\mathcal{W}\). Following the model-based reinforcement learning paradigm, we train a predictive model to approximate the probabilistic transition dynamics of any given system. The predictive model consists of two key components: the System Identification Transformer (SIT) and the Adaptive Dynamics Model (ADM). This model is then utilized for decision making, specifically using MPPI with a Conditional Value-at-Risk cost to drive the robot safely given the stochastic predictions from the predictive model.
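In practice, a draw from the proxy distribution \(\hat{\mathcal{W}}\) corresponds to one simulated system with randomized physical parameters. A minimal sketch of such a sampler is shown below; the parameter names and ranges are illustrative placeholders rather than the values used in the paper (the actual randomized quantities are listed in Sec. IV).

```python
import random

def sample_system_parameters(rng: random.Random) -> dict:
    """Draw one simulated system, i.e. one member of supp(W_hat).

    Ranges are illustrative; Sec. IV randomizes link dimensions and
    inertia, command scaling, motor/PID values, contact parameters,
    and suspension parameters.
    """
    return {
        "tire_friction": rng.uniform(0.4, 1.2),
        "mass_scale": rng.uniform(0.8, 1.2),
        "steering_scale": rng.uniform(0.7, 1.3),
        "throttle_scale": rng.uniform(0.7, 1.3),
        "suspension_stiffness_scale": rng.uniform(0.5, 2.0),
        "motor_torque_scale": rng.uniform(0.8, 1.2),
    }

rng = random.Random(0)
systems = [sample_system_parameters(rng) for _ in range(100)]
```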
_System Identification Transformer (SIT):_ The SIT, denoted by \(\mathcal{T}_{\theta}\), identifies the dynamics of a given target system by analyzing prior state-transition observations collected on that system, denoted by \(\mathcal{H}\), and extracting relevant information about the target system's dynamics into a latent context vector, denoted as \(c\), \[c_{t}=\mathcal{T}_{\theta}(\mathcal{H}_{t}) \tag{1}\] In this formulation, the state-transition observations collected for a target system at time \(t\) are \[\mathcal{H}_{t}=\{(s_{i},a_{i},s_{i+1})|i<t-1\}. \tag{2}\] We use a transformer network [22] for the SIT due to several advantages it offers. Transformers can natively accommodate sequences of varying lengths by utilizing self-attention mechanisms to selectively focus on specific segments of the input sequence. These advantages are crucial for our application since state-transition observation sequences expand with the system's run-time. Furthermore, not all state-transition observations are of equal significance (e.g., periods when the robot remains stationary may offer minimal insights), so this selective focus ensures the extracted context is most representative of the system's dynamics. The SIT's architecture mirrors the encoder component from [22]. It comprises a series of identical layers, each featuring a multi-head self-attention sub-layer followed by a position-wise, fully connected feed-forward network sub-layer. Each sub-layer incorporates a residual connection [23] followed by a layer normalization [24]. Unlike the original design, we opted not to use positional encoding for the input sequences. In our application, the order of (state, action, state-transition) observations is unimportant, and incorporating positional encoding negatively impacted performance. Finally, we aggregated the vector outputs from the last layer by taking their mean, resulting in a single context vector. This compact representation, \(c\in\mathbb{R}^{32}\) for our implementation, encapsulates the essence of all prior state-transition observations. _Adaptive Dynamics Model (ADM):_ The ADM provides a probabilistic understanding of the robot's dynamics based on the context vector extracted by the SIT. The ADM, denoted as \(\mathcal{P}_{\theta}(s_{t+1}|s_{t},a_{t},c_{t})\), is trained to predict state-transition distributions conditioned on the robot's current state, action, and context vector \(c_{t}\) extracted by the SIT. By predicting state-transitions as probability distributions, the ADM can capture uncertainty inherent to the non-deterministic system as well as ambiguities resulting from limited state-transition observations. Similar to [25], the adaptive dynamics model can be used to predict a trajectory distribution for the robot by sequentially iterating through each time step of the prediction horizon and chaining samples from the predicted state-transition distributions, as is done in Algorithm 1. We chose to use a Long Short-Term Memory (LSTM) architecture [26], which inherently captures temporal dependencies across state-transition sequences. For our implementation, the LSTM is followed by a fully connected network that predicts a multivariate Gaussian state-transition distribution, parameterized by its mean and the lower triangular terms of the LU decomposition of its covariance matrix.
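As a concrete illustration of this architecture, the following PyTorch sketch shows a SIT-like encoder: no positional encoding, mean pooling over the token outputs, and a 32-dimensional context vector. All layer sizes other than the context dimension are illustrative assumptions, not the paper's actual hyperparameters.

```python
import torch
import torch.nn as nn

class SystemIdentificationTransformer(nn.Module):
    """SIT-like encoder sketch: embeds (s, a, s') triples as tokens,
    applies a transformer encoder without positional encoding (the
    order of observations is irrelevant), and mean-pools into c."""

    def __init__(self, obs_dim: int, d_model: int = 64,
                 n_layers: int = 2, context_dim: int = 32):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.to_context = nn.Linear(d_model, context_dim)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, seq_len, obs_dim); seq_len may vary per call
        tokens = self.encoder(self.embed(history))
        return self.to_context(tokens.mean(dim=1))  # context c in R^32

# Illustrative usage: 12-dimensional states and 2-dimensional actions.
sit = SystemIdentificationTransformer(obs_dim=2 * 12 + 2)
c = sit(torch.randn(1, 50, 2 * 12 + 2))  # 50 observations -> (1, 32)
```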
```
Input  : initial state s_{t0}; context vector c_{t0} = T_theta(H_{t0});
         candidate actions a_{t0}, a_{t1}, ..., a_{tf};
         number of stochastic evaluations N; confidence level alpha
Output : CVaR cost
for j <- 1 to N do
    s_hat_{t0} <- s_{t0}
    J_j <- 0
    for t <- t0 to tf do
        J_j <- J_j + C(s_hat_t, a_t)
        sample s_hat_{t+1} ~ P_theta(. | s_hat_t, a_t, c_{t0})
return average of the top ceil(alpha * N) values of J
```
**Algorithm 1** Calculating CVaR Cost ## III Risk-Aware Model Predictive Path Integral Control In this section, we describe how the controls can be made robust against the uncertainty in the probabilistic output of the SIT and ADM. This allows the robot to drive safely even when it is unsure about its dynamics, while improving performance as its understanding improves with more state-transition observations. _Track Driving Problem:_ For our application, the robot is tasked with driving down different tracks. Each track is defined by a \(path\) (its center line) and a fixed width \(w\). Given \(path\), we structure the task as the following constrained optimization problem, \[\underset{a_{t_{0}},\dots,a_{t_{f}}}{\text{maximize}} L_{path}(s_{t_{f}+1})\] (3) subject to: \[s_{t+1}\sim\mathcal{P}_{i}(s_{t+1}|s_{t},a_{t}) \tag{4}\] \[D_{path}(s_{t})\leq w\] (5) \[|\tilde{s}_{t,lateral}|\leq A, \tag{6}\] where \(L_{path}(s)\) denotes the distance of state \(s\) along \(path\), \(D_{path}(s)\) denotes the distance of state \(s\) from \(path\), and \(\tilde{s}_{t,lateral}\) denotes the lateral component of the robot's acceleration (calculated through numerical differentiation). Intuitively, the robot's task is to make as much progress down the track as possible (3), while staying on track (5) and keeping the lateral acceleration under a threshold to prevent it from rolling over (6), subject to the stochastic dynamics (4). _Robust Controls:_ While numerous methods exist for robust control of systems with probabilistic dynamics, e.g. [27, 28], we use Model Predictive Path Integral (MPPI) [29] with a Conditional Value-at-Risk (CVaR) cost to avoid risky actions, similar to [21]. MPPI is a variant of Model Predictive Control (MPC) that relies on a sampling-based approach for trajectory optimization. During each MPPI optimization iteration, candidate action sequences are sampled from a distribution centered around the previous solution. The cost associated with each candidate action sequence is evaluated by simulating the system with a predictive model. The solution is then updated by weighting the candidate actions based on their costs. To minimize constraint violations within MPPI, we use the relaxed logarithmic barrier function introduced in [30]. This function reformulates a constraint of the form \(z\geq 0\) into the following additional cost term: \[\hat{B}(z) =\begin{cases}-\ln(z)&z>\delta\\ \beta_{e}(z;\delta)&z\leq\delta\end{cases} \tag{7}\] \[\beta_{e}(z;\delta) =\exp{(1-\frac{z}{\delta})}-1-\ln\delta \tag{8}\] In our approach, we enhance the robustness of MPPI against uncertainties in system dynamics by incorporating a CVaR cost, Algorithm 1. The CVaR cost quantifies the expected cost in the worst \(\alpha\) percent of scenarios. To calculate the CVaR cost for each candidate action sequence, we perform multiple trajectory simulations using our stochastic dynamics model (ADM) and average the cost of the worst-performing trajectories. This enables the optimizer to be risk-aware when choosing actions.
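A plain-Python transcription of Algorithm 1 is given below as a reference sketch; `dynamics_model.sample` stands in for drawing a next state from the ADM's predicted transition distribution, and `stage_cost` for the running cost including the barrier terms.

```python
import math
import numpy as np

def cvar_cost(s0, context, actions, dynamics_model, stage_cost,
              n_rollouts: int = 32, alpha: float = 0.1) -> float:
    """CVaR of the trajectory cost (Algorithm 1): the mean cost of the
    worst ceil(alpha * N) stochastic rollouts of one action sequence."""
    totals = np.empty(n_rollouts)
    for j in range(n_rollouts):
        s, total = s0, 0.0
        for a in actions:
            total += stage_cost(s, a)
            # One sample from the ADM's predicted transition distribution.
            s = dynamics_model.sample(s, a, context)
        totals[j] = total
    k = math.ceil(alpha * n_rollouts)
    return float(np.sort(totals)[-k:].mean())  # average of worst k rollouts
```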
## IV Training in Simulation We train the SIT and ADM solely in simulation. However, instead of using one simulated system, we sample a large number of simulated systems from the distribution \(\hat{\mathcal{W}}\), created by randomly varying physical parameters in simulation. By doing so, we train the SIT and ADM to adapt to a wide variety of systems, including real world systems from the distribution \(\mathcal{W}\). During training, we cycle between a data collection phase and a model training phase. _Data Collection:_ During the data collection phase, we first generate a set of new systems in simulation using PyBullet. To generate a new system, we randomize link dimensions, link inertial terms, scaling of steering and throttle commands, motor torque and PID values, contact parameters (friction, stiffness, and damping), and suspension parameters (limits, stiffness, damping). For each system, we collect a set of trajectories by driving the system using the current SIT and ADM models within the Risk Aware MPPI framework. At each time step during driving, the policy is adapted to the particular system by feeding all prior data collected on that system into the SIT. _Neural Network Training:_ During the model training phase, we sample a system and time step from the dataset and use the SIT and ADM to predict the state-transition given the robot's current state, action, and all state-transition observations collected on the particular system prior to that time step. We update the neural network parameters of the SIT and ADM using a negative log-likelihood loss with an Adam optimizer [31]. ## V Experimental Results We compare our method against different baselines in simulation and on a real world robot. In simulation, we run large statistical tests comparing the performance metrics of the different approaches on newly generated systems and tracks, none of which were seen during training. On the real world system, we evaluate whether the trained model and resultant policy can safely adapt to different real world systems. We vary the dynamics of the real world system by changing the robot's configuration and varying the type of terrain used. Both the simulated and real world systems use a four-wheeled robot with flexible solid-axle suspension and all-wheel steering. For the real world system, MPPI control was run on board at 10 Hz using a NVIDIA GeForce RTX 2060 GPU. ### _Fast and Continual Adaptation to New Dynamics_ In the first experiment, we evaluate this method's ability to generate a safe and effective policy for a new system upon initialization and then continually adjust that policy to better adapt to the target system. For each newly generated system, we run trials over randomly generated tracks, starting with no state-transition observations. The new state-transition observations created by driving the system are collected and used to adapt the model at every time step. For the baseline comparison, we use a model-based reinforcement learning policy where the neural network dynamics model is reinitialized for each new system and trained using only data collected on that particular system. In this baseline approach, the dynamics model uses the same architecture as our Adaptive Dynamics Model, but is given a fixed zero vector for the context input. We collect training data by driving the robot using the baseline model and retraining the model every 250 time steps.
We evaluated the performance of both methods as a function of the number of time steps collected for training or adaptation. We fix the models created given different amounts of data and use them to drive the robot down a new test track. The test track is fixed between all methods and models for a particular system, but varied between the different systems or trials. Note that for our method, we allowed the model to continue adapting on the test track run, since adaptation involved simple SIT inference, which could be computed at each time step. During each evaluation, we record the lap time (in time steps of 0.1 seconds) needed to complete the test track as well as the number of constraint violations (either the robot driving off track or exceeding the lateral acceleration limit). We also record the number of times the robot made no progress, or was stationary for too long, due to MPPI struggling to find a non-trivial solution. In cases where the robot makes no progress or violates the constraints, the robot is reset to the center of the track at the last progress point and allowed to continue. The average lap time and number of constraint violations for both methods across 230 systems are shown in Fig. 2. Compared to the baseline, our method was able to reach much higher levels of performance in low-data regimes. With our method, the robot was able to drive down the test track even when initialized with zero data. With the baseline method, we were unable to evaluate its performance with less than 500 time steps of training data, as the robot often could not finish the track. Fig. 2: Adaptive Method vs. Baseline Method. For the baseline, a new policy was trained on each target system. The standard error is shown by the shaded region. For each time step at fixed intervals, we take the model trained at that time step and run each system on a test track in simulation. We average the lap time and violations across all systems that completed the track. When comparing our method given zero data and the baseline given 500 time steps of data, our method had a much faster lap time and exhibited many fewer constraint violations. Furthermore, the baseline method averaged 3.4 incidents of no progress per test track run, where the robot needed to be reset due to not making any progress. In comparison, our method averaged \(0.004\) incidents. Unlike our approach, the baseline approach is impractical to deploy on real world systems due to the high number of constraint violations and resets needed in low-data regimes. This evidence supports **hypothesis 1**, since our approach enables safer control in the absence of historical observation data. For both methods, the policy's performance improved with more training data. When given 5000 time steps of data, the baseline method exhibited an average lap time of \(69.43\) time steps and averaged \(0.0087\) constraint violations. In contrast, after only 500 time steps our approach had an average lap time of \(72.17\) and averaged \(0.0043\) constraint violations. By using attention mechanisms in the SIT, our approach can use variable-length state-transition observation sequences to tailor the ADM to a particular system. This allows for continual improvement of the policy over a potentially long period of time, even as the observation sequence grows long. This is shown in our experiment (Fig. 2), where our method exhibits gradual performance improvements from 0 to 500 time steps of data.
Furthermore, at 500 time steps of data, the performance of our method is comparable to the performance limits of training a policy from scratch for the particular system. This supports **hypothesis 2**, since the adaptive method continually improves in lap-time performance as the number of state-transition observations increases. ### _Safety During Adaptation_ We evaluated our method's ability to remain safe during the initial periods of adaptation by comparing it to a baseline that did not consider uncertainty in MPPI. During MPPI, this baseline calculated the cost of an action sequence by predicting the resulting trajectory using the deterministic transition model \(\hat{s}_{t+1}=\mathbb{E}[\mathcal{P}_{\theta}(\hat{s}_{t+1}|\hat{s}_{t},a_{t}, c_{t})]\). This is in contrast to our method, which calculates a CVaR cost based on predicting multiple possible trajectories under the stochastic dynamics \(\mathcal{P}_{\theta}(s_{t+1}|s_{t},a_{t},c_{t})\), as described in Sec. III. We compared the two methods by generating 1000 new systems in simulation and one track per system. For each system, we use both methods to drive the robot down the same track 5 times, starting with zero state-transition observations on the first run and adapting the model at each time step throughout the 5 runs. For the two methods, we plot the average lap time and number of constraint violations over the 5 runs, across the 1000 systems, in Fig. 3. Our method exhibited far fewer constraint violations than the baseline method that did not use risk-aware MPPI. The average number of violations among all runs was \(0.015\) for our method and \(0.49\) for the risk-unaware MPPI method. The risk-unaware MPPI method exhibited more violations on the first run, with an average of \(0.59\), than on the last run, with an average of \(0.45\), due to the model adapting and improving. For both methods, the lap time dramatically improved from the first run to the second run, but had minimal improvements after the second run. We attribute this to the robot driving down the same track for all runs, leading to saturation of the useful information that could be extracted after the first run. The risk-unaware MPPI method had a significantly faster lap time than the risk-aware MPPI method. However, it achieved faster lap times by driving aggressively off track and leveraging the penalty-free resets to the middle of the track whenever a constraint was violated. This evidence supports **hypothesis 3**, since the risk-aware MPPI is shown to have significantly fewer constraint violations compared to the risk-unaware version. ### _Sim2real Transfer_ In this experiment, we evaluated the ability of our method to transfer to real world dynamics, Fig. 4. As a baseline, we trained a model-based reinforcement learning policy in simulation using only the fixed nominal dynamics (simulation parameters were not varied). The training procedure for this baseline method closely followed that of Sec. V-A. We then ran the policies from both methods on a real world robot. Between trials, we introduced variations to the system's dynamics by changing the terrain type (concrete, dirt, and gravel) and the robot's configuration, by changing the scaling of steering and throttle commands as well as swapping the standard rubber tires with low-friction PLA 3D-printed tires. For each new system dynamics, we reinitialized our method and allowed it to adapt to the new system's dynamics. The baseline method was fixed and therefore was not retrained whenever the system changed.
In total, we ran 10 trials of each. For each trial, we used both methods to drive the robot down a fixed track 5 times. For our adaptive method, the robot was given no state-transition observations at the start of the first run, but was allowed to adapt using the collected observations at each time step throughout the 5 runs. Fig. 3: Risk Aware vs Risk Unaware MPPI with Adaptive Model. Lap time and number of violations per run are shown, with the average over all systems as the solid line and the standard error shown as the shaded region. For the baseline method, performing more runs had no effect since there was no mechanism for adaptation. As such, we averaged the performance over all runs for the baseline method. For all runs, the robot was automatically stopped anytime a constraint was violated and manually placed back on the center of the track. For every reset, we assigned a 10 second penalty, as the manual resets usually took longer than 10 seconds. The penalized lap times for both methods are shown in Fig. 5. Our method completed all runs with a 100% success rate, where success is defined as completing the track with no constraint violations. This is much higher than the baseline, which had a 40% success rate. Furthermore, we ran a paired t-test between the first and second runs' lap times for our method and found a significant improvement for the second run, with a p value of 0.017. However, none of the successive runs showed any further significant improvement (\(p<0.05\)) over the second run. Again, we attribute this to the fixed track leading to saturation of the useful adaptation information after the first run. For the baseline method, which used a non-adaptive model, there was no statistically significant difference in lap times between runs. This provides additional evidence for **hypotheses 1** and **2**, since the adaptive approach is shown to remain safe in low-data regimes, while continually improving as it collects more observation data across different real-world environments. ## VI Discussion & Conclusion In this paper, we propose a novel sim2real transfer framework that balances robustness with adaptability. Our approach trains two neural network models, the System Identification Transformer (SIT) and the Adaptive Dynamics Model (ADM), in simulation while randomizing simulation dynamics parameters. The SIT leverages attention mechanisms to distill state-transition observations collected on the target system into a context vector, which succinctly encodes knowledge about the particular system's dynamics. The ADM predicts state-transition distributions given the robot's current state, action, and context vector from the SIT. Together, the SIT and ADM capture a probabilistic understanding of a target system's dynamics from state-transition observations on the target system. In real time, our framework utilizes MPPI combined with a CVaR cost to safely control the system under its current understanding of dynamics. Our approach ensures safe control even with sparse observations by capturing a probabilistic understanding of the robot's dynamics, thereby enhancing control robustness. Furthermore, our method facilitates continual adaptation and performance enhancement as the robot operates and accumulates more state-transition observations. This adaptability stems from the attention mechanisms in the SIT, which can process variable-length observations and focus on pertinent segments of extended sequences to distill insights about the system's dynamics.
In our experiments, both in simulation and in the real world, we demonstrate the effectiveness of our approach in safely driving unseen systems right from initialization, with zero state-transition observations. Moreover, as more state-transition observations were gathered, our method exhibited marked performance enhancements, indicating its adaptability to the dynamics of the target system. This adaptability was particularly evident in trials where observations were collected across different tracks. Each track appeared to enrich the system's understanding, subsequently elevating its performance. One limitation of our approach is the presumption of static system dynamics during execution. However, in real-world settings, a robot's dynamics can often change due to transitions between different terrains, wear and tear of the hardware, and more. Future work could incorporate mechanisms to detect these dynamic shifts and subsequently re-initialize the adaptation process. Additionally, refinements to the current SIT could lead to potential improvements in adaptation to such dynamic changes. Fig. 4: Sim2real Experiments. From top to bottom, the wheeled robot driving on dirt, concrete, and gravel. The red line indicates the predefined track for each trial. The tires were changed from compliant rubber tires to hard plastic tires in the dirt experiment shown. Fig. 5: Adaptive Model vs. Nominal Model for Sim2real. For the adaptive model, we show the average lap time of the different systems across each run. Since the nominal model cannot adapt, its performance does not differ between runs, so we plot its average across all runs and systems. The shaded region indicates the standard error.
2306.04970
Motion Planning for Aerial Pick-and-Place based on Geometric Feasibility Constraints
This paper studies the motion planning problem of the pick-and-place of an aerial manipulator that consists of a quadcopter flying base and a Delta arm. We propose a novel partially decoupled motion planning framework to solve this problem. Compared to the state-of-the-art approaches, the proposed one has two novel features. First, it does not suffer from increased computation in high-dimensional configuration spaces. That is because it calculates the trajectories of the quadcopter base and the end-effector separately in the Cartesian space based on proposed geometric feasibility constraints. The geometric feasibility constraints can ensure the resulting trajectories satisfy the aerial manipulator's geometry. Second, collision avoidance for the Delta arm is achieved through an iterative approach based on a pinhole mapping method, so that the feasible trajectory can be found in an efficient manner. The proposed approach is verified by three experiments on a real aerial manipulation platform. The experimental results show the effectiveness of the proposed method for the aerial pick-and-place task.
Huazi Cao, Jiahao Shen, Cunjia Liu, Bo Zhu, Shiyu Zhao
2023-06-08T06:59:02Z
http://arxiv.org/abs/2306.04970v1
# Motion Planning for Aerial Pick-and-Place based on Geometric Feasibility Constraints ###### Abstract This paper studies the motion planning problem of the pick-and-place of an aerial manipulator that consists of a quadcopter flying base and a Delta arm. We propose a novel partially decoupled motion planning framework to solve this problem. Compared to the state-of-the-art approaches, the proposed one has two novel features. First, it does not suffer from increased computation in high-dimensional configuration spaces. That is because it calculates the trajectories of the quadcopter base and the end-effector separately in the Cartesian space based on proposed geometric feasibility constraints. The geometric feasibility constraints can ensure the resulting trajectories satisfy the aerial manipulator's geometry. Second, collision avoidance for the Delta arm is achieved through an iterative approach based on a pinhole mapping method, so that the feasible trajectory can be found in an efficient manner. The proposed approach is verified by three experiments on a real aerial manipulation platform. The experimental results show the effectiveness of the proposed method for the aerial pick-and-place task. _Note to Practitioners--_ Aerial manipulators have attracted increasing research interest in recent years due to their potential applications in various domains. In this paper, we particularly focus on the motion planning problem of the pick-and-place of aerial manipulators. We propose a novel partially decoupled motion planning framework, which calculates the trajectories of the quadcopter base and the end-effector in Cartesian space, respectively. Geometric feasibility constraints are proposed to coordinate the trajectories to ensure successful execution. Three experiments on a real aerial manipulator platform demonstrate the effectiveness of the approach. In future research, we will address the motion planning problem of aerial manipulators in complex environments. Aerial manipulator, Delta arm, Aerial pick-and-place, Motion planning, Collision avoidance ## I Introduction An aerial manipulator is a novel type of flying robot that consists of a multirotor and a robotic arm. Due to their ability to move quickly and operate precisely in high-altitude and complex workspaces, aerial manipulators have potential applications in various domains, including transportation, inspection, and maintenance (see [1, 2, 3, 4] for recent surveys). Aerial manipulation has been studied from various aspects such as platform design [5, 6, 7], motion control [8, 9, 10], motion planning [11, 12, 13] and visual servoing [14, 15, 16, 17]. Our work focuses on the motion planning problem of aerial pick-and-place tasks, where the aerial manipulator is required to grasp and move objects in the environment (see Fig. 1). It is noted that safety and high efficiency are important to the aerial pick-and-place task. This motivates our study to focus on an effective motion planning scheme that ensures collision-free trajectories. Different from the motion planning of multirotors, the motion planning of an aerial manipulator is more challenging since the aerial manipulator has more degrees of freedom and is required to manipulate objects. Different from the motion planning of a ground mobile manipulator, the motion planning of an aerial manipulator is more challenging since the aerial manipulator flies in a 3D environment rather than a 2D environment.
In addition, the robotic arm and the multirotor base are dynamically coupled, which means their movements mutually affect each other. Existing approaches for motion planning for aerial manipulation can be classified into two categories based on the space in which planners calculate trajectories. The first category is to plan the motion of the aerial manipulator in the configuration space. In early works, the RRT* method was used to plan the path of the aerial manipulator in the configuration space without considering the dynamics [11, 12]. As a consequence, the resulting trajectory may not be executable for the aerial manipulator when its movement is fast. To address this issue, the dynamics of the aerial manipulator must be considered in motion planning. The existing methods that consider the dynamics in motion planning can be classified into three types. Fig. 1: Aerial pick-and-place by an aerial manipulator. The experimental video is available at [https://youtu.be/q709v7l2Oho](https://youtu.be/q709v7l2Oho). The first type uses a kinematics controller as a local planner in the sampling-based global planner [13]. It guarantees the feasibility of the trajectory for the real system and also enables searching for a solution directly in the reduced and more relevant task space. However, collision avoidance is not inherently embedded in the local planning, which means the result may not be collision-free. The second type uses the differential flatness principle to ensure dynamical feasibility [18]. In particular, motion planning methods for a special long-reach aerial manipulator have been proposed in [19, 20] based on this point of view. The platform in these works consists of a multirotor with a long bar extension that incorporates a lightweight dual arm at the tip. Since the dynamical feasibility constraints represented by the differential flatness are nonlinear, this type of method may be computationally expensive. The third type uses trajectory generation to ensure dynamical feasibility [21]. In the trajectory generation, the trajectories are represented by spline curves. The dynamical feasibility constraints are considered in the trajectory generation problem by utilizing the derivative property of the spline curves. Planning in the configuration space, however, suffers from high computational costs when the dimension of the space is high [22]. Unfortunately, aerial manipulators generally have many degrees of freedom (DoFs), which therefore motivates researchers to study other approaches to solve the motion planning problem. The second category of approaches directly plans the trajectory of the end-effector in the Cartesian space. The motion planning of the whole aerial manipulator is often solved practically by decoupling the flying base and the manipulator [23]. Firstly, the trajectory of the flying base approaching the manipulation position is calculated. Then, the motion of the end-effector is planned by assuming that the flying base stays in the same pose during manipulation. However, this method is conservative and inefficient in terms of energy and execution time [1]. To address this issue, the dynamic feasibility constraint must be considered in the trajectory planning of the end-effector. Therefore, a dynamically feasible task space planning method for underactuated aerial manipulators based on the differential flatness principle has been proposed in [24]. However, this method does not consider obstacle avoidance, which is generally required in real scenarios.
The above analysis reveals the limitations of the existing motion planning approaches for aerial manipulators. Planning in the configuration space incurs high computational costs due to the high DoF of aerial manipulators, while the existing methods of planning in Cartesian space do not consider obstacle avoidance, a crucial factor in real-world scenarios. To address these limitations, this paper proposes a novel framework that integrates the motion planning of both the flying base and the manipulator in a constrained workspace. The proposed algorithm is designed for an aerial manipulator consisting of a quadcopter and a Delta arm. The novelty of our approach is outlined below:

1) We propose a novel partially decoupled motion planning method for the aerial pick-and-place task. This method calculates the dynamically feasible and collision-free trajectories of the flying base and the manipulator in Cartesian space, respectively. The resulting trajectories are coordinated for successful execution. By solving the motion planning problem in Cartesian space, the high DoF of the aerial manipulator can be handled with a much lower computational load than planning in the configuration space. Compared with the existing methods that plan trajectories in the configuration space, this method does not suffer from the problem of increased computation in high-dimensional configuration spaces. Compared with the existing methods that plan trajectories in Cartesian space, the proposed method ensures that the trajectories are collision-free.

2) We propose novel geometric feasibility constraints to ensure that the trajectories of the quadcopter and the end-effector can be successfully executed. Our proposed constraints are linearly represented by the positions of the quadcopter and the end-effector, whereas the original geometry constraints are nonlinearly represented by the configuration of the aerial manipulator. By using these constraints, our method ensures that the resulting trajectories satisfy the geometry of the aerial manipulator. This is particularly important for motion planning of the aerial manipulator in Cartesian space.

3) Collision avoidance for the Delta arm is achieved through an efficient iterative approach based on a pinhole mapping method. At each iteration, a quadratic programming (QP) problem is solved to determine the collision-free trajectory for the end-effector. A collision avoidance term, designed based on the pinhole mapping method and collision check results, is formulated into the QP problem, so that the aerial manipulator is driven away from the obstacles in the local environment. Compared to collision avoidance in the configuration space [18, 21], the proposed iterative approach is faster as it is computed in Cartesian space.

The proposed algorithms are verified by three experiments on a real aerial manipulator platform in aerial pick-and-place tasks. Unlike the traditional Delta arm, the Delta arm used in this paper drives the joint angles by three four-bar linkages to magnify the control forces [25]. Experiments including collision avoidance, aerial retrieval, and aerial transport are conducted to validate the novelties. The rest of this paper is structured as follows. The problem statement and preliminaries are given in Section II. Kinematics and geometric feasibility constraints of the aerial manipulator are presented in Section III. The motion planning of the quadcopter base is proposed in Section IV. Section V gives the motion planning of the Delta arm.
Then, the experimental verification is given in Section VI. Conclusions are drawn in Section VII.

## II Problem Statement and Preliminaries

### _Problem statement_

The platform is an aerial manipulator that consists of a quadcopter and a Delta arm (see Fig. 2(a)). The base of the Delta arm is attached underneath the quadcopter. The end-effector used in this paper is a gripper mounted at the end of the Delta arm; its position can be controlled by the three actuators attached to the base of the Delta arm. The orientation of the end-effector is set to be the same as the orientation of the quadcopter [26]. The aerial manipulator has three reference frames: the inertial frame \(\Sigma_{I}\), the quadcopter body-fixed frame \(\Sigma_{B}\), and the Delta arm frame \(\Sigma_{D}\) (see Fig. 2(a)). \(\Sigma_{I}\) is an inertial frame where the \(z\)-axis is in the direction of the gravity vector. \(\Sigma_{B}\) is rigidly attached to the quadcopter base. Its origin coincides with the center of gravity of the quadcopter. \(\Sigma_{D}\) is rigidly attached to the Delta arm base at its geometric center \(\mathbf{p}_{C}\). Let \(\mathbf{p}_{B}\in\mathbb{R}^{3}\) and \(\mathbf{R}_{B}\in SO(3)\) denote the position of the quadcopter in \(\Sigma_{I}\) and the rotation matrix from \(\Sigma_{B}\) to \(\Sigma_{I}\), respectively. Let \(\mathbf{p}_{E}\in\mathbb{R}^{3}\) denote the position of the end-effector in \(\Sigma_{I}\). Then, the geometric relationship between \(\mathbf{p}_{E}\) and \(\mathbf{p}_{B}\) can be represented as \[\mathbf{p}_{E}-\mathbf{p}_{B}=\mathbf{R}_{B}\mathbf{p}_{E}^{B}, \tag{1}\] where \(\mathbf{p}_{E}^{B}\in\mathbb{R}^{3}\) is a function of the Delta arm's actuated joint angles \(q_{1},q_{2},q_{3}\). For a pick-and-place task, denote \(\mathbf{p}_{O}\in\mathbb{R}^{3}\) and \(\psi_{O}\in\mathbb{R}\) as the position and the orientation of the target object in \(\Sigma_{I}\), respectively, whereas \(\psi_{E}\in\mathbb{R}\) denotes the orientation angle of the end-effector. Let \(t_{G}\) denote the time at which \(\mathbf{p}_{E}\) arrives at \(\mathbf{p}_{O}\) and \(t_{\text{grip}}\) the closing time of the gripper. The goal of the motion planning for the aerial pick-and-place is to calculate the collision-free trajectories for the quadcopter and the Delta arm to move from a starting position to a feasible grasping configuration and from that grasping configuration to the end position. Given the geometric relationship, dynamical feasibility constraints, obstacles in the environment, the start position \(\mathbf{p}_{B,\text{start}}\), and the end position \(\mathbf{p}_{B,\text{end}}\), the resulting trajectories must be collision-free and satisfy \(\mathbf{p}_{E}(t)=\mathbf{p}_{O}\) and \(\psi_{E}(t)=\psi_{O}\) for \(t\in[t_{G},t_{G}+t_{\text{grip}}]\).

### _Overview of the proposed motion planning method_

The proposed motion planning method is partially decoupled: it calculates the trajectories of the quadcopter base \(\mathbf{p}_{B}(t)\) and the end-effector \(\mathbf{p}_{E}(t)\) in Cartesian space, respectively. The geometric feasibility constraints are proposed to coordinate the trajectories to ensure successful execution (see Section III-B for details). The overall architecture of the motion planning and control system is shown in Fig. 3. The system is decomposed into three components.

1) The first component is the motion planning of the quadcopter base. Its inputs are the positions of the object and the obstacles.
Its output is the trajectory of the quadcopter base \(\mathbf{p}_{B,\text{ref}}(t)\). The motion planning of the quadcopter base can be further decomposed into four steps. The first step is feasible grasping position calculation. Its role is to find a suitable position for the quadcopter base to allow the aerial manipulator to grasp the object. The details of this step can be seen in Section IV-A. The second step is path planning. Its role is to find a path for the quadcopter base to move from a given starting position to a feasible grasping position and from that grasping position to a given end position. In this paper, we use the A* method to calculate the path [27, Section 12.1.1]. The third step is flight corridor generation. Its role is to generate a safe flight corridor for the quadcopter base, which constrains the motion of the quadcopter base to avoid collisions. The details of this step can be seen in Section IV-B. The fourth step is trajectory generation. Its role is to calculate the trajectory of the quadcopter base based on the piecewise Bezier curve. We use the method proposed in [28] to ensure the resulting trajectory satisfies the safety, dynamical feasibility, and waypoint constraints. Compared with the existing methods for aerial manipulators (e.g., [21]), the proposed method calculates the trajectory of the quadcopter base in Cartesian space. Compared with the existing methods for the standard quadcopter (e.g., [28]), the proposed method guarantees that the aerial manipulator arrives at the feasible grasping configuration without collisions.

Fig. 2: Coordinates, revised workspace, and shape polyhedron of the aerial manipulator.

2) The second component is the motion planning of the Delta arm. Its inputs are the position of the object \(\mathbf{p}_{O}\) and \(\mathbf{p}_{B,\text{ref}}(t)\). Its output is the trajectory of the end-effector \(\mathbf{p}_{E,\text{ref}}(t)\). This motion planning method can be further decomposed into three steps. The first step is the initial condition calculation. Its role is to calculate the position, velocity, and acceleration of the end-effector at the beginning of the manipulation stage. The second step is the optimal trajectory planning of the end-effector based on the Bezier curve. Its role is to calculate the trajectory of the end-effector from the initial position to the object under several constraints. The trajectory planning of the end-effector is represented in a QP problem form. In particular, we propose geometric feasibility constraints of the aerial manipulator and encode these constraints into the QP problem to ensure the trajectories satisfy the geometry of the aerial manipulator. The third step is collision avoidance; its role is to ensure that the trajectory of the end-effector is collision-free. In this step, the collisions between the aerial manipulator and the obstacles in a local map are detected based on the GJK method. The second and third steps are run iteratively. If there is a collision, then the objective function of the QP problem in the second step is updated by a pinhole mapping method, and the second and third steps are repeated until no collision occurs. All the corresponding sections introducing these steps are listed in Fig. 3. Compared with the existing methods [21, 29], the proposed method requires less computational power since the collision avoidance of the Delta arm is achieved by an iterative approach in Cartesian space.

3) The third component is the controller of the aerial manipulator.
Its inputs are the trajectories of the quadcopter base and the end-effector. Its outputs are the total force \(f\in\mathbb{R}\) of the rotors, the torque vector \(\mathbf{\tau}\in\mathbb{R}^{3}\) of the rotors, the torque \(\tau_{G}\in\mathbb{R}\) of the gripper, and the torque vector \(\mathbf{\tau}_{M}\in\mathbb{R}^{3}\) that each actuator should generate. The controller consists of three subcomponents. The first subcomponent is an extended state observer (ESO)-based flight controller. It was proposed in our previous work [30] and uses ESOs to estimate the dynamic coupling between the aerial manipulator and the Delta arm. Its role is to generate the force \(f\) and torque vector \(\mathbf{\tau}\) for the quadcopter base so that the trajectory of the quadcopter can be tracked. The second subcomponent is the end-effector controller. Its role is to control the gripper to grasp or release objects. The third subcomponent is the Delta arm controller. Its role is to generate the torque vector \(\mathbf{\tau}_{M}\) for the Delta arm so that the trajectory of the end-effector can be tracked. The details of the Delta arm controller can be seen in our previous work [30]. The steps can be classified into offboard processes and onboard processes. In Fig. 3, steps in the small grey rectangle are done on an offboard computer, while processes in the white rectangle run onboard the aerial manipulator during flights.

### _Preliminaries on Bezier curves_

An \(n\)-th degree Bezier curve is defined by a set of control points and Bernstein polynomial bases. Let \(\mathbf{c}_{i}\in\mathbb{R}^{3},b_{i,n}(\tau)\in\mathbb{R}\) denote the \(i\)-th control point and Bernstein polynomial basis, respectively. Then, the \(n\)-th degree 3D Bezier curve is written as \(\mathbf{B}(\tau)=\sum_{i=0}^{n}\mathbf{c}_{i}b_{i,n}(\tau)\), where \[b_{i,n}(\tau)=\binom{n}{i}\tau^{i}(1-\tau)^{n-i}, \tag{2}\] where \(\tau\in[0,1]\) and \(\binom{n}{i}\) is the binomial coefficient. According to [31, Section 2.4], the derivative of the Bezier curve can be obtained by Lemma 1. In addition, the Bezier curve \(\mathbf{B}(\tau)\) is entirely confined within the convex hull defined by all its control points, which is referred to as the convex hull property (see Lemma 2).

**Lemma 1** (Derivative [31]): _Let \(\mathbf{B}^{(k)}(\tau)=\sum_{i=0}^{n-k}\mathbf{c}_{i}^{(k)}b_{i,n-k}(\tau)\) denote the \(k\)-th derivative of \(\mathbf{B}(\tau)\); then the control points of \(\mathbf{B}^{(k)}(\tau)\) can be calculated iteratively by \(\mathbf{c}_{i}^{(k)}=(n-k+1)(\mathbf{c}_{i+1}^{(k-1)}-\mathbf{c}_{i}^{(k-1)})\), where \(i=0,1,\cdots,n-k\)._

**Lemma 2** (Convex hull property [31]): _Let \(\mathbb{H}=\{a_{0}\mathbf{c}_{0}+a_{1}\mathbf{c}_{1}+\cdots+a_{n}\mathbf{c}_{n}|a_{0}+a_{1}+\cdots+a_{n}=1,a_{i}\geq 0\}\) denote the convex hull defined by all the control points; then \(\mathbf{B}(\tau)\in\mathbb{H}\) for all \(\tau\in[0,1]\)._

Fig. 3: Structure of the proposed motion planning method for aerial pick-and-place.

## III Kinematics and Geometric Feasibility Constraints

This section proposes the kinematics and geometric feasibility constraints of the aerial manipulator.
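Before developing the kinematics, the following minimal Python sketch illustrates the Bezier-curve facts of Section II-C (Lemmas 1 and 2), on which the trajectory formulations of Sections IV and V rely. This sketch is not part of the paper's implementation; the control points are hypothetical values chosen only for illustration, and the convex hull property is checked via its weaker bounding-box consequence.

```python
import numpy as np
from math import comb

def bezier(ctrl, tau):
    # Evaluate B(tau) = sum_i c_i b_{i,n}(tau), with b_{i,n} from (2).
    n = len(ctrl) - 1
    basis = np.array([comb(n, i) * tau**i * (1.0 - tau)**(n - i) for i in range(n + 1)])
    return basis @ ctrl

def derivative_ctrl(ctrl, k=1):
    # Lemma 1: control points of the k-th derivative, computed iteratively via
    # c_i^{(j)} = (n - j + 1) (c_{i+1}^{(j-1)} - c_i^{(j-1)}).
    ctrl = np.asarray(ctrl, dtype=float)
    n = len(ctrl) - 1
    for j in range(1, k + 1):
        ctrl = (n - j + 1) * (ctrl[1:] - ctrl[:-1])
    return ctrl

# Hypothetical 3D control points (illustration only).
c = np.array([[0.0, 0.0, 0.0], [0.5, 0.2, 0.1], [1.0, 0.4, 0.0], [1.5, 0.0, 0.2]])

# Lemma 2: every curve point lies in the convex hull of the control points,
# hence in particular inside their axis-aligned bounding box.
pts = np.array([bezier(c, t) for t in np.linspace(0.0, 1.0, 101)])
assert np.all(pts >= c.min(axis=0) - 1e-12) and np.all(pts <= c.max(axis=0) + 1e-12)

# Lemma 1: compare the analytic derivative with a central finite difference.
d_analytic = bezier(derivative_ctrl(c), 0.3)
d_numeric = (bezier(c, 0.3 + 1e-6) - bezier(c, 0.3 - 1e-6)) / 2e-6
assert np.allclose(d_analytic, d_numeric, atol=1e-4)
```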
### _Kinematics of the aerial manipulator_

According to (1), the time derivative of \(\mathbf{p}_{E}\) is \[\dot{\mathbf{p}}_{E}=\dot{\mathbf{p}}_{B}+\dot{\mathbf{R}}_{B}\mathbf{p}_{E}^{B}+\mathbf{R}_{B}\dot{\mathbf{p}}_{E}^{B}=\dot{\mathbf{p}}_{B}+\mathbf{R}_{B}\mathbf{R}_{D}^{B}\dot{\mathbf{p}}_{E}^{D}-[\mathbf{R}_{B}\mathbf{p}_{E}^{B}]_{\times}\mathbf{\omega}, \tag{3}\] where \(\mathbf{\omega}\in\mathbb{R}^{3}\) is the angular velocity vector of the quadcopter expressed in \(\Sigma_{B}\), and \([\cdot]_{\times}\) denotes the skew-symmetric matrix. Let \(\mathbf{p}_{C}^{B}\in\mathbb{R}^{3}\) denote the position of the center of the base in \(\Sigma_{B}\). Let \(\mathbf{p}_{E}^{B}\in\mathbb{R}^{3}\) and \(\mathbf{p}_{E}^{D}\in\mathbb{R}^{3}\) denote the positions of the end-effector in \(\Sigma_{B}\) and \(\Sigma_{D}\), respectively. The relationship between \(\mathbf{p}_{E}^{B}\) and \(\mathbf{p}_{E}^{D}\) is \[\mathbf{p}_{E}^{B}=\mathbf{R}_{D}^{B}\mathbf{p}_{E}^{D}+\mathbf{p}_{C}^{B}, \tag{4}\] where \(\mathbf{R}_{D}^{B}\in SO(3)\) is the rotation matrix from \(\Sigma_{D}\) to \(\Sigma_{B}\). The lengths of the upper and lower arms are represented by \(l_{U}\) and \(l_{L}\), as illustrated in Fig. 2(a). The circumradii of the top base and the bottom end-effector base are defined as \(r_{F}\) and \(r_{M}\), respectively. The length of the gripper is denoted as \(l_{g}\). The relationship between the end-effector position \(\mathbf{p}_{E}^{D}\) and the joint vector \(\mathbf{q}=[q_{1},q_{2},q_{3}]^{T}\in\mathbb{R}^{3}\) is \[\left\|\mathbf{p}_{E}^{D}+\mathbf{l}_{G}-\mathbf{h}_{i}\right\|^{2}=l_{L}^{2},\quad i=1,2,3, \tag{5}\] where \(\mathbf{l}_{G}=[0,0,l_{g}]^{T}\), and \[\mathbf{h}_{i}=\left[\begin{array}{c}-(r_{F}-r_{M}+l_{U}\cos q_{i})\cos[(i-1)\pi/3]\\ (r_{F}-r_{M}+l_{U}\cos q_{i})\sin[(i-1)\pi/3]\\ l_{U}\sin q_{i}\end{array}\right]. \tag{6}\] On the one hand, given a joint vector \(\mathbf{q}\), the position \(\mathbf{p}_{E}^{D}\) can be solved from (5) based on the forward kinematics. On the other hand, given a position \(\mathbf{p}_{E}^{D}\), the joint vector \(\mathbf{q}\) can be solved from (5) by the inverse kinematics. Details can be found in [32, 33]. As can be seen from Fig. 2(a), the joint angles of the Delta arm are driven by planar four-bar linkages. The relationship between the joint angles and the crank position angles can be calculated by the kinematics of the planar four-bar linkage [34, Section 3.6].

### _Geometric feasibility constraints_

Combining (1) and (4), the geometric relationship between the end-effector and the quadcopter is \[\mathbf{p}_{E}-\mathbf{p}_{B}=\mathbf{R}_{B}(\mathbf{R}_{D}^{B}\mathbf{p}_{E}^{D}+\mathbf{p}_{C}^{B}),\quad\mathbf{p}_{E}^{D}\in\mathbb{W}, \tag{7}\] where \(\mathbb{W}\) is the workspace of the Delta arm and can be calculated by the forward kinematics of the Delta arm. The workspace is approximated as a convex polyhedron [35]. Therefore, the expression of the workspace is \[\mathbb{W}=\{\mathbf{p}|\mathbf{A}_{D}\mathbf{p}\leq\mathbf{b}_{D}\}. \tag{8}\] According to (7), the range of \(\mathbf{p}_{E}-\mathbf{p}_{B}\) is determined by \(\mathbb{W}\) and \(\mathbf{R}_{B}\). We define \(\mathbf{R}_{B}=\mathbf{R}_{\psi}\mathbf{R}_{\theta,\phi}\), where \(\mathbf{R}_{\psi}\) is the rotation matrix determined by the yaw angle \(\psi\), and \(\mathbf{R}_{\theta,\phi}\) is the rotation matrix determined by the pitch angle \(\theta\) and the roll angle \(\phi\).
The yaw angle of the quadcopter is constant, i.e., \(\psi=\psi_{O}\), when the aerial manipulator is grasping or placing an object. Then, (7) is rewritten as \[\mathbf{R}_{\psi_{O}}^{T}(\mathbf{p}_{E}-\mathbf{p}_{B})=\mathbf{R}_{\theta,\phi}(\mathbf{R}_{D}^{B}\mathbf{p}_{E}^{D}+\mathbf{p}_{C}^{B}),\quad\mathbf{p}_{E}^{D}\in\mathbb{W}. \tag{9}\] To make the above equation more concise, we define \[\mathbb{W}_{\theta,\phi}=\{\mathbf{R}_{\theta,\phi}(\mathbf{R}_{D}^{B}\mathbf{p}_{E}^{D}+\mathbf{p}_{C}^{B})|\mathbf{p}_{E}^{D}\in\mathbb{W}\}. \tag{10}\] Therefore, (9) is rewritten as \(\mathbf{R}_{\psi_{O}}^{T}(\mathbf{p}_{E}-\mathbf{p}_{B})\in\mathbb{W}_{\theta,\phi}\). To linearize the geometric relationship (9), we define \(\mathbb{W}_{R}=\{\mathbf{p}|\mathbf{w}_{\min}\leq\mathbf{p}\leq\mathbf{w}_{\max}\}\) as the revised workspace, and it satisfies \(\mathbb{W}_{R}\subset\mathbb{W}_{\theta,\phi}\). Since the roll and pitch angles of the quadcopter are small when the aerial manipulator is manipulating, the bounds of \(\theta\) and \(\phi\) can be determined with several experiments. Let \(\theta_{\min},\theta_{\max}\) denote the minimum and maximum of \(\theta\). Let \(\phi_{\min},\phi_{\max}\) denote the minimum and maximum of \(\phi\). We calculate \(\mathbb{W}_{R}\) in two steps. The first step is calculating the boundaries of \(\mathbb{W}_{\theta,\phi}\). Combining (8) and (10), the expression of \(\mathbb{W}_{\theta,\phi}\) can be rewritten as \[\mathbb{W}_{\theta,\phi}=\{\mathbf{p}|\mathbf{A}_{D}(\mathbf{R}_{\theta,\phi}\mathbf{R}_{D}^{B})^{T}\mathbf{p}\leq\mathbf{b}_{D}+\mathbf{A}_{D}\mathbf{R}_{D}^{BT}\mathbf{p}_{C}^{B}\}. \tag{11}\] According to (11), we obtain the boundaries \(\mathbb{W}_{\theta=\theta_{\min},\phi=0}\), \(\mathbb{W}_{\theta=\theta_{\max},\phi=0}\), \(\mathbb{W}_{\theta=0,\phi=\phi_{\min}}\), \(\mathbb{W}_{\theta=0,\phi=\phi_{\max}}\). The second step is calculating the intersection \(\mathbb{W}_{I}\) of these sets. According to the definition of the intersection, we have \[\mathbb{W}_{I}=\{\mathbf{p}|\mathbf{A}_{D}(\mathbf{R}_{\theta_{\min},0}\mathbf{R}_{D}^{B})^{T}\mathbf{p}\leq\mathbf{b}_{D}+\mathbf{A}_{D}\mathbf{R}_{D}^{BT}\mathbf{p}_{C}^{B},\] \[\mathbf{A}_{D}(\mathbf{R}_{\theta_{\max},0}\mathbf{R}_{D}^{B})^{T}\mathbf{p}\leq\mathbf{b}_{D}+\mathbf{A}_{D}\mathbf{R}_{D}^{BT}\mathbf{p}_{C}^{B},\] \[\mathbf{A}_{D}(\mathbf{R}_{0,\phi_{\min}}\mathbf{R}_{D}^{B})^{T}\mathbf{p}\leq\mathbf{b}_{D}+\mathbf{A}_{D}\mathbf{R}_{D}^{BT}\mathbf{p}_{C}^{B},\] \[\mathbf{A}_{D}(\mathbf{R}_{0,\phi_{\max}}\mathbf{R}_{D}^{B})^{T}\mathbf{p}\leq\mathbf{b}_{D}+\mathbf{A}_{D}\mathbf{R}_{D}^{BT}\mathbf{p}_{C}^{B}\}. \tag{12}\] Since the expression of the intersection (12) is complicated, it may be inconvenient when applied to real systems. We take the largest cuboid that can be inscribed within the intersection as \(\mathbb{W}_{R}\). The cuboid can be calculated by the method proposed in [36]. Then, \(\mathbf{w}_{\min}=[w_{x,\min},w_{y,\min},w_{z,\min}]^{T}\) and \(\mathbf{w}_{\max}=[w_{x,\max},w_{y,\max},w_{z,\max}]^{T}\) are determined by the size of the cuboid. Fig. 2(b) gives an illustration of calculating the revised workspace in the pitch direction. Then, the geometric feasibility constraints are \[\mathbf{w}_{\min}\leq\mathbf{R}_{\psi_{O}}^{T}(\mathbf{p}_{E}-\mathbf{p}_{B})\leq\mathbf{w}_{\max}, \tag{13}\] where \[\mathbf{R}_{\psi_{O}}=\left[\begin{array}{ccc}\cos\psi_{O}&-\sin\psi_{O}&0\\ \sin\psi_{O}&\cos\psi_{O}&0\\ 0&0&1\end{array}\right]. \tag{14}\]
## IV Motion Planning for the Quadcopter Base

This section presents a method to generate the trajectory of the quadcopter for the aerial pick-and-place task. This method consists of four steps: feasible grasping position calculation, path planning, flight corridor generation, and Bezier curve-based trajectory generation. The purposes and relationships of these steps are given in Section II-B. In the algorithm, the path planning can be achieved by existing methods. In our work, we use the A* method to obtain the path in the 3D grid map which is used to represent the environment of the task. The Bezier curve-based trajectory generation is achieved by an existing method proposed in [28]. It bounds the positions and higher-order dynamics of the trajectory entirely within safe regions by using the Bernstein polynomial basis and formulating the trajectory generation problem as a typical convex program. Compared to the existing methods for standard quadcopters [28], the proposed method for the quadcopter base of the aerial manipulator has two novelties. First, the feasible grasping position is calculated to ensure the aerial manipulator can manipulate the object. Second, the volume of the aerial manipulator changes with the movement of the Delta arm. To address this issue, the aerial pick-and-place task is divided into two stages: the moving and manipulation stages. The flight corridors in the two stages are obtained separately. The details are shown as follows.

### _Feasible grasping position_

To grasp the object, the position of the end-effector must arrive at \(\mathbf{p}_{O}\) with an orientation angle of \(\psi_{O}\). The feasible grasping position of the quadcopter is constrained by the geometric shape of the aerial manipulator. Let \(\mathbf{p}_{B,f}\in\mathbb{R}^{3}\) denote the feasible grasping position. Let \(\mathbf{R}_{B,f}\in SO(3)\) denote the desired rotation matrix of the quadcopter base at the feasible grasping position. According to (1), the feasible grasping position of the quadcopter is \[\mathbf{p}_{B,f}=\mathbf{p}_{O}-\mathbf{R}_{B,f}\mathbf{p}_{E}^{B}. \tag{15}\] According to (15), one can conclude that \(\mathbf{R}_{B,f}\) and \(\mathbf{p}_{E}^{B}\) need to be determined before calculating \(\mathbf{p}_{B,f}\). In the manipulation stage, the yaw angle of the quadcopter is set as \(\psi_{O}\) to satisfy the grasp angle constraint of the end-effector. We assume that the roll and pitch angles of the quadcopter are small when the quadcopter base is around \(\mathbf{p}_{B,f}\). This assumption is reasonable since the motion of the quadcopter is conservative. According to the assumption, we have \(\mathbf{R}_{B,f}=\mathbf{R}_{\psi_{O}}\). To ensure the manipulability of the Delta arm, we let the end-effector stay at the center of \(\mathbb{W}_{R}\) when the aerial manipulator picks up the object. Then, (15) is rewritten as \[\mathbf{p}_{B,f}=\mathbf{p}_{O}-0.5\mathbf{R}_{\psi_{O}}(\mathbf{w}_{\min}+\mathbf{w}_{\max}). \tag{16}\] For an aerial pick-and-place task, we already have the start position \(\mathbf{p}_{B,\text{start}}\), the feasible grasping position \(\mathbf{p}_{B,f}\), and the end position \(\mathbf{p}_{B,\text{end}}\). Then, the path of the quadcopter can be obtained by the A* method.

### _Flight corridor generation_

The flight corridor is a collection of convex overlapping polyhedra that models free space and provides a connected corridor containing the resulting path.
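Before describing the corridor construction, the following minimal Python sketch illustrates the feasible-grasping-position formula (16) and the geometric feasibility check (13) from Section III-B. This is not the paper's implementation: the workspace bounds \(\mathbf{w}_{\min},\mathbf{w}_{\max}\) are the values reported in Section VI, while the object pose is a hypothetical example.

```python
import numpy as np

def rot_z(psi):
    # Yaw rotation matrix R_psi as in (14).
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def feasible_grasp_position(p_O, psi_O, w_min, w_max):
    # Eq. (16): place the base so the end-effector sits at the center of W_R.
    return p_O - 0.5 * rot_z(psi_O) @ (w_min + w_max)

def geometrically_feasible(p_B, p_E, psi_O, w_min, w_max):
    # Eq. (13): component-wise check of the linear geometric constraints.
    d = rot_z(psi_O).T @ (p_E - p_B)
    return bool(np.all(w_min <= d) and np.all(d <= w_max))

# Bounds from the experimental section; object pose is illustrative only.
w_min = np.array([-0.06, -0.06, -0.60])
w_max = np.array([0.06, 0.06, -0.40])
p_O, psi_O = np.array([0.0, -2.0, -1.24]), 0.0

p_Bf = feasible_grasp_position(p_O, psi_O, w_min, w_max)
# At the feasible grasping position, the object lies at the center of W_R.
assert geometrically_feasible(p_Bf, p_O, psi_O, w_min, w_max)
```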
A convex decomposition method proposed in [37] is adopted to generate the flight corridor by inflating the resulting path. However, this method was originally designed for a traditional quadcopter with a fixed volume, while the volume of the aerial manipulator changes with the movement of the Delta arm. Therefore, the method cannot be directly used for the aerial manipulator. To address this issue, we calculate the flight corridors in the moving and the manipulation stages, respectively. In the moving stage, the position of the end-effector is set to stay at the top point \(\mathbf{p}_{\text{top}}^{B}\in\mathbb{R}^{3}\) of the Delta arm's workspace \(\mathbb{W}_{R}\). From the definition of \(\mathbb{W}_{R}\), we have \[\mathbf{p}_{\text{top}}^{B}=\left[\begin{array}{c}0.5(w_{x,\min}+w_{x,\max})\\ 0.5(w_{y,\min}+w_{y,\max})\\ w_{z,\min}\end{array}\right]. \tag{17}\] The shape of the aerial manipulator can now be approximated as a sphere with radius \(r_{S}\). Then, we can use the convex decomposition method to generate the flight corridor in the moving stage. In the manipulation stage, we use a designed polyhedron as the flight corridor to ensure the object is reachable for the aerial manipulator (see Fig. 4). The polyhedron is constructed based on the geometric feasibility constraints (13) and is represented as \[\mathbf{w}_{\min}\leq\mathbf{R}_{\psi_{O}}(\mathbf{p}-\mathbf{p}_{B,d})\leq\mathbf{w}_{\max}. \tag{18}\] The duration time in this polyhedron is determined by the mechanical behavior of the gripper. We set the duration time as the closing time of the gripper \(t_{\text{grip}}\).

## V Motion Planning for the Delta Arm

In this section, we calculate the collision-free trajectory of the Delta arm in Cartesian space. The proposed method for the Delta arm utilizes the resulting trajectory of the quadcopter. In the moving stage, the Delta arm stays at an initial state and its end-effector stays at a fixed position \(\mathbf{p}_{\text{top}}^{B}\) relative to the quadcopter base. The position \(\mathbf{p}_{\text{top}}^{B}\) can be calculated by (17). Therefore, the Delta arm does not require additional motion planning calculations in the moving stage. Let \(t_{B}\) denote the time at which the quadcopter base enters the designed polyhedron, i.e., the beginning time of the manipulation stage. The time \(t_{B}\) can be determined by \(t_{B}=t_{G}-2v_{E,\max}/a_{E,\max}\), where \(v_{E,\max}\) and \(a_{E,\max}\) are the maximum velocity and acceleration of the end-effector, respectively. The procedure of the manipulation stage is as follows. From \(t_{B}\) to \(t_{G}\), the end-effector moves to the object. Then, the aerial manipulator keeps the position of the end-effector for the duration time \(t_{\text{grip}}\) to pick up or place the object. After picking up or placing the object, the Delta arm returns to its initial state. According to this procedure, the trajectory of the end-effector from \(t_{B}\) to \(t_{G}\) needs to be calculated. The details of calculating the trajectory from \(t_{B}\) to \(t_{G}\) are given as follows.

### _Initial condition_

The initial condition for the end-effector consists of the initial position \(\mathbf{p}_{E,t_{B}}\), the initial velocity \(\dot{\mathbf{p}}_{E,t_{B}}\), and the initial acceleration \(\ddot{\mathbf{p}}_{E,t_{B}}\). They are calculated as follows.
#### V-A1 Initial position

According to (1), the initial position \(\mathbf{p}_{E,t_{B}}\) is calculated by \[\mathbf{p}_{E,t_{B}}=\mathbf{p}_{B,t_{B}}+\mathbf{R}_{B,t_{B}}\mathbf{p}_{\text{top}}^{B}, \tag{19}\] where \(\mathbf{p}_{B,t_{B}},\mathbf{R}_{B,t_{B}}=[\mathbf{r}_{1,t_{B}},\mathbf{r}_{2,t_{B}},\mathbf{r}_{3,t_{B}}]\) denote \(\mathbf{p}_{B},\mathbf{R}_{B}\) at the time \(t_{B}\), respectively. According to (19), we calculate \(\mathbf{p}_{B,t_{B}}\) and \(\mathbf{R}_{B,t_{B}}\) to obtain \(\mathbf{p}_{E,t_{B}}\). \(\mathbf{p}_{B,t_{B}}\) can be directly obtained from the trajectory of the quadcopter. The matrix \(\mathbf{R}_{B,t_{B}}\) is calculated based on the differential flatness of the quadcopter. At the time \(t_{B}\), the yaw angle of the quadcopter base is \(\psi_{O}\) to ensure the orientation angle of the end-effector equals \(\psi_{O}\). The unit orientation vector in the ground plane is \(\mathbf{r}_{g}=[\cos\psi_{O},\sin\psi_{O},0]^{T}\). According to [38], we have \[\mathbf{r}_{3,t_{B}}=\frac{\ddot{\mathbf{p}}_{B,t_{B}}+g\mathbf{e_{3}}}{\|\ddot{\mathbf{p}}_{B,t_{B}}+g\mathbf{e_{3}}\|}, \tag{20}\] and the vectors \(\mathbf{r}_{1,t_{B}}\) and \(\mathbf{r}_{2,t_{B}}\) can be determined by \[\mathbf{r}_{2,t_{B}}=\frac{\mathbf{r}_{3,t_{B}}\times\mathbf{r}_{g}}{\|\mathbf{r}_{3,t_{B}}\times\mathbf{r}_{g}\|},\mathbf{r}_{1,t_{B}}=\mathbf{r}_{2,t_{B}}\times\mathbf{r}_{3,t_{B}}. \tag{21}\]

#### V-A2 Initial velocity and acceleration

Let \(\delta_{I}\) denote a small time step. We can calculate \(\mathbf{p}_{E,t_{B}-\delta_{I}}\) and \(\mathbf{p}_{E,t_{B}+\delta_{I}}\) according to the above method. Then, the derivatives are approximated by finite differences. The initial velocity and acceleration are \[\begin{split}\dot{\mathbf{p}}_{E,t_{B}}&=(\mathbf{p}_{E,t_{B}}-\mathbf{p}_{E,t_{B}-\delta_{I}})/\delta_{I},\\ \ddot{\mathbf{p}}_{E,t_{B}}&=(\mathbf{p}_{E,t_{B}+\delta_{I}}-2\mathbf{p}_{E,t_{B}}+\mathbf{p}_{E,t_{B}-\delta_{I}})/\delta_{I}^{2}.\end{split} \tag{22}\]

### _Optimal trajectory planning_

The trajectory of the end-effector is calculated by an iterative approach. The trajectory planning of the end-effector is formulated as a QP problem. At each iteration, the objective function of the QP problem is updated and the QP problem is solved to calculate the collision-free trajectory of the end-effector. Let \(\mathbf{p}_{E,\text{ref}}(t),t\in[t_{B},t_{G}]\) denote the trajectory. An \(n_{E}\)-th order Bezier curve is adopted to represent the trajectory: \[\mathbf{p}_{E,\text{ref}}(\tau_{M})=\sum_{i=0}^{n_{E}}\mathbf{c}_{E,i}b_{i,n_{E}}(\tau_{M}), \tag{23}\] where \(\mathbf{c}_{E,i}=[c_{E,x,i},c_{E,y,i},c_{E,z,i}]^{T}\in\mathbb{R}^{3}\) and \(b_{i,n_{E}}(\tau_{M})\) are the \(i\)-th control point and Bernstein polynomial basis of the Bezier curve, respectively, and \(\tau_{M}=(t-t_{B})/(t_{G}-t_{B})\). We denote the parameter vector of the trajectory as \(\underline{\mathbf{c}}_{E}=[c_{E,x,0},\ldots,c_{E,x,n_{E}},c_{E,y,0},\ldots,c_{E,y,n_{E}},c_{E,z,0},\ldots,c_{E,z,n_{E}}]^{T}\).
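The initial-condition computation of Section V-A ((19)-(22)) supplies the endpoint data for the QP formulated next; the following Python sketch summarizes it. This is an illustration, not the paper's code: the trajectory samplers `p_B_traj` and `acc_B_traj` are assumed to be callables supplied by the quadcopter planner, and the sign conventions follow (20)-(21).

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def base_rotation(acc_B, psi_O):
    # Flatness-based attitude (20)-(21), following the paper's sign convention.
    e3 = np.array([0.0, 0.0, 1.0])
    r_g = np.array([np.cos(psi_O), np.sin(psi_O), 0.0])
    r3 = acc_B + G * e3
    r3 = r3 / np.linalg.norm(r3)
    r2 = np.cross(r3, r_g)
    r2 = r2 / np.linalg.norm(r2)
    r1 = np.cross(r2, r3)
    return np.column_stack([r1, r2, r3])  # R_{B,t_B} = [r1, r2, r3]

def initial_conditions(p_B_traj, acc_B_traj, p_top_B, psi_O, t_B, dt=1e-3):
    # Initial position via (19), then finite-difference velocity/acceleration (22),
    # with dt playing the role of the small step delta_I.
    def p_E(t):
        return p_B_traj(t) + base_rotation(acc_B_traj(t), psi_O) @ p_top_B
    p0 = p_E(t_B)
    v0 = (p0 - p_E(t_B - dt)) / dt
    a0 = (p_E(t_B + dt) - 2.0 * p0 + p_E(t_B - dt)) / dt**2
    return p0, v0, a0

# Illustrative usage with a hypothetical hovering base trajectory.
p0, v0, a0 = initial_conditions(
    p_B_traj=lambda t: np.array([0.0, -2.0, -0.74]),
    acc_B_traj=lambda t: np.zeros(3),
    p_top_B=np.array([0.0, 0.0, -0.5]),
    psi_O=0.0, t_B=10.0)
```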
The QP problem is then formulated as \[\begin{split}\min& J=\underline{\mathbf{c}}_{E}^{T}\mathbf{Q}_{O,E}\underline{\mathbf{c}}_{E}+\mathbf{q}_{O,E}^{T}\underline{\mathbf{c}}_{E}\\ \text{s.t.}&\mathbf{A}_{E,eq}\underline{\mathbf{c}}_{E}=\mathbf{b}_{E,eq},\\ &\mathbf{A}_{E,ie}\underline{\mathbf{c}}_{E}\leq\mathbf{b}_{E,ie},\end{split} \tag{24}\] where \(\mathbf{Q}_{O,E}\in\mathbb{R}^{3(n_{E}+1)\times 3(n_{E}+1)}\) is the Hessian matrix of the objective function and is positive semidefinite, \(\mathbf{q}_{O,E}\in\mathbb{R}^{3(n_{E}+1)}\) is a vector, \(\mathbf{A}_{E,eq}\in\mathbb{R}^{18\times 3(n_{E}+1)}\) and \(\mathbf{A}_{E,ie}\in\mathbb{R}^{(9n_{E}+4)\times 3(n_{E}+1)}\) are constraint matrices, and \(\mathbf{b}_{E,eq}\in\mathbb{R}^{18}\) and \(\mathbf{b}_{E,ie}\in\mathbb{R}^{9n_{E}+4}\) are constraint vectors. The linear equality constraint (\(\mathbf{A}_{E,eq}\underline{\mathbf{c}}_{E}=\mathbf{b}_{E,eq}\)) encodes the endpoint constraints. The linear inequality constraint (\(\mathbf{A}_{E,ie}\underline{\mathbf{c}}_{E}\leq\mathbf{b}_{E,ie}\)) consists of the dynamical feasibility, geometric feasibility, and grasp constraints. These constraints are adopted to ensure the solution of the problem (24) is collision-free and can be executed successfully. Definitions and roles of the objective and the constraints are as follows.

#### V-B1 Objective function

The objective function is denoted as \(J=J_{J}+J_{O}\), where \(J_{J}\) is the cost function to minimize the jerk along the trajectory, and \(J_{O}\) is a penalty function for the collision. The details of the two terms are \[J_{J}=\sum_{i=1}^{m}\int_{T_{i-1}}^{T_{i}}(j_{x}^{2}(t)+j_{y}^{2}(t)+j_{z}^{2}(t))dt, \tag{25}\] \[J_{O}=\sum_{k=1}^{n_{O}}\lambda_{k}\sum_{i=0}^{n_{E}}(c_{E,x,i}-x_{M,k})^{2}+(c_{E,y,i}-y_{M,k})^{2}+(c_{E,z,i}-z_{M,k})^{2}, \tag{26}\] where \(j_{x},j_{y},j_{z}\) denote the jerks of the trajectory in the corresponding three dimensions, respectively, \(\lambda_{k}\) is a changing weighting factor, and \(x_{M,k},y_{M,k},z_{M,k}\) are the corresponding elements of the obstacle mirror position \(\mathbf{p}_{M,k}\).

Fig. 4: An illustration for the motion planning of the aerial pick-and-place.

We define the obstacle mirror set as \(\mathbb{O}_{M}=\{\mathbf{p}_{M,1},\mathbf{p}_{M,2},\ldots,\mathbf{p}_{M,n_{O}}\}\), where \(n_{O}\) is the number of the obstacles that collide with the aerial manipulator during the whole iteration process. The obstacle mirror position \(\mathbf{p}_{M,k}\) can be obtained through pinhole mapping of the corresponding obstacle position. As the iterations progress, the algorithm guides the trajectory of the end-effector towards the obstacle mirror positions while ensuring that it remains collision-free with respect to the obstacles. See Section V-C for details of calculating the changing weighting factors \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n_{O}}\) and the obstacle mirror set \(\mathbb{O}_{M}\). By applying Lemma 1 to (25), we can obtain \(J=\underline{\mathbf{c}}_{E}^{T}\mathbf{Q}_{O,E}\underline{\mathbf{c}}_{E}+\mathbf{q}_{O,E}^{T}\underline{\mathbf{c}}_{E}\). We omit the details of \(\mathbf{Q}_{O,E}\) and \(\mathbf{q}_{O,E}\) for brevity.

#### V-B2 Constraints

The constraints for the trajectory planning problem of the end-effector consist of endpoint, dynamical feasibility, geometric feasibility, and grasp constraints.
The details of the constraints are given as follows: The endpoint constraints are introduced to ensure the trajectory of the end-effector starts at \(\mathbf{p}_{E,t_{B}}\) and ends at \(\mathbf{p}_{O}\) with desired velocities and accelerations. The endpoint constraints are given as \[c_{0,\mu,E}=\mu_{E,t_{B}},\ s_{E}^{-1}c_{0,\mu,E}^{(1)}=\dot{\mu}_{E,t_{B}},\ s_{E}^{-2}c_{0,\mu,E}^{(2)}=\ddot{\mu}_{E,t_{B}}, \tag{27}\] \[c_{n_{E},\mu,E}=\mu_{O},\ s_{E}^{-1}c_{n_{E}-1,\mu,E}^{(1)}=\dot{\mu}_{E,t_{G}},\ s_{E}^{-2}c_{n_{E}-2,\mu,E}^{(2)}=\ddot{\mu}_{E,t_{G}},\] where \(c_{i,\mu,E}^{(k)}\) denotes the \(i\)-th control point of the \(k\)-th derivative of the \(\mu\)-component of the Bezier curve and can be calculated by Lemma 1, \(\mu\in\{x,y,z\}\), \(s_{E}=t_{G}-t_{B}\), \(\mathbf{p}_{E,t_{B}},\dot{\mathbf{p}}_{E,t_{B}},\ddot{\mathbf{p}}_{E,t_{B}}\) can be obtained from Section V-A, and \(\dot{\mathbf{p}}_{E,t_{G}}=\mathbf{0},\ \ddot{\mathbf{p}}_{E,t_{G}}=\mathbf{0}\). The dynamical feasibility constraints consist of velocity and acceleration constraints to ensure the generated trajectory is dynamically feasible. The dynamical feasibility constraints are \[\dot{\mu}_{\min}\leq s_{E}^{-1}c_{i,\mu,E}^{(1)}\leq\dot{\mu}_{\max},i=0,1,\ldots,n_{E}-1, \tag{28}\] \[\ddot{\mu}_{\min}\leq s_{E}^{-2}c_{i,\mu,E}^{(2)}\leq\ddot{\mu}_{\max},i=0,1,\ldots,n_{E}-2,\] where \(\mu\in\{x,y,z\}\), the subscript \(\min\) denotes the lower bound of the corresponding variable, and the subscript \(\max\) denotes the upper bound of the corresponding variable. The geometric feasibility constraints are introduced to ensure the trajectories of the quadcopter and the end-effector are geometrically feasible for the Delta arm. According to (13), they can be described as \[\mathbf{R}_{\psi_{O}}\mathbf{w}_{\min}\leq\mathbf{p}_{E,\text{ref}}(t)-\mathbf{p}_{B,\text{ref}}(t)\leq\mathbf{R}_{\psi_{O}}\mathbf{w}_{\max}. \tag{29}\] As stated above, the trajectory of the end-effector is represented by an \(n_{E}\)-th order Bezier curve. The trajectory of the quadcopter is part of an \(n_{B}\)-th order Bezier curve. To reveal the geometric feasibility constraints on the parameters, we use an \(n_{E}\)-th order Bezier curve to fit the trajectory of the quadcopter from \(t_{B}\) to \(t_{G}\). Then, the geometric feasibility constraints on the parameters can be formulated as linear algebraic equations. Let \(\mathbf{p}_{B,0},\mathbf{p}_{B,1},\ldots,\mathbf{p}_{B,n_{E}}\) denote \(n_{E}+1\) points of the trajectory \(\mathbf{p}_{B,\text{ref}}(t),t\in[t_{B},t_{G}]\). These points divide the trajectory into \(n_{E}\) segments. The time interval between each two adjacent points is the same. The \(n_{E}\)-th order Bezier curve is denoted as \(\mathbf{h}(t)=\sum_{i=0}^{n_{E}}\mathbf{c}_{B,i}b_{i,n_{E}}(\tau_{M})\), where \(\mathbf{c}_{B,i}=[c_{B,x,i},c_{B,y,i},c_{B,z,i}]^{T}\) is the \(i\)-th control point of \(\mathbf{h}(t)\). The control points can be obtained by fitting \(\mathbf{h}(t)\) to the points \(\mathbf{p}_{B,0},\mathbf{p}_{B,1},\ldots,\mathbf{p}_{B,n_{E}}\).
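A minimal Python sketch of this fitting step is given below; it is an illustration under stated assumptions, not the paper's code. The sample values are hypothetical, and with \(n_{E}+1\) samples the least-squares fit interpolates them exactly. The fitted control points \(\mathbf{c}_{B,i}\) then enter the per-control-point constraints derived next.

```python
import numpy as np
from math import comb

def bernstein_matrix(n, taus):
    # Row j collects [b_{0,n}(tau_j), ..., b_{n,n}(tau_j)] from (2).
    return np.array([[comb(n, i) * t**i * (1.0 - t)**(n - i) for i in range(n + 1)]
                     for t in taus])

def fit_bezier(points, n):
    # Least-squares fit of n-th order Bezier control points to trajectory
    # samples taken at uniformly spaced parameter values on [0, 1].
    taus = np.linspace(0.0, 1.0, len(points))
    ctrl, *_ = np.linalg.lstsq(bernstein_matrix(n, taus), np.asarray(points), rcond=None)
    return ctrl  # rows are c_{B,0}, ..., c_{B,n}

# n_E + 1 illustrative samples of p_{B,ref}(t) on [t_B, t_G].
n_E = 6
samples = np.linspace([0.0, -1.5, -2.0], [0.0, -2.0, -0.74], n_E + 1)
c_B = fit_bezier(samples, n_E)

# With n_E + 1 samples at distinct nodes the fit reproduces the samples.
assert np.allclose(bernstein_matrix(n_E, np.linspace(0, 1, n_E + 1)) @ c_B, samples)
```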
Then, the geometric feasibility constraints can be rewritten as \[\mathbf{R}_{\psi_{O}}\mathbf{w}_{\min}\leq\sum_{i=0}^{n_{E}}(\mathbf{c}_{E,i}-\mathbf{c}_{B,i})b_{i,n_{E}}(\tau_{M})\leq\mathbf{R}_{\psi_{O}}\mathbf{w}_{\max}. \tag{30}\] According to the convex hull property (see Lemma 2), the geometric feasibility constraints on the parameters are \[w_{r,\mu,\min}+c_{B,\mu,i}\leq c_{E,\mu,i}\leq w_{r,\mu,\max}+c_{B,\mu,i}, \tag{31}\] \[i=0,1,\ldots,n_{E},\] where \(w_{r,\mu,\min}\) is the corresponding element of \(\mathbf{R}_{\psi_{O}}\mathbf{w}_{\min}\), \(w_{r,\mu,\max}\) is the corresponding element of \(\mathbf{R}_{\psi_{O}}\mathbf{w}_{\max}\), and \(\mu\in\{x,y,z\}\). The grasp constraints are introduced to ensure the gripper does not collide with the object. To avoid such a collision, we require the end of the trajectory to lie in a cone. Let \(t_{C}\) denote the time to enter the cone. Let \(\mathbf{p}_{E,\text{ref}}(t_{C})=[x_{E,t_{C}},y_{E,t_{C}},z_{E,t_{C}}]^{T}\) denote the position of the end-effector at the time \(t_{C}\). Then, we have \[-\tan\gamma\leq\frac{x_{E,t_{C}}-x_{O}}{z_{E,t_{C}}-z_{O}}\leq\tan\gamma, \tag{32}\] \[-\tan\gamma\leq\frac{y_{E,t_{C}}-y_{O}}{z_{E,t_{C}}-z_{O}}\leq\tan\gamma,\] where \(\gamma\) is the angle of the cone. By substituting (23) into (32), the grasp constraints (32) can be rewritten in a linear form \[\sum_{i=0}^{n_{E}}(c_{E,x,i}+c_{E,z,i}\tan\gamma)b_{i,n_{E}}(\tau_{C})\leq x_{O}+z_{O}\tan\gamma, \tag{33}\] \[\sum_{i=0}^{n_{E}}(c_{E,x,i}-c_{E,z,i}\tan\gamma)b_{i,n_{E}}(\tau_{C})\geq x_{O}-z_{O}\tan\gamma,\] \[\sum_{i=0}^{n_{E}}(c_{E,y,i}+c_{E,z,i}\tan\gamma)b_{i,n_{E}}(\tau_{C})\leq y_{O}+z_{O}\tan\gamma,\] \[\sum_{i=0}^{n_{E}}(c_{E,y,i}-c_{E,z,i}\tan\gamma)b_{i,n_{E}}(\tau_{C})\geq y_{O}-z_{O}\tan\gamma,\] where \(\tau_{C}=(t_{C}-t_{B})/(t_{G}-t_{B})\).

### _Collision avoidance_

This subsection proposes a method to detect collisions and to calculate the changing weighting factors \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n_{O}}\) and the obstacle mirror set \(\mathbb{O}_{M}\) in (26). Before the iteration process, \(\mathbb{O}_{M}\) is set as an empty set, i.e., \(\mathbb{O}_{M}=\emptyset\), and \(n_{O}\) is set to zero. At each iteration, collisions are detected using the solution of the QP problem (24). If the solution is collision-free, the iteration process is terminated and the collision-free solution is output as the trajectory of the end-effector. If the solution results in collisions between the aerial manipulator and obstacles in the environment, we calculate the changing weighting factors and \(\mathbb{O}_{M}\) for the next iteration. The collision detection method is proposed to detect whether the solution of the QP problem (24) is collision-free. The method considers collisions of the Delta arm and the end-effector. The trajectory of the quadcopter is collision-free, which is ensured by the flight corridor. Therefore, we do not consider collisions of the quadcopter. The proposed collision detection method consists of three steps. We first use a shape polyhedron to represent the Delta arm and the end-effector in collision detection. The vertices of the shape polyhedron are \(\mathbf{p}_{U,i},\mathbf{p}_{L,i},i=1,2,3\) (see the blue points in Fig. 2(c)).
Then, we have \[\mathbf{p}_{U,i}=\mathbf{p}_{B}+\mathbf{R}_{\psi}\mathbf{R}_{D}^{B}\tilde{\mathbf{p}}_{U,i},i=1,2,3, \tag{34}\] where \[\tilde{\mathbf{p}}_{U,i}=\left[\begin{array}{c}r_{S}\cos(\frac{1+2i}{3}\pi)\\ r_{S}\sin(\frac{1+2i}{3}\pi)\\ 0\end{array}\right]. \tag{35}\] In addition, the lower vertices can be calculated by \[\mathbf{p}_{L,i}=\mathbf{p}_{E}+\mathbf{R}_{\psi}\mathbf{R}_{D}^{B}\tilde{\mathbf{p}}_{L,i},i=1,2,3, \tag{36}\] where \[\tilde{\mathbf{p}}_{L,i}=\left[\begin{array}{c}l_{C}\cos(\frac{1+2i}{6}\pi)\\ l_{C}\sin(\frac{1+2i}{6}\pi)\\ l_{C}\end{array}\right], \tag{37}\] where \(l_{C}\) is a constant parameter for safety and is determined by the size of the gripper when the gripper is open. Second, we introduce a local map to reduce the computational cost of the collision detection. This is because detecting collisions in the entire environment can be computationally expensive, especially if the environment is large or if there are many obstacles. We decrease the number of obstacles to be considered by adding a box around the end-effector and the object and thus only detecting collisions inside it. The size of the box is determined by \(\mathbf{p}_{B,t_{B}}\) and \(\mathbf{p}_{O}\). We let \(\mathbb{M}_{\text{local}}=\{\mathbf{p}\in\mathbb{R}^{3}|\mathbf{l}_{\min}\leq\mathbf{p}\leq\mathbf{l}_{\max}\}\) denote the box. In addition, \(\mathbf{l}_{\min}\) and \(\mathbf{l}_{\max}\) can be calculated by \[\begin{split}& l_{\mu,\min}=\min\{\mu_{B,t_{B}},\mu_{O}\}-l_{s},\\ & l_{\mu,\max}=\max\{\mu_{B,t_{B}},\mu_{O}\}+l_{s},\end{split} \tag{38}\] where \(\mu\in\{x,y,z\}\), \(l_{\mu,\min},l_{\mu,\max},\mu_{B,t_{B}},\mu_{O}\) are the corresponding elements of \(\mathbf{l}_{\min},\mathbf{l}_{\max},\mathbf{p}_{B,t_{B}},\mathbf{p}_{O}\), respectively, and \(l_{s}\) is a constant parameter for safety. Third, we use the GJK method proposed in [39] to detect collisions. If the QP problem solution reveals a collision between the aerial manipulator and obstacle \(i\), then the two endpoints \(\mathbf{T}_{i,L}\) and \(\mathbf{T}_{i,R}\) of the solution within the collision area with respect to the obstacle can be determined (see Fig. 4). Let \(\mathbf{O}_{i}\) denote the center of the obstacle \(i\). Then, we calculate the obstacle mirror position \(\mathbf{p}_{M,i}\) by a pinhole mapping method. The position of the pinhole is \(\mathbf{p}_{P,i}=0.5(\mathbf{T}_{i,L}+\mathbf{T}_{i,R})\). Then, we have \[\mathbf{p}_{M,i}=2\mathbf{p}_{P,i}-\mathbf{O}_{i}. \tag{39}\] The values of the weighting factors are updated by \(\lambda_{i}=\lambda_{i}+\alpha\Delta\lambda_{i}\), where \(i=1,2,\ldots,n_{O}\), and \(\alpha>0\) is a constant gain. The parameter \(\Delta\lambda_{i}\) is the step size used for updating the \(i\)-th weighting factor and is a critical factor that affects the computation time of the method. The expression for \(\Delta\lambda_{i}\) is given as \[\Delta\lambda_{i}=\|\mathbf{T}_{i,L}-\mathbf{T}_{i,R}\|. \tag{40}\]

## VI Experimental Verification

This section presents experimental results to verify the effectiveness of the proposed motion planning algorithms. The experimental video is available at [https://youtu.be/q7O9v7l2Oho](https://youtu.be/q7O9v7l2Oho). First of all, we describe the experimental setup.

Fig. 5: Results of the collision avoidance experiment.

The aerial manipulator platform used in the experiments consists of a quadcopter and a Delta arm. The wheelbase of the quadcopter is \(0.65\) m. The mass of the quadcopter (including a battery) is \(3.60\) kg.
The Delta arm consists of a mounting base (\(0.56\) kg), a movable robotic arm (\(0.44\) kg), and a gripper (\(0.32\) kg). A flight controller proposed in our previous work [30] runs on a Pixhawk 4 autopilot. This controller uses extended state observers (ESOs) to estimate the dynamic coupling between the aerial manipulator and the Delta arm. The proposed motion planning method runs on an onboard Intel NUC i7 computer with ROS (an open-source robotics middleware suite). The experiments are conducted in a Vicon system, which provides accurate position measurements of the quadcopter base and the end-effector. The measurement data of the Vicon system is sent to a ground control station through an ethernet switch. Then, the ground control station sends the measurement data to the aerial manipulator with a frequency of 100 Hz through a 5 GHz wireless router. The perception of the aerial manipulator is not addressed in this paper. We assume that the obstacles in the environment are already known. In particular, the locations of the obstacles and the object can be obtained by the Vicon system. Then, the environment can be built in advance as a grid map which consists of a set of cubes. The size of each cube is set as 0.1 m. This map is used for the path planning of the quadcopter base. The description of the controllers is provided in Section II-B. In all the examples, we use the same set of parameters of the motion planner: \(\alpha=3.0\), \(r_{S}=0.50\) m, \(l_{C}=0.06\) m, \(l_{s}=0.20\) m. The velocity and the acceleration constraints for the quadcopter base are set as 0.5 m/s and 1.0 m/s\({}^{2}\), respectively. The velocity and the acceleration constraints for the end-effector are set as 0.5 m/s and 2.0 m/s\({}^{2}\), respectively. The bounds of the geometric feasibility constraints are set as \(\mathbf{w}_{\min}=[-0.06,-0.06,-0.60]^{T}\) and \(\mathbf{w}_{\max}=[0.06,0.06,-0.40]^{T}\).

### _Example 1: Collision avoidance_

We validate the effectiveness of the proposed method in the collision avoidance task. The environment of this example is illustrated in Fig. 5. There are two types of obstacles in the environment. The first type of obstacles restricts the motion of the quadcopter base and the size of the flight corridor.

Fig. 6: Results of the aerial retrieval experiment.

The collision avoidance for this type of obstacles is achieved by the flight corridor. The second type of obstacles restricts the motion of the Delta arm and must be avoided through the motion planning of the Delta arm. In Section V-C, we propose an iterative collision avoidance method to avoid the second type of obstacles. In order to show its effectiveness, the motion planning results with and without the collision avoidance method are calculated. Fig. 5 shows the results of the motion planning with and without the collision avoidance method. The generated flight corridor is shown in Fig. 5(a). As shown in Fig. 5(b), there are four obstacles near the object. In particular, the aerial manipulator collides with one of these obstacles in the resulting trajectory without the collision avoidance method. The collision area is shown as a red dotted line in Fig. 5 and its length is 0.26 m. The resulting trajectory with the collision avoidance method is shown in Fig. 5 (see the blue line). As can be seen, the result of the proposed method is collision-free.
The total computational time for calculating the path, flight corridor, and trajectory of the quadcopter in the collision avoidance task is 46.3 ms, while the computational time for calculating the trajectory of the end-effector is 36.4 ms.

### _Example 2: Aerial retrieval_

The goal of this experiment is to retrieve an object with the aerial manipulator. In the task, the aerial manipulator moves to and picks up the object. Then, the aerial manipulator returns to the start position. The start position is set as \([0,0,-2.00]\). The position and the orientation angle of the object are set as \([0,-2.00,-1.24]\) and \(0^{\circ}\) (see Fig. 6(a)), respectively. As shown in Fig. 6(b), there are screens in the flight environment, which need to be avoided by the aerial manipulator in the moving stage. The aerial manipulator is set to fly around the screens. As shown in Fig. 6(a), there are three obstacles near the object. The aerial manipulator has to avoid colliding with these obstacles. Fig. 6(b)-(d) show the results of the aerial retrieval experiment. The generated flight corridor is shown in Fig. 6(c). The duration of the experiment is 58 s.

Fig. 7: Results of the aerial transport experiment.

The mean tracking error of the quadcopter base in the moving stage is 0.05 m, while that in the manipulation stage is 0.01 m. The quadcopter flies faster in the moving stage than in the manipulation stage. However, the higher velocity also causes a larger tracking error. The computational time for calculating the path, flight corridor, and trajectory of the quadcopter in the aerial retrieval task is 43.9 ms, while the computational time for calculating the trajectory of the end-effector is 25.7 ms.

### _Example 3: Aerial transport_

The goal of this experiment is to grasp an object and place it at the target location. In the task, the aerial manipulator first flies to and picks up the object. Then, the aerial manipulator flies to the target location and places the object at the target location. Finally, the aerial manipulator returns to the start position. The start position is set as \([0,0,-2.00]\). The position and the orientation angle of the object are set as \([0,2.00,-1.22]\) and \(0^{\circ}\), respectively. The position of the target location is set as \([0,-2.00,-1.24]\). The whole process of the experiment is shown in Fig. 7(c). In the experiment, the aerial manipulator is also set to fly around the screens to make full use of the experimental field. As shown in Fig. 7(a) and (b), there are two obstacles near the object and three obstacles near the target location. The aerial manipulator has to avoid colliding with these obstacles. Fig. 7(c)-(e) show the results of the aerial transport experiment. The generated flight corridor is shown in Fig. 7(d). The duration of the experiment is 88 s. The mean tracking error of the quadcopter base in the moving stage is 0.06 m, while that in the manipulation stage is 0.01 m. The computational time for calculating the trajectory of the quadcopter in the aerial transport task is 45.6 ms, while the computational time for calculating the trajectory of the end-effector is 32.8 ms. The experimental results validate the effectiveness of the proposed motion planning method in the aerial transport task.

## VII Conclusion

This paper proposed a novel partially decoupled motion planning method of the aerial manipulator for the aerial pick-and-place task.
This method calculates the dynamically feasible and collision-free trajectories of the flying base and the manipulator in Cartesian space, respectively. The proposed geometric feasibility constraints ensure that the resulting trajectories are coordinated so that tasks can be completed. The proposed method is verified by three experiments. These experiments verify that the proposed geometric feasibility constraints ensure that the trajectories of the quadcopter base and the end-effector satisfy the geometry of the aerial manipulator. The results also illustrate the ability of the proposed method to avoid obstacles. This ability is limited by the partially decoupled structure, since the obstacles near the object are avoided by the Delta arm rather than by the whole aerial manipulator. However, in order to avoid large obstacles near the object, both the quadcopter base and the Delta arm must be used. This will be an important direction for future research.
2303.10103
Image comparison and scaling via nonlinear elasticity
A nonlinear elasticity model for comparing images is formulated and analyzed, in which optimal transformations between images are sought as minimizers of an integral functional. The existence of minimizers in a suitable class of homeomorphisms between image domains is established under natural hypotheses. We investigate whether for linearly related images the minimization algorithm delivers the linear transformation as the unique minimizer.
John M. Ball, Christopher L. Horner
2023-03-17T16:26:20Z
http://arxiv.org/abs/2303.10103v2
# Image comparison and scaling via nonlinear elasticity ###### Abstract A nonlinear elasticity model for comparing images is formulated and analyzed, in which optimal transformations between images are sought as minimizers of an integral functional. The existence of minimizers in a suitable class of homeomorphisms between image domains is established under natural hypotheses. We investigate whether for linearly related images the minimization algorithm delivers the linear transformation as the unique minimizer. Keywords:Nonlinear elasticity image registration scaling. ## 1 Introduction In this paper we formulate and analyze a nonlinear elasticity model for comparing two images \(P_{1}=(\Omega_{1},c_{1}),\ P_{2}=(\Omega_{2},c_{2})\), regarded as bounded Lipschitz domains \(\Omega_{1},\Omega_{2}\) in \(\mathbb{R}^{n}\) with corresponding intensity maps \(c_{1}:\Omega_{1}\to\mathbb{R}^{m},c_{2}:\Omega_{2}\to\mathbb{R}^{m}\). The model is based on an integral functional \[I_{P_{1},P_{2}}(y)=\int_{\Omega_{1}}\psi(c_{1}(x),c_{2}(y(x)),Dy(x))\,dx, \tag{1}\] depending on \(c_{1},c_{2}\) and a map \(y:\Omega_{1}\to\Omega_{2}\) with gradient \(Dy\), whose minimizers give optimal transformations \(y^{*}\) between images. The admissible transformations \(y\) between the images are orientation-preserving homeomorphisms with \(y(\Omega_{1})=\Omega_{2}\), and are not required to satisfy other boundary conditions. The use of nonlinear elasticity, rather than models based on linear elasticity that are more commonly used in the computer vision literature, provides a conceptually clearer and more general framework. A key advantage is that nonlinear elasticity (of which linear elasticity is not a special case) respects rotational invariance, so that rigidly rotated and translated images can be identified as equivalent. Further, nonlinear elasticity is naturally suited for discussing the global invertibility of maps between images (see, for example, [5], [29]), which in the context of mechanics describes non-interpenetration of matter. Our work is closest in spirit to that of Droske & Rumpf [14], Rumpf [27] and Rumpf & Wirth [26], who like us make use of the existence theory for polyconvex energies in [4], as also do Burger, Modersitski & Ruthotto [9], Debroux et al [12], Iglesias, Rumpf & Scherzer [16] and Iglesias [15]. Other nonlinear elasticity approaches are due to Lin, Dinov, Toga & Vese [19], Ozere, Gout & Le Guyader [22], Ozere & Le Guyader [23], Simon, Sheorey, Jacobs & Basri [28] and Debroux & Le Guyader[13]. Key differences with these works are: (i) that we minimize among homeomorphisms of the image domains rather than applying Dirichlet or other boundary conditions, (ii) technical improvements as regards the regularity of the intensity maps, and (iii) a novel analysis of linearly related images. Our model is described in Section 2, in which it is shown (Proposition 1) that invariance of the integral (1) under rotation and translation requires that the integrand \(\psi(c_{1},c_{2},\cdot)\) be isotropic. As described above, two images that are translated and rigidly rotated with respect to each other can reasonably be regarded as equivalent. In most applications the minimization algorithm should thus deliver this translation and rotation as the unique minimizer, and we give conditions on \(\psi\) under which this occurs. We also discuss symmetry with respect to interchange of images. 
Theorem 2.1 gives the existence of a minimizer for general pairs of images under polyconvexity and growth conditions on \(\psi\), assuming only that the intensity maps are \(L^{\infty}\). More generally we consider the case when two images are related by a linear transformation, and ask for which \(\psi\) the minimization algorithm delivers this linear transformation as the unique minimizer. We show (see Section 3) that \(\psi\) can be chosen such that for any pair of images related by a uniform magnification the unique minimizer is that magnification. However, for the functional to deliver as a minimizer the linear transformation between _any_ linearly related pair of images the integrand must have a special form (see Theorem 2.2), in which the integrand depends on the gradient \(Dy\) as a convex function of \(\det Dy\) alone. This degeneracy suggests that a better model might use an integrand depending also on the second gradient \(D^{2}y\), and this is briefly discussed, together with other issues, in Section 4.

## 2 Nonlinear elasticity model

### Comparing images

We identify an image with a pair \(P=(\Omega,c)\), where \(\Omega\subset\mathbb{R}^{n}\) is a bounded Lipschitz domain, and \(c:\Omega\to\mathbb{R}^{m}\) is an _intensity map_ describing the greyscale intensity (\(m=1\)), the intensity of colour channels (\(m>1\)), and possibly other image characteristics. Our aim is to compare two images \(P_{1}=(\Omega_{1},c_{1}),\ P_{2}=(\Omega_{2},c_{2})\) by means of a nonlinear-elasticity-based functional, whose minimizers give optimal transformation maps between the images. To compare \(P_{1},P_{2}\) we minimize the functional \[I_{P_{1},P_{2}}(y)=\int_{\Omega_{1}}\psi(c_{1}(x),c_{2}(y(x)),Dy(x))\,dx, \tag{2}\] over invertible maps \(y:\Omega_{1}\to\mathbb{R}^{n}\) such that \(y(\Omega_{1})=\Omega_{2}\), and which are _orientation-preserving_, that is \(\det Dy(x)>0\) for a.e. \(x\in\Omega_{1}\). Here \[\psi:\mathbb{R}^{m}\times\mathbb{R}^{m}\times M_{+}^{n\times n}\to[0,\infty),\] where \(M_{+}^{n\times n}=GL^{+}(n,\mathbb{R})=\{\text{real }n\times n\,\text{matrices }A\text{ with }\det A>0\}\). In (2), \(Dy(x)\) denotes the distributional gradient of \(y\) at \(x\). Throughout this section we assume that the maps \(y\) and their inverses \(y^{-1}\) have sufficient regularity; it is enough that \(y\in W^{1,p}(\Omega_{1},\mathbb{R}^{n}),y^{-1}\in W^{1,p}(\Omega_{2},\mathbb{R}^{n})\) for \(p>n\) (for the definition of \(W^{1,p}\) see Section 2.3), which is guaranteed by Theorem 2.1 below. Note that we do not specify \(y\) on \(\partial\Omega_{1}\), only that \(y(\Omega_{1})=\Omega_{2}\). Thus we allow 'sliding at the boundary', in order to better compare images with important boundary features. This is not typically done in the computer vision literature, but is considered in the context of elasticity by Iwaniec & Onninen [17], though for elasticity such a boundary condition would be difficult to realize mechanically.

### Properties of \(\psi\)

We now list some desirable properties of the integrand \(\psi\) in (2). (i) _Invariance under rotation and translation._ For two images \(P=(\Omega,c)\) and \(P^{\prime}=(\Omega^{\prime},c^{\prime})\) write \(P\sim P^{\prime}\) if \(P,P^{\prime}\) are related by a rigid translation and rotation, i.e. \[\Omega^{\prime}=E(\Omega),\;c^{\prime}(E(x))=c(x)\] for some proper rigid transformation \(E(x)=a+Rx\), \(a\in\mathbb{R}^{n}\), \(R\in SO(n)\).
If \(P_{1}\sim P_{1}^{\prime}\), \(P_{2}\sim P_{2}^{\prime}\), with corresponding rigid transformations \(E_{1}(x)=a_{1}+R_{1}x\), \(E_{2}(x)=a_{2}+R_{2}x\), we require that \[I_{P_{1},P_{2}}(y)=I_{P_{1}^{\prime},P_{2}^{\prime}}(E_{2}\circ y\circ E_{1}^{-1}), \tag{3}\] or, equivalently, \[\int_{\Omega_{1}}\psi(c_{1}(x),c_{2}(y(x)),R_{2}Dy(x)R_{1}^{T})\,dx=\int_{\Omega_{1}}\psi(c_{1}(x),c_{2}(y(x)),Dy(x))\,dx. \tag{4}\] Proposition 1: (4) _holds for all \(P_{1},P_{2}\) and orientation-preserving invertible \(y:\Omega_{1}\to\Omega_{2}\) with \(y(\Omega_{1})=\Omega_{2}\) iff \(\psi(c_{1},c_{2},\cdot)\) is isotropic, i.e._ \[\psi(c_{1},c_{2},QAR)=\psi(c_{1},c_{2},A) \tag{5}\] _for all \(c_{1},c_{2}\in\mathbb{R}^{m},A\in M_{+}^{n\times n}\), and \(Q,R\in SO(n)\)._ Proof: Setting \(c_{1},c_{2}\) constant, and \(y(x)=Ax\), (4) implies (5), and the converse is obvious. We denote by \(v_{i}(A)\) the singular values of \(A\) (that is, the eigenvalues of \(\sqrt{A^{T}A}\)). A standard result of nonlinear elasticity (see, for example, [30, Theorem 8.5.1]) gives that \(\psi(c_{1},c_{2},\cdot)\) is isotropic iff \[\psi(c_{1},c_{2},A)=H(c_{1},c_{2},v_{1}(A),...,v_{n}(A))\] with \(H\) symmetric with respect to permutations of the last \(n\) arguments. (ii) _Matching of equivalent images_. We also require that the functional (2) is zero iff the two images are related by a rigid transformation, i.e. for invertible \(y\) with \(y(\Omega_{1})=\Omega_{2}\) we have \[I_{P_{1},P_{2}}(y)=0\mbox{ iff }P_{1}\sim P_{2}\mbox{ with corresponding rigid transformation }y. \tag{6}\] Proposition 2: (6) _is equivalent to the condition_ \[\psi(c_{1},c_{2},A)=0\mbox{ iff }c_{1}=c_{2}\mbox{ and }A\in SO(n). \tag{7}\] Proof: The only nontrivial part of the proof of equivalence is to show that if (7) holds and \(I_{P_{1},P_{2}}(y)=0\) then \(P_{1}\sim P_{2}\) with corresponding rigid transformation \(y\). If \(I_{P_{1},P_{2}}(y)=0\) then \(c_{2}(y(x))=c_{1}(x)\) and \(Dy(x)=R(x)\in SO(n)\) for a.e. \(x\in\Omega_{1}\), and this implies by [24] that \(R(x)=R\) is constant, from which the conclusion follows. (iii) _Symmetry with respect to interchanging images_. For applications in which both images are of the same type (but not, for example, when \(P_{1}\) is a template image) it is reasonable to require that \[I_{P_{1},P_{2}}(y)=I_{P_{2},P_{1}}(y^{-1}). \tag{8}\] Equivalently \[\int_{\Omega_{1}}\psi(c_{1}(x),c_{2}(y(x)),Dy(x))\,dx=\int_{\Omega_{2}}\psi(c_{2}(y),c_{1}(x(y)),Dx(y))\,dy=\int_{\Omega_{1}}\psi(c_{2}(y(x)),c_{1}(x),Dy(x)^{-1})\det Dy(x)\,dx. \tag{9}\] Taking \(c_{1},c_{2}\) constant and \(y(x)=Ax\) this holds iff \[\psi(c_{1},c_{2},A)=\psi(c_{2},c_{1},A^{-1})\det A. \tag{10}\] Such a symmetry condition was introduced by Cachier & Rey [10] and subsequently used by Kolouri, Slepcev & Rohde [18] and Iglesias [15]. It is also implicit in the work of Iwaniec & Onninen [17]. A class of integrands \(\psi\) satisfying the above conditions (5), (7), (10) is given by \[\psi(c_{1},c_{2},A)=\Psi(A)+f(c_{1},c_{2},\det A), \tag{11}\] where (a) \(\Psi\geqslant 0\) is isotropic, \(\Psi(A)=\det A\cdot\Psi(A^{-1})\), \(\Psi^{-1}(0)=SO(n)\), (b) \(f\geqslant 0\), \(f(c_{1},c_{2},\delta)=\delta f(c_{2},c_{1},\delta^{-1})\), \(f(c_{1},c_{2},1)=0\) iff \(c_{1}=c_{2}\). In particular we can take \[f(c_{1},c_{2},\delta)=(1+\delta)|c_{1}-c_{2}|^{2}, \tag{12}\] or \[f(c_{1},c_{2},\delta)=|c_{1}-c_{2}\delta|^{2}+\delta^{-1}|c_{1}\delta-c_{2}|^{2}, \tag{13}\] which are both convex in \(\delta\). 
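As a quick numerical sanity check of these requirements, the following minimal Python sketch verifies the isotropy (5), the interchange symmetry in (a)-(b) and the vanishing on \(SO(n)\) for an integrand of the form (11) with \(f\) given by (12); the particular \(\Psi\) is the one introduced in (18) below, with one admissible choice of \(h\) that is our own assumption, made only so that the stated hypotheses hold - it is not prescribed by the model. These are spot checks at random arguments, not proofs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, alpha = 3, 2, 4.0  # space dimension, intensity channels, exponent alpha > n

def rand_SO(n):
    # Random rotation: QR-factorise a Gaussian matrix and fix signs/determinant.
    Q, R = np.linalg.qr(rng.normal(size=(n, n)))
    Q = Q * np.where(np.diag(R) < 0, -1.0, 1.0)
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]
    return Q

def h(d):
    # One convex choice with h(d) = d*h(1/d), h'(1) = -n and h(1) = -2n.
    return n * (1.0 + d) - 4.0 * n * np.sqrt(d)

def Psi(A):
    v = np.linalg.svd(A, compute_uv=False)  # singular values v_i(A)
    d = np.linalg.det(A)
    return np.sum(v**alpha) + d * np.sum(v**(-alpha)) + h(d)

def f(c1, c2, d):
    return (1.0 + d) * np.sum((c1 - c2) ** 2)  # the choice (12)

def psi(c1, c2, A):
    return Psi(A) + f(c1, c2, np.linalg.det(A))

A = rand_SO(n) @ np.diag([1.3, 0.7, 2.1]) @ rand_SO(n)  # generic A with det A > 0
c1, c2 = rng.normal(size=m), rng.normal(size=m)
Q, R = rand_SO(n), rand_SO(n)
d = np.linalg.det(A)

assert np.isclose(psi(c1, c2, Q @ A @ R), psi(c1, c2, A))  # isotropy (5)
assert np.isclose(Psi(A), d * Psi(np.linalg.inv(A)))       # condition (a)
assert np.isclose(f(c1, c2, d), d * f(c2, c1, 1.0 / d))    # condition (b)
assert np.isclose(Psi(rand_SO(n)), 0.0)                    # Psi vanishes on SO(n)
print("all checks passed")
```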
### Existence of minimizers Let \(p>n\) and define the set of admissible maps \[\mathcal{A}=\{y\in W^{1,p}(\Omega_{1},\mathbb{R}^{n}):y:\Omega_{1}\to\Omega_{2}\text{ an orientation-preserving homeomorphism},y^{-1}\in W^{1,p}(\Omega_{2},\mathbb{R}^{n})\}. \tag{14}\] Here, for a bounded domain \(\Omega\subset\mathbb{R}^{n}\) and \(1<p<\infty\), \(W^{1,p}(\Omega,\mathbb{R}^{n})\) is the Sobolev space of maps \(y:\Omega\to\mathbb{R}^{n}\) such that \[\|y\|_{1,p}:=\left(\int_{\Omega}\left(|y(x)|^{p}+|Dy(x)|^{p}\right)\,dx\right)^{\frac{1}{p}}<\infty.\] We recall (see for example [1, 21]) that if \(\Omega\) is Lipschitz and \(p>n\) then any \(y\in W^{1,p}(\Omega,\mathbb{R}^{n})\) has a representative that is continuous on the closure \(\bar{\Omega}\) of \(\Omega\). We now make some other technical hypotheses on \(\psi\). (H1) (_Continuity_) \(\psi:\mathbb{R}^{m}\times\mathbb{R}^{m}\times M_{+}^{n\times n}\to[0,\infty)\) is continuous, (H2) (_Coercivity_) \(\psi(c,d,A)\geqslant C(|A|^{p}+\det A\cdot|A^{-1}|^{p})-C_{0}\) for all \(c,d\in\mathbb{R}^{m},A\in M_{+}^{n\times n}\), where \(C>0\) and \(C_{0}\) are constants, (H3) (_Polyconvexity_) \(\psi(c,d,\cdot)\) is polyconvex for each \(c,d\in\mathbb{R}^{m}\), i.e. there is a function \(g:\mathbb{R}^{m}\times\mathbb{R}^{m}\times\mathbb{R}^{\sigma(n)}\times(0,\infty)\to\mathbb{R}\) with \(g(c,d,\cdot)\) convex, such that \[\psi(c,d,A)=g(c,d,\mathbf{J}_{n-1}(A),\det A)\text{ for all }c,d\in\mathbb{R}^{m},A\in M_{+}^{n\times n},\] where \(\mathbf{J}_{n-1}(A)\) is the list of all minors (i.e. subdeterminants) of \(A\) of order \(\leqslant n-1\) and \(\sigma(n)\) is the number of such minors, (H4) (_Bounded intensities_) \(c_{1}\in L^{\infty}(\Omega_{1},\mathbb{R}^{m})\), \(c_{2}\in L^{\infty}(\Omega_{2},\mathbb{R}^{m})\). We note that (H2) implies that \(\psi(c,d,A)\geqslant C_{1}(\det A)^{1-\frac{p}{n}}\) for some constant \(C_{1}>0\), so that \(\psi(c,d,A)\to\infty\) as \(\det A\to 0+\). This follows from the Hadamard inequality \(|B|^{n}\geqslant n^{\frac{n}{2}}\det B\) for \(B\in M_{+}^{n\times n}\) applied to \(B=\text{cof}A\), noting that \(\text{det}\,\text{cof}A=(\det A)^{n-1}\). Theorem 2.1: _Suppose that \(\mathcal{A}\) is nonempty, and that the hypotheses_ (H1)-(H4) _hold. Then there exists an absolute minimizer \(y^{*}\) in \(\mathcal{A}\) of_ \[I_{P_{1},P_{2}}(y)=\int_{\Omega_{1}}\psi(c_{1}(x),c_{2}(y(x)),Dy(x))\,dx.\] The proof, which will appear in [8], follows the usual pattern for proving existence of minimizers in nonlinear elasticity for a polyconvex stored-energy function using the direct method of the calculus of variations (see [4, 11, 30]). However there are some extra issues. In particular, as observed by Rumpf [27], care has to be taken for intensity maps \(c_{1},c_{2}\) that are discontinuous, for which it is not even immediately obvious that \(I_{P_{1},P_{2}}(y)\) is well defined, and we are able to weaken his hypotheses by assuming only (H4). Note that the hypotheses on \(\psi\) discussed in Section 2.2 are not needed to prove the existence of minimizers. ## 3 Linear scaling Suppose that the images \(P_{1}=(\Omega_{1},c_{1})\) and \(P_{2}=(\Omega_{2},c_{2})\) are linearly related, i.e. for some \(M\in M_{+}^{n\times n}\) we have \[\Omega_{2}=M\Omega_{1},\ \ c_{2}(Mx)=c_{1}(x). \tag{15}\] Can we choose \(\psi\) such that the unique minimizer \(y\) of \(I_{P_{1},P_{2}}\) is \(y(x)=Mx\)? 
For simplicity consider \(\psi\) of the form (11), (12) \[\psi(c_{1},c_{2},A)=\Psi(A)+(1+\det A)|c_{1}-c_{2}|^{2}.\] Thus we require that for all orientation-preserving invertible \(y\) with \(y(\Omega_{1})=M\Omega_{1}\) \[\int_{\Omega_{1}}\left(\Psi(Dy(x))+(1+\det Dy(x))|c_{1}(x)-c_{2}(y(x))|^{2}\right)\,dx\geqslant\int_{\Omega_{1}}\Psi(M)\,dx, \tag{16}\] with equality iff \(y(x)=Mx\). This holds for all \(c_{1},c_{2}\) iff \[\fint_{\Omega_{1}}\Psi(Dy)\,dx\geqslant\Psi(M) \tag{17}\] for \(y\) invertible with \(y(\Omega_{1})=M\Omega_{1}\), where \(\fint_{\Omega_{1}}f\,dx:=\frac{1}{|\Omega_{1}|}\int_{\Omega_{1}}f\,dx\) and \(|\Omega_{1}|\) is the \(n\)-dimensional Lebesgue measure of \(\Omega_{1}\). The inequality (17) is a stronger version of _quasiconvexity at \(M\)_, the central convexity condition of the multi-dimensional calculus of variations implied by polyconvexity (see, e.g. [25]), in which the usual requirement that \(y(x)=Mx\) for \(x\in\partial\Omega_{1}\) is weakened. We show that we can satisfy this condition if \(M=\lambda\mathbf{1}\), \(\lambda>0\), where \(\mathbf{1}\) denotes the identity matrix (or more generally if \(M=\lambda R\), \(R\in SO(n)\)), so that \(P_{2}\) is a uniform magnification (or reduction if \(\lambda<1\)) of \(P_{1}\). Let \[\Psi(A)=\sum_{i=1}^{n}v_{i}^{\alpha}+(\det A)\sum_{i=1}^{n}v_{i}^{-\alpha}+h(\det A), \tag{18}\] where \(v_{i}=v_{i}(A)\) are the singular values of \(A\), \(\alpha>n\), and where \(h:(0,\infty)\to\mathbb{R}\) is \(C^{1}\), convex and bounded below with \(h(\delta)=\delta h(\delta^{-1})\), \(h^{\prime}(1)=-n\) and \(h(1)=-2n\) (the last condition ensuring that \(\Psi\) vanishes on \(SO(n)\)). Then \(\Psi\) is isotropic, \(\Psi(A)=\det A\cdot\Psi(A^{-1})\), \(\Psi\geqslant 0\), \(\Psi^{-1}(0)=SO(n)\), and \(\psi\) satisfies (H1)-(H4). Let \(y\) be invertible with \(y(\Omega_{1})=\lambda\Omega_{1}\). By the arithmetic mean - geometric mean inequality we have that, since \(\det Dy=\prod_{i=1}^{n}v_{i}\), \[\fint_{\Omega_{1}}\Psi(Dy)\,dx\geqslant\fint_{\Omega_{1}}n\left((\det Dy)^{\frac{\alpha}{n}}+(\det Dy)^{1-\frac{\alpha}{n}}\right)+h(\det Dy)\,dx=\fint_{\Omega_{1}}H(\det Dy(x))\,dx\geqslant H\left(\fint_{\Omega_{1}}\det Dy(x)\,dx\right)=H(\lambda^{n})=\Psi(\lambda\mathbf{1}),\] as required, where we have set \(H(\delta):=n(\delta^{\frac{\alpha}{n}}+\delta^{1-\frac{\alpha}{n}})+h(\delta)\) and used Jensen's inequality, noting that \(H\) is convex and that \(\int_{\Omega_{1}}\det Dy(x)\,dx\) is the \(n\)-dimensional measure of \(y(\Omega_{1})\). 
We have equality only when each \(v_{i}=\lambda\), i.e. \(Dy(x)=\lambda R(x)\) for \(R(x)\in SO(n)\), which implies that \(R(x)=R\) is constant and \(a+\lambda R\Omega_{1}=\lambda\Omega_{1}\), for some \(a\in\mathbb{R}^{n}\), which for generic \(\Omega_{1}\) implies \(a=0\) and \(R=\mathbf{1}\), hence \(y(x)=\lambda x\). However, for (17) to hold for general \(M\) implies that \(\Psi\) has a special form: Theorem 3.1 (see [8]): \[\fint_{\Omega_{1}}\Psi(Dy)\,dx\geqslant\Psi(M)\] _for all orientation-preserving invertible \(y\) with \(y(\Omega_{1})=M\Omega_{1}\), and for every \(\Omega_{1}\) and \(M\in M_{+}^{n\times n}\), iff_ \[\Psi(A)=H(\det A)\] _for some convex \(H\)._ Sketch of proof.: If \(y=Mx\) is a minimizer, then we can construct a variation that slides at the boundary, so that the tangential component at the boundary of the 'Cauchy stress' is zero, i.e. \[D\Psi(M)M^{T}=p(M)\mathbf{1},\] for a scalar \(p(M)\), from which it follows that \(\Psi\) corresponds to an elastic fluid, i.e. \(\Psi(M)=H(\det M)\). But then \(H(\det M)\) is quasiconvex, and so \(H\) is convex. Conversely, if \(H\) is convex then \[\fint_{\Omega_{1}}H(\det Dy(x))\,dx\geqslant H\left(\fint_{\Omega_{1}}\det Dy(x)\,dx\right) \tag{19}\] \[=H(\det M). \tag{20}\] ## 4 Discussion Theorem 2.1 gives conditions under which a minimizer \(y^{*}\) of \(I_{P_{1},P_{2}}\) in \(\mathcal{A}\) exists, but says nothing about the regularity properties of \(y^{*}\). Even in the simpler problem of isotropic nonlinear elasticity essentially nothing is known about the regularity of minimizers. In particular it is an open question whether minimizers are smooth, or smooth outside some closed set of zero measure, or even if the usual weak form of the Euler-Lagrange equation holds (though some forms of the Euler-Lagrange equation can be established [7]). The presence of the (possibly discontinuous) lower order terms due to the intensity maps makes the problem for \(I_{P_{1},P_{2}}\) even more challenging. Theorem 3.1 shows that if the desirable property holds that for linearly related images \(y^{*}\) is the corresponding linear map, then \(\psi\) depends on \(Dy\) only through \(\det Dy\), that is only on local volume changes, so that in particular the hypothesis (H2) of Theorem 2.1 does not hold. This suggests that a better model might be to minimize a functional such as \[E_{P_{1},P_{2}}(y)=\int_{\Omega_{1}}\left(\psi(c_{1}(x),c_{2}(y(x)),\det Dy(x))+|D^{2}y(x)|^{2}\right)\,dx, \tag{21}\] for which existence of a minimizer can be proved for low dimensions \(n\), and for which minimizers for linearly related images could be proved, under suitable hypotheses, to be linear. This idea is explored in [8]. We remark that it is straightforward to prove variants of Theorem 2.1: (a) for the case when \(y\) is required to map a finite number of landmark points in \(\Omega_{1}\) to corresponding points in \(\Omega_{2}\) (see e.g. [20]), and (b) for the case when \(P_{1}\) is a template image that is to be compared to an unknown part of \(P_{2}\) (such as in image registration). In the case (b), for example, one can minimize \[I(y)=\int_{\Omega_{1}}\psi(c_{1}(x),c_{2}(y(x)),Dy(x))\,dx \tag{22}\] subject to the constraint that \(y:\Omega_{1}\to\Omega\) is a homeomorphism for some (unknown) subdomain \(\Omega=a+\lambda R\Omega_{1}\subset\Omega_{2}\), where \(a\in\mathbb{R}^{n}\), \(R\in SO(n)\), \(\alpha\leqslant\lambda\leqslant\beta\) and \(0<\alpha<\beta\) are given. 
Here we consider the case when the unknown part of \(P_{2}\) is to be compared to a translation, rotation and uniform magnification of the template, but one can equally handle the case of more general affine transformations, which may be appropriate for images viewed in perspective. Such variants are also explored in [8]. Of course this work needs to be supplemented with appropriate numerical experiments on images. The numerical minimization of integrals such as (1) is not straightforward even without the presence of the (possibly discontinuous) intensity functions, and the fact that the minimization is to be carried out in the admissible set \(\mathcal{A}\) of homeomorphisms, rather than, say, maps satisfying Dirichlet boundary conditions, presents additional difficulties. From a rigorous perspective, the numerical method should take into account the possible occurrence of the Lavrentiev phenomenon, whereby the infimum of the energy in \(\mathcal{A}\) might be strictly less than the infimum among Lipschitz maps in \(\mathcal{A}\) (such as those generated by a finite-element scheme). For discussions see [6, 2, 3]. ###### Acknowledgements. We are grateful to Alexander Belyaev, Jose Iglesias, David Mumford, Ozan Oktem, Martin Rumpf, Carola Schonlieb and Benedikt Wirth for their interest and helpful suggestions. CLH was supported by EPSRC through grant EP/L016508/1.
2305.02851
Two-component WIMP $-$ FIMP dark matter
The document discusses a proposed extension to the Standard Model that aims to explain the presence of neutrino masses and the existence of dark matter. The model includes two potential candidates for dark matter, a vector WIMP and a fermion FIMP, and their combined presence accounts for the total amount of observed dark matter. This study examines the various ways in which dark matter could be produced within this model and explores the connections between the dark matter and neutrino sectors. It also examines various constraints from existing and future experiments. Additionally, the model includes a scalar field that can play a role in a first-order phase transition in the early universe, and the article looks at the potential for the production of gravitational waves as a result of this phase transition and their detectability. This study also assesses the possibility for this phase transition to be strong enough to drive the electroweak baryogenesis.
Francesco Costa
2023-05-04T14:11:45Z
http://arxiv.org/abs/2305.02851v2
# Two-component vector WIMP - fermion FIMP dark matter model with an extended seesaw mechanism ###### Abstract: The document discusses a proposed extension to the Standard Model that aims to explain the presence of neutrino masses and the existence of dark matter. The model includes two potential candidates for dark matter, a vector WIMP and a fermion FIMP, and their combined presence accounts for the total amount of observed dark matter. This study examines the various ways in which dark matter could be produced within this model and explores the connections between the dark matter and neutrino sectors. It also examines various constraints from existing and future experiments. Additionally, the model includes a scalar field that can play a role in a first-order phase transition in the early universe, and the article looks at the potential for the production of gravitational waves as a result of this phase transition and their detectability. This study also assesses the possibility for this phase transition to be strong enough to drive the electroweak baryogenesis. ## 1 Introduction The Standard Model (SM) of particle physics has been successful in the past decades, with experiments matching its predictions, in particular with the discovery of the Higgs boson. However, it does not explain certain observations such as the non-zero mass of neutrinos [1] and the existence of dark matter [2]. One proposed mechanism to generate the mass of neutrinos is the type-I seesaw mechanism [3], which involves introducing heavy singlet leptons. An extended version of this mechanism, called the extended double seesaw [4, 5], has also been proposed to achieve a low-scale leptogenesis without fine-tuning the heavy neutrino masses. This mechanism also allows for the possibility of detection at future collider experiments. Dark matter is another missing piece of the SM, and recent studies have explored alternative mechanisms to the standard freeze-out [6], such as the freeze-in mechanism [7, 8], in which the dark matter particle is called a Feebly Interacting Massive Particle (FIMP) because its interactions with the SM are much feebler than electroweak-strength interactions. The relic abundance is then produced by out-of-equilibrium scattering or decay processes. A more general multi-component DM scenario is also possible where both freeze-out and freeze-in could have been active in the early universe, contributing to the total DM relic density [9, 10, 11, 12]. ### The model Here we explored a beyond the Standard Model (BSM) scenario that addresses the aforementioned problems [13]. The model introduces two sets of three-generation neutrinos, \(N_{L}^{i}\) and \(S_{L}^{i}\); the first two generations are used to explain the light neutrino masses in an extended double-seesaw mechanism, and their third generation serves as FIMP dark matter candidates. The scenario also includes a vector gauge boson \(W_{D}\) associated with an extra dark \(U(1)_{D}\) gauge symmetry, which plays the role of a WIMP dark matter candidate. Additionally, the dark Higgs field \(\phi_{D}\) associated with the extra dark \(U(1)_{D}\) modifies the scalar sector, leading to a first-order phase transition (FOPT) [14], and we discussed the detection possibilities of its associated stochastic gravitational waves (GW) [15]. The symmetries and field content are summarized in Table 1. 
## 2 Dark matter We considered the production of \(S_{L}^{3}\) and \(N_{L}^{3}\) to proceed through dimension-5 operators, which are the only ones allowed once the two FIMP particles are taken to be odd under a \(Z_{2}\) symmetry. \begin{table} \begin{tabular}{||c|c|c|c|c|c|c|c|c|c||} \hline \hline & \multicolumn{3}{c|}{Baryon fields} & \multicolumn{4}{c|}{Lepton fields} & \multicolumn{2}{c||}{Scalar fields} \\ \cline{2-10} Gauge group & \(Q_{L}^{i}=(u_{L}^{i},d_{L}^{i})^{T}\) & \(u_{R}^{i}\) & \(d_{R}^{i}\) & \(L_{L}^{i}=(\nu_{L}^{i},e_{L}^{i})^{T}\) & \(e_{R}^{i}\) & \(N_{L}^{i}\) & \(S_{L}^{i}\) & \(\phi_{h}\) & \(\phi_{D}\) \\ \hline \(SU(2)_{L}\) & 2 & 1 & 1 & 2 & 1 & 1 & 1 & 2 & 1 \\ \hline \(U(1)_{Y}\) & \(1/6\) & \(2/3\) & \(-1/3\) & \(-1/2\) & \(-1\) & 0 & 0 & \(1/2\) & 0 \\ \hline \(U(1)_{D}\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \hline \hline \end{tabular} \end{table} Table 1: Particle contents and their corresponding charges under gauge groups. These operators get naturally suppressed when the scale of new physics \(\Lambda\) is large, ensuring feeble interactions with the rest of the particle spectrum; we considered \(\Lambda\geq 10^{14}\) GeV. \[\mathcal{L}_{\rm DM} = \frac{\kappa}{\Lambda}S_{L}^{3}S_{L}^{3}(\phi_{h}^{\dagger}\phi_{h})+\frac{\kappa^{\prime}}{\Lambda}S_{L}^{3}S_{L}^{3}(\phi_{D}^{\dagger}\phi_{D})+\frac{\xi}{\Lambda}N_{L}^{3}N_{L}^{3}(\phi_{h}^{\dagger}\phi_{h})+\frac{\xi^{\prime}}{\Lambda}N_{L}^{3}N_{L}^{3}(\phi_{D}^{\dagger}\phi_{D}) \tag{1}\] \[+\frac{\alpha}{\Lambda}N_{L}^{3}S_{L}^{3}(\phi_{h}^{\dagger}\phi_{h})+\frac{\alpha^{\prime}}{\Lambda}N_{L}^{3}S_{L}^{3}(\phi_{D}^{\dagger}\phi_{D})+\mathrm{h.c.}\;.\] We consider \(S_{m}\), the lighter of the two FIMP mass eigenstates, as the FIMP DM candidate; \(N_{m}\) is the other eigenstate. In a scenario where only the coupling \(\kappa\) is active and the mixing between \(S_{L}^{3}\) and \(N_{L}^{3}\) is negligible, \(S_{m}\sim S_{L}^{3}\). Additionally, if the mixing between the Higgses is small, meaning \(\cos\theta\) is close to 1, the FIMP DM \(S_{m}\) is mainly produced through Higgs scattering, and the analytical solution for the yield is given by [8]: \[Y_{S_{m}}=\int_{T_{\rm end}}^{T_{R}}\frac{dT}{S\mathcal{H}T}\left(\frac{4\kappa}{\Lambda}\right)^{2}\frac{T^{6}}{16\pi^{5}}\,, \tag{2}\] where \(T_{R}\) is the reheating temperature, \(S\) the entropy density and \(\mathcal{H}\) the Hubble rate. To obtain the correct relic abundance while requiring \(T_{R}>T_{\rm EWSB}\), the electroweak symmetry-breaking temperature, we found the preferred parameter ranges to be \(10^{13}\,{\rm GeV}<\Lambda<10^{16}\,{\rm GeV}\) and \(10^{2}\,{\rm GeV}<T_{R}<10^{5}\,{\rm GeV}\). In particular, we shall choose \(T_{R}=3\) TeV throughout this study. To obtain the final results we numerically evolve the full Boltzmann equations using micrOMEGAs and extract the DM relic densities. 
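Although the full relic computation is numerical, the parametric dependence of (2) on \(\Lambda\) and \(T_{R}\) can be made explicit with a short symbolic computation. The sketch below integrates (2) assuming the standard radiation-dominated forms \(S=(2\pi^{2}/45)\,g_{*}T^{3}\) for the entropy density and \(\mathcal{H}=(\pi/3)\sqrt{g_{*}/10}\,T^{2}/M_{\rm Pl}\) for the Hubble rate; these assumed forms, and all variable names, are ours for illustration and are not taken from [13].

```python
import sympy as sp

T, T_R, T_end, kappa, Lam, M_Pl, g = sp.symbols(
    "T T_R T_end kappa Lambda M_Pl g_*", positive=True)

S = 2 * sp.pi**2 / 45 * g * T**3                # entropy density (assumed form)
H = sp.pi / 3 * sp.sqrt(g / 10) * T**2 / M_Pl   # Hubble rate (assumed form)

# Integrand of the yield formula (2)
integrand = (4 * kappa / Lam) ** 2 / (16 * sp.pi**5) * T**6 / (S * H * T)
Y = sp.simplify(sp.integrate(integrand, (T, T_end, T_R)))
print(Y)  # proportional to kappa**2 * M_Pl * (T_R - T_end) / Lambda**2
```

The temperature dependence of the integrand cancels exactly for these dimension-5 operators, so \(Y_{S_{m}}\propto\kappa^{2}M_{\rm Pl}(T_{R}-T_{\rm end})/\Lambda^{2}\): production is dominated by the highest temperatures (ultraviolet freeze-in), which is why the relic abundance ties \(\Lambda\) and \(T_{R}\) together in the ranges quoted above.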
Figure 1: Left panel: allowed parameter space in the \(y_{e1}-\Omega_{\rm DM}^{\nu}h^{2}\) plane. Right panel: DM production by the freeze-out and freeze-in mechanisms and its evolution in terms of \(z\). The model parameters are chosen as \(M_{N_{m}}=300\) GeV, \(M_{S_{m}}=20\) GeV, \(M_{W_{D}}=1.04628\) GeV, \(\Lambda=5.5\times 10^{14}\) GeV, \(\kappa=\kappa^{\prime}=\xi=\xi^{\prime}=\alpha=\alpha^{\prime}=1\), \(M_{H_{2}}=2.212\) GeV, \(g_{D}=3.1\times 10^{-4}\), where \(H_{2}\) is the dark Higgs mass eigenstate and \(g_{D}\) the dark gauge coupling. The green double-dot-dashed (purple dot-dashed) line corresponds to the WIMP (FIMP) DM relic density. The cyan dashed line represents the NLSP relic density. The sum of the WIMP and FIMP DM relic densities is depicted by the black solid line, while the grey solid line shows the present DM relic density measured by Planck, \(\Omega_{\rm DM}h^{2}=\Omega_{\rm Tot}h^{2}=0.12\). \(S_{m}\) can be produced also through the annihilations of active neutrinos and extra heavy neutrinos, mediated by Higgses, such as \(\nu_{i}+N_{j}\xrightarrow{H_{1,2}}S_{m}+S_{m}\) and \(\nu_{i}+S_{j}\xrightarrow{H_{1,2}}S_{m}+S_{m}\), where \(i=1,2,3\) and \(j=1,2\). These allowed channels come from the extended double seesaw neutrino sector \[\mathcal{L}_{N}\supset-\sum_{i,j=1,2}\mu_{ij}S_{L}^{i}S_{L}^{j}-\sum_{i,j=1,2}M_{S}^{ij}S_{L}^{i}N_{L}^{j}-\sum_{i,j=1,2}M_{R}^{ij}N_{L}^{i}N_{L}^{j}-\sum_{i=e,\,\mu,\,\tau;\,j=1,2}y_{ij}\bar{L}_{i}\bar{\phi}_{h}N_{j}+\text{h.c.} \tag{3}\] In Figure 1, we present the allowed parameter space for the Yukawa coupling \(y_{e1}\) and the DM relic density that comes solely from the neutrino sector. We can see that when \(M_{N_{1}}\) is less than 500 GeV, the DM relic density coming from the active and heavy neutrinos' annihilations follows a simple power law in \(y_{e1}\), reflecting the fact that \(\Omega_{\text{DM}}^{\nu}h^{2}\propto y_{e1}^{2}\). When \(M_{N_{1}}\) is larger than \(10^{3}\) GeV, the contribution to the DM relic density is small, as the mass is close to the chosen reheating temperature of \(T_{R}=3\) TeV, leading to suppression. We observe that for the chosen range of parameter values the contribution of the active and extra heavy neutrinos to the total DM relic density is at most about 3%. The results are obtained for points in the parameter space that are not excluded by lepton flavor violation bounds and are in agreement with the neutrino oscillation data. The WIMP candidate \(W_{D}\) is produced via the standard freeze-out mechanism and its mass should be close to the resonance to avoid overproduction. The right panel of Fig. 1 shows the production of dark matter (DM) through the freeze-out and freeze-in mechanisms. The green double-dot-dashed line represents the WIMP DM produced by freeze-out, which occurs at \(T\simeq M_{W_{D}}/20\) or \(z\simeq 2500\). The cyan dashed line represents the production of the next-to-lightest stable particle (NLSP) \(N_{m}\), which later decays to the FIMP DM \(S_{m}\) at \(z\simeq 3500\). The NLSP is produced in the early Universe at \(T\simeq 3000\) GeV through \(2\to 2\) processes. The purple dot-dashed line indicates the FIMP DM production via the freeze-in mechanism, with initial production at \(z=0.03\) and additional production from the decay of the SM-like Higgs, \(H_{1}\), and the NLSP. The total DM relic density, shown by the black solid line, matches the Planck measurement of \(\Omega_{\text{Tot}}h^{2}=0.12\) today, with the WIMP and FIMP DM contributing equally. ### Experimental bounds In the next section, we will see that a low-mass BSM dark Higgs is favored by the FOPT. Therefore, in this section, we will focus on the range \(1-200\) GeV for the dark Higgs mass. 
Additionally, to avoid potential issues with collider searches due to the low mass, we considered small mixing angles, \(|\sin\theta|<0.1\), to evade Higgs signal strength bounds. We will consider five main constraints for the discussion of DM phenomenology: 1) relic density, 2) direct detection bounds, 3) indirect detection bounds, 4) Higgs invisible decay, and 5) Higgs signal strength bound. Figure 2 shows the allowed regions in the \(M_{W_{D}}-(\Omega_{W_{D}}/\Omega_{\text{Tot}})\sigma_{\text{SI}}\) (left panel) and \(M_{W_{D}}-(\Omega_{W_{D}}/\Omega_{\text{Tot}})\langle\sigma v\rangle_{b\bar{b}}\) (right panel) planes, together with various direct and indirect detection bounds that are depicted by solid lines. Note that we have rescaled the \(y\)-axes by the fraction of the total DM relic density in the Universe, \(\Omega_{\text{Tot}}h^{2}=0.12\), that is carried by the WIMP. Part of the \(M_{W_{D}}>7\) GeV region is already ruled out by direct detection experiments such as XENON-1T [16]. The region of DM mass below 7 GeV will be explored by future experiments like DarkSide-50 [17]. The region above the black solid line is already ruled out by the current bound on the branching ratio of the Higgs invisible decay mode. The region of \(M_{W_{D}}\gtrsim 10\) GeV is constrained by the Fermi-LAT + MAGIC Segue 1 data [18]. We observe that part of the parameter space which contributes dominantly to the DM relic is already ruled out by the indirect detection bound. Future experiments will be able to further test the allowed parameter space. ## 3 First order phase transition The extra dark \(U(1)_{D}\) Higgs field not only gives a mass to the WIMP DM \(W_{D}\), but also changes the vacuum evolution. To study the potential we consider only the temperature corrections and neglect the Coleman-Weinberg terms, which would introduce renormalization-scale and gauge dependence [19, 20]. Considering the VEV of the \(U(1)_{D}\) Higgs to be non-zero at zero temperature, we have two options for the phase transition pattern: the one-step phase transition has the pattern \((\langle H\rangle,\langle H_{D}\rangle)=(0,0)\rightarrow(v,v_{D})\), while the two-step phase transition may occur via \((\langle H\rangle,\langle H_{D}\rangle)=(0,0)\rightarrow(0,v^{\prime}_{D})\rightarrow(v,v_{D})\) or \((\langle H\rangle,\langle H_{D}\rangle)=(0,0)\rightarrow(v^{\prime},0)\rightarrow(v,v_{D})\). For the two-step phase transition of the pattern \((\langle H\rangle,\langle H_{D}\rangle)=(0,0)\rightarrow(0,v^{\prime}_{D})\rightarrow(v,v_{D})\), the second step breaks the electroweak symmetry, giving [21] \[\frac{v_{c}}{T_{c}}=\frac{2E^{\rm SM}}{\lambda_{h}-\lambda_{hD}^{2}/(4\lambda_{D})}=\frac{4E^{\rm SM}v^{2}}{M_{H_{1}}^{2}}\left(1+\sin^{2}\theta\,\frac{M_{H_{1}}^{2}-M_{H_{2}}^{2}}{M_{H_{2}}^{2}}\right)\,. \tag{4}\] Strong FOPTs, \(v_{c}/T_{c}\gtrsim 1\), are then achieved for small values of \(\lambda_{m}\equiv\lambda_{h}-\lambda_{hD}^{2}/(4\lambda_{D})\), or equivalently, small values of the dark \(U(1)_{D}\) Higgs mass. Therefore, if we consider this part of the parameter space, the model satisfies one of the necessary conditions for successful electroweak baryogenesis. 
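As a rough numerical illustration of (4), the snippet below evaluates \(v_{c}/T_{c}\) for a few dark-Higgs masses at fixed mixing. The value \(E^{\rm SM}=(2m_{W}^{3}+m_{Z}^{3})/(4\pi v^{3})\approx 0.01\) is the textbook SM cubic coefficient and, like the sample inputs, is our assumption rather than a number quoted here.

```python
import math

# Illustrative inputs in GeV; E_SM is the standard SM cubic coefficient.
v, M_H1 = 246.0, 125.25
m_W, m_Z = 80.4, 91.2
E_SM = (2 * m_W**3 + m_Z**3) / (4 * math.pi * v**3)  # ~ 0.0096

def vc_over_Tc(M_H2, sin_theta):
    # Equation (4): a strong FOPT requires this ratio to be >~ 1.
    base = 4 * E_SM * v**2 / M_H1**2
    return base * (1 + sin_theta**2 * (M_H1**2 - M_H2**2) / M_H2**2)

for M_H2 in (2.2, 20.0, 125.25):
    print(f"M_H2 = {M_H2:6.2f} GeV -> v_c/T_c = {vc_over_Tc(M_H2, 0.1):.2f}")
```

With these inputs only the few-GeV dark Higgs yields \(v_{c}/T_{c}>1\) (about 5 for \(M_{H_{2}}=2.2\) GeV versus about 0.2 at \(M_{H_{2}}=20\) GeV), consistent with the statement above that strong FOPTs favour a light dark \(U(1)_{D}\) Higgs.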
Figure 2: Allowed parameter space satisfying \(0.01\leq\Omega_{\rm DM}h^{2}\leq 0.12\) in the \(M_{W_{D}}-(\Omega_{W_{D}}/\Omega_{\rm Tot})\sigma_{\rm SI}\) (left) and \(M_{W_{D}}-(\Omega_{W_{D}}/\Omega_{\rm Tot})\langle\sigma v\rangle_{b\bar{b}}\) (right) planes. Here, \(\Omega_{\rm Tot}h^{2}=0.12\) is the total DM relic density today. The black solid line in the left panel indicates the Higgs invisible decay constraint. Various direct and indirect detection bounds are also overlaid with coloured solid lines; see text for detailed explanation. The colour of the points represents the value of the dark gauge coupling \(g_{D}\) (left) and the WIMP DM relic density (right). ## 4 Gravitational waves GWs produced by FOPTs have three main contributors: bubble wall collisions, sound waves in the plasma, and magneto-hydrodynamic turbulence [15]. The sum of these three components can be computed using CosmoTransitions. Fig. 3 shows the GW signals for three benchmark points (BPs) and the sensitivity of future space-based GW experiments such as LISA, DECIGO, and BBO. The three BPs account not only for the neutrino masses and the DM relic density, but also for strong FOPTs, and have different DM compositions (mostly WIMP, mostly FIMP, or comparable contributions). All three BPs are within the detectability reach of BBO, DECIGO, and Ultimate-DECIGO. ## 5 Conclusions This work discussed a model that extends the Standard Model to include dark matter and small neutrino masses using an extended seesaw framework. The model introduces two sets of three-generation neutrinos, with the third generation becoming FIMP-like particles. The heavier particle decays into the lighter one, making the lighter third-generation neutrino the FIMP dark matter candidate. The model also includes a WIMP dark matter candidate, the dark \(U(1)_{D}\) gauge boson, creating a two-component WIMP-FIMP dark matter scenario. This study explored the allowed parameter space and discussed prospects for detection in future experiments. It also showed that a first-order phase transition is possible in the scalar sector and that the model has the potential to generate stochastic gravitational waves that could be detected by future experiments. We have demonstrated that the strength of the electroweak first-order phase transition, quantified by the quantity \(v_{c}/T_{c}\), where \(T_{c}\) is the critical temperature and \(v_{c}\) is the SM Higgs vacuum expectation value at \(T_{c}\), may become larger than unity for small values of the dark \(U(1)_{D}\) Higgs mass. Therefore, one of the essential ingredients for successful electroweak baryogenesis is achieved in our model. We presented benchmark points that demonstrate the model's potential detectability by GW observatories and DM experiments. Figure 3: Left panel: numerically computed \(v_{c}/T_{c}\) values as a function of \(\lambda_{m}\equiv\lambda_{h}-\lambda_{hD}^{2}/(4\lambda_{D})\). In agreement with the analytical expression, strong FOPTs are achieved for small values of \(\lambda_{m}\), or equivalently, small values of the dark \(U(1)_{D}\) Higgs mass. Right panel: FOPT-associated GW spectra for our three BPs summarised in Table 2. The black dotted line corresponds to the first BP, the blue dashed line depicts the second BP, and the brown dot-dashed line represents the third BP. The sensitivity curves of future space-based GW experiments, including LISA, BBO, DECIGO, and Ultimate-DECIGO, are shown as well. ## 6 Acknowledgments The results presented in this document have been derived in collaboration with J. Kim and S. Khan. This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 860881-HIDDeN.
2310.06881
CAFA-evaluator: A Python Tool for Benchmarking Ontological Classification Methods
We present CAFA-evaluator, a powerful Python program designed to evaluate the performance of prediction methods on targets with hierarchical concept dependencies. It generalizes multi-label evaluation to modern ontologies where the prediction targets are drawn from a directed acyclic graph and achieves high efficiency by leveraging matrix computation and topological sorting. The program requirements include a small number of standard Python libraries, making CAFA-evaluator easy to maintain. The code replicates the Critical Assessment of protein Function Annotation (CAFA) benchmarking, which evaluates predictions of the consistent subgraphs in Gene Ontology. Owing to its reliability and accuracy, the organizers have selected CAFA-evaluator as the official CAFA evaluation software.
Damiano Piovesan, Davide Zago, Parnal Joshi, M. Clara De Paolis Kaluza, Mahta Mehdiabadi, Rashika Ramola, Alexander Miguel Monzon, Walter Reade, Iddo Friedberg, Predrag Radivojac, Silvio C. E. Tosatto
2023-10-10T10:51:47Z
http://arxiv.org/abs/2310.06881v2
# CAFA-evaluator: A Python Tool for Benchmarking Ontological Classification Methods ###### Abstract We present CAFA-evaluator, a powerful Python program designed to evaluate the performance of prediction methods on targets with hierarchical concept dependencies. It generalizes multi-label evaluation to modern ontologies where the prediction targets are drawn from a directed acyclic graph and achieves high efficiency by leveraging matrix computation and topological sorting. The program requirements include a small number of standard Python libraries, making CAFA-evaluator easy to maintain. The code replicates the Critical Assessment of protein Function Annotation (CAFA) benchmarking, which evaluates predictions of the consistent subgraphs in Gene Ontology. Owing to its reliability and accuracy, the organizers have selected CAFA-evaluator as the official CAFA evaluation software. **Availability and implementation**: https://pypi.org/project/cafaeval/1.0.0/ **Contact**: [email protected] ## Introduction Translating experimental data into biological knowledge remains a slow process despite the rapid accumulation of data in modern biology. Manually curated databases are the primary source of such knowledge due to their thorough standardization of integrated information, often organized into ontological annotations (The Gene Ontology Consortium, 2019). The automated prediction of ontological annotations has become widely adopted in knowledge bases. As a result, ensuring a reliable evaluation of the predicted information remains crucial. The Critical Assessment of protein Function Annotation (CAFA) initiative provides a well-defined framework for managing hierarchical data and independently evaluates Gene Ontology (GO) prediction methods (Radivojac et al., 2013; Jiang et al., 2016; Zhou et al., 2019). Since its first edition, the CAFA experiment has stimulated a number of theoretical studies about GO prediction and its evaluation (Clark and Radivojac, 2013; Peng et al., 2018). Despite the significant impact of CAFA, the development of novel function prediction methods suffers from the lack of an easy-to-use tool for internal benchmarking. Existing solutions are problematic due to missing documentation, hampering their maintenance, portability, development, and use by the scientific community. Moreover, these solutions are tailored specifically for GO terms and the CAFA challenge, incorporating numerous hard-coded parameters. The CAFA-evaluator package addresses these issues by being easy to use and maintain, fully documented, fast, and generic. It can be used with any type of ontology and annotation, and the dataset processing is entirely separated from the evaluation stage. Additionally, the input format is straightforward. The software has been tested against CAFA2 and CAFA3 data, replicating the exact results provided in their corresponding publications (Jiang et al., 2016; Zhou et al., 2019). CAFA-evaluator has been recently adopted as the official evaluation tool for the CAFA5 challenge hosted on Kaggle. The CAFA-evaluator software is open source and freely available for download from GitHub and PyPI. ### Implementation The CAFA-evaluator repository includes a Python library, a user-friendly command-line interface for generating all evaluations, and a Python notebook for plotting the results. 
The evaluation module calculates the F-measure, weighted F-measure, and semantic distance (S-score), as well as precision-recall and remaining uncertainty-misinformation curves, as described in (Jiang et al., 2016). The package requires only three standard Python libraries: Numpy, Pandas, and Matplotlib, with the latter being necessary only for generating plots. #### Input and calculation The CAFA-evaluator software requires three inputs: an ontology OBO file, a ground truth file, and the path to the folder containing the prediction file(s). Optionally, it also accepts an information accretion file, which triggers the generation of weighted measures such as weighted precision, recall, F-measure, and S-score. All metrics are calculated by micro-averaging; i.e., the predictions for all proteins are pooled into a single sample, from which all metrics are subsequently calculated. All input files undergo internal parsing, and predictions are filtered to include only those targets present in the ground truth and those terms that are part of the input ontology. When terms are associated with a "namespace", also called "aspect" or "sub-ontology", different namespaces are treated as independent ontologies, and both the ground truth and predictions are split accordingly. Namespaces with multiple roots are handled without problems, and it is possible to exclude root terms from the evaluation. The algorithm stores three sparse matrices in memory: the ontology graph as an adjacency matrix, an \(n\times m\) boolean matrix, where \(n\) is the number of targets and \(m\) is the number of ontology terms, representing the ground truth, and a matrix of the same size (or smaller if some targets are missing) including the prediction scores. Multiple prediction files, each corresponding to a different method, are processed one by one to release the memory associated with the third matrix. Both the predictions and the ground truth annotations are always propagated up to the ontology root(s). By default, however, prediction scores are propagated without overwriting parents' scores, as in CAFA. Optionally, the maximum score over all direct children terms can be propagated to their common parent term. The ontology graph is topologically sorted at parsing time, allowing the propagation to be calculated in linear time, solely depending on the size of the ontology, which is always the same for all prediction files and is loaded in memory at the beginning. Confusion matrices are calculated per target and per threshold, i.e. separately by considering predicted terms with a score above the threshold. By default, one hundred evenly spaced cutoffs in the range \([0,1)\) are considered, but more cutoffs can be set by the user, e.g. to capture all unique score predictions for a method. Calculation time depends on the number of threshold cutoffs. The software is parallelized so that blocks of thresholds can be calculated in different threads. The overall method evaluation is obtained by averaging target measures in different ways. The user can decide whether to normalize considering all ground truth targets, i.e. penalizing methods with low coverage, or considering only the predicted targets. By default, the program normalizes the recall by the number of ground truth targets and the precision by the number of predicted targets, as in CAFA. When the information accretion file is provided, the confusion matrix is calculated after the terms are weighted by their information accretion; in this way the graph intersection is scored by information content rather than by a simple term count. 
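To make the propagate-then-threshold logic concrete, here is a minimal, self-contained sketch of the optional max-propagation mode and of per-threshold precision/recall on a toy four-term ontology. The toy data and all names are ours; the snippet illustrates the algorithm described above and does not reproduce the package's actual internals or API.

```python
import numpy as np

# Toy ontology: term 0 is the root; parent[c, p] = True means p is a parent of c.
n_terms = 4
parent = np.zeros((n_terms, n_terms), dtype=bool)
parent[1, 0] = parent[2, 0] = parent[3, 1] = True

order = [3, 2, 1, 0]  # a topological order, leaves first (hand-computed here)

def propagate_max(scores, parent, order):
    # Optional mode described above: push each term's score onto its parents,
    # keeping the parents' maximum; the sorting allows a single linear pass.
    out = scores.copy()
    for t in order:
        for p in np.flatnonzero(parent[t]):
            out[:, p] = np.maximum(out[:, p], out[:, t])
    return out

pred = np.array([[0.0, 0.2, 0.9, 0.7]])        # one target, raw scores per term
truth = np.array([[True, True, False, True]])  # ground truth, already propagated

scores = propagate_max(pred, parent, order)    # -> [[0.9, 0.7, 0.9, 0.7]]
for tau in (0.5, 0.8):                         # evenly spaced cutoffs in [0, 1)
    P = scores >= tau
    tp = (P & truth).sum()
    pr = tp / max(P.sum(), 1)  # normalised over predicted terms
    rc = tp / truth.sum()      # normalised over ground-truth terms
    print(f"tau={tau}: precision={pr:.2f}, recall={rc:.2f}")
```

Scanning many cutoffs in this way yields the precision-recall curve, from which an F-measure is obtained as the best harmonic mean of precision and recall over thresholds.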
Other options control the inclusion or exclusion of root (orphan) terms from the evaluation and limit the number of processed terms per protein and namespace. The latter is particularly useful when prediction methods include a large number of predicted terms per target and when the number of targets is large. In any case, the number of considered terms does not affect the computation or memory usage. #### Output The CAFA-evaluator software generates multiple output objects, including a table with an evaluation row for each method, namespace, and threshold. It also generates an object for F-measure, S-score, and weighted F-measure, reporting the rows with the corresponding best performance. The software also includes a function to store the output into TSV files. Finally, it creates a log file containing basic execution information, such as timestamps and statistics about the number of processed targets and terms. The evaluation output table can be used as input for the Python notebook to generate curve plots. The notebook accepts an optional file with the name of the team associated with each prediction file. When this information is provided, only one prediction per team and ontology is selected, as in CAFA. Additionally, prediction files can be associated with a different name, which will be displayed in the plots. ### Summary The CAFA-evaluator software is an easy-to-use, generic, and well-documented tool designed for benchmarking function prediction methods using any type of ontology and annotation. It requires an ontology OBO file, a ground truth file, and a prediction file, and can optionally accept an information accretion file. The software uses internal parsing to filter predictions and generate multiple output files, including a TSV table with an evaluation row for each method, namespace, and threshold, as well as separate files for F-measure, S-score, and weighted F-measure. The software has been tested against CAFA2 and CAFA3 data and has been adopted as the official evaluation tool for the CAFA5 challenge hosted on Kaggle. **Funding** The European Union's Horizon 2020 research and innovation programme (778247, 823886, 952334); ELIXIR, the research infrastructure for life-science data; COST Action ML4NGP (CA21160); NextGenerationEU, PNRR (IR0000010). Funding for open access charge: University of Padova. Conflict of Interest: none declared.
2310.17754
Kinetic stability of Chapman-Enskog plasmas
In this paper, we investigate the kinetic stability of classical, collisional plasma - that is, plasma in which the mean-free-path $\lambda$ of constituent particles is short compared to the length scale $L$ over which fields and bulk motions in the plasma vary macroscopically, and the collision time is short compared to the evolution time. Fluid equations are typically used to describe such plasmas, since their distribution functions are close to being Maxwellian. The small deviations from the Maxwellian distribution are calculated via the Chapman-Enskog (CE) expansion in $\lambda/L \ll 1$, and determine macroscopic momentum and heat fluxes in the plasma. Such a calculation is only valid if the underlying CE distribution function is stable at collisionless length scales and/or time scales. We find that at sufficiently high plasma $\beta$, the CE distribution function can be subject to numerous microinstabilities across a wide range of scales. For a particular form of the CE distribution function arising in magnetised plasma, we provide a detailed analytic characterisation of all significant microinstabilities, including peak growth rates and their associated wavenumbers. Of specific note is the discovery of several new microinstabilities, including one at sub-electron-Larmor scales (the 'whisper instability') whose growth rate in some parameter regimes is large compared to other instabilities. Our approach enables us to construct the kinetic stability maps of classical, two-species collisional plasma in terms of $\lambda$, the electron inertial scale $d_e$ and $\beta$. This work is of general consequence in emphasising the fact that high-$\beta$ collisional plasmas can be kinetically unstable; for strongly magnetised CE plasmas, the condition for instability is $\beta > L/\lambda$. In this situation, the determination of transport coefficients via the standard CE approach is not valid.
Archie F. A. Bott, Steven C. Cowley, Alexander A. Schekochihin
2023-10-26T19:48:01Z
http://arxiv.org/abs/2310.17754v1
# Kinetic stability of Chapman-Enskog plasmas ###### Abstract In this paper, we investigate the kinetic stability of classical, collisional plasma - that is, plasma in which the mean-free-path \(\lambda\) of constituent particles is short compared to the length scale \(L\) over which fields and bulk motions in the plasma vary macroscopically, and the collision time is short compared to the evolution time. Fluid equations are typically used to describe such plasmas, since their distribution functions are close to being Maxwellian. The small deviations from the Maxwellian distribution are calculated via the Chapman-Enskog (CE) expansion in \(\lambda/L\ll 1\), and determine macroscopic momentum and heat fluxes in the plasma. Such a calculation is only valid if the underlying CE distribution function is stable at collisionless length scales and/or time scales. We find that at sufficiently high plasma \(\beta\), the CE distribution function can be subject to numerous microinstabilities across a wide range of scales. For a particular form of the CE distribution function arising in strongly magnetised plasma (viz., plasma in which the Larmor periods of particles are much smaller than collision times), we provide a detailed analytic characterisation of all significant microinstabilities, including peak growth rates and their associated wavenumbers. Of specific note is the discovery of several new microinstabilities, including one at sub-electron-Larmor scales (the 'whisper instability') whose growth rate in certain parameter regimes is large compared to other instabilities. Our approach enables us to construct the kinetic stability maps of classical, two-species collisional plasma in terms of \(\lambda\), the electron inertial scale \(d_{e}\) and the plasma \(\beta\). This work is of general consequence in emphasising the fact that high-\(\beta\) collisional plasmas can be kinetically unstable; for strongly magnetised CE plasmas, the condition for instability is \(\beta\gtrsim L/\lambda\). In this situation, the determination of transport coefficients via the standard CE approach is not valid. 2.3 Kinetic stability of classical, collisional plasma * 2.3.1 Overview * 2.3.2 Existence of microinstabilities in classical, collisional plasma * 2.3.3 A simple example: the firehose instability in CE plasmas * 2.3.4 Which microinstabilities are relevant * 2.4 Linear stability calculation: overview * 2.4.1 General dispersion relation * 2.4.2 Simplifications of dispersion relation: overview of our approach * 2.5 Linear stability calculation: detailed methodology * 2.5.1 Low-frequency condition in a magnetised plasma * 2.5.2 Simplification I: non-relativistic electromagnetic fluctuations * 2.5.3 Simplification II: expansion of dielectric tensor in \(\omega\ll k_{\parallel}v_{\rm ths}\) * 2.5.4 Additional symmetries of low-frequency dielectric tensor \(\mathfrak{C}^{(0)}_{s}\) * 2.5.5 Consequences for dispersion relation * 2.5.6 Effect of multiple species on dispersion-relation derivations * 2.5.7 Modelling collisional effects on CE microinstabilities * 2.5.8 Caveats: microinstabilities in CE plasma where \(\omega/k_{\parallel}v_{\rm ths}\not\sim\eta_{s},\epsilon_{s}\) * 3. * 3.1 Form of CE distribution function * 3.2 Stability * 3.3.1 Parallel whistler (heat-flux) instability * 3.3.2 Oblique whistler (heat-flux) instability * 3.3.3 Slow-(hydromagnetic)-wave instability * 3.3.4 Long-wavelength kinetic-Alfven-wave instability * 4. 
4.1 Form of CE distribution function * 4.2 Stability * 4.2.1 Positive pressure anisotropy * 4.2.2 Negative pressure anisotropy * 4.2.3 Collisional stabilisation * 4.2.4 Outline of the rest of this section * 4.3 CES microinstability classification: positive pressure anisotropy (\(\epsilon_{i}>0\)) * 4.3.1 Mirror instability * 4.3.2 Whistler instability * 4.3.3 Parallel transverse instability * 4.3.4 Electron mirror instability * 4.4 CES microinstability classification: negative pressure anisotropy (\(\epsilon_{i}<0\)) * 4.4.1 Firehose instability * 4.4.2 Quasi-parallel firehose instability * 4.4.3 Oblique firehose instability * 4.4.4 Critical-line firehose instability * 4.4.5 Sub-ion-Larmor-scale firehose instability * 4.4.6 Parallel electron firehose instability * 4.4.7 Oblique electron firehose instability * 4.4.8 Electron-scale-transition (EST) instability * 4.4.9 Oblique transverse instability * 4.4.10 Whisper instability * 4.4.11 Ordinary-mode instability **5. Discussion and conclusions** **Appendix A. Glossary of notation used in the paper** **Appendix B. Derivation of the Chapman-Enskog distribution function** B.1 The Chapman-Enskog expansion in strongly magnetised plasma B.1.1 Electrons B.1.2 Electrons in strongly magnetised limit B.1.3 Ions B.2 Deriving isotropic functions of velocity for the CE solution B.2.1 Krook collision operator B.2.2 Lorentz collision operator **Appendix C. Derivation of hot, magnetised plasma dispersion relation for arbitrary distribution functions** **Appendix D. Electrostatic instabilities of CE plasma** D.1 The electrostatic hot-plasma dispersion relation D.2 The electrostatic dielectric response at low frequencies D.3 Existence of electrostatic instabilities for a CE plasma D.4 Impossibility of electrostatic instabilities with 'fast' growth rates D.4.1 Low-frequency electrostatic modes: \(\omega\ll k_{\parallel}v_{\rm ths}\) D.4.2 Other electrostatic modes: \(\omega\gtrsim k_{\parallel}v_{\rm ths}\) **Appendix E. Weak growth of high-frequency perturbations** E.1 Deriving conditions for stability E.2 Evaluating conditions for stability **Appendix F. Properties of leading-order expansion \(\mathfrak{C}^{(0)}\) of dielectric tensor (2.73) in \(\tilde{\omega}_{s\parallel}\ll 1\) for a weakly anisotropic distribution function** F.1 Symmetries of \(\mathfrak{C}^{(0)}_{s}\) in coordinate basis \(\{\hat{\mathbf{x}},\hat{\mathbf{y}},\hat{\mathbf{z}}\}\) F.2 Evaluating the dielectric tensor in coordinate basis \(\{\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\}\) **Appendix G. 
Dielectric tensor components for the CE distribution function (2.8)** G.1 Maxwellian distribution G.1.1 General dielectric tensor G.1.2 Dielectric tensor in low-frequency limit, \(\{\hat{\mathbf{x}},\hat{\mathbf{y}},\hat{\mathbf{z}}\}\) coordinate frame G.1.3 Dielectric tensor in low-frequency limit, \(\{\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\}\) coordinate frame G.1.4 Asymptotic forms of \(\mathbf{M}^{(0)}_{s}\) and \(\mathbf{M}^{(1)}_{s}\) G.1.5 Unmagnetised Maxwellian dielectric response G.1.6 Validity of approximation \(\mathbf{M}_{s}\approx\mathbf{M}^{(0)}_{s}\) for large or small \(k_{\parallel}\rho_{s}\) and \(k_{\perp}\rho_{s}\) G.1.7 Calculation of second-order corrections to dispersion relation G.2 CE electron-friction term G.3 CE temperature-gradient-driven terms G.3.1 Dielectric tensor in low-frequency limit G.3.2 Asymptotic limits of \(\mathbf{\mathcal{P}}^{(0)}_{s}\) G.4 CE shear terms G.4.1 Dielectric tensor in low-frequency limit G.4.2 Asymptotic limits of \(\mathbf{\mathcal{P}}^{(0)}_{s}\) **Appendix H. Density perturbations for low-frequency modes** H.1 Derivation of general expressions H.2 Special case: sub-ion-Larmor scale modes in a two-species plasma ## Appendix I Calculating the electrostatic field from the transverse electric field ### Methodology for characterising CET microinstabilities
electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the electric field from the transverse electric field from the transverse electric field from the electric field from the transverse electric field from the transverse electric field from the transverse electric field from the electric field from the transverse electric field from the transverse electric field from the transverse electric field from the transverse electric field from the electric field from the transverse electric field from the electric field from the transverse electric field from the transverse electric field from the transverse electric field from the electric field from the transverse electric field from the electric field from the transverse electric field from 
the electric field from the transverse electric field from the electric field from the transverse electric field from the electric field from the transverse electric field from the electric field from the transverse electric field from the electric field from the transverse electric field from the electric field from the transverse electric field from the electric field from the electric field from the transverse electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field 
from the electric field from the electric field from the electric field from the electric field from the electric from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from the electric field from electric field the electric from the electric field from the electric field from electric field the electric field from the electric field from the electric 
## 1 Introduction

Answering the question of when a plasma can be described adequately by fluid equations is fundamental for a comprehensive understanding of plasma dynamics. It is well known that some physical effects in plasmas - for example, Landau damping - specifically require a fully kinetic description in terms of distribution functions of the plasma's constituent particles (Landau 1946). However, for many other plasma processes, a detailed description of the underlying particle distribution provides little additional understanding of the essential physics governing that process. Characterising such processes with fluid equations, which describe the evolution of macroscopic physical quantities such as density, fluid velocity and temperature, often simplifies the description and therefore aids understanding. Fluid equations are also easier to solve numerically than kinetic equations: the latter reside in six-dimensional phase space (and time), with three additional dimensions - the velocity space - when compared to the former.

The underlying difficulty associated with determining when a plasma is a fluid is finding a closed set of equations in the macroscopic plasma variables. The derivation of fluid equations from the Maxwell-Vlasov-Landau equations governing the evolution of the plasma's distribution functions is carried out by taking moments (that is, integrating the governing equations and their outer products with velocity \(\boldsymbol{v}\) over velocity space). However, the resulting equations are not closed: the evolution equation of the zeroth-order moment (density) requires knowledge of the evolution of the first-order moment, the evolution equation for the first-order moment needs the second-order moment, and so on. For plasma fluid equations to be able to describe the evolution of a plasma without reference to that plasma's underlying distribution functions, a closure hypothesis or an approximation relating higher-order moments to lower ones is required.

For a collisional plasma - i.e., one in which the mean free paths \(\lambda_{s}\) and collision times \(\tau_{s}\) of the ions and electrons (\(s=i,e\)) are much smaller than the typical length scale \(L\) and time scale \(\tau_{L}\) on which macroscopic properties of the plasma change - there is a procedure for achieving such a closure: the _Chapman-Enskog (CE) expansion_ (Chapman & Cowling 1970; Enskog 1917; Cercignani 1988). It is assumed that in a collisional plasma, the small perturbations of the distribution functions away from a Maxwellian equilibrium have typical size \(\epsilon\sim\lambda_{s}/L\sim\tau_{s}/\tau_{L}\ll 1\) (assuming sonic motions, and \(\lambda_{i}\sim\lambda_{e}\)). Since the perturbation is small, its form can be determined explicitly by performing an asymptotic expansion of the Maxwell-Vlasov-Landau equations. Once the underlying distribution is known, the relevant moments can be calculated - in particular, the momentum and heat fluxes are the second- and third-order moments of the \(O(\epsilon)\) non-Maxwellian component of the distribution function. The CE expansion applied to a two-species magnetised plasma was worked out by Braginskii (1965). Subsequent studies have refined and extended various aspects of his calculation (Epperlein 1984; Mikhailovskii & Tsypin 1984; Epperlein & Haines 1986; Helander _et al._ 1994; Simakov & Catto 2004).
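To see schematically why the moment hierarchy fails to close, consider the following minimal illustration (a one-dimensional, force-free kinetic equation, rather than the full magnetised, two-species problem treated in this paper): if \(f(x,v,t)\) obeys \(\partial f/\partial t+v\,\partial f/\partial x=\mathfrak{C}(f)\), then the moments \(M_{k}(x,t)\equiv\int\mathrm{d}v\,v^{k}f\) satisfy

\[\frac{\partial M_{k}}{\partial t}+\frac{\partial M_{k+1}}{\partial x}=\int\mathrm{d}v\,v^{k}\,\mathfrak{C}(f),\]

so the evolution equation for each \(M_{k}\) involves \(M_{k+1}\): the chain terminates only if some additional hypothesis - such as the CE expansion - expresses a higher-order moment in terms of lower-order ones.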
In this paper, we will refer to the distribution functions associated with the CE expansion as CE distribution functions, and plasmas with particle distribution functions given by CE distribution functions as CE plasmas. However, the theory constructed as outlined above is incomplete. For the CE expansion to provide an adequate fluid closure, the resulting distribution functions must be stable to all kinetic instabilities with length scales shorter than the longest mean free path, and timescales shorter than the macroscopic plasma timescale \(\tau_{L}\). Such instabilities (if present) are known as microinstabilities.

We emphasise that these microinstabilities should be distinguished conceptually from instabilities describable by the closed set of plasma-fluid equations: for example, Rayleigh-Taylor (Rayleigh 1883; Taylor 1950; Takabe _et al._ 1985; Kull 1991), magnetorotational (Balbus & Hawley 1991; Hawley & Balbus 1991), magnetoviscous (Quataert _et al._ 2002; Balbus 2004; Islam & Balbus 2005), or magnetothermal/heat-flux-driven buoyancy instabilities (Balbus 2000, 2001; Quataert 2008; Kunz 2011). Kinetic microinstabilities should also be distinguished from the small-scale instabilities that arise in solving higher-order (\(O(\epsilon^{2})\)) fluid equations obtained from the CE asymptotic expansion (for neutral fluids, these are called the _Burnett equations_ - see Garcia-Colin _et al._ 2008). Such instabilities are not physical because they arise at scales where the equations themselves do not apply (Bobylev 1982). Fluid instabilities do not call into question the validity of the fluid equations themselves; in contrast, if microinstabilities occur, the plasma-fluid equations obtained through the closure hypothesis are physically invalid, irrespective of their own stability.

Microinstabilities have been studied in depth for a wide range of classical plasmas by many authors; see, for example, Davidson (1983), Gary (1993), and Hasegawa (2012) for three different general perspectives on microinstability theory. Although it can be shown that a Maxwellian distribution is always immune to such instabilities (Bernstein 1958; Krall & Trivelpiece 1973), anisotropic distribution functions are often not (Kahn 1962; Furth 1963; Kalman _et al._ 1968). A notable example is the Weibel instability, which occurs in counter-streaming unmagnetised plasmas (Weibel 1959; Fried 1959). The linear theory of such instabilities is generally well known (for modern reviews, see Lazar _et al._ 2009; Ibscher _et al._ 2012). Microinstabilities in magnetised plasma have also been comprehensively studied. The ion firehose and mirror instabilities are known to occur in plasmas with sufficient ion-pressure anisotropy and large enough plasma \(\beta\) (Chandrasekhar _et al._ 1958; Parker 1958; Vedenov & Sagdeev 1958; Hasegawa 1969; Hall 1981; Hellinger 2007), while electron-pressure anisotropy can also result in microinstabilities of various types (Kennel & Petschek 1966; Hollweg & Volk 1970; Gary & Madland 1985).

A number of authors have noted that microinstabilities, if present, will have a significant effect on the macroscopic transport properties of plasmas (Kahn 1964; Schekochihin _et al._ 2005, 2008; Melville _et al._ 2016; Riquelme _et al._ 2016; Komarov _et al._ 2016, 2018; Roberg-Clark _et al._ 2018_a_; Drake _et al._ 2021).
Typically (although not always), once the small-scale magnetic and electric fields associated with microinstabilities have grown, they will start to scatter particles, which in turn will alter the plasma's distribution functions. This has micro- and macroscopic consequences for plasma behaviour. From the microscopic perspective, it changes the course of the evolution of the microinstabilities themselves - by, e.g., reducing the anisotropy of the underlying particle distribution functions (Hellinger _et al._ 2014; Riquelme _et al._ 2018). From the macroscopic perspective, the changes to the distribution functions will alter both heat and momentum fluxes in the plasma (which, as previously mentioned, are determined by non-Maxwellian terms in the distribution function). In this picture, a plasma subject to microinstabilities in some sense generates its own effective anomalous collisionality (Schekochihin _et al._ 2008; Mogavero & Schekochihin 2014; Kunz _et al._ 2014; Squire _et al._ 2017; Kunz _et al._ 2020). The typical values of the altered fluxes attained must depend on the saturated state of microinstabilities (Schekochihin _et al._ 2010). Exploring the mechanisms leading to saturation of both unmagnetised, Weibel-type instabilities (e.g., Davidson _et al._ 1972; Lemons _et al._ 1979; Califano _et al._ 1998, 2002; Kato 2005; Pokhotelov & Amariutei 2011; Ruyer _et al._ 2015) and magnetised instabilities (e.g., Kuznetsov _et al._ 2007; Pokhotelov _et al._ 2008; Rosin _et al._ 2011; Riquelme _et al._ 2015; Rincon _et al._ 2015) continues to be an active research area. Simulation results (Hellinger _et al._ 2009; Kunz _et al._ 2014; Guo _et al._ 2014; Riquelme _et al._ 2016; Melville _et al._ 2016; Guo _et al._ 2018; Bott _et al._ 2021_a_) support the claim that the saturation amplitude of such microinstabilities is typically such that the plasma maintains itself close to marginality of the relevant instability.

Do these kinetic instabilities afflict the CE distribution function? Naively, it might be assumed not, since it is 'almost' Maxwellian. However, it turns out that, provided the plasma \(\beta\) is sufficiently high, small distortions from a Maxwellian can be sufficient to lead to instability. Instabilities of a CE distribution function in an unmagnetised plasma were first explored by Kahn (1964), who considered a collisional electron plasma (mean free path \(\lambda_{e}\)) with macroscopic variations in density, temperature and velocity (scale \(\sim L\)). He showed that the CE distribution function in such a plasma would have two non-Maxwellian terms of order \(\lambda_{e}/L\) - an antisymmetric term associated with heat flux, and another term associated with velocity shear - and that the latter term would result in the so-called _transverse instability_. Kahn (1964) also claimed that this instability would lead to a significant change in the plasma viscosity and other transport coefficients. Albright (1970_a_,_b_) further developed the theory of the transverse instability, including a quasi-linear theory resulting in isotropisation of the underlying electron distribution function. The stability of the CE distribution function was later considered by Ramani & Laval (1978).
They found that in an initially unmagnetised two-species plasma supporting a fluid-scale electron-temperature gradient (scale \(L_{T}\), no flow shear), the second-order terms (in \(\lambda_{e}/L_{T}\)) in the electron distribution function could result in the formation of unstable waves, with typical real frequencies \(\varpi\propto\lambda_{e}/L_{T}\), and growth rates \(\gamma_{\rm RL}\propto\left(\lambda_{e}/L_{T}\right)^{2}\). Similarly to Kahn (1964), they argued that the presence of such instabilities would suppress the macroscopic heat flux in the plasma (which in a collisional plasma is carried predominantly by electrons). This particular instability has also been proposed as an explanation for the origin of the cosmic magnetic field (Okabe & Hattori 2003).

Subsequent authors have explored further the idea that the non-Maxwellian components of the electron distribution function required to support a macroscopic heat flux can lead to kinetic instability. Levinson & Eichler (1992) considered the effect of introducing a uniform, macroscopic magnetic field into the same problem, and found that a faster instability feeding off first-order heat-flux terms in the CE distribution function - the _whistler instability_ - arose at the electron Larmor scale, with \(\gamma_{\rm whistler,T}\propto\lambda_{e}/L_{T}\). A quasi-linear theory of this instability was subsequently constructed by Pistinner & Eichler (1998). Both Levinson & Eichler (1992) and Pistinner & Eichler (1998) proposed that the instability at saturation would result in a suppressed heat flux (see also Gary & Li 2000). More recently, the whistler instability has been studied in simulations of high-\(\beta\) plasma, with two groups independently finding both the onset of instability at electron scales and evidence of a suppression of heat flux (Roberg-Clark _et al._ 2016, 2018_a_,_b_; Komarov _et al._ 2018). Drake _et al._ (2021) constructed a theoretical model for whistler-regulated heat transport based on a set of reasonable assumptions that were motivated by these prior simulations.

The possibility of microinstabilities associated with the ion CE distribution function was also considered by Schekochihin _et al._ (2005), who found that weakly collisional, magnetised plasma undergoing subsonic, turbulent shearing motions can be linearly unstable to firehose and mirror instabilities at sufficiently high \(\beta_{i}\) (where \(\beta_{i}\) is the ion plasma beta). This is because the shearing motions give rise to an ion pressure anisotropy \(\Delta_{i}\sim\lambda_{i}^{2}/L_{V}^{2}\), where \(L_{V}\) is the length scale associated with the shearing motions. For \(|\Delta_{i}|\gtrsim\beta_{i}^{-1}\), the mirror and firehose instability thresholds can be crossed (the mirror instability is triggered by sufficiently positive pressure anisotropy, the firehose instability by negative pressure anisotropy). Beyond its threshold, the maximum firehose instability growth rate \(\gamma_{\rm fire}\) was found to satisfy \(\gamma_{\rm fire}\propto|\Delta_{i}+2/\beta_{i}|^{1/2}\), whilst for the mirror instability, the maximum growth rate was \(\gamma_{\rm mir}\propto\Delta_{i}-1/\beta_{i}\). Such destabilisation of shearing motions was confirmed numerically by Kunz _et al._ (2014), followed by many others (e.g., Riquelme _et al._ 2015, 2016, 2018; Melville _et al._ 2016).
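To make the threshold conditions implied by these scalings concrete, here is a minimal sketch (the function name is ours, and the dimensionless proportionality constants omitted from the growth rates above are likewise omitted here) that checks whether a given ion pressure anisotropy crosses the mirror or firehose marginality boundaries:

```python
def anisotropy_instability_check(Delta_i, beta_i):
    """Marginality checks implied by the growth-rate scalings quoted above:
    mirror unstable if Delta_i > 1/beta_i (gamma_mir ~ Delta_i - 1/beta_i > 0),
    firehose unstable if Delta_i < -2/beta_i."""
    return {"mirror": Delta_i > 1.0 / beta_i,
            "firehose": Delta_i < -2.0 / beta_i}

# e.g. a shear-driven anisotropy Delta_i ~ (lambda_i/L_V)^2 with
# lambda_i/L_V = 3e-2, at beta_i = 1e4:
print(anisotropy_instability_check((3e-2)**2, 1e4))
# -> {'mirror': True, 'firehose': False}
```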
In this paper, we examine the criteria for the CE distribution function to be stable to microinstabilities at collisionless scales - i.e., at \(k\lambda_{s}\gg 1\) (where \(k\) is the microinstability wavenumber), and \(\gamma\tau_{L}\gg 1\). In a two-species plasma with a fixed mass ratio \(\mu_{e}\equiv m_{e}/m_{i}\) and a charge \(Z\) that is not very large, these criteria turn out to be relationships between three dimensionless parameters: \(\lambda/L\), \(d_{e}/L\), and \(\beta\), where \(\lambda\equiv\lambda_{e}=\lambda_{i}\) is the mean free path for both ions and electrons, and \(d_{e}\) is the electron inertial scale. The first criterion (which we refer to as the \(\beta\)_-stabilisation condition_) is that the ratio \(\lambda/L\) be much smaller than the reciprocal of the plasma \(\beta\), viz. \(\lambda\beta/L\ll 1\). This condition arises because the microinstabilities discussed in this paper are stabilised (usually by Lorentz forces) at sufficiently low \(\beta\). The second criterion (the _collisional-stabilisation condition_) is that the characteristic wavenumber \(k_{\rm peak}\) of the fastest-growing microinstability in the absence of collisional effects be comparable to (or smaller than) the reciprocal of the mean-free-path: \(k_{\rm peak}\lambda\lesssim 1\). Unlike the \(\beta\)-stabilisation condition, we do not justify this condition rigorously, because our calculations are only valid for wavenumbers \(k\) such that \(k\lambda\gg 1\); thus, we cannot say anything definitive about the \(k\lambda\lesssim 1\) regime. We do, however, show that another, more restrictive stabilisation condition that one might naively expect to exist on account of collisions - that microinstabilities cannot occur if their growth rate \(\gamma\) is smaller than the collision frequency (viz., \(\gamma\tau_{s}\lesssim 1\)) - does not, in fact, apply to the most significant microinstabilities in CE plasma. There are good physical reasons to believe that the CE distribution function is stable against collisionless microinstabilities if the collisional-stabilisation condition \(k_{\rm peak}\lambda\lesssim 1\) is satisfied: not least that the typical growth time of the fastest microinstability in CE plasma (calculated neglecting collisional damping of microinstabilities) becomes comparable to the macroscopic evolution time scale \(\tau_{L}\). We thus assume the validity of the collisional-stabilisation condition throughout this paper. How \(k_{\rm peak}\) relates to the other physical parameters is in general somewhat complicated; however, typically the collisional-stabilisation condition can be written as a lower bound on the ratio \(d_{e}/L\). For example, in the limit of very high \(\beta\), it is \(d_{e}/L>(m_{e}/m_{i})^{-1/6}(\lambda/L)^{2/3}\) (see section 4.2). If both the \(\beta\)-stabilisation and collisional-stabilisation conditions are violated, we demonstrate that CE plasma will be subject to at least one microinstability, and quite possibly multiple microinstabilities across a wide range of scales. Some of these microinstabilities are thresholdless - that is, without including collisional effects, they will occur for CE distributions departing from a Maxwellian distribution by an asymptotically small amount. 
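Since both stabilisation conditions reduce to inequalities between dimensionless numbers, they are easy to evaluate in code. The sketch below (the function name is ours, and the '\(\ll\)' and '\(>\)' conditions are rendered as sharp inequalities, so the result should be read as an order-of-magnitude statement only) uses the very-high-\(\beta\) form of the collisional-stabilisation bound quoted above:

```python
def ce_stabilisation_conditions(lam_over_L, de_over_L, beta, mu_e=1.0/1836.0):
    """Order-of-magnitude checks of the two stabilisation conditions.

    beta-stabilisation:        beta * lambda / L << 1
    collisional stabilisation: very-high-beta estimate (see section 4.2),
                               d_e/L > mu_e**(-1/6) * (lambda/L)**(2/3)
    """
    beta_stabilised = beta * lam_over_L < 1.0
    coll_stabilised = de_over_L > mu_e**(-1.0 / 6.0) * lam_over_L**(2.0 / 3.0)
    return beta_stabilised, coll_stabilised

# ICM-like numbers (cf. tables 3 and 4): lambda/L ~ 1e-2, d_e/L ~ 1e-17, beta ~ 1e2
print(ce_stabilisation_conditions(1e-2, 1e-17, 1e2))
# -> (False, False): neither condition holds, so microinstabilities are expected
```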
Note that all significant microinstabilities associated with the CE distribution function are 'low frequency': their growth rate \(\gamma\) satisfies \(\gamma\ll kv_{\rm ths}\), where \(k\) is the typical wavenumber of the instability, and \(v_{\rm ths}\) the thermal velocity of the particles of species \(s\). This property enables a small anisotropy of the distribution function to create forces capable of driving microinstabilities (see section 2.5).

In this paper, we characterise all significant microinstabilities that arise at different values of \(\lambda/L\), \(\beta\), and \(d_{e}/L\) for a particular form of the CE distribution function appropriate for a strongly magnetised plasma - that is, a plasma where the Larmor radii of ions and electrons are much smaller than the corresponding mean free paths of these particles. We treat this particular case because of its importance to astrophysical systems, which almost always possess macroscopic magnetic fields of sufficient strength to magnetise their constituent particles (Schekochihin & Cowley 2006). Our characterisation of microinstabilities focuses on providing the maximum microinstability growth rates, as well as the wavenumbers at which this growth occurs.

We find that there exist two general classes of microinstabilities: those driven by the non-Maxwellian component of the CE distribution associated with temperature gradients, and those driven by the non-Maxwellian component associated with bulk velocity gradients ('shear'). We refer to these two non-Maxwellian terms (which exist for both the ion and electron CE distribution functions) as the _CE temperature-gradient terms_ and the _CE shear terms_, respectively. Microinstabilities driven by the CE temperature-gradient terms are called the CE temperature-gradient-driven (CET) microinstabilities, while those driven by the CE shear terms are the CE shear-driven (CES) microinstabilities.

As expected, within this general microinstability classification scheme, we recover a number of previously identified microinstabilities, including the (electron-shear-driven) transverse instability (which we discuss in sections 4.3.3 and 4.4.9), the whistler instability (section 4.3.2), the electron mirror instability (section 4.3.4), the electron firehose instability (sections 4.4.6 and 4.4.7), the ordinary-mode instability (section 4.4.11), the (electron-temperature-gradient-driven) whistler heat-flux instability (sections 3.3.1 and 3.3.2), and the (ion-shear-driven) mirror (section 4.3.1) and firehose (sections 4.4.1, 4.4.2, 4.4.3, 4.4.4, and 4.4.5) instabilities. We also find four microinstabilities that, to our knowledge, have not been previously discovered: two ion-temperature-gradient-driven ones at ion Larmor scales - the _slow-hydromagnetic-wave instability_ (section 3.3.3) and the _long-wavelength kinetic-Alfvén-wave instability_ (section 3.3.4) - and two electron-shear-driven ones - the _electron-scale-transition (EST) instability_ (section 4.4.8) and the _whisper instability_ (section 4.4.10) - at electron-Larmor and sub-electron-Larmor scales, respectively. Of these microinstabilities, the whisper instability seems to be of particular significance: it has an extremely large growth rate in certain parameter regimes, and is associated with a new high-\(\beta\) wave in a Maxwellian plasma, which also appears to have previously escaped attention.
For convenience, a complete index of microinstabilities discussed in this paper is given in table 1, while the peak growth rates of these microinstabilities and the scales at which they occur (for a hydrogen CE plasma) are given in table 2. There do exist microinstabilities in CE plasma that are not represented in tables 1 and 2; however, we claim that the instabilities discussed in this paper are the most significant, on account of their large growth rates and/or low \(\beta\)-stabilisation thresholds compared to the unrepresented ones.

Having systematically identified all significant microinstabilities, we can construct 'stability maps' of strongly magnetised CE plasma: phase diagrams over a two-dimensional (\(\lambda/L\), \(d_{e}/L\)) parameter space at fixed \(\beta\). An example of such a map (for a hydrogen plasma with equal ion and electron temperatures) is shown in figure 1. The entire region of the (\(\lambda/L\), \(d_{e}/L\)) space depicted in figure 1 could naively be characterised as pertaining to classical, collisional plasma, and thus describable by fluid equations, with transport coefficients given by standard CE theory. However, there is a significant region of the parameter space (which is demarcated by boundaries corresponding to the \(\beta\)-stabilisation and collisional-stabilisation conditions) that is unstable to microinstabilities. In fact, in strongly magnetised plasma, the collisional-stabilisation condition is never satisfied, because there exist microinstabilities whose characteristic length scales are the ion and electron Larmor radii; this being the case, only the \(\beta\)-stabilisation condition guarantees kinetic stability.

The effect of microinstabilities being present in CE plasma would be to change the non-Maxwellian components of the distribution function, and therefore to alter the CE-prescribed resistivity, thermal conductivity and/or viscosity. Identifying the dominant microinstability or microinstabilities in such plasmas (as is done in figure 1 for a hydrogen plasma) is then necessary for calculating the true transport coefficients, which are likely determined by the effective collisionality associated with the saturated state of the dominant microinstability rather than by Coulomb collisions. Although such calculations are not undertaken in this paper, it seems possible that the modified transport coefficients could be determined self-consistently in terms of macroscopic plasma properties such as temperature gradients or velocity shears. We note that the calculation presented here assumes that the CE distribution function is determined without the microinstabilities, and thus is only correct when the plasma is stable. Therefore, strictly speaking, the only conclusion one can make when the CE plasma is unstable is that the naive CE values of transport coefficients should not be taken as correct.

We emphasise that kinetic instability of CE plasmas is a phenomenon of practical importance as well as academic interest. We illustrate this in tables 3 and 4, where the possibility of microinstabilities is considered for a selection of physical systems composed of classical, collisional plasma.
We find that, while there exist some systems where CE plasmas are immune to microinstabilities - for example, the photosphere and chromosphere - there are many other astrophysical plasma systems that are likely susceptible to them. Similar considerations apply to a range of laser plasmas, including plasmas generated in inertial-confinement-fusion and laboratory-astrophysics experiments.

| Microinstability name | Section(s) | Other names | Driving CE term |
| --- | --- | --- | --- |
| Mirror instability | 4.3.1 | – | Ion-velocity shear |
| Firehose instability | 4.4.1–4.4.5 | Garden-hose instability | Ion-velocity shear |
| Slow-hydromagnetic-wave instability* | 3.3.3 | – | Ion-temperature gradient |
| Long-wavelength kinetic-Alfvén-wave (KAW) instability* | 3.3.4 | – | Ion-temperature gradient |
| CES whistler instability | 4.3.2 | Electron-cyclotron instability | Electron-velocity shear |
| Electron mirror instability | 4.3.4 | KAW, field-swelling instability | Electron-/ion-velocity shear |
| Electron firehose instability | 4.4.6, 4.4.7 | KAW instability | Electron-/ion-velocity shear |
| Electron-scale-transition (EST) instability* | 4.4.8 | – | Electron-velocity shear |
| Whisper instability* | 4.4.10 | – | Electron-velocity shear |
| Transverse instability | 4.3.3, 4.4.9 | Small-anisotropy Weibel instability | Electron-velocity shear |
| Ordinary-mode instability | 4.4.11 | – | Electron-velocity shear |
| CET whistler instability | 3.3.1, 3.3.2 | Whistler heat-flux instability | Electron-temperature gradient |

Table 1: **Index of microinstabilities.** The microinstabilities listed here are those discussed in the main text, with the relevant sections indicated. We also indicate whether these microinstabilities are driven by macroscopic electron/ion temperature gradients associated with the CE distribution function, or by macroscopic electron/ion velocity gradients (shears): see section 2.2.1 for a discussion of this classification. Newly identified microinstabilities are indicated with an asterisk.

| Environment | \(T_{e}\) (eV) | \(T_{i}\) (eV) | \(n_{e}\) (cm\(^{-3}\)) | \(B\) (G) | \(L\) (cm) |
| --- | --- | --- | --- | --- | --- |
| Warm intergalactic medium (WIGM) | \(10^{2}\) | \(10^{2}\) | \(10^{-5}\) | \(10^{-8}\) | \(3\times 10^{24}\) |
| Intracluster medium (ICM) | \(10^{4}\) | \(10^{4}\) | \(10^{-2}\) | \(10^{-5}\) | \(3\times 10^{23}\) |
| IGM post reionisation | \(1\) | \(1\) | \(10^{-6}\) | \(10^{-19}\) | \(3\times 10^{24}\) |
| Solar photosphere | \(1\) | \(1\) | \(10^{17}\) | \(500\) | \(10^{7}\) |
| Solar chromosphere | \(1\) | \(1\) | \(10^{12}\) | \(10\) | \(10^{7}\) |
| ICF hot spot (NIF) | \(5\times 10^{3}\) | \(5\times 10^{3}\) | \(10^{25}\) | \(10^{7}\) | \(2\times 10^{-3}\) |
| Laser-ablated plasma (long pulse) | \(10^{3}\) | \(5\times 10^{2}\) | \(4\times 10^{21}\) | \(10^{6}\) | \(10^{-2}\) |
| NIF 'TDYNO' laser-plasma | \(10^{3}\) | \(10^{3}\) | \(5\times 10^{20}\) | \(10^{6}\) | \(10^{-2}\) |

Table 3: **Plasma parameters for some physical systems composed of classical, collisional plasma.** The values of temperature and density of the WIGM given here are from Nicastro _et al._ (2008), while those of the ICM come from Fabian (1994). The estimates of the typical magnetic-field strengths and scale lengths for both the WIGM and the ICM are from Ryu _et al._ (2008). For simplicity, we have assumed equal ion and electron temperatures; however, we acknowledge that there is some uncertainty as to the validity of this assumption (see, e.g., Yoshida _et al._ 2005). Barkana & Loeb (2001) is the source of estimates for the IGM post reionisation. Estimates for typical solar parameters are from Wiegelmann _et al._ (2014) and Stix (2012). The values of ion temperature and electron density for ICF hot spots are from Hurricane _et al._ (2014), who reported DT experiments carried out on the National Ignition Facility (NIF); the estimates of magnetic-field strength, electron temperature and scale length come from numerical simulations of the same experiment (Walsh _et al._ 2017). The parameters for the laser-ablated CH plasma are from an experiment on the OMEGA laser facility, with a 1 ns, 500 J pulse with a 0.351 \(\mu\)m wavelength (Li _et al._ 2007); we assume that the measured fields are found in front of the critical-density surface when estimating the density. The 'TDYNO' laser-plasma is a turbulent CH plasma that was produced as part of a recent laboratory astrophysics experiment on the NIF which found evidence of suppressed heat conduction (Meinecke _et al._ 2022, see main text). Naturally, the systems described here often support a range of density, temperatures and magnetic fields, so the values provided should be understood as representative, but negotiable.

| Environment | \(\lambda_{e}/L\) | \(\lambda_{i}/L\) | \(d_{e}/L\) | \(\beta\) | \(\beta\lambda_{e}/L\) | \(\rho_{e}/\lambda_{e}\) | \(\rho_{i}/\lambda_{i}\) | \(k_{\rm peak}\lambda_{e}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| WIGM | \(2\times 10^{-3}\) | \(2\times 10^{-3}\) | \(2\times 10^{-17}\) | \(10^{4}\) | \(20\) | \(10^{-12}\) | \(10^{-11}\) | \(10^{12}\) |
| ICM | \(10^{-2}\) | \(10^{-2}\) | \(10^{-17}\) | \(10^{2}\) | \(1\) | \(10^{-14}\) | \(10^{-13}\) | \(10^{14}\) |
| Reion. IGM | \(10^{-7}\) | \(10^{-7}\) | \(10^{-16}\) | \(10^{22}\) | \(10^{15}\) | \(0.5\) | \(20\) | \(10^{5}\) |
| Photosphere | \(6\times 10^{-12}\) | \(6\times 10^{-12}\) | \(2\times 10^{-10}\) | \(30\) | \(10^{-10}\) | \(110\) | \(4\times 10^{3}\) | \(10^{-4}\) |
| Chromosphere | \(2\times 10^{-7}\) | \(2\times 10^{-7}\) | \(5\times 10^{-8}\) | \(1\) | \(10^{-7}\) | \(0.2\) | \(6\) | \(0.2\) |
| ICF hot spot | \(0.3\) | \(0.2\) | \(4\times 10^{-5}\) | \(4\times 10^{6}\) | \(10^{6}\) | \(0.1\) | \(10\) | \(10^{3}\) |
| Laser-abl. pl. | \(7\times 10^{-3}\) | \(3\times 10^{-4}\) | \(8\times 10^{-4}\) | \(200\) | \(1\) | \(0.4\) | \(800\) | \(2.5\) |
| NIF TDYNO | \(6\times 10^{-2}\) | \(2\times 10^{-2}\) | \(2\times 10^{-3}\) | \(45\) | \(2.5\) | \(0.1\) | \(200\) | \(10\) |

Table 4: **Derived plasma parameters for systems composed of classical, collisional plasma.** All parameters are calculated using Huba (1994), except for \(k_{\rm peak}\lambda_{e}\). This is calculated by considering all possible instabilities, and then finding the magnitude of \(k_{\rm peak}\lambda_{e}\) for the fastest-growing instability satisfying \(k_{\rm peak}\lambda_{e}\gtrsim 1\). Depending on the values of other parameters, the fastest-growing instability varies between systems; in the WIGM, ICM, laser-ablation and TDYNO plasmas, the whistler heat-flux instability is the fastest-growing one, while in the reionised IGM or ICF hot spots, the transverse instability is.
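As a rough illustration of how the derived parameters in table 4 follow from the physical parameters in table 3, the sketch below evaluates them from first-principles Gaussian-units definitions, using the Braginskii (1965) electron collision time and taking \(\lambda_{e}=v_{\mathrm{th}e}\tau_{e}\). The function name and the choice \(\ln\Lambda=10\) are ours, and numerical-prefactor conventions differ between references (including Huba 1994), so outputs should be trusted to order of magnitude only:

```python
import numpy as np

# cgs constants
e, m_e, c = 4.803e-10, 9.109e-28, 2.998e10
eV = 1.602e-12  # erg

def derived_parameters(Te_eV, Ti_eV, ne, B, L, Z=1.0, lnL=10.0):
    """Order-of-magnitude derived parameters in the style of table 4
    (hydrogen-like plasma, so n_i = n_e is assumed for Z = 1)."""
    Te, Ti = Te_eV * eV, Ti_eV * eV
    v_the = np.sqrt(2.0 * Te / m_e)                      # electron thermal speed
    # Braginskii (1965) electron collision time:
    tau_e = 3.0 * np.sqrt(m_e) * Te**1.5 / (4.0 * np.sqrt(2.0 * np.pi)
                                            * lnL * e**4 * Z**2 * ne)
    lam_e = v_the * tau_e                                # electron mean free path
    d_e = c / np.sqrt(4.0 * np.pi * ne * e**2 / m_e)     # electron inertial scale
    beta = 8.0 * np.pi * ne * (Te + Ti) / B**2           # plasma beta
    rho_e = m_e * v_the * c / (e * B)                    # electron Larmor radius
    return {"lam_e/L": lam_e / L, "d_e/L": d_e / L, "beta": beta,
            "beta*lam_e/L": beta * lam_e / L, "rho_e/lam_e": rho_e / lam_e}

# ICM-like inputs from table 3:
print(derived_parameters(1e4, 1e4, 1e-2, 1e-5, 3e23))
# d_e/L ~ 1e-17 and beta ~ 1e2, consistent with table 4
```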
Indeed, a recent experiment carried out on the National Ignition Facility (NIF) - part of a wider programme of work exploring magnetic-field amplification in turbulent laser-plasmas (Tzeferacos _et al._ 2018; Bott _et al._ 2021_a_,_b_, 2022) - found evidence for the existence of large-amplitude local temperature fluctuations over a range of scales, a finding that was inconsistent with Spitzer thermal conduction (Meinecke _et al._ 2022). This claim was corroborated by MHD simulations (with the code FLASH) of the experiment that modelled thermal conduction either with the Spitzer model or with no explicit thermal-conduction model at all: the latter simulations were found to be much closer to the actual experimental data. Because the plasma created in the NIF experiment is also anticipated by our theory to be susceptible to CE microinstabilities, observations of a discrepancy with CE-derived transport coefficients are tantalising.

Figure 1: _Stability map for the CE distribution function_. Idealised illustration of the stability of strongly magnetised, classical, collisional hydrogen plasma to microinstabilities for different (non-dimensionalised) values of the mean free path \(\lambda=\lambda_{e}=\lambda_{i}\) and the electron inertial scale \(d_{e}\). Here, the length scale \(L_{V}\) to which \(\lambda\) and \(d_{e}\) are normalised is the length scale of the CE plasma's bulk fluid motions in the direction parallel to the guide magnetic field [see (2.13_d_)]; we assume scalings (2.55) to relate the magnitude of CE temperature-gradient-driven and CE shear-driven microinstabilities, so the CE expansion parameter is \(\epsilon=\text{Ma}\,\lambda/L_{V}\) (see the caption of table 2 for definitions). The white region of the \((d_{e}/L_{V},\text{Ma}\,\lambda/L_{V})\) stability map is stable; the coloured regions are not. In the unstable regions, the fastest-growing microinstability is indicated by colour according to the figure's legend; in the regions where multiple microinstabilities could be operating simultaneously, multiple colours have been employed. The plasma beta \(\beta\) here was taken to be \(\beta=10^{4}\), and the Mach number \(\text{Ma}=1\).

We note that the idea of microinstabilities emerging in both collisional astrophysical plasmas and laser plasmas is not a new one: see, e.g., Schekochihin _et al._ (2005) or Hellinger & Trávníček (2015) in the former context; in the latter, Epperlein & Bell (1987) or Bell _et al._ (2020). However, to our knowledge there does not exist a systematic treatment of the general kinetic stability of CE plasmas. This is the gap that this paper attempts to fill.

This paper has the following structure. In section 2, we discuss kinetic and fluid descriptions of classical plasma. We then describe the CE expansion in collisional plasma: we work out the CE distribution function arising in a two-species strongly magnetised plasma, evaluate the friction forces, heat and momentum fluxes necessary to construct a closed set of plasma-fluid equations, and systematically estimate the size of the non-Maxwellian components of this distribution. Next, we discuss qualitatively the existence and nature of microinstabilities potentially arising in CE plasma, before presenting the methodology that we later use to perform the full linear, kinetic stability calculation.
We provide an overview of this methodology in section 2.4, and then a much more detailed exposition of it in section 2.5: in particular, we describe in the latter how a simple form of the dispersion relation for the fastest microinstabilities can be obtained by considering the low-frequency limit \(\gamma\ll kv_{\mathrm{th}s}\) of the hot-plasma dispersion relation, and how this simplified dispersion relation can be solved analytically. Readers who are uninterested in the technical details of this calculation are encouraged to pass over section 2.5; knowledge of its contents is not a pre-requisite for subsequent sections. In sections 3 and 4, we construct stability maps (analogous to figure 1) showing the parameter ranges in which the CE distribution function is stable to CET and CES microinstabilities, respectively. The parameters are \(\beta\) and \(\lambda/L\), and we construct separate stability maps for CET and CES microinstabilities in order to take into account the fact that \(L\) is in general not the same in the situations where these two types of microinstabilities occur. In section 3, we also discuss the significant CET microinstabilities that can occur (or not) at different values of \(\lambda/L\) and \(\beta\), and provide simple analytic characterisations of them; in section 4, we do the same for significant CES microinstabilities. Finally, in section 5, we discuss the general implications of these instabilities for classical, collisional plasmas, and consider future research directions. Throughout this paper, most lengthy calculations are exiled to appendices; a glossary of mathematical notation is given in appendix A.

## 2 Problem setup

### 2.1 Kinetic versus fluid description of classical plasma

The evolution of classical plasma is most generally described by kinetic theory, via the solution of Maxwell-Vlasov-Landau equations for the distribution functions of constituent particles. More specifically, in a kinetic description of a plasma, the distribution function \(f_{s}(\mathbf{r},\mathbf{v},t)\) of particles of species \(s\) satisfies

\[\frac{\partial f_{s}}{\partial t}+\mathbf{v}\cdot\mathbf{\nabla}f_{s}+\frac{Z_{s}e}{m_{s}}\left(\mathbf{E}+\frac{\mathbf{v}\times\mathbf{B}}{c}\right)\cdot\frac{\partial f_{s}}{\partial\mathbf{v}}=\sum_{s^{\prime}}\mathfrak{C}(f_{s},f_{s^{\prime}}),\tag{2.1}\]

where \(t\) is time, \(\mathbf{r}\) spatial position, \(\mathbf{v}\) the velocity, \(e\) the elementary charge, \(Z_{s}e\) the charge and \(m_{s}\) the mass of species \(s\), \(\mathbf{E}\) the electric field, \(\mathbf{B}\) the magnetic field, \(c\) the speed of light, and \(\mathfrak{C}(f_{s},f_{s^{\prime}})\) the collision operator for interactions between species \(s\) and \(s^{\prime}\). Equation (2.1) is coupled to Maxwell's equations:

\[\mathbf{\nabla}\cdot\mathbf{E}=4\pi\sum_{s}Z_{s}e\int\mathrm{d}^{3}\mathbf{v}\,f_{s},\tag{2.2a}\]
\[\mathbf{\nabla}\cdot\mathbf{B}=0,\tag{2.2b}\]
\[\mathbf{\nabla}\times\mathbf{E}=-\frac{1}{c}\frac{\partial\mathbf{B}}{\partial t},\tag{2.2c}\]
\[\mathbf{\nabla}\times\mathbf{B}=\frac{1}{c}\frac{\partial\mathbf{E}}{\partial t}+\frac{4\pi}{c}\sum_{s}Z_{s}e\int\mathrm{d}^{3}\mathbf{v}\,\mathbf{v}\,f_{s}.\tag{2.2d}\]

Together, (2.1) and (2.2) form a closed set of governing equations.
The density \(n_{s}\), bulk fluid velocity \(\mathbf{V}_{s}\) and temperature \(T_{s}\) of species \(s\) can be formally defined in terms of moments of the distribution function: \[\eqalignno{n_{s}&\equiv\int{\rm d}^{3}\mathbf{v}\,f_{s},&(2.3a)\cr\mathbf{V}_{s}&\equiv {1\over n_{s}}\int{\rm d}^{3}\mathbf{v}\,\mathbf{v}\,f_{s},&(2.3b)\cr T_{s}&\equiv {1\over n_{s}}\int{\rm d}^{3}\mathbf{v}\,{1\over 3}m_{s}|\mathbf{v}-\mathbf{V}_{s}|^{2}\,f_{s}.&(2.3c)\cr}\] Governing "fluid" equations are then derived by integrating (2.1) or outer products of (2.1) and the velocity variable \(\mathbf{v}\) with respect to \(\mathbf{v}\): \[\eqalignno{{{\rm D}n_{s}\over{\rm D}t}\bigg{|}_{s}+n_{s}\mathbf{\nabla}\cdot\mathbf{V} _{s}=0,&(2.4a)\cr m_{s}n_{s}{{\rm D}\mathbf{V}_{s}\over{\rm D}t}\bigg{|}_{s}=-\bm {\nabla}p_{s}-\mathbf{\nabla}\cdot\mathbf{\pi}_{s}+Z_{s}en_{s}\left(\mathbf{E}+{\mathbf{V}_{s} \times\mathbf{B}\over c}\right)+\mathbf{R}_{s},&(2.4b)\cr{3\over 2}n_{s}{{\rm D}T_{s} \over{\rm D}t}\bigg{|}_{s}+p_{s}\mathbf{\nabla}\cdot\mathbf{V}_{s}=-\mathbf{\nabla}\cdot \mathbf{q}_{s}-\mathbf{\pi}_{s}:\mathbf{\nabla}\mathbf{V}_{s}+{\cal Q}_{s},&(2.4c)\cr}\] where \[\eqalignno{{{\rm D}\over{\rm D}t}\bigg{|}_{s}\equiv{\partial\over\partial t}+ \mathbf{V}_{s}\cdot\mathbf{\nabla}&(2.5)\cr}\] is the convective derivative with respect to the fluid motions of species \(s\), \(p_{s}\) the pressure, \(\mathbf{\pi}_{s}\) the viscosity tensor, and \(\mathbf{q}_{s}\) the heat flux of species \(s\), \(\mathbf{R}_{s}\) the friction force on this species due to collisional interactions with other species, and \({\cal Q}_{s}\) the heating rate due to inter-species collisions. The latter quantities are formally defined in terms of the distribution function as follows: \[\eqalignno{p_{s}&\equiv\int{\rm d}^{3}\mathbf{v}\,{1\over 3}m_{s}|\mathbf{v}-\mathbf{V}_{s}| ^{2}\,f_{s}=n_{s}T_{s},&(2.6a)\cr\mathbf{\pi}_{s}&\equiv-p_{s}\mathbf{l}+\int{\rm d}^ {3}\mathbf{v}\,m_{s}\left(\mathbf{v}-\mathbf{V}_{s}\right)\left(\mathbf{v}-\mathbf{V}_{s}\right)\, f_{s},&(2.6b)\cr\mathbf{q}_{s}&\equiv\int{\rm d}^{3}\mathbf{v}\,{1\over 2 }m_{s}|\mathbf{v}-\mathbf{V}_{s}|^{2}\left(\mathbf{v}-\mathbf{V}_{s}\right)\,f_{s},&(2.6c)\cr \mathbf{R}_{s}&\equiv\sum_{s^{\prime}}\int{\rm d}^{3}\mathbf{v}\,m_{s}\mathbf{v}\,\mathfrak{ C}(f_{s},f_{s^{\prime}}),&(2.6d)\cr\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit \span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span \omit\span\omit\span\omit\omit\span\omit\span\omit\span\omit\span\omit\span\omit\span 
as

\[\boldsymbol{\nabla}\boldsymbol{\cdot}\boldsymbol{E}=4\pi\sum_{s}Z_{s}en_{s}, \tag{7a}\]
\[\boldsymbol{\nabla}\times\boldsymbol{B}=\frac{1}{c}\frac{\partial\boldsymbol{E}}{\partial t}+\frac{4\pi}{c}\sum_{s}Z_{s}en_{s}\boldsymbol{V}_{s}. \tag{7b}\]

Unlike the kinetic description, the fluid equations (4) combined with Maxwell's equations (2b), (2c), (7a) and (7b) are not a closed system: knowledge of the distribution function, not just of \(n_{s}\), \(\boldsymbol{V}_{s}\) or \(T_{s}\), is required to calculate the momentum and heat fluxes, as well as the friction force and heating. As discussed in the Introduction, solving fluid equations as opposed to kinetic equations is advantageous in many cases of interest. Since the dimensionality of the kinetic system is greater (a six-dimensional phase space vs. three-dimensional position space), solving the kinetic system introduces both significant numerical and conceptual complexity. However, the system of fluid equations (4) is only usable if some type of closure can be introduced to calculate \(\boldsymbol{\pi}_{s}\), \(\boldsymbol{q}_{s}\), \(\boldsymbol{R}_{s}\) and \(\mathcal{Q}_{s}\) in terms of \(n_{s}\), \(\boldsymbol{V}_{s}\) and \(T_{s}\). For classical plasmas, such a closure is generally not possible, except in the case of strongly collisional plasmas.

### 2.2 The Chapman-Enskog (CE) expansion

#### 2.2.1 The CE distribution functions

For a classical, collisional plasma - i.e., a plasma where the mean free path \(\lambda_{s}\) of particles of species \(s\) satisfies \(\lambda_{s}/L\ll 1\) for all \(s\), \(L\) being the length scale over which the macroscopic properties of the plasma vary - a formal procedure exists for deriving a closed system of fluid equations from a kinetic description of the plasma. This procedure is the Chapman-Enskog (CE) expansion, which gives distribution functions that are close to, but not exactly, Maxwellian. We call them Chapman-Enskog (CE) distribution functions. The non-Maxwellian components of the CE distribution function of particle species \(s\) are proportional to \(\lambda_{s}/L\), and must be present in order to support gradients of \(n_{s}\), \(\boldsymbol{V}_{s}\) and \(T_{s}\) on \(O(L)\) length scales, because (6b-e) are all zero for a Maxwellian plasma. We consider a collisional electron-ion plasma (in which, by definition, \(\mu_{e}\equiv m_{e}/m_{i}\ll 1\)) with the property that all constituent particle species are strongly magnetised by the macroscopically varying magnetic field \(\boldsymbol{B}\): that is, the Larmor radius \(\rho_{s}\equiv m_{s}v_{\mathrm{th}s}c/|Z_{s}|e|\boldsymbol{B}|\) satisfies \(\rho_{s}\ll\lambda_{s}\) both for the ions and for the electrons (here \(v_{\mathrm{th}s}\equiv\sqrt{2T_{s}/m_{s}}\) is the thermal speed of species \(s\)). Equivalently, a strongly magnetised plasma is one in which the Larmor frequency \(\Omega_{s}\equiv e|Z_{s}||\boldsymbol{B}|/m_{s}c\) satisfies \(\Omega_{s}\tau_{s}\gg 1\), where \(\tau_{s}\) is the collision time of species \(s\).
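To give a feel for how strongly ordered these scales can be, the following minimal Python sketch evaluates \(\rho_{s}\) and \(\Omega_{s}\tau_{s}=\lambda_{s}/\rho_{s}\) (using \(\lambda_{s}=v_{\mathrm{th}s}\tau_{s}\), formalised below). All input values - roughly those of a hot, dilute astrophysical plasma - are illustrative assumptions, including the mean free path, which we take as a given input rather than computing it.

```python
import numpy as np

# Illustrative (assumed) parameters, Gaussian units: a hot, dilute plasma.
n = 1e-2                 # number density [cm^-3] (not used below; for context)
T = 5e7 * 1.381e-16      # temperature [erg] (5e7 K)
B = 1e-6                 # magnetic field strength [G]
lam = 3e21               # assumed mean free path [cm] (~1 kpc)

e = 4.803e-10            # elementary charge [statC]
c = 2.998e10             # speed of light [cm/s]
m_e, m_i = 9.109e-28, 1.673e-24   # electron, proton masses [g]

for name, m in [("electron", m_e), ("ion", m_i)]:
    v_th = np.sqrt(2.0 * T / m)      # thermal speed v_ths = sqrt(2 T_s / m_s)
    rho = m * v_th * c / (e * B)     # Larmor radius (|Z_s| = 1 assumed)
    # rho << lambda and Omega*tau >> 1 are the same condition,
    # since Omega_s tau_s = v_th * tau_s / rho = lambda_s / rho_s.
    print(f"{name}: rho = {rho:.2e} cm, lambda/rho = {lam/rho:.2e}")
```

For these (assumed) numbers, \(\lambda_{s}/\rho_{s}\) exceeds \(10^{11}\) for both species: the strong-magnetisation ordering is extremely well satisfied.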
In such a plasma, the macroscopic variation of the fluid moments is locally anisotropic with respect to \(\mathbf{B}\); \(L\) is the typical length scale of variation in the direction locally parallel to \(\mathbf{B}\). It can then be shown that, to first order of the Chapman-Enskog expansion in \(\lambda_{s}/L\ll 1\), and to zeroth order in \(\rho_{s}/\lambda_{s}\ll 1\), the CE distribution functions of the electrons and ions are \[f_{e}(\tilde{v}_{e\parallel},\tilde{v}_{e\perp}) = \frac{n_{e}}{v_{\mathrm{the}}^{3}\pi^{3/2}}\exp\left(-\tilde{v}_{e }^{2}\right) \tag{11a}\] \[\times\Bigg{\{}1+\left[\eta_{e}^{T}A_{e}^{T}(\tilde{v}_{e})+\eta _{e}^{R}A_{e}^{R}(\tilde{v}_{e})+\eta_{e}^{u}A_{e}^{u}(\tilde{v}_{e})\right] \tilde{v}_{e\parallel}\] \[+\epsilon_{e}C_{e}(\tilde{v}_{e})\left(\tilde{v}_{e\parallel}^{2}- \frac{\tilde{v}_{e\perp}^{2}}{2}\right)\Bigg{\}},\] \[f_{i}(\tilde{v}_{i\parallel},\tilde{v}_{i\perp})=\frac{n_{i}}{v_{\rm thi }^{3}\pi^{3/2}}\exp\left(-\tilde{v}_{i}^{2}\right)\] \[\qquad\qquad\times\left\{1+\eta_{i}A_{i}(\tilde{v}_{i})\tilde{v}_{ i\parallel}+\epsilon_{i}C_{i}(\tilde{v}_{i})\left(\tilde{v}_{i\parallel}^{2}- \frac{\tilde{v}_{i\perp}^{2}}{2}\right)\right\}.\] Let us define the various symbols employed in (2.8), before discussing the origin of these expressions and their significance for formulating fluid equations (see section 2.2.2). The particle velocity \(\boldsymbol{v}\) (with the corresponding speed \(v=|\boldsymbol{v}|\)) is split into components parallel and perpendicular to the macroscopic magnetic field \(\boldsymbol{B}=B\hat{\boldsymbol{z}}\) as \(\boldsymbol{v}=v_{\parallel}\hat{\boldsymbol{z}}+\boldsymbol{v}_{\perp}\), and the perpendicular plane is in turn characterised by two vectors \(\hat{\boldsymbol{x}}\) and \(\hat{\boldsymbol{y}}\) chosen so that \(\{\hat{\boldsymbol{x}},\hat{\boldsymbol{y}},\hat{\boldsymbol{z}}\}\) is an orthonormal basis. The perpendicular velocity is related to these basis vectors by the gyrophase angle \(\phi\): \[\boldsymbol{v}_{\perp}=v_{\perp}\left(\cos\phi\,\hat{\boldsymbol{x}}-\sin \phi\,\hat{\boldsymbol{y}}\right).\] The non-dimensionalised peculiar velocity \(\tilde{\boldsymbol{v}}_{s}\) in the rest frame of the ion fluid is defined by \(\tilde{\boldsymbol{v}}_{s}\equiv(\boldsymbol{v}-\boldsymbol{V}_{i})/v_{\rm ths}\), \(\tilde{v}_{s}\equiv|\tilde{\boldsymbol{v}}_{s}|\), \(\tilde{v}_{s\parallel}\equiv\hat{\boldsymbol{z}}\boldsymbol{\cdot}\tilde{ \boldsymbol{v}}_{s}\), and \(\tilde{v}_{s\perp}\equiv|\tilde{\boldsymbol{v}}_{s}-\tilde{v}_{s\parallel} \hat{\boldsymbol{z}}|\). The number densities satisfy the quasi-neutrality condition \[Zn_{i}=n_{e}\,,\] where we have utilised \(Z_{e}=-1\), and defined \(Z\equiv Z_{i}\). We emphasise that \(n_{s}\), \(\{\hat{\boldsymbol{x}},\hat{\boldsymbol{y}},\hat{\boldsymbol{z}}\}\) and \(v_{\rm ths}\) all vary over length scales \(L\) in the plasma, but not on shorter scales (at least not in the direction locally parallel to \(\boldsymbol{B}\)). The functions \(A_{e}^{T}(\tilde{v}_{e})\), \(A_{e}^{R}(\tilde{v}_{e})\), \(A_{e}^{u}(\tilde{v}_{e})\), \(C_{e}(\tilde{v}_{e})\), \(A_{i}(\tilde{v}_{i})\) and \(C_{i}(\tilde{v}_{i})\) are isotropic functions. Their magnitude is \(\mathit{O}(1)\) when \(\tilde{v}_{e}\sim 1\) or \(\tilde{v}_{i}\sim 1\), for electrons and ions respectively. 
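As a quick sanity check of the structure of (8) - and a pattern we shall reuse for other velocity-space moments - the following minimal Python sketch integrates a CE-type distribution over normalised velocity space. The isotropic functions and the values of the small parameters here are assumed, illustrative model forms (of the kind a Krook operator produces), not the ones derived in appendix B; the point is only that the non-Maxwellian terms in (8) carry no net density (for the \(\tilde{v}_{s\parallel}\) term by oddness, for the shear term for any isotropic \(C_{s}\)).

```python
import numpy as np

# Grid in normalised velocity (v_par, v_perp); measure d^3v = 2*pi*v_perp dv_perp dv_par.
vpar = np.linspace(-6, 6, 2001)
vperp = np.linspace(0, 6, 1001)
VPAR, VPERP = np.meshgrid(vpar, vperp, indexing="ij")
V2 = VPAR**2 + VPERP**2

eta, eps = 1e-2, 1e-2        # assumed small CE parameters
A = 2.5 - V2                 # assumed model isotropic function A(v)
C = -np.ones_like(V2)        # assumed model isotropic function C(v)

maxw = np.exp(-V2) / np.pi**1.5
f = maxw * (1 + eta * A * VPAR + eps * C * (VPAR**2 - VPERP**2 / 2))

def moment(g):
    """Integrate g over normalised velocity space (cylindrical coordinates)."""
    return np.trapz(np.trapz(2 * np.pi * VPERP * g, vperp, axis=1), vpar)

print(moment(f))                                          # -> 1.0 (density preserved)
print(moment(maxw * eta * A * VPAR))                      # -> 0 (odd in v_par)
print(moment(maxw * eps * C * (VPAR**2 - VPERP**2 / 2)))  # -> 0 (isotropy)
```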
Finally, the parameters \(\eta_{e}^{T}\), \(\eta_{e}^{R}\), \(\eta_{e}^{u}\), \(\eta_{i}\), \(\epsilon_{e}\) and \(\epsilon_{i}\) are defined as follows: \[\eta_{e}^{T} =\lambda_{e}\nabla_{\parallel}\log T_{e}=\mathrm{sgn}(\nabla_{ \parallel}\log T_{e})\frac{\lambda_{e}}{L_{T}},\] (11a) \[\eta_{e}^{R} =\lambda_{e}\frac{R_{e\parallel}}{p_{e}},\] (11b) \[\eta_{e}^{u} =\lambda_{e}\frac{m_{e}u_{e\parallel}}{T_{e}\tau_{e}},\] (11c) \[\eta_{i} =\lambda_{i}\nabla_{\parallel}\log T_{i}=\mathrm{sgn}(\nabla_{ \parallel}\log T_{i})\frac{\lambda_{i}}{L_{T_{i}}},\] (11d) \[\epsilon_{e} =\frac{\lambda_{e}}{v_{\rm the}}\bigg{(}\hat{\boldsymbol{z}} \hat{\boldsymbol{z}}-\frac{1}{3}\boldsymbol{I}\bigg{)}\boldsymbol{\boldsymbol{ \boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbolboldsymbol{ \boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{ \ \leftleft( \left. \ \left\ \left\|\ \ \ \left\|\ \ \ \left\|\ \ \text{for $\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol{\boldsymbol }}}}}}}}}}}}}} \; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \ \[L_{V_{e}} \equiv \frac{1}{V_{e}}\left|\left(\hat{\mathbf{z}}\hat{\mathbf{z}}-\frac{1}{3}\bm {I}\right)\mathbf{:W}_{e}\right|^{-1}\,,\] (13 \[c\] ) \[L_{V} \equiv \frac{1}{V_{i}}\left|\left(\hat{\mathbf{z}}\hat{\mathbf{z}}-\frac{1}{3}\bm {I}\right)\mathbf{:W}_{i}\right|^{-1}\,,\] (13 \[d\] ) are, respectively, the electron- and ion-temperature and the electron- and ion-flow length scales parallel to the background magnetic field. The mean free paths are formally defined for a two-species plasma by \[\lambda_{e} \equiv v_{\rm th\/}\tau_{e},\] (14 \[a\] ) \[\lambda_{i} \equiv v_{\rm th\/}\tau_{i},\] (14 \[b\] ) and the collision times \(\tau_{e}\) and \(\tau_{i}\) are given in terms of macroscopic plasma parameters by \[\tau_{e} \equiv \frac{3m_{e}^{1/2}T_{e}^{3/2}}{4\sqrt{2\pi}Z_{i}^{2}e^{4}n_{i} \log\Lambda_{\rm CL}},\] (15 \[a\] ) \[\tau_{i} \equiv \frac{3m_{i}^{1/2}T_{i}^{3/2}}{4\sqrt{2\pi}Z_{i}^{4}e^{4}n_{i} \log\Lambda_{\rm CL}},\] (15 \[b\] ) where \(\log\Lambda_{\rm CL}\) is the Coulomb logarithm (Braginskii, 1965)1. In a collisional plasma, \(\eta_{e}^{T}\), \(\eta_{e}^{R}\), \(\eta_{e}^{u}\), \(\eta_{i}\), \(\epsilon_{e}\) and \(\epsilon_{i}\) are assumed small. We note that all these parameters can be either positive or negative, depending on the orientation of temperature and velocity gradients. Footnote 1: Braginskii defined his ion collision time as equal to (15 \(b\)) multiplied by a factor of \(\sqrt{2}\); for the sake of species equality, we remove this factor. It is clear from their definitions that each of the non-Maxwellian terms associated with the parameters \(\eta_{e}^{T}\), \(\eta_{e}^{R}\), \(\eta_{e}^{u}\), \(\eta_{i}\), \(\epsilon_{e}\) and \(\epsilon_{i}\) is linked to a different macroscopic physical quantity. Thus, \(\eta_{e}^{T}\) and \(\eta_{i}\) are proportional to the electron- and ion-temperature gradients, respectively; we will therefore refer to the associated non-Maxwellian terms as the _CE electron-temperature-gradient term_ and the _CE ion-temperature-gradient term_. 
We refer to the non-Maxwellian term proportional to \(\eta_{e}^{R}\) as the _CE electron-friction term_, to the non-Maxwellian term proportional to \(\eta_{e}^{u}\) as the _CE electron-ion-drift term_, and to the non-Maxwellian terms proportional to \(\epsilon_{e}\) and \(\epsilon_{i}\) as the _CE electron-shear term_ and the _CE ion-shear term_. We note that the friction and electron-ion-drift terms appear in the electron CE distribution function but not the ion CE distribution function because of our choice to define all velocities in the ion-fluid rest frame. The derivation of the CE distribution functions (8) for a two-species strongly magnetised plasma undergoing sonic motions (that is, \(V_{i}\sim v_{\mathrm{th}i}\)) from the kinetic equation (1) was first completed by Braginskii (1965) for arbitrary values of \(\rho_{s}/\lambda_{s}\). We do not reproduce the full derivation in the main text, but, for the reader's convenience, we provide a derivation of (8) in appendix B.1. The gist of the full derivation is to assume that the distribution function is close to a Maxwellian, with parameters that only evolve on a slow time scale \(t^{\prime}\sim tL/\lambda_{e}\sim tL/\lambda_{i}\gg t\). The kinetic equation (1) is then expanded and solved order by order in \(\lambda_{e}/L\sim\lambda_{i}/L\ll 1\), allowing for the calculation of the (small) non-Maxwellian components of the distribution function. The small parameters \(\eta_{e}^{T}\), \(\eta_{e}^{R}\), \(\eta_{e}^{u}\), \(\eta_{i}\), \(\epsilon_{e}\) and \(\epsilon_{i}\), as well as the isotropic functions \(A_{e}^{T}(\tilde{v}_{e})\), \(A_{e}^{R}(\tilde{v}_{e})\), \(A_{e}^{u}(\tilde{v}_{e})\), \(C_{e}(\tilde{v}_{e})\), \(A_{i}(\tilde{v}_{i})\) and \(C_{i}(\tilde{v}_{i})\) emerge during this calculation. The precise forms of these functions depend only on the collision operator assumed in the original Maxwell-Vlasov-Landau system; in appendix B.2, we provide a simple illustration of this, by calculating these isotropic functions explicitly for Krook (Bhatnagar _et al._, 1954) and Lorentz collision operators (Appendices B.2.1 and B.2.2, respectively). For the full Landau collision operator, the equivalent calculation is more complicated, but can be performed (for example) by expanding the isotropic functions in Sonine polynomials (see Helander & Sigmar 2005).

#### 2.2.2 Closure of fluid equations (4)

Once the CE distribution function has been calculated, the desired fluid closure can be obtained by evaluating the heat fluxes, the friction forces, and the momentum fluxes (6) associated with the non-Maxwellian components of the CE distribution functions. These calculations were carried out in full for arbitrary values of \(\rho_{s}/\lambda_{s}\) by Braginskii (1965). We do not reproduce the full fluid closure relations here; instead, we illustrate how the non-Maxwellian terms in the CE distribution functions (8) give rise to the friction force and heat fluxes parallel to the macroscopic magnetic field, as well as to the viscosity tensor. In a strongly magnetised two-species plasma (where \(\rho_{s}\ll\lambda_{s}\)), parallel friction forces and heat fluxes are typically much larger than their perpendicular or diamagnetic counterparts.

\(\bullet\) Heat fluxes.
Recalling (6), the parallel heat flux \(q_{s\parallel}\equiv\hat{\boldsymbol{z}}\boldsymbol{\cdot}\boldsymbol{q}_{s}\) associated with species \(s\) is given by

\[q_{s\parallel}=\frac{1}{2}\int\mathrm{d}^{3}\boldsymbol{v}_{s}^{\prime}\,m_{s}\,|\boldsymbol{v}_{s}^{\prime}|^{2}\,v_{s\parallel}^{\prime}\,f_{s}, \tag{16}\]

where \(\boldsymbol{v}_{s}^{\prime}\equiv\boldsymbol{v}-\boldsymbol{V}_{s}\). Noting that the electron distribution function (8) is specified in the rest frame of the ions, not the electrons, it is necessary first to calculate the electron distribution function in the electron rest frame before calculating the parallel electron heat flux. An expression for this quantity is derived in appendix B.1 as part of our derivation of (8):

\[f_{e}(v_{e\parallel}^{\prime},v_{e\perp}^{\prime})=\frac{n_{e}}{v_{\mathrm{th}e}^{3}\pi^{3/2}}\exp\left(-\frac{|\boldsymbol{v}_{e}^{\prime}|^{2}}{v_{\mathrm{th}e}^{2}}\right)\Bigg\{1+\left[\eta_{e}^{T}A_{e}^{T}\bigg{(}\frac{|\boldsymbol{v}_{e}^{\prime}|}{v_{\mathrm{th}e}}\bigg{)}+\eta_{e}^{R}A_{e}^{R}\bigg{(}\frac{|\boldsymbol{v}_{e}^{\prime}|}{v_{\mathrm{th}e}}\bigg{)}+\eta_{e}^{u}\left(A_{e}^{u}\bigg{(}\frac{|\boldsymbol{v}_{e}^{\prime}|}{v_{\mathrm{th}e}}\bigg{)}-1\right)\right]\frac{v_{e\parallel}^{\prime}}{v_{\mathrm{th}e}}+\epsilon_{e}C_{e}\bigg{(}\frac{|\boldsymbol{v}_{e}^{\prime}|}{v_{\mathrm{th}e}}\bigg{)}\left(\frac{v_{e\parallel}^{\prime 2}}{v_{\mathrm{th}e}^{2}}-\frac{v_{e\perp}^{\prime 2}}{2v_{\mathrm{th}e}^{2}}\right)\Bigg\}\,. \tag{17}\]

Now substituting (17) into (16) (with \(s=e\)), we find that the parallel electron heat flux is

\[q_{e\parallel}=-n_{e}T_{e}v_{\mathrm{th}e}\left[\mathcal{A}_{e}^{T}\eta_{e}^{T}+\mathcal{A}_{e}^{R}\eta_{e}^{R}+\left(\mathcal{A}_{e}^{u}-\frac{1}{2}\right)\eta_{e}^{u}\right], \tag{18}\]

where

\[\mathcal{A}_{e}^{T,R,u}=-\frac{4}{3\sqrt{\pi}}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{e}\,\tilde{v}_{e}^{6}A_{e}^{T,R,u}(\tilde{v}_{e})\exp\left(-\tilde{v}_{e}^{2}\right). \tag{19}\]

The minus signs in the definitions of \(\mathcal{A}_{e}^{T,R,u}\) have been introduced so that \(\mathcal{A}_{e}^{T,R,u}\geqslant 0\) for a typical collision operator (determining that these constants are indeed positive for any given collision operator is non-trivial, but it is a simple exercise to show this for a Krook collision operator, using the expressions for \(A_{e}^{T}(\tilde{v}_{e})\), \(A_{e}^{R}(\tilde{v}_{e})\), and \(A_{e}^{u}(\tilde{v}_{e})\) given in appendix B.2.1). Expression (18) for the electron heat flux can be rewritten as

\[q_{e\parallel}=-\kappa_{e}^{\parallel}\nabla_{\parallel}T_{e}-\left[\mathcal{A}_{e}^{u}-\frac{1}{2}-\frac{\mathcal{A}_{e}^{R}}{\tilde{\mathcal{A}}_{e}^{R}}\left(\tilde{\mathcal{A}}_{e}^{u}-\frac{1}{2}\right)\right]n_{e}T_{e}u_{ei\parallel}\,, \tag{20}\]

where the parallel electron heat conductivity is defined by

\[\kappa_{e}^{\parallel}=2\left(\mathcal{A}_{e}^{T}-\frac{\mathcal{A}_{e}^{R}}{\tilde{\mathcal{A}}_{e}^{R}}\tilde{\mathcal{A}}_{e}^{T}\right)\frac{n_{e}T_{e}\tau_{e}}{m_{e}}\,, \tag{21}\]

and

\[\tilde{\mathcal{A}}_{e}^{T,R,u}=-\frac{4}{3\sqrt{\pi}}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{e}\,\tilde{v}_{e}^{4}A_{e}^{T,R,u}(\tilde{v}_{e})\exp\left(-\tilde{v}_{e}^{2}\right)\,. \tag{22}\]

Numerical evaluation of the coefficients \(\mathcal{A}_{e}^{T,R,u}\) and \(\tilde{\mathcal{A}}_{e}^{T,R,u}\) for the Landau collision operator gives (Braginskii 1965)

\[q_{e\parallel}\simeq-3.16\frac{n_{e}T_{e}\tau_{e}}{m_{e}}\nabla_{\parallel}T_{e}+0.71n_{e}T_{e}u_{ei\parallel}\,. \tag{23}\]
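The coefficients (19) and (22) are one-dimensional integrals that are straightforward to evaluate numerically once the isotropic functions are known. A minimal sketch, assuming for illustration the model form \(A_{e}(\tilde{v}_{e})=5/2-\tilde{v}_{e}^{2}\) (of the type a Krook operator produces; an assumed input, not a result quoted from the text):

```python
import numpy as np
from scipy.integrate import quad

A = lambda v: 2.5 - v**2   # assumed model isotropic function

# Coefficients (19) and (22), evaluated by quadrature:
calA  = -4 / (3 * np.sqrt(np.pi)) * quad(lambda v: v**6 * A(v) * np.exp(-v**2), 0, np.inf)[0]
calAt = -4 / (3 * np.sqrt(np.pi)) * quad(lambda v: v**4 * A(v) * np.exp(-v**2), 0, np.inf)[0]
print(calA, calAt)   # -> 1.25, 0.0
```

For this model form, the heat-flux coefficient (19) comes out positive and \(O(1)\), as asserted above, while the lower-order moment (22) happens to vanish.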
The ion heat flux can be calculated directly from (16) (\(s=i\)) using (8b):

\[q_{i\parallel}=-n_{i}T_{i}v_{\mathrm{th}i}\mathcal{A}_{i}\eta_{i}, \tag{24}\]

where

\[\mathcal{A}_{i}=-\frac{4}{3\sqrt{\pi}}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{i}\,\tilde{v}_{i}^{6}A_{i}(\tilde{v}_{i})\exp\left(-\tilde{v}_{i}^{2}\right)\,. \tag{25}\]

This becomes

\[q_{i\parallel}=-\kappa_{i}^{\parallel}\nabla_{\parallel}T_{i}, \tag{26}\]

where the parallel ion heat conductivity is

\[\kappa_{i}^{\parallel}=2\mathcal{A}_{i}\frac{n_{i}T_{i}\tau_{i}}{m_{i}}\simeq 3.9\frac{n_{i}T_{i}\tau_{i}}{m_{i}}\,. \tag{27}\]

The last equality is for the Landau collision operator (Braginskii 1965). Note that the absence of a term proportional to the electron-ion drift in the ion heat flux (24) is physically due to the smallness of the ion-electron collision operator (Helander & Sigmar 2005).

\(\bullet\) Friction force. We evaluate the friction force by considering the electron-ion drift associated with the electron CE distribution function. Namely, noting that

\[u_{ei\parallel}=\frac{v_{\mathrm{th}e}^{4}}{n_{e}}\int\mathrm{d}^{3}\tilde{\boldsymbol{v}}_{e}\,\tilde{v}_{e\parallel}f_{e}, \tag{28}\]

it follows from (8a) that

\[u_{ei\parallel}=v_{\mathrm{th}e}\left(\tilde{\mathcal{A}}_{e}^{T}\eta_{e}^{T}+\tilde{\mathcal{A}}_{e}^{R}\eta_{e}^{R}+\tilde{\mathcal{A}}_{e}^{u}\eta_{e}^{u}\right). \tag{29}\]

This expression can in turn be used to relate the parallel electron-friction force \(R_{e\parallel}\), defined in (6d), to electron flows and temperature gradients:

\[R_{e\parallel}=-\left(\frac{2\tilde{\mathcal{A}}_{e}^{u}+1}{2\tilde{\mathcal{A}}_{e}^{R}}\right)\frac{n_{e}m_{e}u_{ei\parallel}}{\tau_{e}}-\frac{\tilde{\mathcal{A}}_{e}^{T}}{\tilde{\mathcal{A}}_{e}^{R}}n_{e}\nabla_{\parallel}T_{e}\,. \tag{30}\]

Evaluating the coefficients \(\tilde{\mathcal{A}}_{e}^{T}\), \(\tilde{\mathcal{A}}_{e}^{R}\) and \(\tilde{\mathcal{A}}_{e}^{u}\) for the full Landau collision operator, one finds (Braginskii 1965)

\[R_{e\parallel}\simeq-0.51\frac{n_{e}m_{e}u_{ei\parallel}}{\tau_{e}}-0.71n_{e}\nabla_{\parallel}T_{e}\,. \tag{31}\]

\(\bullet\) Viscosity tensor. For gyrotropic distributions such as the CE distribution functions (8), the viscosity tensor \(\boldsymbol{\pi}_{s}\) of species \(s\) defined by (6b) - which is the momentum flux excluding the convective terms and isotropic pressure - is given by

\[\boldsymbol{\pi}_{s}=\left(p_{s\parallel}-p_{s\perp}\right)\left(\hat{\boldsymbol{z}}\hat{\boldsymbol{z}}-\frac{1}{3}\boldsymbol{I}\right), \tag{32}\]

where the parallel pressure \(p_{s\parallel}\) and the perpendicular pressure \(p_{s\perp}\) are defined by

\[p_{s\parallel}\equiv\int\mathrm{d}^{3}\boldsymbol{v}_{s}^{\prime}\,m_{s}|v_{s\parallel}^{\prime}|^{2}f_{s}=n_{s}T_{s}\left(1-\frac{2}{3}\epsilon_{s}\mathcal{C}_{s}\right)\,, \tag{33a}\]
\[p_{s\perp}\equiv\frac{1}{2}\int\mathrm{d}^{3}\boldsymbol{v}_{s}^{\prime}\,m_{s}|\boldsymbol{v}_{s\perp}^{\prime}|^{2}f_{s}=n_{s}T_{s}\left(1+\frac{1}{3}\epsilon_{s}\mathcal{C}_{s}\right)\,, \tag{33b}\]

with the last expressions having been obtained on substitution of the CE distribution function (8), and

\[\mathcal{C}_{s}=-\frac{8}{5\sqrt{\pi}}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s}\,\tilde{v}_{s}^{6}C_{s}(\tilde{v}_{s})\exp\left(-\tilde{v}_{s}^{2}\right)\,. \tag{34}\]

The sign of the constant \(\mathcal{C}_{s}\) is again chosen so that \(\mathcal{C}_{s}>0\) for typical collision operators; for the Landau collision operator, \(\mathcal{C}_{e}\simeq 1.1\) and \(\mathcal{C}_{i}\simeq 1.44\) (Braginskii, 1965).
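The relations (33)-(34) can likewise be verified by direct numerical integration. A short sketch, again with an assumed model shear function \(C_{s}(\tilde{v}_{s})=-1\) (illustrative only; it gives \(\mathcal{C}_{s}=3/2\)):

```python
import numpy as np
from scipy.integrate import quad, dblquad

eps = 1e-2                  # assumed epsilon_s
C = lambda v: -1.0          # assumed model isotropic function C_s(v)

# Coefficient (34); for C = -1 this evaluates to 3/2.
calC = -8 / (5 * np.sqrt(np.pi)) * quad(lambda v: v**6 * C(v) * np.exp(-v**2), 0, np.inf)[0]

def f(vpar, vperp):
    """CE-type distribution (8), shear term only, in normalised velocity."""
    v2 = vpar**2 + vperp**2
    return np.exp(-v2) / np.pi**1.5 * (1 + eps * C(np.sqrt(v2)) * (vpar**2 - vperp**2 / 2))

# p_par/(n T) and p_perp/(n T) per (33), using m v_th^2 = 2 T.
ppar  = dblquad(lambda vperp, vpar: 2*np.pi*vperp * 2*vpar**2 * f(vpar, vperp), -8, 8, 0, 8)[0]
pperp = dblquad(lambda vperp, vpar: 2*np.pi*vperp * vperp**2  * f(vpar, vperp), -8, 8, 0, 8)[0]
print(ppar,  1 - 2*eps*calC/3)   # both -> 0.99
print(pperp, 1 + eps*calC/3)     # both -> 1.005
```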
We note for reference that the parameter \(\epsilon_{s}\) [see (11e,f)] has a simple relationship to the pressure anisotropy of species \(s\): utilising (33), one finds

\[\Delta_{s}\equiv\frac{p_{s\perp}-p_{s\parallel}}{p_{s}}=\mathcal{C}_{s}\epsilon_{s}\,. \tag{35}\]

Using (33), the viscosity tensor (32) can be written

\[\boldsymbol{\pi}_{s}=-\frac{\mu_{\mathrm{v}s}}{2}\left(\hat{\boldsymbol{z}}\hat{\boldsymbol{z}}-\frac{1}{3}\boldsymbol{I}\right)\left(\hat{\boldsymbol{z}}\hat{\boldsymbol{z}}-\frac{1}{3}\boldsymbol{I}\right)\boldsymbol{:}\boldsymbol{W}_{s}, \tag{36}\]

where the dynamic viscosity of species \(s\) is

\[\mu_{\mathrm{v}s}\equiv 2\mathcal{C}_{s}n_{s}T_{s}\tau_{s}\,. \tag{37}\]

\(\bullet\) Thermal energy transfer between species. It can be shown that for the CE distribution functions (8), the rate of thermal energy transfer from electrons to ions \(\mathcal{Q}_{e}\) is simply

\[\mathcal{Q}_{e}=-\boldsymbol{R}_{e}\boldsymbol{\cdot}\boldsymbol{u}_{ei}, \tag{38}\]

while the rate of thermal energy transfer from ions to electrons vanishes: \(\mathcal{Q}_{i}\approx 0\). This is because the ion-electron collision rate is assumed small (by a factor of the mass ratio) compared to the ion-ion collision rate when deriving (8b), and is thus neglected. Braginskii (1965) shows that, in fact, there is a non-zero (but small) rate of transfer:

\[\mathcal{Q}_{i}=-\mathcal{Q}_{e}-\boldsymbol{R}_{e}\boldsymbol{\cdot}\boldsymbol{u}_{ei}=\frac{3n_{e}m_{e}}{m_{i}\tau_{e}}\left(T_{e}-T_{i}\right)\,. \tag{39}\]

The time scale on which the ion and electron temperatures equilibrate is the ion-electron temperature equilibration time

\[\tau_{ie}^{\mathrm{eq}}\equiv\frac{1}{2}\mu_{e}^{-1/2}\tau_{i}. \tag{40}\]

In summary, the non-Maxwellian components of the CE distribution function are essential for a collisional plasma to be able to support fluxes of heat and momentum. More specifically, (20) demonstrates that the electron heat fluxes in a CE plasma are proportional to both temperature gradients and electron-ion drifts, and are carried by the electron-temperature-gradient, friction and electron-ion-drift terms of the CE distribution function. In contrast, the ion heat fluxes (26) are proportional only to ion temperature gradients (and carried by the CE ion-temperature-gradient term). Momentum fluxes (36) for electrons and ions are carried by the CE electron- and ion-shear terms, respectively, and are proportional to components of the rate-of-strain tensor.

#### 2.2.3 Relative size of non-Maxwellian terms in the CE distribution function

In the case of magnetised, two-species plasma satisfying \(T_{i}\sim T_{e}\), (11) can be used to estimate the size of the small parameters \(\eta_{e}^{T}\), \(\eta_{e}^{R}\), \(\eta_{e}^{u}\), \(\eta_{i}\), \(\epsilon_{e}\) and \(\epsilon_{i}\). Although these parameters are _a priori_ proportional to \(\lambda_{s}/L\) for both ions and electrons, their precise magnitudes are, in fact, subtly different. Namely, the terms associated with \(\eta_{e}^{T}\), \(\eta_{e}^{R}\), \(\eta_{e}^{u}\) and \(\eta_{i}\) involve gradients of the electron and ion temperatures and the electron-ion relative parallel drift velocity, whereas the terms associated with \(\epsilon_{e}\) and \(\epsilon_{i}\) involve gradients of the bulk flows [cf. (11)] - and these gradients do not necessarily occur on the same length scale. Recalling that the (electron) temperature and the (ion) flow length scales parallel to the macroscopic magnetic field are defined by [cf. (13)]
\[L_{T}=\left|\nabla_{\parallel}\log T_{e}\right|^{-1}\,, \tag{41a}\]
\[L_{V}=\frac{1}{V_{i}}\left|\left(\hat{\boldsymbol{z}}\hat{\boldsymbol{z}}-\frac{1}{3}\boldsymbol{I}\right)\boldsymbol{:}\boldsymbol{W}_{i}\right|^{-1}\,, \tag{41b}\]

where \(\boldsymbol{W}_{i}\) is the ion rate-of-strain tensor (12), and assuming that \(L_{T_{i}}=\left|\nabla_{\parallel}\log T_{i}\right|^{-1}\sim L_{T}\) (an assumption we will check _a posteriori_), it follows from (11) that

\[\eta_{e}^{T}\sim\frac{\lambda_{e}}{L_{T}}, \tag{42a}\]
\[\eta_{e}^{R}\sim\lambda_{e}\frac{R_{e\parallel}}{p_{e}}\sim\frac{\lambda_{e}}{L_{T}}\sim\eta_{e}^{T}, \tag{42b}\]
\[\eta_{e}^{u}\sim\frac{u_{ei\parallel}}{v_{\mathrm{th}e}}\sim\frac{\lambda_{e}}{L_{T}}\sim\eta_{e}^{T}, \tag{42c}\]
\[\eta_{i}\sim\frac{\lambda_{i}}{L_{T}}\sim\frac{1}{Z^{2}}\eta_{e}^{T}, \tag{42d}\]
\[\epsilon_{e}\sim\frac{V_{i}}{v_{\mathrm{th}e}}\,\frac{\lambda_{e}}{L_{V}}\sim\mathrm{Ma}\,\mu_{e}^{1/2}\frac{L_{T}}{L_{V}}\eta_{e}^{T}, \tag{42e}\]
\[\epsilon_{i}\sim\frac{V_{i}}{v_{\mathrm{th}i}}\frac{\lambda_{i}}{L_{V}}\sim\mathrm{Ma}\,\frac{L_{T}}{Z^{2}L_{V}}\eta_{e}^{T}, \tag{42f}\]

where \(\mathrm{Ma}\equiv V_{i}/v_{\mathrm{th}i}\) is the Mach number. Note that, to arrive at (42b), we assumed that \(R_{e\parallel}\sim p_{e}/L_{T}\) and \(u_{ei\parallel}\sim v_{\mathrm{th}e}\lambda_{e}/L_{T}\), justified by (30) and (29), respectively. The relative magnitudes of \(\eta_{e}^{T}\), \(\eta_{e}^{R}\), \(\eta_{e}^{u}\), \(\eta_{i}\), \(\epsilon_{e}\) and \(\epsilon_{i}\) therefore depend on the Mach number of the plasma, as well as on the length scales \(L_{T}\) and \(L_{V}\). In the work of Braginskii (1965), who _a priori_ presumes all "fluid" quantities in the plasma to vary on just a single scale \(L\sim L_{T}\sim L_{V}\), with sonic ordering \(\mathrm{Ma}\lesssim 1\), determining the relative size of these parameters for a hydrogen plasma (\(Z=1\)) is simple:

\[\epsilon_{e}\sim\mu_{e}^{1/2}\epsilon_{i}\ll\epsilon_{i}\sim\eta_{i}\sim\eta_{e}^{T}\sim\eta_{e}^{R}\sim\eta_{e}^{u}\,. \tag{43}\]

However, in most interesting applications, this single-scale ordering is incorrect. In a plasma with \(\lambda_{s}/L\ll 1\) under Braginskii's ordering, motions on many scales naturally arise. The fluid Reynolds number in such a plasma is given by

\[\mathrm{Re}\equiv\frac{V_{0}L_{0}}{\nu}\,, \tag{44}\]

where \(V_{0}\) is the typical fluid velocity at the scale \(L_{0}\) of driving motions and \(\nu\equiv\mu_{\mathrm{v}i}/m_{i}n_{i}\sim v_{\mathrm{th}i}\lambda_{i}\) is the kinematic viscosity [see (37)]. Typically, this number is large:

\[\mathrm{Re}\sim\frac{V_{0}}{v_{\mathrm{th}i}}\frac{L_{0}}{\lambda_{i}}\gtrsim\frac{1}{\epsilon_{i}}\gg 1\,, \tag{45}\]

where we have assumed \(\mathrm{Ma}_{0}\equiv V_{0}/v_{\mathrm{th}i}\lesssim 1\), in line with Braginskii's sonic ordering. Therefore, such a plasma will naturally become turbulent and exhibit motions across a range of scales. As a consequence, velocity and temperature fluctuations on the smallest (fluid) scales must be considered, since the associated shears and temperature gradients are the largest. To estimate \(\eta_{e}^{T}\), \(\eta_{e}^{R}\), \(\eta_{e}^{u}\), \(\eta_{i}\), \(\epsilon_{e}\) and \(\epsilon_{i}\) accurately, we must determine the magnitude of these gradients. First, let \(\ell_{\nu}\) be the smallest scale on which the velocity varies due to turbulent motions (the Kolmogorov scale), with velocity fluctuations on scales \(\ell\ll\ell_{\nu}\) being suppressed by viscous diffusion.
Then it follows that \(\mathrm{Re}_{\ell_{\nu}}\sim 1\), where \(\mathrm{Re}_{\ell}\equiv V(\ell)\,\ell/\nu\) is the scale-dependent Reynolds number and \(V(\ell)\) is the typical fluid velocity on scale \(\ell\). For Kolmogorov turbulence,

\[\frac{V(\ell)}{V_{0}}\sim\left(\frac{\ell}{L_{0}}\right)^{1/3}\sim\left(\frac{\mathrm{Re}_{\ell}}{\mathrm{Re}}\right)^{1/4}, \tag{46}\]

and \(\ell/L_{0}\sim\left(\mathrm{Re}_{\ell}/\mathrm{Re}\right)^{3/4}\), which gives \(V(\ell)/\ell\sim\left(V_{0}/L_{0}\right)\left(\mathrm{Re}_{\ell}/\mathrm{Re}\right)^{-1/2}\), and thus, from (45),

\[\frac{V(\ell_{\nu})}{\ell_{\nu}}\sim\frac{V_{0}}{L_{0}}\left(\frac{\mathrm{Re}_{\ell_{\nu}}}{\mathrm{Re}}\right)^{-1/2}\sim\mathrm{Ma}_{0}^{1/2}\left(\frac{\lambda_{i}}{L_{0}}\right)^{-1/2}\frac{V_{0}}{L_{0}}\,. \tag{47}\]

We therefore conclude that

\[L_{V}\sim\ell_{\nu}\frac{V_{0}}{V(\ell_{\nu})}\sim L_{0}\mathrm{Ma}_{0}^{-1/2}\left(\frac{\lambda_{i}}{L_{0}}\right)^{1/2}. \tag{48}\]

Next, the smallest scale on which the electron temperature varies, \(\ell_{\chi}\), is the scale below which temperature fluctuations are suppressed by thermal diffusion; it satisfies \(\mathrm{Pe}_{\ell_{\chi}}\sim 1\), where \(\mathrm{Pe}_{\ell}\equiv V(\ell)\,\ell/\chi\) is the scale-dependent Peclet number and \(\chi\equiv 2\kappa_{e}^{\parallel}/3n_{e}\sim v_{\mathrm{th}e}\lambda_{e}\) is the (parallel) thermal diffusivity [see (21)]. Because temperature is passively advected by the flow, the temperature fluctuation \(T(\ell)\) at any scale \(\ell>\ell_{\chi}\) obeys the same scaling as the bulk velocity:

\[\frac{T(\ell)}{T(L_{0})}\sim\frac{V(\ell)}{V_{0}}\sim\left(\frac{\mathrm{Pe}_{\ell}}{\mathrm{Pe}}\right)^{1/4}\,. \tag{49}\]

In addition, the magnitude of temperature fluctuations at the driving scale is related to the mean temperature by the Mach number of the driving-scale motions, \(T(L_{0})\sim T_{0}\mathrm{Ma}_{0}\), which then gives

\[\frac{T(\ell)}{T_{0}}\sim\mathrm{Ma}_{0}\left(\frac{\mathrm{Pe}_{\ell}}{\mathrm{Pe}}\right)^{1/4}\,, \tag{50}\]

where \(\mathrm{Pe}\equiv\mathrm{Pe}_{L_{0}}\). It follows from an analogous argument to that just given for the velocity fluctuations that

\[\frac{T(\ell_{\chi})}{\ell_{\chi}}\sim\frac{T_{0}}{L_{0}}\mathrm{Ma}_{0}\mathrm{Pe}^{1/2}. \tag{51}\]

Under Braginskii's ordering, the Prandtl number of CE plasma is

\[\mathrm{Pr}\equiv\frac{\nu}{\chi}=\frac{\mathrm{Pe}}{\mathrm{Re}}\sim\frac{v_{\mathrm{th}i}\lambda_{i}}{v_{\mathrm{th}e}\lambda_{e}}\sim\mu_{e}^{1/2}\ll 1\,, \tag{52}\]

and, therefore,

\[L_{T}\sim\ell_{\chi}\frac{T_{0}}{T(\ell_{\chi})}\sim L_{0}\mu_{e}^{-1/4}\mathrm{Ma}_{0}^{-3/2}\left(\frac{\lambda_{i}}{L_{0}}\right)^{1/2}. \tag{53}\]

Thus, \(L_{V}\sim\mathrm{Ma}_{0}\mu_{e}^{1/4}L_{T}\ll L_{T}\) under the assumed ordering. Finally, we consider whether our _a priori_ assumption that \(L_{T_{i}}\sim L_{T}\) is, in fact, justified. A sufficient condition for ion-temperature gradients to be the same as electron-temperature gradients is for the evolution time \(\tau_{L}\) of all macroscopic motions to be much longer than the ion-electron temperature equilibration time \(\tau_{ie}^{\mathrm{eq}}\) defined by (40). Since \(\tau_{L}\gtrsim\ell_{\nu}/V(\ell_{\nu})\), it follows that

\[\frac{\tau_{ie}^{\mathrm{eq}}}{\tau_{L}}\lesssim\left(\frac{m_{i}}{m_{e}}\right)^{1/2}\mathrm{Ma}_{0}^{3/2}\left(\frac{\lambda_{i}}{L_{0}}\right)^{1/2}\sim\epsilon_{i}\left(\frac{m_{i}}{m_{e}}\right)^{1/2}\,. \tag{54}\]
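Before drawing conclusions from (54), a brief numerical sketch of these scale estimates may be useful; the inputs \(\mathrm{Ma}_{0}=1\) and \(\lambda_{i}/L_{0}=10^{-6}\) are assumed purely for illustration.

```python
# Scale estimates (48), (53) and the small parameters (42), hydrogen plasma (Z = 1).
mu_e = 1 / 1836.0        # m_e / m_i
Ma0 = 1.0                # driving-scale Mach number (assumed)
lam_over_L0 = 1e-6       # lambda_i / L_0 (assumed)

L_V = Ma0**-0.5 * lam_over_L0**0.5                 # (48), in units of L_0
L_T = mu_e**-0.25 * Ma0**-1.5 * lam_over_L0**0.5   # (53), in units of L_0
eps_i = Ma0 * lam_over_L0 / L_V                    # (42f) with lambda_i / L_V
eta_eT = lam_over_L0 / L_T                         # (42a) with lambda_e = lambda_i
eps_e = mu_e**0.5 * eps_i                          # cf. (42e)
ratio = eps_i / mu_e**0.5                          # bound on tau_eq / tau_L, cf. (54)
print(L_V, L_T, eps_i, eta_eT, eps_e, ratio)
```

For these inputs, \(\epsilon_{i}\sim 10^{-3}\gg\eta_{e}^{T}\gg\epsilon_{e}\), and the ratio in (54) comes out well below unity, so temperature equilibration is fast.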
Thus, if \(\epsilon_{i}\gg\mu_{e}^{1/2}\), we conclude that collisional equilibration of ion and electron temperatures might be too inefficient to regulate small-scale ion-temperature fluctuations, in which case it would follow that \(L_{T_{i}}<L_{T}\). However, it has been previously demonstrated via numerical solution of the Vlasov-Fokker-Planck equation that the CE expansion procedure breaks down due to nonlocal transport effects if \(\lambda_{e}/L\) is only moderately small (Bell _et al._, 1981); thus, the only regime in which there is not ion-electron equilibration over all scales is one where the CE expansion is not valid anyway. In short, we conclude that assuming \(L_{T_{i}}\sim L_{T}\) is reasonable. Bringing these considerations together with (42), we find that

\[\eta_{e}^{T}\sim\mu_{e}^{1/4}\mathrm{Ma}_{0}\frac{\lambda_{i}}{L_{V}}\sim\mathrm{Ma}_{0}^{3/2}\mu_{e}^{1/4}\left(\frac{\lambda_{i}}{L_{0}}\right)^{1/2}\sim\eta_{e}^{R}\sim\eta_{e}^{u}\sim\eta_{i}\,, \tag{55a}\]
\[\epsilon_{e}\sim\mu_{e}^{1/2}\mathrm{Ma}_{0}\frac{\lambda_{i}}{L_{V}}\sim\mu_{e}^{1/2}\mathrm{Ma}_{0}^{3/2}\left(\frac{\lambda_{i}}{L_{0}}\right)^{1/2}\,, \tag{55b}\]
\[\epsilon_{i}\sim\mathrm{Ma}_{0}\frac{\lambda_{i}}{L_{V}}\sim\mathrm{Ma}_{0}^{3/2}\left(\frac{\lambda_{i}}{L_{0}}\right)^{1/2}\,. \tag{55c}\]

Thus, we conclude that the largest distortions of the ion CE distribution are due to flow gradients, while temperature gradients cause the greatest distortions of the electron CE distribution function.

### 2.3 Kinetic stability of classical, collisional plasma

#### 2.3.1 Overview

We have seen that the CE expansion provides a procedure for the calculation of the distribution functions arising in a classical, collisional plasma in terms of gradients of temperature, electron-ion drifts and bulk fluid velocities; these calculations in turn allow for the closure of the system (4) of fluid equations. However, these same gradients are sources of free energy in the plasma, so they can lead to instabilities. Some of these instabilities will be 'fluid', i.e., they are captured within the CE description and are features of the fluid dynamics of plasmas; others are kinetic ('microinstabilities'), and their existence implies that the CE expansion is, in fact, illegitimate. Our primary purpose in this paper is to determine when such microinstabilities do not occur in a strongly magnetised two-species plasma. If, however, they do occur, we wish to determine their growth rates. We begin by making a few general qualitative comments concerning the existence and nature of these microinstabilities, before presenting the technical details of their derivation.

#### 2.3.2 Existence of microinstabilities in classical, collisional plasma

It might naively be assumed that a classical, collisional plasma is kinetically stable, on two grounds. The first of these is that the distribution function of such a plasma is 'almost' Maxwellian, and thus stable. While it is certainly the case that a plasma whose constituent particles have Maxwellian distribution functions is kinetically stable (Bernstein 1958; Krall & Trivelpiece 1973), it is also known that a plasma with anisotropic particle distribution functions is typically not (Furth 1963; Kalman _et al._ 1968; Davidson 1983; Gary 1993).
The (small) non-Maxwellian component of the CE distribution function is anisotropic (as, e.g., was explicitly demonstrated by the calculation of pressure anisotropy in section 2.2.2), and thus we cannot _a priori_ rule out microinstabilities associated with this anisotropy. The second naive reason for dismissing the possibility of microinstabilities in classical, collisional plasma is the potentially stabilising effect of collisional damping on microinstability growth rates. If collisional processes are sufficiently dominant to be responsible for the mediation of macroscopic momentum and heat fluxes in the plasma, it might be naively inferred that they would also suppress microinstabilities. This is, in fact, far from guaranteed, for the following reason. The characteristic scales of the microinstabilities are not fluid scales, but are rather intrinsic plasma length scales related to quantities such as the Larmor radius \(\rho_{s}\) or the inertial scale \(d_{s}\) of species \(s\), or the Debye length \(\lambda_{\mathrm{D}}\) - quantities given in terms of macroscopic physical properties of plasma by

\[\rho_{s}\equiv\frac{m_{s}v_{\mathrm{th}s}c}{|Z_{s}|e|\boldsymbol{B}|}, \tag{56a}\]
\[d_{s}\equiv\left(\frac{4\pi Z_{s}^{2}e^{2}n_{s}}{m_{s}c^{2}}\right)^{-1/2}=\rho_{s}\beta_{s}^{-1/2}, \tag{56b}\]
\[\lambda_{\mathrm{D}}\equiv\left(\sum_{s}\frac{4\pi Z_{s}^{2}e^{2}n_{s}}{T_{s}}\right)^{-1/2}=\left(\sum_{s}\frac{2c^{2}}{d_{s}^{2}v_{\mathrm{th}s}^{2}}\right)^{-1/2}, \tag{56c}\]

where

\[\beta_{s}\equiv\frac{8\pi n_{s}T_{s}}{B^{2}} \tag{57}\]

is the plasma beta of species \(s\). The crucial observation is then that the dynamics on characteristic microinstability scales may be collisionless. For a classical, collisional hydrogen plasma (where \(\lambda\equiv\lambda_{e}\sim\lambda_{i}\) for \(T_{e}\sim T_{i}\)), the mean free path is much larger than the Debye length: \(\lambda/\lambda_{\mathrm{D}}\sim n_{e}\lambda_{\mathrm{D}}^{3}\gg 1\); so there exists a range of wavenumbers \(k\) on which microinstabilities are both possible (\(k\lambda_{\mathrm{D}}\lesssim 1\)) and collisionless (\(k\lambda\gg 1\)). For a strongly magnetised collisional plasma, \(\lambda_{s}\gg\rho_{s}\) for all species by definition; thus, any microinstability with a characteristic scale comparable to the Larmor radius of any constituent particle will be effectively collisionless. We note that such a range of collisionless wavenumbers only exists in classical (viz., weakly coupled) plasmas; in strongly coupled plasmas, for which \(\lambda\lesssim\lambda_{\mathrm{D}}\), all hypothetically possible microinstability wavenumber scales are collisional. Thus the phenomenon of microinstabilities in collisional plasmas is solely a concern for the classical regime.

#### 2.3.3 A simple example: the firehose instability in CE plasmas

Perhaps the simplest example of a microinstability that can occur in CE plasma is the firehose instability. This example was previously discussed by Schekochihin _et al._ (2005), but we nonetheless outline it here to illustrate the central concept of our paper. Consider bulk fluid motions of the plasma on length scales \(L_{V}\) that are much smaller than the mean free path \(\lambda_{i}\), but much larger than the ion Larmor radius \(\rho_{i}\); the characteristic frequencies associated with these motions are assumed to be much smaller than the ion Larmor frequency \(\Omega_{i}\), but much larger than the inverse of the ion collision time \(\tau_{i}^{-1}\).
Under these assumptions, the following four statements can be shown to be true (Schekochihin _et al._, 2005):

1. The bulk velocities of the electron and ion species are approximately equal: \(\boldsymbol{V}_{e}\approx\boldsymbol{V}_{i}\).
2. The electric field in a frame co-moving with the ion fluid vanishes; transforming to the stationary frame of the system, this gives
\[\boldsymbol{E}=-\frac{\boldsymbol{V}_{i}\times\boldsymbol{B}}{c}\,. \tag{58}\]
3. The contribution of the displacement current to the Maxwell-Ampere law (2d) is negligible, and so
\[en_{e}\left(\boldsymbol{V}_{i}-\boldsymbol{V}_{e}\right)\approx\frac{c}{4\pi}\boldsymbol{\nabla}\times\boldsymbol{B}\,. \tag{59}\]
4. The electron and ion viscosity tensors both take the form (32), and the electron pressure anisotropy, defined by (35), is small compared to the ion pressure anisotropy: \(\Delta_{e}\ll\Delta_{i}\).

It then follows directly from (4b), summed over both ion and electron species, that

\[m_{i}n_{i}\frac{\mathrm{D}\boldsymbol{V}_{i}}{\mathrm{D}t}\bigg{|}_{i}=-\boldsymbol{\nabla}\left(\frac{B^{2}}{8\pi}+p_{e\perp}+p_{i\perp}\right)-\boldsymbol{\nabla}\boldsymbol{\cdot}\left[\hat{\boldsymbol{z}}\hat{\boldsymbol{z}}\left(p_{i\perp}-p_{i\parallel}\right)\right]+\frac{\boldsymbol{B}\boldsymbol{\cdot}\boldsymbol{\nabla}\boldsymbol{B}}{4\pi}\,. \tag{60}\]

We remind the reader that \(\hat{\boldsymbol{z}}=\boldsymbol{B}/B\), and emphasise that we have neglected the electron inertial term on the grounds that it is small compared to the ion inertial term:

\[m_{e}n_{e}\frac{\mathrm{D}\boldsymbol{V}_{e}}{\mathrm{D}t}\bigg{|}_{e}\ll m_{i}n_{i}\frac{\mathrm{D}\boldsymbol{V}_{i}}{\mathrm{D}t}\bigg{|}_{i}\,. \tag{61}\]

The evolution of the magnetic field is described by the induction equation,

\[\frac{\mathrm{D}\boldsymbol{B}}{\mathrm{D}t}\bigg{|}_{i}=\boldsymbol{B}\boldsymbol{\cdot}\boldsymbol{\nabla}\boldsymbol{V}_{i}-\boldsymbol{B}\boldsymbol{\nabla}\boldsymbol{\cdot}\boldsymbol{V}_{i}\,, \tag{62}\]

which is derived by substituting (58) into Faraday's law (2c). Now consider small-amplitude perturbations with respect to a particular macroscale state of the plasma

\[\delta\boldsymbol{V}_{i}=\delta\widehat{\boldsymbol{V}}_{i\perp}\exp\left\{\mathrm{i}\left(\boldsymbol{k}\boldsymbol{\cdot}\boldsymbol{r}-\omega t\right)\right\}, \tag{63a}\]
\[\delta\boldsymbol{B}=\delta\widehat{\boldsymbol{B}}_{\perp}\exp\left\{\mathrm{i}\left(\boldsymbol{k}\boldsymbol{\cdot}\boldsymbol{r}-\omega t\right)\right\}, \tag{63b}\]

whose characteristic frequency \(\omega\) is much greater than that of the plasma's bulk fluid motions (but is still much smaller than \(\Omega_{i}\)), whose wavevector \(\boldsymbol{k}=k_{\parallel}\hat{\boldsymbol{z}}\) is parallel to \(\boldsymbol{B}\), and assume also that the velocity and magnetic-field perturbations are perpendicular to \(\boldsymbol{B}\). It is then easy to show that (60) and (62) become

\[-\mathrm{i}m_{i}n_{i}\omega\delta\widehat{\boldsymbol{V}}_{i\perp}=\mathrm{i}\left(\frac{B^{2}}{4\pi}+p_{i\perp}-p_{i\parallel}\right)k_{\parallel}\frac{\delta\widehat{\boldsymbol{B}}_{\perp}}{B}\,, \tag{64a}\]
\[-\mathrm{i}\omega\delta\widehat{\boldsymbol{B}}_{\perp}=\mathrm{i}Bk_{\parallel}\delta\widehat{\boldsymbol{V}}_{i\perp}\,, \tag{64b}\]

where \(p_{i\perp}\) and \(p_{i\parallel}\) are the perpendicular and parallel ion pressures associated with the macroscale state (which, on account of its comparatively slow evolution compared to the perturbation, can be regarded as a quasi-equilibrium).
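Equations (64) form a closed \(2\times 2\) linear system, so the eigenfrequencies can be read off numerically. A minimal sketch (in assumed units \(v_{\mathrm{th}i}=B=1\), using \(B^{2}/4\pi m_{i}n_{i}=v_{\mathrm{th}i}^{2}/\beta_{i}\) and \(p_{i\perp}-p_{i\parallel}=\Delta_{i}\,m_{i}n_{i}v_{\mathrm{th}i}^{2}/2\); the parameter values are illustrative):

```python
import numpy as np

def firehose_omega(k_par, beta_i, Delta_i):
    """Eigenfrequencies of the 2x2 system (64), in units with v_thi = B = 1."""
    a = -(1.0 / beta_i + Delta_i / 2.0) * k_par   # couples dB -> dV, from (64a)
    b = -k_par                                    # couples dV -> dB, from (64b)
    M = np.array([[0.0, a], [b, 0.0]])
    return np.linalg.eigvals(M)                   # omega = +/- sqrt(a*b)

print(firehose_omega(1.0, 100.0,  0.00))   # real pair: stable Alfvenic oscillation
print(firehose_omega(1.0, 100.0, -0.05))   # imaginary pair: unstable
```

For \(\Delta_{i}\) sufficiently negative, the frequency pair moves onto the imaginary axis, anticipating the dispersion relation derived next.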
Physically, the macroscale flow gives rise to different values of \(p_{i\perp}\) and \(p_{i\parallel}\), and thereby an ion pressure anisotropy \(\Delta_{i}\), because it changes the strength \(B\) of the macroscale magnetic field; thanks to the effective conservation of the first and second adiabatic invariants of the ions on the evolution timescale of the macroscale flow (Chew _et al._, 1956), an increase (decrease) in \(B\) results in an increase (decrease) in \(p_{i\perp}\), and a decrease (increase) in \(p_{i\parallel}\). The dispersion relation for the perturbation is then

\[\omega^{2}=k_{\parallel}^{2}v_{\mathrm{th}i}^{2}\left(\frac{1}{\beta_{i}}+\frac{\Delta_{i}}{2}\right)\,, \tag{65}\]

where \(\beta_{i}\), defined by (57), is the ion plasma beta. For a sufficiently negative ion pressure anisotropy, viz., \(\Delta_{i}<-2/\beta_{i}\), the perturbation is unstable. This instability is known as the (parallel) firehose instability. The underlying physics of the parallel firehose instability has been discussed extensively elsewhere (see Rosin _et al._, 2011, and references therein; also see section 4.4.1). Here, we simply note that the firehose instability arises in a magnetised plasma with sufficiently negative pressure anisotropy as compared to the inverse of the ion plasma beta; because the ion CE distribution function has a small, non-zero pressure anisotropy, this statement applies to CE plasma at large \(\beta_{i}\). We also observe that the product of the growth rate (65) of the firehose instability with the ion-ion collision time satisfies

\[\omega\tau_{i}\sim k_{\parallel}\lambda_{i}\left|\frac{1}{\beta_{i}}+\frac{\Delta_{i}}{2}\right|^{1/2}\sim\frac{1}{\beta_{i}}\frac{\lambda_{i}}{\rho_{i}}\,, \tag{66}\]

where we have assumed that \(|\Delta_{i}|\lesssim 2\beta_{i}^{-1}\), and employed the (non-trivial) result that the peak growth of the parallel firehose instability occurs at wavenumbers satisfying \(k_{\parallel}\rho_{i}\sim\beta_{i}^{-1/2}\) (see sections 4.4.1 and 4.4.2). Thus, if \(\beta_{i}\ll\lambda_{i}/\rho_{i}\) - a condition easily satisfied in weakly collisional astrophysical environments such as the ICM (see table 4) - it follows that \(\omega\tau_{i}\gg 1\), and so collisional damping is unable to inhibit the parallel firehose in a CE plasma2. This failure is directly attributable to its characteristic wavelength being at collisionless scales: the parallel wavenumber satisfies \(k_{\parallel}\lambda_{i}\sim\beta_{i}^{-1/2}\lambda_{i}/\rho_{i}\gg 1\).

Footnote 2: In fact, the naive condition \(\gamma\tau_{i}\lesssim 1\) is not sufficient to ensure collisional stabilisation of the firehose instability; the true stabilisation condition is instead \(k_{\parallel}\lambda_{i}\lesssim 1\) (see section 2.5.7 for a discussion of this claim).

This simple example clearly illustrates that microinstabilities are indeed possible in a classical, collisional plasma, for precisely the reasons given in section 2.3.2.

#### 2.3.4 Which microinstabilities are relevant

Although the naive arguments described in section 2.3.2 do not imply kinetic stability of CE plasma, these same arguments do lead to significant restrictions on the type of microinstabilities that can arise. Namely, for some plasma modes, the small anisotropy of CE distribution functions is an insufficient free-energy source for overcoming the competing collisionless damping mechanisms that ensure stability for pure Maxwellian distribution functions - e.g., Landau damping or cyclotron damping.
For other plasma modes, the characteristic length scales are so large that collisional damping does suppress growth. In magnetised plasmas, there also exist cyclotron harmonic oscillations that, despite minimal damping, can only become unstable for sufficiently large anisotropy of the particle distribution function: e.g., the electrostatic Harris instability (Harris, 1959; Hall _et al._, 1964). Since the anisotropy threshold for such microinstabilities is typically \(\Delta_{s}\gtrsim 1\) (Shima & Hall, 1965), they cannot operate in a CE plasma. We claim that there are only two classes of microinstabilities that can be triggered in a CE plasma. The first are _quasi-cold plasma modes_: these are modes whose frequency is so large that resonant wave-particle interactions (Landau or cyclotron resonances) only occur with electrons whose speed greatly exceeds the electron thermal speed \(v_{\mathrm{th}e}\). Collisionless damping of such modes is typically very weak, and thus small anisotropies of particle distribution functions can be sufficient to drive an instability. Well-known examples of a small non-Maxwellian part of the distribution function giving rise to microinstabilities include the bump-on-tail instability associated with a fast beam of electrons (see section 3.3.3 of Davidson 1983), or the whistler instability for small temperature anisotropies (see section 3.3.5 of Davidson 1983). The existence of such instabilities for the CE distribution can be demonstrated explicitly: e.g., the peak growth rate of the bump-on-tail instability associated with the CE distribution function ('the CE bump-on-tail instability') is calculated in appendix D.3. However, the growth rates \(\gamma\) of such instabilities are exponentially small in \(\lambda_{e}/L\ll 1\). This claim, which is explicitly proven for the CE bump-on-tail instability in appendix D.3, applies to all electrostatic instabilities (see appendix D.4), and it can be argued that it also applies to all quasi-cold plasma modes (see appendix E). When combined with the constraint that the resonant wave-particle interactions required for such instabilities cannot occur if \(\gamma\tau_{r}\lesssim 1\), where \(\tau_{r}\) is the collision time of the resonant particles, the exponential smallness of the growth rate suggests that such microinstabilities will not be significant provided \(\lambda_{e}/L\) really is small. As discussed in section 2.2.3, plasmas in which \(\lambda_{e}/L\) is only moderately small are not well modelled as CE plasmas anyway, and thus, for the rest of this paper, we will not study quasi-cold-plasma-mode instabilities. The second class of allowed microinstabilities comprises modes that are electromagnetic and low-frequency in the sense that the complex frequency \(\omega\) of the microinstability satisfies, for at least one particle species \(s\),

\[\frac{\omega}{kv_{\mathrm{th}s}}\sim\left(\frac{\lambda_{s}}{L}\right)^{\iota}\ll 1, \tag{67}\]

where \(\iota\) is some order-unity number. Low-frequency electromagnetic modes are in general only subject to weak Landau and cyclotron damping (of order \(\omega/kv_{\mathrm{th}s}\ll 1\) or less), and thus can become unstable for small distribution-function anisotropies. By contrast, electromagnetic modes satisfying \(\omega\sim kv_{\mathrm{th}s}\) would typically generate strong inductive electric fields, which would in turn be subject to significant Landau or cyclotron damping, overwhelming any unstable tendency.
The firehose instability introduced in section 2.3.3 is one example of this type of microinstability: it satisfies (67) with \(\iota=1/2\), provided its \(\beta\)-stabilisation threshold is surpassed. In this paper, we will focus on microinstabilities in this second class. Whilst small compared to the streaming rate \(kv_{\mathrm{th}s}\) of species \(s\), the growth rates satisfying (67) can still be significantly larger than the rate at which the plasma evolves on macroscopic scales, and thus invalidate the CE expansion. We do not in this paper present a rigorous proof that there are no microinstabilities of the CE distribution function which do not fall into either of the two classes considered above. However, there do exist more precise arguments supporting the latter claim than those based on physical intuition just presented; these are discussed further in sections 2.4.2 and 2.5.8. The microinstabilities satisfying (67) fall into two sub-classes. The first sub-class consists of microinstabilities driven by the CE temperature-gradient, CE electron-friction and CE electron-ion-drift terms in the CE distribution functions (8); we refer to these collectively as _CE temperature-gradient-driven microinstabilities_, or CET microinstabilities, on account of the parameters \(\eta_{e}^{R}\) and \(\eta_{e}^{u}\) scaling with temperature gradients (see section 2.2.2). The second sub-class is microinstabilities driven by the CE shear terms, or _CE shear-driven microinstabilities_ (CES microinstabilities). This sub-classification is necessary for two reasons. First, the velocity-space anisotropy associated with the CE shear terms is different from other non-Maxwellian terms, and thus different types of microinstabilities can emerge for the two sub-classes. Secondly, as was discussed in section 2.2.3 for the case of CE plasma, the typical size of the small parameters \(\eta_{e}^{T}\), \(\eta_{e}^{R}\), \(\eta_{e}^{u}\) and \(\eta_{i}\) is different from that of \(\epsilon_{e}\) and \(\epsilon_{i}\). In our initial overview of our calculations (section 2.4) and in the more detailed discussion of our method (section 2.5), we will consider all microinstabilities driven by the non-Maxwellian terms of the CE distribution together; however, when it comes to presenting detailed results, we will consider CET and CES microinstabilities separately (sections 3 and 4, respectively).

### 2.4 Linear stability calculation: overview

#### 2.4.1 General dispersion relation

Our linear kinetic stability calculation proceeds as follows: we consider an electromagnetic perturbation with wavevector \(\boldsymbol{k}\) and (complex) frequency \(\omega\) of the form

\[\delta\boldsymbol{E}=\widehat{\delta\boldsymbol{E}}\exp\left\{\mathrm{i}\left(\boldsymbol{k}\boldsymbol{\cdot}\boldsymbol{r}-\omega t\right)\right\}, \tag{68a}\]
\[\delta\boldsymbol{B}=\widehat{\delta\boldsymbol{B}}\exp\left\{\mathrm{i}\left(\boldsymbol{k}\boldsymbol{\cdot}\boldsymbol{r}-\omega t\right)\right\}, \tag{68b}\]

in a plasma with the equilibrium electron and ion distribution functions given by (8a) and (8b), respectively. We assume that all macroscopic parameters in the CE distribution function are effectively constant on the time scales and length scales associated with microinstabilities: this is equivalent to assuming that \(k\lambda_{e},k\lambda_{i}\gg 1\) (where \(k\equiv|\boldsymbol{k}|\) is the wavenumber of the perturbation), and \(|\omega|\tau_{L}\gg 1\).
To minimise confusion between quantities evolving on short, collisionless time scales, and those on long, fluid time scales, we relabel the equilibrium number density of species \(s\) as \(n_{s0}\), and the macroscopic magnetic field as \(\boldsymbol{B}_{0}\) in subsequent calculations. For notational convenience, we define

\[\eta_{e}\equiv\eta_{e}^{T}, \tag{69}\]

and

\[A_{e}(\tilde{v}_{e})\equiv A_{e}^{T}(\tilde{v}_{e})+\frac{\eta_{e}^{R}}{\eta_{e}^{T}}A_{e}^{R}(\tilde{v}_{e})+\frac{\eta_{e}^{u}}{\eta_{e}^{T}}A_{e}^{u}(\tilde{v}_{e})\,, \tag{70}\]

which in turn allows for the equilibrium distribution function of species \(s\) to be written as

\[f_{s0}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})=\frac{n_{s0}}{v_{\mathrm{th}s}^{3}\pi^{3/2}}\exp\left(-\tilde{v}_{s}^{2}\right)\left[1+\eta_{s}A_{s}(\tilde{v}_{s})\tilde{v}_{s\parallel}+\epsilon_{s}C_{s}(\tilde{v}_{s})\left(\tilde{v}_{s\parallel}^{2}-\frac{\tilde{v}_{s\perp}^{2}}{2}\right)\right]. \tag{71}\]

Finally, without loss of generality, we can set \(\boldsymbol{V}_{i}=0\) by choosing to perform the kinetic calculation in the frame of the ions; thus, \(\tilde{\boldsymbol{v}}_{s}=\boldsymbol{v}/v_{\mathrm{th}s}\). It is well known (Stix, 1962; Parra, 2017) that the electric field of all linear electromagnetic perturbations in a collisionless, magnetised plasma with equilibrium distribution function \(f_{s0}\) must satisfy

\[\left[\frac{c^{2}k^{2}}{\omega^{2}}\left(\hat{\boldsymbol{k}}\hat{\boldsymbol{k}}-\boldsymbol{I}\right)+\mathfrak{E}\right]\boldsymbol{\cdot}\widehat{\delta\boldsymbol{E}}=0\,, \tag{72}\]

where \(\hat{\boldsymbol{k}}\equiv\boldsymbol{k}/k\) is the direction of the perturbation,

\[\mathfrak{E}\equiv\boldsymbol{I}+\frac{4\pi\mathrm{i}}{\omega}\boldsymbol{\sigma} \tag{73}\]

the plasma dielectric tensor, and \(\boldsymbol{\sigma}\) the plasma conductivity tensor. The hot-plasma dispersion relation is then given by

\[\det\left[\frac{c^{2}k^{2}}{\omega^{2}}\left(\hat{\boldsymbol{k}}\hat{\boldsymbol{k}}-\boldsymbol{I}\right)+\mathfrak{E}\right]=0. \tag{74}\]

The conductivity tensor in a hot, magnetised plasma is best displayed in an orthogonal coordinate system with basis vectors \(\{\hat{\boldsymbol{x}},\hat{\boldsymbol{y}},\hat{\boldsymbol{z}}\}\) defined in terms of \(\boldsymbol{B}_{0}\) and \(\boldsymbol{k}\):

\[\hat{\boldsymbol{z}}\equiv\frac{\boldsymbol{B}_{0}}{B_{0}},\quad\hat{\boldsymbol{x}}\equiv\frac{\boldsymbol{k}_{\perp}}{k_{\perp}}\equiv\frac{\boldsymbol{k}-k_{\parallel}\hat{\boldsymbol{z}}}{k_{\perp}},\quad\hat{\boldsymbol{y}}\equiv\hat{\boldsymbol{z}}\times\hat{\boldsymbol{x}}, \tag{75}\]

where \(B_{0}\equiv|\boldsymbol{B}_{0}|\), \(k_{\parallel}\equiv\boldsymbol{k}\boldsymbol{\cdot}\hat{\boldsymbol{z}}\), and \(k_{\perp}\equiv|\boldsymbol{k}_{\perp}|\). In this notation, \(\boldsymbol{k}=k_{\parallel}\hat{\boldsymbol{z}}+k_{\perp}\hat{\boldsymbol{x}}\).
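For concreteness, a small sketch of the basis construction (75); the helper name and the numerical vectors are arbitrary examples of ours:

```python
import numpy as np

def wave_basis(B0, k):
    """Build the orthonormal basis (75) from the mean field B0 and wavevector k."""
    z = B0 / np.linalg.norm(B0)
    k_par = np.dot(k, z)
    k_perp_vec = k - k_par * z
    x = k_perp_vec / np.linalg.norm(k_perp_vec)   # assumes k is not parallel to B0
    y = np.cross(z, x)
    return x, y, z, k_par, np.linalg.norm(k_perp_vec)

x, y, z, kpar, kperp = wave_basis(np.array([0., 0., 2.]), np.array([1., 1., 3.]))
print(np.dot(x, y), np.dot(x, z))          # -> 0, 0 (orthonormality)
print(np.allclose(kpar * z + kperp * x, [1., 1., 3.]))   # k = kpar*z + kperp*x
```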
The conductivity tensor is then given by

\[\boldsymbol{\sigma}=\sum_{s}\boldsymbol{\sigma}_{s}=-\frac{\mathrm{i}}{4\pi\omega}\sum_{s}\omega_{\mathrm{ps}}^{2}\bigg{[}\frac{2}{\sqrt{\pi}}\frac{k_{\parallel}}{|k_{\parallel}|}\int_{-\infty}^{\infty}\mathrm{d}\tilde{w}_{s\parallel}\,\tilde{w}_{s\parallel}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\Lambda_{s}(\tilde{w}_{s\parallel},\tilde{v}_{s\perp})\hat{\boldsymbol{z}}\hat{\boldsymbol{z}}\\ +\tilde{\omega}_{s\parallel}\frac{2}{\sqrt{\pi}}\int_{C_{L}}\mathrm{d}\tilde{w}_{s\parallel}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\tilde{v}_{s\perp}^{2}\,\Xi_{s}(\tilde{w}_{s\parallel},\tilde{v}_{s\perp})\sum_{n=-\infty}^{\infty}\frac{\boldsymbol{R}_{sn}}{\zeta_{sn}-\tilde{w}_{s\parallel}}\bigg{]}\,, \tag{76}\]

where

\[\omega_{\mathrm{ps}}\equiv\sqrt{\frac{4\pi Z_{s}^{2}e^{2}n_{s0}}{m_{s}}}, \tag{77}\]
\[\tilde{w}_{s\parallel}\equiv\frac{k_{\parallel}\tilde{v}_{s\parallel}}{|k_{\parallel}|}, \tag{78}\]
\[\tilde{\rho}_{s}\equiv\frac{m_{s}cv_{\mathrm{th}s}}{Z_{s}eB_{0}}=\frac{|Z_{s}|}{Z_{s}}\rho_{s}, \tag{79}\]
\[\tilde{\omega}_{s\parallel}\equiv\frac{\omega}{|k_{\parallel}|v_{\mathrm{th}s}}, \tag{80}\]
\[\zeta_{sn}\equiv\tilde{\omega}_{s\parallel}-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}, \tag{81}\]
\[\tilde{f}_{s0}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\equiv\frac{\pi^{3/2}v_{\mathrm{th}s}^{3}}{n_{s0}}f_{s0}\left(\frac{k_{\parallel}}{|k_{\parallel}|}v_{\mathrm{th}s}\tilde{w}_{s\parallel},v_{\mathrm{th}s}\tilde{v}_{s\perp}\right), \tag{82}\]
\[\Lambda_{s}(\tilde{w}_{s\parallel},\tilde{v}_{s\perp})\equiv\tilde{v}_{s\perp}\frac{\partial\tilde{f}_{s0}}{\partial\tilde{w}_{s\parallel}}-\tilde{w}_{s\parallel}\frac{\partial\tilde{f}_{s0}}{\partial\tilde{v}_{s\perp}}, \tag{83}\]
\[\Xi_{s}(\tilde{w}_{s\parallel},\tilde{v}_{s\perp})\equiv\frac{\partial\tilde{f}_{s0}}{\partial\tilde{v}_{s\perp}}+\frac{\Lambda_{s}(\tilde{w}_{s\parallel},\tilde{v}_{s\perp})}{\tilde{\omega}_{s\parallel}}, \tag{84}\]
\[(\boldsymbol{R}_{sn})_{xx}\equiv\frac{n^{2}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}\tilde{v}_{s\perp}^{2}}, \tag{85a}\]
\[(\boldsymbol{R}_{sn})_{xy}\equiv\frac{\mathrm{i}\,nJ_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})J_{n}^{\prime}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})}{k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp}}, \tag{85b}\]
\[(\boldsymbol{R}_{sn})_{xz}\equiv\frac{nJ_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}}{k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp}}\frac{k_{\parallel}\tilde{w}_{s\parallel}}{|k_{\parallel}|\tilde{v}_{s\perp}}, \tag{85c}\]
\[(\boldsymbol{R}_{sn})_{yx}\equiv-(\boldsymbol{R}_{sn})_{xy}, \tag{85d}\]
\[(\boldsymbol{R}_{sn})_{yy}\equiv J_{n}^{\prime}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}, \tag{85e}\]
\[(\boldsymbol{R}_{sn})_{yz}\equiv-\mathrm{i}\,nJ_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})J_{n}^{\prime}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})\frac{k_{\parallel}\tilde{w}_{s\parallel}}{|k_{\parallel}|\tilde{v}_{s\perp}}, \tag{85f}\]
\[(\boldsymbol{R}_{sn})_{zx}\equiv(\boldsymbol{R}_{sn})_{xz}, \tag{85g}\]
\[(\boldsymbol{R}_{sn})_{zy}\equiv-(\boldsymbol{R}_{sn})_{yz}, \tag{85h}\]
\[(\boldsymbol{R}_{sn})_{zz}\equiv\frac{\tilde{w}_{s\parallel}^{2}}{\tilde{v}_{s\perp}^{2}}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}. \tag{85i}\]

Here \((\boldsymbol{R}_{sn})_{xy}=\hat{\boldsymbol{x}}\cdot\boldsymbol{R}_{sn}\cdot\hat{\boldsymbol{y}}\), and similarly for other components of \(\boldsymbol{R}_{sn}\).
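The components (85) reduce to evaluations of Bessel functions and are simple to compute numerically. As an illustration, here is a minimal sketch (a hypothetical helper, not from the paper) that assembles \(\boldsymbol{R}_{sn}\) for given arguments and verifies both the (anti)symmetries (85d), (85g), (85h) and the rank-one identity \((\boldsymbol{R}_{sn})_{xx}(\boldsymbol{R}_{sn})_{yy}=|(\boldsymbol{R}_{sn})_{xy}|^{2}\) implied by (85a), (85b) and (85e):

```python
# Minimal sketch (hypothetical helper, not from the paper): evaluate the matrix
# R_sn of (85a)-(85i) in the {x, y, z} basis of (75), for harmonic number n,
# with Bessel argument a = (k_perp rho_s) * v_perp, parallel velocity w_par and
# perpendicular velocity v_perp. We take k_par > 0, as assumed after (89).
import numpy as np
from scipy.special import jv, jvp

def R_sn(n, k_perp_rho, w_par, v_perp):
    a = k_perp_rho * v_perp                      # Bessel argument
    J, Jp = jv(n, a), jvp(n, a, 1)               # J_n and its derivative J_n'
    R = np.empty((3, 3), dtype=complex)
    R[0, 0] = n**2 * J**2 / a**2                 # (85a)
    R[0, 1] = 1j * n * J * Jp / a                # (85b)
    R[0, 2] = (n * J**2 / a) * (w_par / v_perp)  # (85c), with k_par > 0
    R[1, 0] = -R[0, 1]                           # (85d)
    R[1, 1] = Jp**2                              # (85e)
    R[1, 2] = -1j * n * J * Jp * w_par / v_perp  # (85f)
    R[2, 0] = R[0, 2]                            # (85g)
    R[2, 1] = -R[1, 2]                           # (85h)
    R[2, 2] = (w_par / v_perp)**2 * J**2         # (85i)
    return R

R = R_sn(n=2, k_perp_rho=0.7, w_par=0.5, v_perp=1.3)
assert np.isclose(R[0, 0] * R[1, 1], abs(R[0, 1])**2)  # rank-one identity
```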
For the reader's convenience, a summary of the derivation of the hot-plasma dispersion relation is given in appendix C. We note that the dielectric and conductivity tensors have the following symmetries:

\[\mathfrak{E}_{yx}=-\mathfrak{E}_{xy}\,,\quad\mathfrak{E}_{zx}=\mathfrak{E}_{xz}\,,\quad\mathfrak{E}_{zy}=-\mathfrak{E}_{yz}\,, \tag{86}\]
\[\sigma_{yx}=-\sigma_{xy}\,,\quad\sigma_{zx}=\sigma_{xz}\,,\quad\sigma_{zy}=-\sigma_{yz}\,, \tag{87}\]

where, for tensors with no species subscript, we use the notation \(\mathfrak{E}_{xy}\equiv\hat{\boldsymbol{x}}\cdot\mathfrak{E}\cdot\hat{\boldsymbol{y}}\). We also observe that if \(f_{s0}(v_{\parallel},v_{\perp})\) is an even function with respect to \(v_{\parallel}\), then, for \(k_{\parallel}>0\),

\[\sigma_{xx}(-k_{\parallel}) = \sigma_{xx}(k_{\parallel})\,, \tag{88a}\]
\[\sigma_{xy}(-k_{\parallel}) = \sigma_{xy}(k_{\parallel})\,, \tag{88b}\]
\[\sigma_{xz}(-k_{\parallel}) = -\sigma_{xz}(k_{\parallel})\,, \tag{88c}\]
\[\sigma_{yy}(-k_{\parallel}) = \sigma_{yy}(k_{\parallel})\,, \tag{88d}\]
\[\sigma_{yz}(-k_{\parallel}) = -\sigma_{yz}(k_{\parallel})\,, \tag{88e}\]
\[\sigma_{zz}(-k_{\parallel}) = \sigma_{zz}(k_{\parallel})\,, \tag{88f}\]

with the remaining components of the conductivity tensor given by equations (87). If \(f_{s0}(v_{\parallel},v_{\perp})\) is an odd function with respect to \(v_{\parallel}\), then

\[\sigma_{xx}(-k_{\parallel}) = -\sigma_{xx}(k_{\parallel})\,, \tag{89a}\]
\[\sigma_{xy}(-k_{\parallel}) = -\sigma_{xy}(k_{\parallel})\,, \tag{89b}\]
\[\sigma_{xz}(-k_{\parallel}) = \sigma_{xz}(k_{\parallel})\,, \tag{89c}\]
\[\sigma_{yy}(-k_{\parallel}) = -\sigma_{yy}(k_{\parallel})\,, \tag{89d}\]
\[\sigma_{yz}(-k_{\parallel}) = \sigma_{yz}(k_{\parallel})\,, \tag{89e}\]
\[\sigma_{zz}(-k_{\parallel}) = -\sigma_{zz}(k_{\parallel})\,. \tag{89f}\]

These symmetries can be used to determine completely the behaviour of perturbations with \(k_{\parallel}<0\) directly from perturbations with \(k_{\parallel}>0\), without any additional calculations. Thus, unless stated otherwise, from this point on, we assume \(k_{\parallel}>0\), and thus \(\tilde{w}_{s\parallel}=\tilde{v}_{s\parallel}\) [see (78)].

#### 2.4.2 Simplifications of dispersion relation: overview of our approach

The full hot-plasma dispersion relation (74) is a transcendental equation, and thus, for general distribution functions, the growth rates of perturbations can only be determined numerically; this hinders the systematic investigation of stability over wide-ranging parameter regimes. However, adopting a few simplifications both to the form of the CE distribution functions (71) and to the type of microinstabilities being considered (see section 2.3.4) turns out to be advantageous when attempting a systematic study. It enables us to obtain simple analytical results for microinstability growth rates and characteristic wavenumbers, as well as greatly reducing the numerical cost of evaluating these quantities. The former allows us to make straightforward comparisons between microinstabilities, while the latter facilitates the calculation of stability plots over a wide range of parameters without requiring intensive computational resources. First, we choose a Krook collision operator, with constant collision time \(\tau_{s}\) for each species \(s\) (Bhatnagar _et al._, 1954), when evaluating the isotropic functions \(A_{e}^{T}(\tilde{v}_{e})\), \(A_{e}^{R}(\tilde{v}_{e})\), \(A_{e}^{u}(\tilde{v}_{e})\), \(A_{i}(\tilde{v}_{i})\), \(C_{e}(\tilde{v}_{e})\), and \(C_{i}(\tilde{v}_{i})\) in (71).
As was explained in section 2.2.1, these functions are determined by the collision operator. While the full Landau collision operator might seem to be the most appropriate choice, the conductivity tensor \(\boldsymbol{\sigma}\) defined by (76) cannot be written in terms of standard mathematical functions if this choice is made. Instead, the relevant integrals must be done numerically. If a simplified collision operator is assumed, \(\boldsymbol{\sigma}\) can be evaluated analytically with only a moderate amount of algebra. In appendix B.2.1, we show that for the Krook collision operator,

\[A_{e}^{T}(\tilde{v}_{e}) =-\left(\tilde{v}_{e}^{2}-\frac{5}{2}\right)\,, \tag{90a}\]
\[A_{e}^{R}(\tilde{v}_{e}) =-1\,, \tag{90b}\]
\[A_{e}^{u}(\tilde{v}_{e}) =0\,, \tag{90c}\]
\[A_{i}(\tilde{v}_{i}) =-\left(\tilde{v}_{i}^{2}-\frac{5}{2}\right)\,, \tag{90d}\]
\[C_{e}(\tilde{v}_{e}) =-1\,, \tag{90e}\]
\[C_{i}(\tilde{v}_{i}) =-1\,, \tag{90f}\]

where it is assumed that \(\tilde{v}_{e},\tilde{v}_{i}\ll\eta_{e}^{-1/3},\epsilon_{i}^{-1/2}\) in order that the CE distribution functions remain positive (the vanishing of the CE electron-ion-drift term is discussed in appendix B.2.1). Adopting the Krook collision operator has the additional advantage of allowing a simple prescription for collisional damping of microinstabilities to be introduced self-consistently into our stability calculation (see section 2.5.7 for further discussion of this). Secondly, as discussed in section 2.3.4, the most important microinstabilities associated with the CE distribution function are low-frequency, i.e., they satisfy (67). Therefore, instead of solving the full hot-plasma dispersion relation, we can obtain a less complicated algebraic dispersion relation. We also always consider electromagnetic rather than electrostatic perturbations. This is because it can be shown for a CE plasma that purely electrostatic microinstabilities are limited to the quasi-cold plasma modes (see appendix D). Describing how the simplified dispersion relation for low-frequency, electromagnetic perturbations is obtained from the full hot-plasma dispersion relation requires a rather lengthy exposition, and necessitates the introduction of a substantial amount of additional mathematical notation. In addition, certain shortcomings of this approach warrant an extended discussion. Readers who are interested in these details will find them in the next section (section 2.5). Readers who are instead keen to see the results of the stability calculations as soon as possible are encouraged to jump to sections 3 and 4.

### Linear stability calculation: detailed methodology

#### 2.5.1 Low-frequency condition in a magnetised plasma

Before applying the simplifications discussed in section 2.4.2 to the hot-plasma dispersion relation (74), we refine the low-frequency condition (67) based on the specific form (76) of the conductivity tensor for a magnetised plasma. It is clear that the equilibrium distribution function only affects the conductivity tensor via the functions \(\Lambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\) and \(\Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\) [see (83) and (84)].
For a distribution function of the form (71), it can be shown that

\[\Lambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})=-\tilde{v}_{s\perp}\exp\left(-\tilde{v}_{s}^{2}\right)\left[\eta_{s}A_{s}(\tilde{v}_{s})-3\epsilon_{s}C_{s}(\tilde{v}_{s})\tilde{v}_{s\parallel}\right], \tag{91}\]

and

\[\Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})=-\tilde{v}_{s\perp}\exp\left(-\tilde{v}_{s}^{2}\right)\left[2+2\tilde{v}_{s\parallel}\eta_{s}A_{s}(\tilde{v}_{s})-\frac{\tilde{v}_{s\parallel}}{\tilde{v}_{s}}\eta_{s}A^{\prime}_{s}(\tilde{v}_{s})\right.\]
\[\left.+2\epsilon_{s}C_{s}(\tilde{v}_{s})\left(\tilde{v}_{s\parallel}^{2}-\frac{\tilde{v}_{s\perp}^{2}}{2}+\frac{1}{2}\right)-\frac{1}{\tilde{v}_{s}}\left(\tilde{v}_{s\parallel}^{2}-\frac{\tilde{v}_{s\perp}^{2}}{2}\right)\epsilon_{s}C^{\prime}_{s}(\tilde{v}_{s})\right.\]
\[\left.+\frac{\eta_{s}}{\tilde{\omega}_{s\parallel}}A_{s}(\tilde{v}_{s})-3\frac{\epsilon_{s}}{\tilde{\omega}_{s\parallel}}C_{s}(\tilde{v}_{s})\tilde{v}_{s\parallel}\right], \tag{92}\]

where the first term in the square brackets in (92) originates from the Maxwellian part of the distribution function. A comparison of the size of the second, third, fourth, and fifth terms with the first indicates that for \(\tilde{v}_{s}\sim 1\) - for which \(\Xi_{s}\) attains its largest characteristic values - the non-Maxwellian terms of the CE distribution function only provide a small, \(O(\eta_{e},\epsilon_{e})\), contribution, and thus the conductivity is only altered slightly. However, considering the sixth and seventh terms in the square brackets in (92) (which are only present thanks to the anisotropy of the CE distribution function), it is clear that the non-Maxwellian contribution to the conductivity tensor can be significant for \(\tilde{v}_{s}\sim 1\) provided the frequency (80) satisfies one of

\[\tilde{\omega}_{s\parallel}\sim\eta_{s}\ll 1\quad\mbox{or}\quad\tilde{\omega}_{s\parallel}\sim\epsilon_{s}\ll 1\,. \tag{93}\]

Thus, the relevant low-frequency condition in a magnetised plasma involves the parallel particle streaming rate \(k_{\parallel}v_{\rm ths}\). There do exist certain caveats to the claim that it is necessary for microinstabilities of CE plasma to satisfy (93); we defer a detailed statement and discussion of these caveats - as well as of other potential shortcomings of our approach - to sections 2.5.6, 2.5.7 and 2.5.8.

#### 2.5.2 Simplification I: non-relativistic electromagnetic fluctuations

The requirement that the mode be electromagnetic, combined with the fact that we are interested in non-relativistic fluctuations (\(\omega\ll kc\)), enables our first simplification. We see from (74) that for any perturbation of interest, the dielectric tensor must satisfy \(\|\mathfrak{E}\|\gtrsim k^{2}c^{2}/\omega^{2}\gg 1\) (where \(\|\cdot\|\) is the Euclidean tensor norm); therefore, it simplifies to

\[\mathfrak{E}\approx\frac{4\pi\mathrm{i}}{\omega}\boldsymbol{\sigma}\,. \tag{94}\]

This amounts to ignoring the displacement current in the Ampere-Maxwell law, leaving Ampere's original equation. For convenience of exposition, we denote the contribution of each species \(s\) to (94) by

\[\mathfrak{E}_{s}\equiv\frac{4\pi\mathrm{i}}{\omega}\boldsymbol{\sigma}_{s}\,.
\tag{95}\]

#### 2.5.3 Simplification II: expansion of dielectric tensor in \(\omega\ll k_{\parallel}v_{\mathrm{th}s}\)

The next simplification involves an expansion of the matrices \(\mathfrak{E}_{s}\) in the small parameters \(\tilde{\omega}_{s\parallel}\sim\eta_{s}\sim\epsilon_{s}\ll 1\). The general principle of the expansion is as follows. We first divide the matrix \(\mathfrak{E}_{s}\) [see (73), (76), and (95)] into the Maxwellian contribution \(\boldsymbol{M}_{s}\) and the non-Maxwellian one \(\boldsymbol{P}_{s}\):

\[\mathfrak{E}_{s}=\frac{\omega_{\mathrm{ps}}^{2}}{\omega^{2}}\left(\boldsymbol{M}_{s}+\boldsymbol{P}_{s}\right), \tag{96}\]

where the \(\omega_{\mathrm{ps}}^{2}/\omega^{2}\) factor is introduced for later convenience. Next, we note that for a Maxwellian distribution, \(\Lambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})=0\) [see (83)], whereas \(\Lambda_{s}\sim\epsilon_{s},\eta_{s}\) for the non-Maxwellian component of the CE distribution function. Thus, from (76) considered under the ordering \(k\rho_{s}\sim 1\), \(\boldsymbol{M}_{s}=O(\tilde{\omega}_{s\parallel})\) as \(\tilde{\omega}_{s\parallel}\to 0\), while \(\boldsymbol{P}_{s}=O(\eta_{s},\epsilon_{s})\). The expansion of \(\boldsymbol{M}_{s}\) and \(\boldsymbol{P}_{s}\) in \(\tilde{\omega}_{s\parallel}\) is, therefore,

\[\boldsymbol{M}_{s}\big{(}\tilde{\omega}_{s\parallel},\boldsymbol{k}\big{)} \equiv \tilde{\omega}_{s\parallel}\boldsymbol{M}_{s}^{(0)}(\boldsymbol{k})+\tilde{\omega}_{s\parallel}^{2}\boldsymbol{M}_{s}^{(1)}(\boldsymbol{k})+...\,, \tag{97a}\]
\[\boldsymbol{P}_{s}\big{(}\tilde{\omega}_{s\parallel},\boldsymbol{k}\big{)} \equiv \boldsymbol{P}_{s}^{(0)}(\boldsymbol{k})+\tilde{\omega}_{s\parallel}\boldsymbol{P}_{s}^{(1)}(\boldsymbol{k})+...\,, \tag{97b}\]

where the matrices \(\boldsymbol{M}_{s}^{(0)}\) and \(\boldsymbol{M}_{s}^{(1)}\) are \(O(1)\) functions of \(\boldsymbol{k}\) only, and \(\boldsymbol{P}_{s}^{(0)}\) and \(\boldsymbol{P}_{s}^{(1)}\) are \(O(\eta_{s},\epsilon_{s})\). We then expand \(\mathfrak{E}_{s}\) as follows:

\[\mathfrak{E}_{s}=\tilde{\omega}_{s\parallel}\mathfrak{E}_{s}^{(0)}+\tilde{\omega}_{s\parallel}^{2}\mathfrak{E}_{s}^{(1)}+...\,, \tag{98}\]

where

\[\mathfrak{E}_{s}^{(0)} \equiv \frac{\omega_{\mathrm{ps}}^{2}}{\omega^{2}}\left[\boldsymbol{M}_{s}^{(0)}(\boldsymbol{k})+\frac{1}{\tilde{\omega}_{s\parallel}}\boldsymbol{P}_{s}^{(0)}(\boldsymbol{k})\right]\,, \tag{99a}\]
\[\mathfrak{E}_{s}^{(1)} \equiv \frac{\omega_{\mathrm{ps}}^{2}}{\omega^{2}}\left[\boldsymbol{M}_{s}^{(1)}(\boldsymbol{k})+\frac{1}{\tilde{\omega}_{s\parallel}}\boldsymbol{P}_{s}^{(1)}(\boldsymbol{k})\right]\,. \tag{99b}\]

#### 2.5.4 Additional symmetries of low-frequency dielectric tensor \(\mathfrak{E}_{s}^{(0)}\)

The tensor \(\mathfrak{E}_{s}^{(0)}\) defined by (99a) has some rather convenient additional symmetries, which lead to significant simplification of the dispersion relation. In appendix F we show that, in combination with the general symmetries (86) (which apply to \(\mathfrak{E}_{s}^{(0)}\) as well as to \(\mathfrak{E}\)), for any distribution function of particle species \(s\) with a small anisotropy,

\[(\mathfrak{E}_{s}^{(0)})_{xz} = -\frac{k_{\perp}}{k_{\parallel}}(\mathfrak{E}_{s}^{(0)})_{xx}\,, \tag{100a}\]
\[(\mathfrak{E}_{s}^{(0)})_{yz} = \frac{k_{\perp}}{k_{\parallel}}(\mathfrak{E}_{s}^{(0)})_{xy}\,, \tag{100b}\]
\[(\mathfrak{E}_{s}^{(0)})_{zz} = \frac{k_{\perp}^{2}}{k_{\parallel}^{2}}(\mathfrak{E}_{s}^{(0)})_{xx}\,.
\tag{100c}\]

These symmetries have the consequence that

\[\hat{\boldsymbol{k}}\boldsymbol{\cdot}\mathfrak{E}_{s}^{(0)}=\mathfrak{E}_{s}^{(0)}\boldsymbol{\cdot}\hat{\boldsymbol{k}}=0\,. \tag{101}\]

As a result of this identity, it is convenient to calculate the components of \(\mathfrak{E}_{s}^{(0)}\) (and \(\mathfrak{E}_{s}\)) in the coordinate basis \(\{\boldsymbol{e}_{1},\boldsymbol{e}_{2},\boldsymbol{e}_{3}\}\) defined by

\[\boldsymbol{e}_{1}\equiv\hat{\boldsymbol{y}}\times\hat{\boldsymbol{k}}\,,\quad\boldsymbol{e}_{2}\equiv\hat{\boldsymbol{y}}\,,\quad\boldsymbol{e}_{3}\equiv\hat{\boldsymbol{k}}\,. \tag{102}\]

Carrying out this calculation (see appendix F), we find

\[(\mathfrak{E}^{(0)}_{s})_{11} =\frac{k^{2}}{k_{\parallel}^{2}}(\mathfrak{E}^{(0)}_{s})_{xx}\,, \tag{103a}\]
\[(\mathfrak{E}^{(0)}_{s})_{12} =-(\mathfrak{E}^{(0)}_{s})_{21}=\frac{k}{k_{\parallel}}(\mathfrak{E}^{(0)}_{s})_{xy}\,, \tag{103b}\]
\[(\mathfrak{E}^{(0)}_{s})_{22} =(\mathfrak{E}^{(0)}_{s})_{yy}\,, \tag{103c}\]
\[(\mathfrak{E}^{(0)}_{s})_{13} =(\mathfrak{E}^{(0)}_{s})_{31}=(\mathfrak{E}^{(0)}_{s})_{23}=(\mathfrak{E}^{(0)}_{s})_{32}=(\mathfrak{E}^{(0)}_{s})_{33}=0\,, \tag{103d}\]

where \((\mathfrak{E}^{(0)}_{s})_{ij}\) is the \((i,j)\)-th component of \(\mathfrak{E}^{(0)}_{s}\) in the basis \(\{\boldsymbol{e}_{1},\boldsymbol{e}_{2},\boldsymbol{e}_{3}\}\). We conclude that, if \(k\rho_{s}\sim 1\) and \(\tilde{\omega}_{s\parallel}\ll 1\), the components of \(\mathfrak{E}_{s}\) satisfy

\[(\mathfrak{E}_{s})_{13}\sim(\mathfrak{E}_{s})_{23}\sim(\mathfrak{E}_{s})_{33}\sim\tilde{\omega}_{s\parallel}(\mathfrak{E}_{s})_{11}\sim\tilde{\omega}_{s\parallel}(\mathfrak{E}_{s})_{12}\sim\tilde{\omega}_{s\parallel}(\mathfrak{E}_{s})_{22}\,. \tag{104}\]

These components can be written in terms of the components of \(\mathfrak{E}_{s}\) in the \(\{\hat{\boldsymbol{x}},\hat{\boldsymbol{y}},\hat{\boldsymbol{z}}\}\) coordinate frame [see (75)] via a coordinate transformation; the resulting expressions are rather bulky, so we do not reproduce them here - they are detailed in appendix G.

#### 2.5.5 Consequences for dispersion relation

On account of the additional symmetries described in the previous section, a simplified dispersion relation for low-frequency modes can be derived in place of the full hot-plasma dispersion relation (74). However, this derivation involves a subtlety - which depends on the frequency and characteristic wavelengths of the modes - arising from the large discrepancy between ion and electron masses. In, e.g., a two-species plasma with \(\mu_{e}=m_{e}/m_{i}\ll 1\) (and ion charge \(Z\)), we have

\[\frac{\tilde{\omega}_{e\parallel}}{\tilde{\omega}_{i\parallel}}=\sqrt{\mu_{e}\tau}\,, \tag{105}\]

where \(\tau=T_{i}/T_{e}\). If \(\tau\sim 1\) [as would be expected in a collisional plasma on macroscopic evolution time scales \(\tau_{L}\) greater than the ion-electron temperature equilibration time \(\tau_{ie}^{\rm eq}\) - cf. (54)], then \(\tilde{\omega}_{i\parallel}\sim\mu_{e}^{-1/2}\tilde{\omega}_{e\parallel}\gg\tilde{\omega}_{e\parallel}\). Thus, in general, \(\tilde{\omega}_{i\parallel}\not\sim\tilde{\omega}_{e\parallel}\), and any dispersion relation will in principle depend on an additional (small) dimensionless parameter \(\mu_{e}\).
This introduces various complications into the derivation of the simplified dispersion relation, the most significant of which is that, since \(\rho_{e}=Z\mu_{e}^{1/2}\tau^{-1/2}\rho_{i}\ll\rho_{i}\) (for \(Z\gtrsim 1\)), it is inconsistent to assume the ordering \(k\rho_{s}\sim 1\) for both ions and electrons (see section 2.5.6). To avoid the description of our approach being obscured by these complications, we consider a special case at first: we adopt the ordering \(k\rho_{e}\sim 1\) in a two-species plasma and assume that \(\tilde{\omega}_{i\parallel}\sim\mu_{e}^{-1/2}\tilde{\omega}_{e\parallel}\ll 1\). In this case, \(\tilde{\omega}_{i\parallel}\|\mathfrak{E}^{(0)}_{i}\|\sim\mu_{e}^{1/2}Z\tau^{-1/2}\tilde{\omega}_{e\parallel}\|\mathfrak{E}^{(0)}_{e}\|\ll\tilde{\omega}_{e\parallel}\|\mathfrak{E}^{(0)}_{e}\|\), and so the dielectric tensor \(\mathfrak{E}\) is given by

\[\mathfrak{E}=\tilde{\omega}_{e\parallel}\mathfrak{E}^{(0)}+\tilde{\omega}_{e\parallel}^{2}\mathfrak{E}^{(1)}+...\,, \tag{106}\]

where

\[\mathfrak{E}^{(0)} \equiv\mathfrak{E}^{(0)}_{e}+\frac{\tilde{\omega}_{i\parallel}}{\tilde{\omega}_{e\parallel}}\mathfrak{E}^{(0)}_{i}\approx\mathfrak{E}^{(0)}_{e}\,, \tag{107a}\]
\[\mathfrak{E}^{(1)} \equiv\mathfrak{E}^{(1)}_{e}+\frac{\tilde{\omega}_{i\parallel}^{2}}{\tilde{\omega}_{e\parallel}^{2}}\mathfrak{E}^{(1)}_{i}\,. \tag{107b}\]

Thus, to leading order in the \(\tilde{\omega}_{e\parallel}\ll 1\) expansion, only the electron species contributes to the dielectric tensor for electron-Larmor-scale modes. We revisit the derivation of simplified dispersion relations for CE microinstabilities more generally in section 2.5.6. To derive the simplified dispersion relation for electron-Larmor-scale modes, we start by considering the component of (72) for the electric field that is parallel to the wavevector \(\hat{\boldsymbol{k}}\),

\[\hat{\boldsymbol{k}}\boldsymbol{\cdot}\,\mathfrak{E}\boldsymbol{\cdot}\,\widehat{\delta\boldsymbol{E}}=0\,, \tag{108}\]

and then substitute the expanded form (106) of the dielectric tensor (with \(s=e\)). The orthogonality of \(\mathfrak{E}_{e}^{(0)}\) to \(\hat{\boldsymbol{k}}\) - viz., (101) - implies that (108) becomes

\[\hat{\boldsymbol{k}}\boldsymbol{\cdot}\,\mathfrak{E}^{(1)}\boldsymbol{\cdot}\,\widehat{\delta\boldsymbol{E}}=\mathfrak{E}_{33}^{(1)}\hat{\boldsymbol{k}}\boldsymbol{\cdot}\,\widehat{\delta\boldsymbol{E}}+\hat{\boldsymbol{k}}\boldsymbol{\cdot}\,\mathfrak{E}^{(1)}\boldsymbol{\cdot}\,\widehat{\delta\boldsymbol{E}}_{T}=\,O(\tilde{\omega}_{e\parallel}|\widehat{\delta\boldsymbol{E}}|)\,, \tag{109}\]

where the transverse electric field is defined by \(\widehat{\delta\boldsymbol{E}}_{T}\equiv\widehat{\delta\boldsymbol{E}}\boldsymbol{\cdot}\left(\boldsymbol{l}-\hat{\boldsymbol{k}}\hat{\boldsymbol{k}}\right)\). In appendix D.2, we show that for \(\tilde{\omega}_{e\parallel},\tilde{\omega}_{i\parallel}\ll 1\),

\[\mathfrak{E}_{33}^{(1)}\approx\frac{\omega_{\mathrm{pe}}^{2}}{\omega^{2}}\frac{2k_{\parallel}^{2}}{k^{2}}(1+Z\tau^{-1})\left[1+\,O(\eta_{e},\epsilon_{e})\right]\,. \tag{110}\]

Since this is strictly positive, we can rewrite (109) to give the electrostatic field in terms of the transverse electric field:

\[\hat{\boldsymbol{k}}\boldsymbol{\cdot}\,\widehat{\delta\boldsymbol{E}}=-\left(\mathfrak{E}_{33}^{(1)}\right)^{-1}\left(\hat{\boldsymbol{k}}\boldsymbol{\cdot}\,\mathfrak{E}^{(1)}\boldsymbol{\cdot}\,\widehat{\delta\boldsymbol{E}}_{T}\right)\,.
\tag{111}\]

We conclude that \(|\hat{\boldsymbol{k}}\boldsymbol{\cdot}\,\widehat{\delta\boldsymbol{E}}|\sim|\widehat{\delta\boldsymbol{E}}_{T}|\) for all low-frequency perturbations with \(k_{\parallel}\sim k\); a corollary of this result is that there can be no low-frequency purely electrostatic perturbations (see appendix D.4.1 for an alternative demonstration of this). We can now derive the dispersion relation from the other two components of (72),

\[\left[\frac{c^{2}k^{2}}{\omega^{2}}\left(\hat{\boldsymbol{k}}\hat{\boldsymbol{k}}-\boldsymbol{l}\right)+\left(\hat{\boldsymbol{k}}\hat{\boldsymbol{k}}-\boldsymbol{l}\right)\boldsymbol{\cdot}\,\mathfrak{E}\right]\boldsymbol{\cdot}\,\widehat{\delta\boldsymbol{E}}=0\,, \tag{112}\]

by (again) substituting the expanded dielectric tensor (106) into (112):

\[\left[\tilde{\omega}_{e\parallel}\,\mathfrak{E}^{(0)}+\frac{c^{2}k^{2}}{\omega^{2}}\left(\hat{\boldsymbol{k}}\hat{\boldsymbol{k}}-\boldsymbol{l}\right)\right]\boldsymbol{\cdot}\,\widehat{\delta\boldsymbol{E}}_{T}=-\left(\hat{\boldsymbol{k}}\hat{\boldsymbol{k}}-\boldsymbol{l}\right)\boldsymbol{\cdot}\left(\mathfrak{E}-\tilde{\omega}_{e\parallel}\,\mathfrak{E}^{(0)}\right)\boldsymbol{\cdot}\,\widehat{\delta\boldsymbol{E}}\,, \tag{113}\]

where we have used the identity

\[\mathfrak{E}^{(0)}=\left(\hat{\boldsymbol{k}}\hat{\boldsymbol{k}}-\boldsymbol{l}\right)\boldsymbol{\cdot}\,\mathfrak{E}^{(0)}\boldsymbol{\cdot}\left(\hat{\boldsymbol{k}}\hat{\boldsymbol{k}}-\boldsymbol{l}\right)\,, \tag{114}\]

and ordered \(k^{2}c^{2}/\omega^{2}\sim\tilde{\omega}_{e\parallel}\|\mathfrak{E}^{(0)}\|\). The ratio of the right-hand side of (113) to the left-hand side is \(\,O(\tilde{\omega}_{e\parallel})\); we thus conclude that, to leading order in the \(\tilde{\omega}_{e\parallel}\ll 1\) expansion,

\[\left[\tilde{\omega}_{e\parallel}\,\mathfrak{E}_{e}^{(0)}+\frac{c^{2}k^{2}}{\omega^{2}}\left(\hat{\boldsymbol{k}}\hat{\boldsymbol{k}}-\boldsymbol{l}\right)\right]\boldsymbol{\cdot}\,\widehat{\delta\boldsymbol{E}}_{T}=0\,, \tag{115}\]

and the dispersion relation is approximately

\[\left[\tilde{\omega}_{e\parallel}(\mathfrak{E}_{e}^{(0)})_{11}-\frac{k^{2}c^{2}}{\omega^{2}}\right]\left[\tilde{\omega}_{e\parallel}(\mathfrak{E}_{e}^{(0)})_{22}-\frac{k^{2}c^{2}}{\omega^{2}}\right]+\left[\tilde{\omega}_{e\parallel}(\mathfrak{E}_{e}^{(0)})_{12}\right]^{2}=0\,. \tag{116}\]

Finally, writing the dielectric tensor in terms of \(\boldsymbol{M}_{e}\) and \(\boldsymbol{P}_{e}\) as defined by (96), we find

\[\left[\tilde{\omega}_{e\parallel}(\boldsymbol{M}_{e}^{(0)})_{11}+(\boldsymbol{P}_{e}^{(0)})_{11}-k^{2}d_{e}^{2}\right]\left[\tilde{\omega}_{e\parallel}(\boldsymbol{M}_{e}^{(0)})_{22}+(\boldsymbol{P}_{e}^{(0)})_{22}-k^{2}d_{e}^{2}\right]\]
\[+\left[\tilde{\omega}_{e\parallel}(\boldsymbol{M}_{e}^{(0)})_{12}+(\boldsymbol{P}_{e}^{(0)})_{12}\right]^{2}=0\,, \tag{117}\]

where \(d_{e}=c/\omega_{\rm pe}\) is the electron inertial scale [see (56b)]. This can be rewritten as a quadratic equation in \(\omega\) - and thus, expressions for the complex frequency of any low-frequency perturbation can be found for any given positive wavenumber.
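To make this procedure concrete, the following minimal sketch (assuming numpy; the component values below are made up purely for illustration, since the actual \(\boldsymbol{M}_{e}^{(0)}\) and \(\boldsymbol{P}_{e}^{(0)}\) depend on the special functions introduced in the next subsection) solves the quadratic (117) for the two low-frequency branches:

```python
# Minimal sketch (not from the paper): solve the quadratic dispersion relation
# (117) for omega, given the (complex) components of M_e^(0) and P_e^(0) at a
# fixed wavevector. Writing w = omega/(k_par v_the), (117) expands to
#   a w^2 + b w + c = 0, with the coefficients below.
import numpy as np

def solve_dispersion_117(M11, M12, M22, P11, P12, P22, k_de, kpar_vthe):
    q11, q22 = P11 - k_de**2, P22 - k_de**2    # P-components minus k^2 d_e^2
    a = M11 * M22 + M12**2
    b = M11 * q22 + M22 * q11 + 2.0 * M12 * P12
    c = q11 * q22 + P12**2
    return np.roots([a, b, c]) * kpar_vthe     # omega for the two branches

# Made-up illustrative inputs; a growing mode has Im(omega) > 0.
omega = solve_dispersion_117(M11=1.2j, M12=0.3j, M22=0.9j,
                             P11=0.01, P12=0.005, P22=-0.02,
                             k_de=0.1, kpar_vthe=1.0)
print(omega)
```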
We note that the electron inertial scale is related to the electron Larmor radius by \(d_{e}=\rho_{e}\beta_{e}^{-1/2}\); therefore, our expansion scheme is only consistent with the low-frequency assumption (93) under our assumed ordering, \(\tilde{\omega}_{e\parallel}\sim\beta_{e}^{-1}\), when \(\beta_{e}\gg 1\). Note also that one only needs to know \(\mathfrak{E}_{e}^{(0)}\) in order to obtain the dispersion relation of low-frequency perturbations and the transverse component of the electric field, whereas to determine the electrostatic component of the electric field (and other quantities, such as the density perturbation - see appendix H), one must go to higher order in the \(\tilde{\omega}_{e\parallel}\ll 1\) expansion. Since we are primarily interested in microinstability growth rates and wavenumber scales, we will not explicitly calculate the electrostatic fields associated with perturbations using (111), and thus can avoid the rather laborious calculation of \(\mathfrak{E}^{(1)}\) for CE distribution functions. We do, however, in appendix G.1.3 derive an explicit expression for \(\mathfrak{E}^{(1)}\) for a plasma with Maxwellian distribution functions for all particle species; this in turn allows us to relate the electrostatic electric field to the transverse field for such a plasma (see appendix I). For the sake of completeness, we also observe that if the non-Maxwellian part of the CE distribution function is even with respect to \(v_{\parallel}\), the transformation rules (88) combined with (103) imply that a perturbation with a negative parallel wavenumber \(k_{\parallel}\) will obey exactly the same dispersion relation as a perturbation with a positive parallel wavenumber, viz., for \(k_{\parallel}>0\),

\[\boldsymbol{P}_{e}^{(0)}\bigl{(}-k_{\parallel},k_{\perp}\bigr{)}=\boldsymbol{P}_{e}^{(0)}\bigl{(}k_{\parallel},k_{\perp}\bigr{)}. \tag{118}\]

If instead the non-Maxwellian part is odd, then, for \(k_{\parallel}>0\),

\[\boldsymbol{P}_{e}^{(0)}\bigl{(}-k_{\parallel},k_{\perp}\bigr{)}=-\boldsymbol{P}_{e}^{(0)}\bigl{(}k_{\parallel},k_{\perp}\bigr{)}. \tag{119}\]

The dispersion relation for perturbations with \(k_{\parallel}<0\) can, therefore, be recovered by considering perturbations with \(k_{\parallel}>0\), but under the substitution \(\boldsymbol{P}_{e}^{(0)}\to-\boldsymbol{P}_{e}^{(0)}\). Thus, we can characterise all unstable perturbations under the assumption that \(k_{\parallel}>0\). In all subsequent calculations, we require the Maxwellian part \(\boldsymbol{M}_{e}^{(0)}\) of the dielectric tensor.
The elements of the matrix \(\boldsymbol{M}_{s}^{(0)}\) of species \(s\) are as follows:

\[(\boldsymbol{M}_{s}^{(0)})_{11} = {\rm i}\frac{k^{2}}{k_{\parallel}^{2}}F\bigl{(}k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\bigr{)}\,, \tag{120a}\]
\[(\boldsymbol{M}_{s}^{(0)})_{12} = -{\rm i}\frac{k}{k_{\parallel}}G\bigl{(}k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\bigr{)}\,, \tag{120b}\]
\[(\boldsymbol{M}_{s}^{(0)})_{21} = {\rm i}\frac{k}{k_{\parallel}}G\bigl{(}k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\bigr{)}\,, \tag{120c}\]
\[(\boldsymbol{M}_{s}^{(0)})_{22} = {\rm i}H\bigl{(}k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\bigr{)}\,, \tag{120d}\]

where the functions \(F(x,y)\), \(G(x,y)\) and \(H(x,y)\) are

\[F(x,y) \equiv \frac{4\sqrt{\pi}}{y^{2}}\exp\left(-\frac{y^{2}}{2}\right)\sum_{m=-\infty}^{\infty}m^{2}I_{m}\biggl{(}\frac{y^{2}}{2}\biggr{)}\exp\left(-\frac{m^{2}}{x^{2}}\right), \tag{121a}\]
\[G(x,y) \equiv \exp\left(-\frac{y^{2}}{2}\right)\sum_{m=-\infty}^{\infty}m\,{\rm Re}\ Z\Bigl{(}\frac{m}{x}\Bigr{)}\left[I_{m}^{\prime}\biggl{(}\frac{y^{2}}{2}\biggr{)}-I_{m}\biggl{(}\frac{y^{2}}{2}\biggr{)}\right]\,, \tag{121b}\]
\[H(x,y) \equiv F(x,y)+\sqrt{\pi}y^{2}\exp\left(-\frac{y^{2}}{2}\right)\sum_{m=-\infty}^{\infty}\left[I_{m}\bigg{(}\frac{y^{2}}{2}\bigg{)}-I_{m}^{\prime}\bigg{(}\frac{y^{2}}{2}\bigg{)}\right]\exp\left(-\frac{m^{2}}{x^{2}}\right), \tag{121c}\]

\(I_{m}(\alpha)\) is the \(m\)-th modified Bessel function of the first kind, and

\[Z(z)=\frac{1}{\sqrt{\pi}}\int_{C_{L}}\frac{\mathrm{d}u\exp\left(-u^{2}\right)}{u-z} \tag{122}\]

is the plasma dispersion function (\(C_{L}\) is the Landau contour) (Fried & Conte 1961). The derivation of these results from the full dielectric tensor (which is calculated in appendix G.1.1) for a plasma whose constituent particles all have Maxwellian distributions is presented in appendices G.1.2 (expansion in the \(\{\hat{\boldsymbol{x}},\hat{\boldsymbol{y}},\hat{\boldsymbol{z}}\}\) basis) and G.1.3 (expansion in the \(\{\boldsymbol{e}_{1},\boldsymbol{e}_{2},\boldsymbol{e}_{3}\}\) basis).

#### 2.5.6 Effect of multiple species on dispersion-relation derivations

We now relax the assumptions adopted in section 2.5.5 that the low-frequency modes of interest are on electron Larmor scales, and discuss how we derive simplified dispersion relations for (low-frequency) CE microinstabilities more generally. First, it is unnecessarily restrictive to assume that, for all CE microinstabilities, \(\tilde{\omega}_{s\parallel}\ll 1\) for all particle species. There are some instabilities for which \(\tilde{\omega}_{e\parallel}\sim\eta_{e}\sim\epsilon_{e}\ll 1\) while \(\tilde{\omega}_{i\parallel}\gtrsim 1\). Recalling the orderings \(\tilde{\omega}_{e\parallel}\sim\beta_{e}^{-1}\) and \(k\rho_{e}\sim 1\) that were adopted for the electron-Larmor-scale instabilities described in section 2.5.5, it follows that \(\tilde{\omega}_{i\parallel}\gtrsim 1\) whenever \(\beta_{e}\lesssim\tau^{-1/2}\mu_{e}^{-1/2}\); in other words, electron-Larmor-scale CE microinstabilities in plasmas with \(\beta_{e}\) that is not too large will satisfy \(\tilde{\omega}_{i\parallel}\gtrsim 1\). Therefore, we cannot naively apply our low-frequency approximation to both \(\mathfrak{E}_{e}\) and \(\mathfrak{E}_{i}\) in all cases of interest. We will remain cognisant of this in the calculations that follow - a concrete example of \(\tilde{\omega}_{i\parallel}\gtrsim 1\) will be considered in section 3.3.1.
Secondly, because of the large separation between electron and ion Larmor scales, it is necessary to consider whether the approximation \(\boldsymbol{M}_{s}\big{(}\tilde{\omega}_{s\parallel},\boldsymbol{k}\big{)}\approx\tilde{\omega}_{s\parallel}\boldsymbol{M}_{s}^{(0)}(\boldsymbol{k})\) remains valid for parallel or perpendicular wavenumbers much larger or smaller than the inverse Larmor radii of each species. We show in appendix G.1.6 that the leading-order term in the \(\tilde{\omega}_{s\parallel}\ll 1\) expansion remains larger than higher-order terms for all \(k_{\parallel}\rho_{s}\gtrsim 1\) (as, indeed, was implicitly assumed in section 2.5.5). However, for \(k_{\parallel}\rho_{s}\) sufficiently small, the same statement does not hold for all components of \(\boldsymbol{M}_{s}\). More specifically, it is shown in the same appendix that the dominant contribution to \(\boldsymbol{M}_{s}\big{(}\tilde{\omega}_{s\parallel},\boldsymbol{k}\big{)}\) when \(k_{\parallel}\rho_{s}\ll 1\) instead comes from the quadratic term \(\tilde{\omega}_{s\parallel}^{2}\boldsymbol{M}_{s}^{(1)}(\boldsymbol{k})\) (rather than any higher-order term). Thus, in general, our simplified dispersion relation for low-frequency modes in a two-species plasma has the form of a quartic in \(\omega\), rather than a quadratic, if \(k_{\parallel}\rho_{s}\ll 1\) for at least the electron species. Physically, the reason why a quadratic dispersion relation is no longer a reasonable approximation is the existence of more than two low-frequency modes in a two-species Maxwellian plasma in certain wavenumber regimes. For example, for quasi-parallel modes with characteristic parallel wavenumbers satisfying \(k_{\parallel}\rho_{i}\ll 1\), there are four low-frequency modes (see section 4.4.1). Nevertheless, in other situations, the components of \(\boldsymbol{M}_{s}\) for which the \(\boldsymbol{M}_{s}\big{(}\tilde{\omega}_{s\parallel},\boldsymbol{k}\big{)}\approx\tilde{\omega}_{s\parallel}\boldsymbol{M}_{s}^{(0)}(\boldsymbol{k})\) approximation breaks down are not important, on account of their small size compared with terms in the dispersion relation associated with other Maxwellian components. In this case, the original quadratic dispersion relation is sufficient. An explicit wavenumber regime in which this is realised is \(k_{\parallel}\rho_{e}\sim k_{\perp}\rho_{e}\ll 1\) but \(k\rho_{i}\gg 1\) - see sections 4.3.4 and 4.4.7. Taking these multiple-species effects into account, the reasons behind the decision made in section 2.3.4 to consider the CES microinstabilities separately from the CET microinstabilities become clear. First, the characteristic sizes of the CE electron-temperature-gradient and ion-temperature-gradient terms are comparable (\(\eta_{i}\sim\eta_{e}\)), while the CE ion-shear term is much larger than the CE electron-shear term: \(\epsilon_{i}\sim\mu_{e}^{-1/2}\epsilon_{e}\). This has the consequence that the natural orderings of \(\tilde{\omega}_{e\parallel}\) and \(\tilde{\omega}_{i\parallel}\) with respect to other parameters are different for CES and CET microinstabilities. Secondly, the fact that the velocity-space anisotropy associated with the CE temperature-gradient terms differs from that of the CE shear terms - which excite microinstabilities with different characteristic wavevectors - means that the forms of the dispersion relations of CET and CES microinstabilities are distinct.
More specifically, the dispersion relation for CET microinstabilities at both electron and ion scales can always be simplified to a quadratic equation in \(\omega\); in contrast, for CES microinstabilities, the dispersion relation cannot in general be reduced to anything simpler than a quartic.

#### 2.5.7 Modelling collisional effects on CE microinstabilities

As proposed thus far, our method for characterising microinstabilities in a CE plasma does not explicitly include the effect of collisions on the microinstabilities themselves. In principle, this can be worked out by introducing a collision operator into the linearised Maxwell-Vlasov-Landau equation from which the hot-plasma dispersion relation (74) is derived. Indeed, if a Krook collision operator is assumed (as was done in section 2.4.2 when determining the precise form of the CE distribution functions of ions and electrons), the resulting modification of the hot-plasma dispersion relation is quite simple: the conductivity tensor (76) remains the same, but with the substitution

\[\tilde{\omega}_{s\parallel}\to\hat{\omega}_{s\parallel}\equiv\tilde{\omega}_{s\parallel}+\frac{\mathrm{i}}{k_{\parallel}\lambda_{s}}\,, \tag{123}\]

in the resonant denominators (see appendix C). As for how this affects the simplifications to the dispersion relation outlined in section 2.5.3, the expansion parameter in the dielectric tensor's expansion (98) is altered, becoming \(\hat{\omega}_{s\parallel}\ll 1\) (as opposed to \(\tilde{\omega}_{s\parallel}\ll 1\)); in other words, \(\|\mathfrak{E}_{s}^{(1)}\|/\|\mathfrak{E}_{s}^{(0)}\|\sim\hat{\omega}_{s\parallel}\). The latter result leads to a seemingly counterintuitive conclusion: collisions typically fail to stabilise low-frequency instabilities in CE plasma if \(\omega\tau_{s}\lesssim 1\) (where \(\tau_{s}\) is the collision time of species \(s\)) but \(k_{\parallel}v_{\mathrm{th}s}\tau_{s}=k_{\parallel}\lambda_{s}\gg 1\). This is because the simplified dispersion relation (117) only involves leading-order terms in the expanded dielectric tensor. These terms are independent of \(\hat{\omega}_{s\parallel}\), and thus the growth rate of any microinstability that is adequately described by (117) does not depend on the size of \(\omega\tau_{s}\). For these microinstabilities, the effect of collisions only becomes relevant if

\[k_{\parallel}\lambda_{s}\lesssim 1\,. \tag{124}\]

This is inconsistent with the assumptions \(k\lambda_{e}\gg 1\), \(k\lambda_{i}\gg 1\) made when setting up our calculation in section 2.4.1. Thus, the only regime where collisions can reasonably be included in our calculation is one where they are typically not important. An exception to this rule arises when two-species plasma effects mean that the first-order terms in the \(\hat{\omega}_{s\parallel}\ll 1\) expansion are needed for a correct characterisation of the growth rate of certain microinstabilities (see section 2.5.6); for these instabilities, we include the effect of collisions using (123). Although our calculation is not formally valid when (124) holds, so we cannot show explicitly that growth ceases, this condition nonetheless represents a sensible criterion for suppression of microinstabilities by collisional damping. Physically, it signifies that collisions are strong enough to scatter a particle before it has streamed across a typical wavelength of the fluctuations excited by a microinstability.
This collisional scattering prevents particles from being resonant, which in turn suppresses the growth of many different microinstabilities. However, we acknowledge that there exist microinstabilities that do not involve resonant-particle populations (e.g., the firehose instability - see sections 2.3.3 and 4.4.1), and thus it cannot be rigorously concluded from our work that all microinstabilities are suppressed when (124) applies. Yet even without an actual proof of collisional stabilisation, there is another reason why (124) is a reasonable threshold for microinstabilities: the characteristic growth time of microinstabilities at wavenumbers satisfying (124) is comparable to the evolution time \(\tau_{L}\) of macroscopic motions in the plasma. To illustrate this idea, we consider the ordering (93) relating the complex frequency of microinstabilities to the small parameter \(\epsilon_{s}\) for CES (CE shear-driven) microinstabilities, and use it to estimate

\[\omega\tau_{L}\sim\epsilon_{s}k_{\parallel}v_{\rm ths}\tau_{L}\lesssim\epsilon_{s}\frac{L_{V}}{\lambda_{s}}\frac{v_{\rm ths}}{V}, \tag{125}\]

where \(V\sim L_{V}/\tau_{L}\) is the characteristic ion bulk-flow velocity. Considering orderings (55), it follows that \(\epsilon_{e}\sim\mu_{e}^{1/2}\epsilon_{i}\), and so

\[\epsilon_{i}\frac{v_{\rm thi}}{V}\sim\epsilon_{e}\frac{v_{\rm the}}{V}\sim\frac{\lambda_{e}}{L_{V}}\sim\frac{\lambda_{i}}{L_{V}}\,. \tag{126}\]

Then (125) becomes

\[\omega\tau_{L}\lesssim 1, \tag{127}\]

implying (as claimed) that the CES microinstability growth rate is smaller than the fluid turnover rate \(\tau_{L}^{-1}\); in other words, the underlying quasi-equilibrium state evolves before such microinstabilities can grow appreciably. Similar arguments can be applied to CET (CE temperature-gradient-driven) microinstabilities. Thus, (124) represents a lower bound on the characteristic wavenumbers at which microinstabilities can operate. We shall therefore assume throughout the rest of this paper that microinstabilities are suppressed (or rendered irrelevant) if they satisfy (124).

#### 2.5.8 Caveats: microinstabilities in CE plasma where \(\omega/k_{\parallel}v_{\rm ths}\not\sim\eta_{s},\epsilon_{s}\)

As mentioned in section 2.4.2, there are a number of important caveats to the claim that the ordering (93) must be satisfied by microinstabilities in a CE plasma. The first of these is that our comparison of the non-Maxwellian with the Maxwellian terms in expression (92) for \(\Xi_{s}\) is in essence a pointwise comparison at characteristic values of \(\tilde{v}_{s}\) for which \(\Xi_{s}\) attains its largest typical magnitude. However, \(\Xi_{s}\) affects the components of the conductivity tensor via the velocity integral of its product with a complicated function of frequency and wavenumber [see (76)]. Thus, it does not necessarily follow that the ratio of the integrated responses of the Maxwellian and non-Maxwellian contributions to the conductivity tensor is the same as the pointwise ratio of the respective contributions to \(\Xi_{s}\). In some circumstances, this can result in the Maxwellian part being smaller than anticipated, leading to faster microinstabilities. An example of this phenomenon was given in section 2.5.6: for \(k_{\parallel}\rho_{s}\ll 1\), the characteristic magnitude of the Maxwellian contribution to some components of the dielectric tensor is \(O(\tilde{\omega}_{s\parallel}^{2})\), as compared with the naive estimate \(O(\tilde{\omega}_{s\parallel})\).
This leads to certain CES microinstabilities (for example, the CE ion-shear-driven firehose instability - section 4.4.1) satisfying a modified low-frequency condition

\[\tilde{\omega}_{s\parallel}\sim\epsilon_{s}^{1/2}\ll 1. \tag{128}\]

A similar phenomenon affects the limit \(k_{\parallel}\to 0\) for fixed \(k_{\perp}\), in which case it can be shown that the Maxwellian contribution to \(\sigma_{zz}\) is \(O(k_{\parallel}/k_{\perp})\); this leads to a CES microinstability (the CE electron-shear-driven ordinary-mode instability - see section 4.4.11) satisfying a modified ordering

\[\frac{\omega}{k_{\perp}v_{\mathrm{th}s}}\sim\epsilon_{s}\ll 1. \tag{129}\]

The second caveat is that for some plasma modes, the particles predominantly responsible for collisionless damping or growth are suprathermal, i.e., \(\tilde{v}_{s}\gg 1\). Then the previous comparison of terms in (92) is not applicable. Modes of this sort are the quasi-cold plasma modes discussed in section 2.3.4 and appendix D. They can be unstable, but always with a growth rate that is exponentially small in \(\eta_{s}\) and \(\epsilon_{s}\). In spite of these two caveats, we proceed by considering the full hot-plasma dispersion relation (74) in the low-frequency limit \(\omega\ll k_{\parallel}v_{\mathrm{th}s}\). This approach enables the treatment of all microinstabilities satisfying the condition

\[\tilde{\omega}_{s\parallel}\sim\eta_{s}^{\iota_{\eta}},\epsilon_{s}^{\iota_{\epsilon}}\ll 1, \tag{130}\]

where \(\iota_{\eta}\) and \(\iota_{\epsilon}\) are any fractional powers. Similarly to the discussion in section 2.3.4, we claim that the microinstabilities satisfying the low-frequency condition (130) are likely to be the most rapid of all possible microinstabilities in CE plasma. A formal justification of this claim relies on the argument - presented in appendix E - that for all plasma modes satisfying \(\omega\gtrsim k_{\parallel}v_{\mathrm{th}s}\) and \(|\mathrm{Re}\;\omega|\gg|\mathrm{Im}\;\omega|\), the growth rate is exponentially small in \(\eta_{s}\) and \(\epsilon_{s}\). By definition, this class of modes includes the quasi-cold modes. In a plasma where \(\epsilon_{s},\eta_{s}\ll 1\), the growth rates of such microinstabilities will be exponentially small, and thus of little significance. The only situation that we are aware of in which the low-frequency condition (130) is not appropriate is the aforementioned CES ordinary-mode instability; a separate treatment of it involving the full hot-plasma dispersion relation is provided in appendix K.3.13.

## 3 CET (Chapman-Enskog, temperature-gradient-driven) microinstabilities

### Form of CE distribution function

We consider first the non-Maxwellian terms of the CE distribution function arising from temperature gradients and electron-ion drifts.
Neglecting bulk-flow gradients [viz., setting \(\epsilon_{s}=0\) for both species - see (11e,f)], the CE distribution functions (71) for the electrons and ions become

\[f_{e0}(\tilde{v}_{e\parallel},\tilde{v}_{e\perp}) = \frac{n_{e0}}{v_{\mathrm{the}}^{3}\pi^{3/2}}\exp\left(-\tilde{v}_{e}^{2}\right)\biggl{\{}1-\tilde{v}_{e\parallel}\left[\eta_{e}^{T}\left(\tilde{v}_{e}^{2}-\frac{5}{2}\right)+\eta_{e}^{R}\right]\biggr{\}}, \tag{131a}\]
\[f_{i0}(\tilde{v}_{i\parallel},\tilde{v}_{i\perp}) = \frac{n_{i0}}{v_{\mathrm{th}i}^{3}\pi^{3/2}}\exp\left(-\tilde{v}_{i}^{2}\right)\biggl{\{}1-\eta_{i}\tilde{v}_{i\parallel}\left(\tilde{v}_{i}^{2}-\frac{5}{2}\right)\biggr{\}}, \tag{131b}\]

where we have written out explicitly the electron-temperature-gradient [\(\eta_{e}^{T}\), \(\eta_{i}\) - see (11a,d)] and electron-friction [\(\eta_{e}^{R}\) - see (11b)] terms under the assumption that the Maxwell-Vlasov-Landau system from which these CE distribution functions were derived is governed by a Krook collision operator. We remind the reader that the electron-ion-drift term [\(\eta_{e}^{u}\) - see (11c)] disappears for this choice of collision operator. We also observe that the non-Maxwellian parts of the distribution functions (131) have odd parity; thus, any unstable mode with \(k_{\parallel}>0\) has a corresponding unstable mode with \(k_{\parallel}<0\) and the signs of \(\eta_{e}^{T}\), \(\eta_{e}^{R}\), and \(\eta_{i}\) reversed (see section 2.5.5, last paragraph). The precise methodology that we employ to calculate the growth rates of CET microinstabilities is described in appendix J; here, we focus on the results of those calculations. In section 3.2, we will present an overview of the CET stability landscape, while the microinstabilities referred to there will be treated analytically in section 3.3.

### Stability

We determine the stability (or otherwise) of the CE distribution functions of the form (131a) and (131b) for different values of \(\eta_{e}^{T}\), \(\eta_{e}^{R}\), and \(\eta_{i}\), the electron inertial scale \(d_{e}\), the electron-temperature scale length \(L_{T}=|\nabla_{\parallel}\log T_{e}|^{-1}\), and for fixed electron and ion plasma betas (\(\beta_{e}\) and \(\beta_{i}\), respectively). Stability calculations are carried out for particular combinations of values of \(\eta_{e}^{T}\), \(\eta_{e}^{R}\), \(\eta_{i}\), \(d_{e}\), \(L_{T}\), \(\beta_{e}\) and \(\beta_{i}\) by solving for the maximum microinstability growth rate across all wavevectors (see appendix J for an explanation of how this is done), and determining whether this growth rate is positive for the microinstabilities whose wavelength is smaller than the Coulomb mean free paths (a condition necessary for our calculation to be valid). The results of one such stability calculation - for a temperature-equilibrated hydrogen plasma (\(\eta_{e}^{T}=\eta_{i}\), \(\beta_{i}=\beta_{e}\)) - are presented in figure 2. In spite of the five-dimensional (\(\eta_{e}^{T},\eta_{e}^{R},d_{e},L_{T},\beta_{e}\)) parameter space that seemingly needs to be explored, we can, in fact, convey the most salient information concerning the stability of the CE distribution functions (131) using plots over a two-dimensional (\(d_{e}/L_{T},\lambda_{e}/L_{T}\)) parameter space at a fixed \(\beta_{e}\) [where we remind the reader that \(\lambda_{e}/L_{T}=|\eta_{e}^{T}|\) - see (11a)]. This reduction in the dimensionality of the parameter space is possible for two reasons.
First, it transpires that the CE electron-friction term of the form given in (131a) does not drive any microinstabilities, but merely modifies the real frequency of perturbations with respect to their Maxwellian frequencies (this is proven in appendix J.1). Thus, we can set \(\eta_{e}^{R}=0\) without qualitatively altering the stability properties of the CE distribution functions (131). Secondly, none of the salient stability thresholds applying to CET microinstabilities depends on \(d_{e}\) and \(L_{T}\) separately: one is a function of \(d_{e}/L_{T}\), while another is independent of both quantities. Figure 2a shows the regions of instability and stability of the CE distribution function (131) over the (\(d_{e}/L_{T},\lambda_{e}/L_{T}\)) parameter space. The unstable region is bracketed by two thresholds. For \(d_{e}/L_{T}\) below a critical value \((d_{e}/L_{T})_{\rm c0}\), stability is independent of \(d_{e}/L_{T}\), and only depends on the relative magnitude of \(\lambda_{e}/L_{T}\) and \(\beta_{e}\): CET microinstabilities are quenched if \(\lambda_{e}\beta_{e}/L_{T}\ll 1\). For \(d_{e}/L_{T}\gtrsim(d_{e}/L_{T})_{\rm c0}\), and \(\lambda_{e}\beta_{e}/L_{T}\gtrsim 1\), stability is attained at fixed \(\lambda_{e}/L_{T}\) for \(d_{e}/L_{T}>(d_{e}/L_{T})_{\rm c}\), where \((d_{e}/L_{T})_{\rm c}\) increases monotonically with \(\lambda_{e}/L_{T}\). If \(\lambda_{e}\beta_{e}/L_{T}\gtrsim 1\) and \(d_{e}/L_{T}\lesssim(d_{e}/L_{T})_{\rm c}\), then the CE distribution function (131) is unstable. The fastest-growing CET microinstability is the _whistler (heat-flux) instability_: whistler waves driven unstable by the small anisotropy of the CE electron-temperature-gradient term (see section 3.3.1). That this instability with wavevector parallel to the magnetic field is indeed the dominant microinstability is most easily ascertained by comparing simple analytic expressions for its peak growth rate and wavevector to the equivalent quantities recorded when performing the general stability calculation (see figures 2b, 2c and 2d). The maximum microinstability growth rate matches the analytic result (139c) for the CET whistler instability in the limit \(\lambda_{e}\beta_{e}/L_{T}\gg 1\), while the parallel wavenumber \((|k_{\parallel}|\rho_{e})_{\rm peak}\) of the fastest-growing mode is extremely well described by (139d). In addition, figure 2d demonstrates that the parallel instability is indeed the fastest. The CET whistler instability has been considered previously by a number of authors (see references in section 3.3.1); we note that these prior studies suggest that, nonlinearly, oblique CET whistler modes may be the more important ones, even though linearly the parallel modes are the fastest growing (see section 3.3.2). The two thresholds demarcating the unstable region can then be associated with stabilisation conditions of the CET whistler instability, each with a simple physical interpretation. The first condition is the \(\beta\)-stabilisation condition of the whistler instability.
It is shown in section 3.3.1 that when \(\lambda_{e}\beta_{e}/L_{T}\ll 1\), cyclotron damping of whistler modes is sufficiently strong that only quasi-parallel modes with parallel wavenumbers \(k_{\parallel}\rho_{e}\lesssim(\lambda_{e}\beta_{e}/L_{T})^{1/3}\ll 1\) can be destabilised by the anisotropy of the CE distribution function, and that the peak growth rate \(\gamma_{\rm whistler,T}\) of these unstable modes is exponentially small in \(\lambda_{e}\beta_{e}/L_{T}\) compared to the electron Larmor frequency [see (138)]: \(\gamma_{\rm whistler,T}/\Omega_{e}\sim\lambda_{e}\exp{[-(\lambda_{e}\beta_{e}/2L_{T})^{-2/3}]}/L_{T}\). This means that if \(\lambda_{e}\beta_{e}/L_{T}\) is reduced below unity, the growth rate of the CET whistler instability decreases dramatically, and thus the instability is unable to operate effectively on timescales shorter than those over which the CE plasma is evolving macroscopically. The second condition is collisional stabilisation of the CET whistler instability. Naively, it might be expected that two conditions must be satisfied in order for the microinstability to operate: that its growth rate must satisfy \(\gamma_{\rm whistler,T}\tau_{e}\gg 1\), and its characteristic wavenumber \(k\lambda_{e}\gg 1\) [see (124)]. Noting that for the CET whistler instability [cf. (139c,d)],

\[\frac{\gamma_{\rm whistler,T}\tau_{e}}{k\lambda_{e}}=\frac{\gamma_{\rm whistler,T}}{kv_{\rm the}}\sim\frac{\lambda_{e}}{L_{T}}\left(\frac{\lambda_{e}\beta_{e}}{L_{T}}\right)^{-1/5}\ll 1\,, \tag{132}\]

it follows that the former condition is more restrictive. Written as a condition on \(d_{e}/L_{T}\) in terms of \(\lambda_{e}/L_{T}\) [and using \(\gamma_{\rm whistler,T}\sim\lambda_{e}\Omega_{e}/L_{T}\) - see (139c)], \(\gamma_{\rm whistler,T}\tau_{e}\gg 1\) becomes

\[\frac{d_{e}}{L_{T}}\ll\beta_{e}^{-5/2}\left(\frac{\lambda_{e}\beta_{e}}{L_{T}}\right)^{2}\,, \tag{133}\]

while the condition \(k\lambda_{e}\gg 1\) on the instability wavenumber \(k_{\parallel}\rho_{e}\sim(\lambda_{e}\beta_{e}/L_{T})^{1/5}\) [see (139d)] leads to

\[\frac{d_{e}}{L_{T}}\ll\left(\frac{d_{e}}{L_{T}}\right)_{\rm c}\equiv\beta_{e}^{-3/2}\left(\frac{\lambda_{e}\beta_{e}}{L_{T}}\right)^{6/5}\,. \tag{134}\]

It is the latter that agrees well with the true result, as shown in figure 2a, implying that \((d_{e}/L_{T})_{\rm c0}=\beta_{e}^{-3/2}\). The (arguably surprising) result that the CET whistler instability can operate even if \(\gamma_{\rm whistler,T}\tau_{e}\lesssim 1\) is, in fact, a generic feature of low-frequency (viz., \(\omega\ll kv_{\rm the}\)) plasma instabilities (see section 2.5.7). The physical instability mechanism underlying such modes can be sustained provided the time taken for thermal particles (in this case, electrons) to cross the mode's wavelength is much shorter than the collision time, irrespective of the mode's own frequency - in other words, \(\tau_{e}kv_{\rm the}=k\lambda_{e}\gg 1\). We point out that the collisional-stabilisation condition of the CET whistler instability can _never_ be satisfied in a strongly magnetised plasma if \(\lambda_{e}\beta_{e}/L_{T}\gtrsim 1\): this is because its wavenumber \(k\) satisfies \(k^{-1}\lesssim\rho_{e}\ll\lambda_{e}\). Whilst it is the fastest-growing one (assuming \(\eta_{e}^{T}\sim\eta_{i}\)), the CET whistler instability is not the only CET microinstability of interest.
There are two other instabilities driven by the CE ion-temperature-gradient term, neither of which has previously been identified, to our knowledge: the _slow (hydromagnetic) wave instability_ (see section 3.3.3), and the _long-wavelength kinetic-Alfven wave instability_ (see section 3.3.4). The former, whose characteristic wavenumber scale satisfies \(k\rho_{i}\sim 1\), has the larger characteristic growth rate of the two, \(\gamma_{\rm SW}\sim\lambda_{i}\Omega_{i}/L_{T_{i}}\) (where \(L_{T_{i}}=|\nabla_{\parallel}\log T_{i}|^{-1}\) is the scale length of the ion temperature gradient). Similarly to the CET whistler instability, the CET slow-wave instability has \(\beta\)-stabilisation and collisional-stabilisation conditions \(\lambda_{i}\beta_{i}/L_{T_{i}}\ll 1\) and \(\lambda_{i}\lesssim\rho_{i}\), respectively. Thus, unless \(\lambda_{i}\beta_{i}/L_{T_{i}}>\lambda_{e}\beta_{e}/L_{T_{e}}\) (a condition equivalent to \(\tau^{3}L_{T_{e}}/L_{T_{i}}>Z^{3}\), where \(\tau=T_{i}/T_{e}\)), the CET slow-wave instability only operates when the CET whistler instability does, but on larger, ion rather than electron, scales. Nevertheless, the CET slow-wave instability is worth noting because, on account of being an ion instability, it should continue to operate even if the electron-scale CET whistler instability modifies the underlying electron distribution function. The slow-wave instability will then be responsible for modifying the ion distribution function. We are not aware of any previous work on the CET slow-wave instability, or, therefore, on its effect on ion heat conduction. Readers who are interested in knowing more about the properties and growth rates of CET microinstabilities are encouraged to continue to section 3.3; those who are focused on the wider question of the kinetic stability of the CE distribution function should jump ahead to section 4.

### CET microinstability classification

#### 3.3.1 Parallel whistler (heat-flux) instability

The CET whistler instability, which has been studied previously by a number of authors (Levinson & Eichler, 1992; Pistinner & Eichler, 1998; Gary & Li, 2000; Roberg-Clark _et al._, 2016; Komarov _et al._, 2018; Roberg-Clark _et al._, 2018_a_,_b_; Shaaban _et al._, 2019; Kuzichev _et al._, 2019; Drake _et al._, 2021), is driven by parallel electron heat fluxes. These heat fluxes introduce an asymmetry into the CE electron distribution function (i.e., the electron-temperature-gradient term) which, if it is sufficiently large, can overcome electron cyclotron damping of (electromagnetic) whistler waves and render them unstable. The instability is mediated by gyroresonant wave-particle interactions that allow whistlers to drain free energy from electrons with parallel velocities \(v_{\parallel}=\pm\Omega_{e}/k_{\parallel}\). For a positive, parallel electron heat flux, which is driven by an anti-parallel temperature gradient (\(\nabla_{\parallel}T_{e}<0\), so \(\eta_{e}^{T}<0\)), it is only whistlers with a positive parallel wavenumber that are unstable. Whistler waves with both parallel and oblique wavevectors with respect to the magnetic field can be destabilised, although the parallel modes are the fastest-growing ones. The CET whistler instability is most simply characterised analytically for parallel wavenumbers (i.e., \(k=k_{\parallel}\)).
Then, it can be shown [see appendix J.3.1, and also Levinson & Eichler (1992) and Roberg-Clark _et al._ (2016)] that the real frequency \(\varpi\) and growth rate \(\gamma\) at arbitrary \(k_{\parallel}>0\) are given by
\[\frac{\varpi}{\Omega_{e}} =\eta_{e}^{T}\left(\frac{k_{\parallel}\rho_{e}}{4}-\frac{1}{2k_{\parallel}\rho_{e}}\right)-\frac{\left(\eta_{e}^{T}/2+k_{\parallel}^{3}\rho_{e}^{3}/\beta_{e}\right)\mathrm{Re}\;Z\big{(}1/k_{\parallel}\rho_{e}\big{)}}{\big{[}\mathrm{Re}\;Z\big{(}1/k_{\parallel}\rho_{e}\big{)}\big{]}^{2}+\pi\mathrm{exp}\left(-2/k_{\parallel}^{2}\rho_{e}^{2}\right)}\,, \tag{11a}\]
\[\frac{\gamma}{\Omega_{e}} =-\frac{\sqrt{\pi}\left(\eta_{e}^{T}/2+k_{\parallel}^{3}\rho_{e}^{3}/\beta_{e}\right)}{\big{[}\mathrm{Re}\;Z\big{(}1/k_{\parallel}\rho_{e}\big{)}\big{]}^{2}\exp\left(1/k_{\parallel}^{2}\rho_{e}^{2}\right)+\pi\mathrm{exp}\left(-1/k_{\parallel}^{2}\rho_{e}^{2}\right)}\,. \tag{11b}\]
For \(\eta_{e}^{T}>0\), \(\gamma<0\), but if \(\eta_{e}^{T}<0\), then \(\gamma\) is non-negative for \(k_{\parallel}\rho_{e}\leq\big{(}|\eta_{e}^{T}|\beta_{e}/2\big{)}^{1/3}\). The dispersion curves \(\varpi=\varpi(k_{\parallel})\) and \(\gamma=\gamma(k_{\parallel})\) of unstable whistler waves with parallel wavevectors for three different values of \(|\eta_{e}^{T}|\beta_{e}\) are plotted in figure 3 using the above formulae.

Figure 3: _Parallel CET whistler instability._ Dispersion curves of unstable whistler modes, whose instability is driven by the electron-temperature-gradient term in the CE distribution function (1.1_a_), for wavevectors that are co-parallel with the background magnetic field (viz., \(\mathbf{k}=k_{\parallel}\tilde{\mathbf{z}}\)). The frequency (solid blue) and growth rates (solid red) of the modes are calculated using (11a) and (11b), respectively. The resulting frequencies and growth rates, when normalised as \(\gamma\beta_{e}/\Omega_{e}\), are functions of the dimensionless quantity \(\eta_{e}^{T}\beta_{e}\); we show the dispersion curves for three different values of \(\eta_{e}^{T}\beta_{e}\). The approximations (12a) and (12b) for the frequency (dotted blue) and growth rate (dotted red) in the limit \(k_{\parallel}\rho_{e}\ll 1\) are also plotted, as are the approximations (12a) and (12b) for the frequency (dashed blue) and growth rate (dashed red) in the limit \(k_{\parallel}\rho_{e}\gg 1\).

For \(|\eta_{e}^{T}|\beta_{e}\gtrsim 1\), the range of unstable parallel wavenumbers, \(\Delta k_{\parallel}\), is comparable to the characteristic wavenumber of the instability: \(\Delta k_{\parallel}\sim k_{\parallel}\sim\rho_{e}^{-1}\). The expressions (11a) and (11b) can be simplified in two subsidiary limits, which in turn allows for the derivation of analytic expressions for the maximum growth rate of the instability and the (parallel) wavenumber at which that growth rate is realised. First, adopting the ordering \(k_{\parallel}\rho_{e}\sim\big{(}|\eta_{e}^{T}|\beta_{e}\big{)}^{1/3}\ll 1\) under which the destabilising \(\eta_{e}^{T}\) terms and the stabilising electron FLR terms are the same order, we find
\[\varpi \approx\frac{k_{\parallel}^{2}\rho_{e}^{2}}{\beta_{e}}\Omega_{e}\,, \tag{12a}\]
\[\gamma \approx-\frac{\sqrt{\pi}}{k_{\parallel}^{2}\rho_{e}^{2}}\left(\frac{\eta_{e}^{T}}{2}+\frac{k_{\parallel}^{3}\rho_{e}^{3}}{\beta_{e}}\right)\mathrm{exp}\left(-\frac{1}{k_{\parallel}^{2}\rho_{e}^{2}}\right)\Omega_{e}\,. \tag{12b}\]
The frequency corresponds to that of a whistler wave in the \(k_{\parallel}\rho_{e}\ll 1\) limit (Boldyrev _et al._, 2013). The fastest growth, which occurs at the wavenumber
\[k_{\parallel}\rho_{e}\approx\left(\frac{|\eta_{e}^{T}|\beta_{e}}{2}\right)^{1/3}-\frac{|\eta_{e}^{T}|\beta_{e}}{4}\,, \tag{10}\]
is exponentially slow when \(|\eta_{e}^{T}|\beta_{e}\ll 1\):
\[\gamma_{\rm max}\approx\frac{3\sqrt{\pi}}{4}|\eta_{e}^{T}|\exp\left[-\frac{2^{2/3}}{\left(|\eta_{e}^{T}|\beta_{e}\right)^{2/3}}-1\right]\Omega_{e}\,. \tag{11}\]
Next, considering the opposite limit \(k_{\parallel}\rho_{e}\gg 1\), we obtain
\[\varpi \approx\left[\eta_{e}^{T}\beta_{e}\left(\frac{1}{4}k_{\parallel}\rho_{e}-\frac{\pi-2}{2\pi k_{\parallel}\rho_{e}}\right)+\frac{2}{\pi}k_{\parallel}^{2}\rho_{e}^{2}\right]\frac{\Omega_{e}}{\beta_{e}}\,, \tag{12a}\]
\[\gamma \approx-\frac{1}{\sqrt{\pi}}\left[\eta_{e}^{T}\beta_{e}\left(\frac{1}{2}-\frac{4-\pi}{2\pi k_{\parallel}^{2}\rho_{e}^{2}}\right)+k_{\parallel}^{3}\rho_{e}^{3}\right]\frac{\Omega_{e}}{\beta_{e}}\,. \tag{12b}\]
We then find that the maximum growth rate of the parallel mode is given by
\[\gamma_{\rm max} \approx\frac{|\eta_{e}^{T}|}{\sqrt{\pi}}\left\{1-\left[\frac{1}{\sqrt{\pi}}\left(\frac{4}{\pi}-1\right)\right]^{3/5}\left[\left(\frac{3}{2}\right)^{2/5}-\left(\frac{2}{3}\right)^{3/5}\right]\left(|\eta_{e}^{T}|\beta_{e}\right)^{-2/5}\right\}\Omega_{e}\]
\[\approx 0.56|\eta_{e}^{T}|\left[1-0.13\left(|\eta_{e}^{T}|\beta_{e}\right)^{-2/5}\right]\Omega_{e}\,, \tag{12c}\]
at the parallel wavenumber
\[k_{\parallel}\rho_{e}=\left[\frac{2}{3\sqrt{\pi}}\left(\frac{4}{\pi}-1\right)\right]^{1/5}\left(|\eta_{e}^{T}|\beta_{e}\right)^{1/5}\approx 0.63\left(|\eta_{e}^{T}|\beta_{e}\right)^{1/5}\,. \tag{12d}\]
In addition, we see that the real frequency of modes with \(k_{\parallel}\rho_{e}\lesssim\left(|\eta_{e}^{T}|\beta_{e}/2\right)^{1/3}\) is larger than the growth rate of the mode, \(\varpi\gg\gamma\): these modes oscillate more rapidly than they grow. The approximate expressions derived in the limits \(|\eta_{e}^{T}|\beta_{e}\ll 1\) [(10) and (11)] and \(|\eta_{e}^{T}|\beta_{e}\gg 1\) [(12c) and (12d)] are plotted in figure 3 alongside the exact results (11a) and (11b). Of particular note is the accuracy of the approximate expression (11b) for the growth rate when \(k_{\parallel}\rho_{e}\gtrsim 0.6\); this suggests that (12) is a reasonable estimate of the peak growth rate for \(|\eta_{e}^{T}|\beta_{e}\gtrsim 1\).
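The parallel dispersion relation (11b) is straightforward to evaluate numerically. The sketch below (illustrative only; it assumes the transcribed form of (11b) above, uses scipy's Faddeeva function to build \(\mathrm{Re}\,Z\), and the parameter values and function names are ours) computes the growth-rate curve and locates its peak, which can be checked against the asymptotic estimate (12d):

```python
import numpy as np
from scipy.special import wofz

BETA_E = 1e4  # electron plasma beta (illustrative)

def re_Z(x):
    # Real part of the plasma dispersion function Z(x) = i*sqrt(pi)*w(x)
    return np.real(1j * np.sqrt(np.pi) * wofz(x))

def gamma_whistler(k, eta_beta):
    """gamma/Omega_e of parallel whistlers, from (11b) as printed above.

    k is k_par*rho_e; eta_beta = eta_e^T * beta_e (< 0 for instability)."""
    eta = eta_beta / BETA_E
    num = -np.sqrt(np.pi) * (eta / 2 + k**3 / BETA_E)
    den = re_Z(1 / k)**2 * np.exp(1 / k**2) + np.pi * np.exp(-1 / k**2)
    return num / den

k = np.linspace(0.1, 3.0, 3000)
for eta_beta in (-1.0, -4.0, -10.0):
    g = gamma_whistler(k, eta_beta)
    i = np.argmax(g)
    print(f"eta_e^T*beta_e = {eta_beta:5.1f}: peak gamma*beta_e/Omega_e = "
          f"{g[i] * BETA_E:6.3f} at k_par*rho_e = {k[i]:.2f} "
          f"[(12d) estimate: {0.63 * abs(eta_beta)**0.2:.2f}]")
```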
#### 3.3.2 Oblique whistler (heat-flux) instability

Analytical expressions for the frequency and growth rate of unstable modes with an oblique wavevector at an angle to the magnetic field are more complicated than the analogous expressions for parallel modes. In appendix J.3, we show that there are two low-frequency oblique modes, whose complex frequencies \(\omega\) are given by
\[\omega=\frac{\Omega_{e}}{\beta_{e}}k_{\parallel}\rho_{e}\frac{-B_{\rm T}\pm\sqrt{B_{\rm T}^{2}+4A_{\rm T}C_{\rm T}}}{2A_{\rm T}}\,, \tag{13}\]
where the coefficients \(A_{\rm T}=A_{\rm T}(k_{\parallel}\rho_{e},k_{\perp}\rho_{e},\eta_{e}^{T}\beta_{e})\), \(B_{\rm T}=B_{\rm T}(k_{\parallel}\rho_{e},k_{\perp}\rho_{e},\eta_{e}^{T}\beta_{e})\), and \(C_{\rm T}=C_{\rm T}(k_{\parallel}\rho_{e},k_{\perp}\rho_{e},\eta_{e}^{T}\beta_{e})\) are composed of the sums and products of the special functions defined in (121), and also other special functions defined in appendix G.3.
For a given wavenumber, we can use (13) to calculate the growth rates of any unstable oblique modes - and, in particular, demonstrate that positive growth rates are present for certain values of \(\eta_{e}^{T}\). When they do exist, (13) suggests that they will have the typical size \(\gamma\sim\Omega_{e}/\beta_{e}\sim|\eta_{e}^{T}|\Omega_{e}\) when \(k\rho_{e}\sim 1\) and \(\eta_{e}^{T}\beta_{e}\sim 1\). For \(\eta_{e}^{T}>0\), we find that both modes (13) are damped; for \(\eta_{e}^{T}<0\), one mode is damped for all wavenumbers, but the other is not. Figure 4 shows the maximum (positive) growth rate \(\gamma\) (normalised to \(\Omega_{e}/\beta_{e}\)) of this mode at a fixed value of \(\eta_{e}^{T}\), for a range of \(\beta_{e}\). The growth rate is calculated by evaluating the imaginary part of (13) at a given wavenumber. For \(-\eta_{e}^{T}<1/\beta_{e}\), the mode of interest is damped for most wavenumbers, except for a small region of wavenumbers quasi-parallel to the magnetic field: in this region, there is a very small growth rate \(\gamma\ll\Omega_{e}/\beta_{e}\) (figure 4a). This finding is consistent with the exponentially small growth rates found for the parallel whistler modes [see (12)]. When \(-\eta_{e}^{T}\sim 1/\beta_{e}\), there is a marked change in behaviour: a larger region of unstable modes appears, with \(\gamma\sim\Omega_{e}/\beta_{e}\), at wavenumbers \(k\rho_{e}\sim 1\) (figures 4b and c). The growth rate is the largest for parallel modes - but there also exist oblique modes with \(k_{\perp}\lesssim k_{\parallel}\) whose growth rate is close to the peak growth rate. For example, for \(\eta_{e}^{T}\beta_{e}=-4\), we find that the growth rate of the fastest-growing mode with a wavevector angle \(\theta=10^{\circ}\) is only \(\sim\)2% smaller than that of the fastest-growing parallel mode; for a wavevector angle \(\theta=20^{\circ}\), the reduction is by \(\sim\)6%; and for \(\theta=30^{\circ}\), the reduction is by \(\sim\)20%. Finally, if \(-\eta_{e}^{T}\gg 1/\beta_{e}\), there exists an extended region of unstable modes, with \(1\lesssim k\rho_{e}\lesssim\left|\eta_{e}^{T}\beta_{e}\right|^{1/3}\), and \(\gamma\sim|\eta_{e}^{T}|\Omega_{e}\) (figure 4d). Again, the peak growth rate is at \(k_{\perp}=0\), but oblique modes also have a significant growth rate (for unstable modes with \(\theta=30^{\circ}\), the reduction in the largest growth rate compared to the fastest-growing parallel mode is only by \(\sim\)4%). Most of the unstable modes have a non-zero real frequency: for \(-\eta_{e}^{T}\sim 1/\beta_{e}\), \(\omega\sim\gamma\) (figure 4e), while for \(-\eta_{e}^{T}\gg 1/\beta_{e}\), \(\omega\gg\gamma\) for \(k\rho_{e}\gg 1\) (figure 4f). Note, however, that in the latter case there exists a band of wavenumbers at which there is no real frequency.

Figure 4: _Oblique CET whistler instabilities_. Maximum positive growth rates of unstable whistler modes whose instability is driven by the electron-temperature-gradient term in the CE distribution function (19a), at arbitrary wavevectors with respect to the background magnetic field. The growth rates of the modes are calculated by taking the imaginary part of (18), where coefficients \(A_{\rm T}\), \(B_{\rm T}\) and \(C_{\rm T}\) are known functions of the wavevector. The growth rates are calculated on a \(400^{2}\) grid, with equal logarithmic spacing in both perpendicular and parallel directions between the minimum and maximum wavenumbers. The resulting growth rates, when normalised as \(\gamma\beta_{e}/\Omega_{e}\), are functions of the dimensionless quantity \(\eta_{e}^{T}\beta_{e}\). **a)** \(\eta_{e}^{T}\beta_{e}=-0.5\). **b)** \(\eta_{e}^{T}\beta_{e}=-4\). **c)** Same as b) but with normalisation \(\gamma/|\eta_{e}^{T}|\Omega_{e}\). **d)** Same as c), but with \(\eta_{e}^{T}\beta_{e}=-100\). **e)** Ratio of growth rate to absolute value of real frequency for unstable modes for \(\eta_{e}^{T}\beta_{e}=-4\). **f)** Same as e), but with \(\eta_{e}^{T}\beta_{e}=-100\).

In summary, we have (re-)established that the fastest-growing modes of the CET whistler instability are parallel to the magnetic field; however, we have shown semi-analytically (a novel result of this work) that the growth of oblique perturbations can be almost as large. This result is of some significance, because it has been argued that oblique whistler modes are necessary for the instability to scatter heat-carrying electrons efficiently (see, e.g., Komarov _et al._, 2018). It was proposed previously that such modes could arise from modifications to the CET electron-temperature-gradient term induced by the unstable parallel whistler modes, rendering the oblique modes the fastest-growing ones; our calculations suggest that only a small change to the CET whistler growth rates would be required for this to be realised. As a further aside, we observe that in a plasma with sufficiently high plasma \(\beta_{e}\), these oblique modes are in fact closer in nature to kinetic Alfven waves (KAWs) than to whistler waves. Whistler waves are characterised as having effectively immobile ions (\(\omega\gg k_{\perp}v_{\rm thi}\)), while KAWs have warm ions (\(\omega\ll k_{\perp}v_{\rm thi}\)); as a consequence, whistler waves have a negligible density perturbation (\(\delta n_{e}\ll Zen_{e}\varphi/T_{i}\), where \(\varphi\) is the electrostatic potential associated with the wave), while KAWs do not: \(\delta n_{e}\approx-Zen_{e}\varphi/T_{i}\) (Boldyrev _et al._, 2013). In a \(\beta_{e}\sim 1\) plasma for \(k_{\perp}\gtrsim k_{\parallel}\), the real frequency of whistler modes satisfies \(\omega/k_{\perp}v_{\rm thi}\sim k_{\parallel}\rho_{i}/\beta_{e}\sim k_{\parallel}\rho_{i}\); thus, we conclude from our above considerations that the two waves must operate in different regions of wavenumber space, viz., \(k_{\parallel}\rho_{i}\ll 1\), \(k_{\perp}\rho_{i}>1\) for KAWs, and \(k_{\parallel}\rho_{i}\gg 1\) for whistlers. However, for \(\beta_{e}\gtrsim\mu_{e}^{-1/2}\) (where \(\mu_{e}=m_{e}/m_{i}\)) and \(k_{\perp}\sim k_{\parallel}\gg\rho_{i}^{-1}\), the frequency of whistler waves is too low for \(\omega\gg k_{\perp}v_{\rm thi}\) to be satisfied whilst also maintaining \(k_{\parallel}\rho_{e}\ll 1\). Instead, the ions participate in the wave mechanism, and \(\delta n_{e}\approx-Zen_{e}\varphi/T_{i}\) (see appendix H.2). For further discussion of the physics of the whistler instability (as well as its nonlinear evolution), see Komarov _et al._ (2018) and the other references given at the beginning of section 3.3.1.

#### 3.3.3 Slow-(hydromagnetic)-wave instability

Although parallel ion heat fluxes in a classical, collisional plasma are typically much weaker than electron heat fluxes, they can still act as a free-energy source for instabilities, by introducing anisotropy into the ion distribution function (1.1) (i.e., the CE ion-temperature-gradient term).
Furthermore, anisotropy in the ion distribution function can enable the instability of plasma modes that are not destabilised by the CE electron-temperature-gradient term. This exact situation is realised in the CET slow-hydromagnetic-wave instability, in which a sufficiently large CET ion-temperature-gradient term counteracts the effect of ion cyclotron damping on slow hydromagnetic waves. The slow hydromagnetic wave (or slow wave) (Rogister, 1971; Foote & Kulsrud, 1979) is the left-hand-polarised quasi-parallel electromagnetic mode in high-\(\beta\) plasma; it exists for parallel wavenumbers \(k_{\parallel}\) that satisfy \(\beta_{i}^{-1/2}\ll k_{\parallel}\rho_{i}\lesssim 1\), and has a characteristic frequency \(\omega\approx 2\Omega_{i}/\beta_{i}\). To the authors' knowledge, no instability of the slow wave due to the ion heat flux has previously been reported. The instability's mechanism is analogous to that of the CET whistler instability: the slow waves drain energy from ions with parallel velocities \(v_{\parallel}=\pm\Omega_{i}/k_{\parallel}\) via gyroresonant wave-particle interactions. For an anti-parallel ion temperature gradient (i.e., \(\nabla_{\parallel}T_{i}<0\), so \(\eta_{i}<0\)), slow waves propagating down the temperature gradient are destabilised, while those propagating up the temperature gradient are not. As before, the slow-wave instability is most easily characterised in the subsidiary limit \(k_{\perp}\rho_{i}\to 0\) (\(k=k_{\parallel}\)). Under the ordering \(k_{\parallel}\rho_{i}\sim 1\), the real frequency \(\varpi\) and growth rate \(\gamma\) are given by (see appendix J.4.1)
\[\frac{\varpi}{\Omega_{i}} =\eta_{i}\left(\frac{k_{\parallel}\rho_{i}}{4}-\frac{1}{2k_{\parallel}\rho_{i}}\right)-\frac{k_{\parallel}^{2}\rho_{i}^{2}\left[\text{Re}\;Z\big{(}1/k_{\parallel}\rho_{i}\big{)}+k_{\parallel}\rho_{i}\right]\big{(}\eta_{i}/4+k_{\parallel}\rho_{i}/\beta_{i}\big{)}}{\left[\text{Re}\;Z\big{(}1/k_{\parallel}\rho_{i}\big{)}+k_{\parallel}\rho_{i}\right]^{2}+\pi\exp\left(-2/k_{\parallel}^{2}\rho_{i}^{2}\right)}\,, \tag{11a}\]
\[\frac{\gamma}{\Omega_{i}} =-\frac{\sqrt{\pi}k_{\parallel}^{2}\rho_{i}^{2}\left(\eta_{i}/4+k_{\parallel}\rho_{i}/\beta_{i}\right)}{\left[\text{Re}\;Z\big{(}1/k_{\parallel}\rho_{i}\big{)}+k_{\parallel}\rho_{i}\right]^{2}\exp\left(1/k_{\parallel}^{2}\rho_{i}^{2}\right)+\pi\exp\left(-1/k_{\parallel}^{2}\rho_{i}^{2}\right)}\,. \tag{11b}\]
The CET electron-temperature-gradient term does not appear because its contributions to the frequency and growth rate are much smaller than the equivalent contributions of the CET ion-temperature-gradient term at \(k_{\parallel}\rho_{i}\sim 1\). Plots of \(\varpi=\varpi(k_{\parallel})\) and \(\gamma=\gamma(k_{\parallel})\) for different values of \(\eta_{i}\beta_{i}<0\) are shown in figure 5.

Figure 5: _Parallel CET slow-hydromagnetic-wave instability._ Dispersion curves of slow hydromagnetic waves whose instability is driven by the ion-temperature-gradient term in the CE distribution function (1.1_b_), for wavevectors co-parallel with the background magnetic field (viz., \(\mathbf{k}=k_{\parallel}\hat{\mathbf{z}}\)). The frequency (solid blue) and growth rates (solid red) of the modes are calculated using (11a) and (11b), respectively. The resulting frequencies and growth rates, when normalised as \(\gamma\beta_{i}/\Omega_{i}\), are functions of the dimensionless quantity \(\eta_{i}\beta_{i}\); we show the dispersion curves for three different values of \(\eta_{i}\beta_{i}\). The approximations (12) and (13) for the frequency (dotted blue) and growth rate (dotted red) in the limit \(k_{\parallel}\rho_{i}\ll 1\) are also plotted, as are the approximations (18a) and (18b) for the frequency (dashed blue) and growth rate (dashed red) in the limit \(k_{\parallel}\rho_{i}\gg 1\).

As with the CET whistler instability, we can derive simple expressions for the peak growth rate (and the wavenumber associated with that growth rate) in subsidiary limits. First, ordering \(k_{\parallel}\rho_{i}\sim|\eta_{i}|\beta_{i}/4\ll 1\) so that the destabilising \(\eta_{i}\) terms and the stabilising ion FLR terms are the same order, we find that the real frequency (11a) becomes
\[\varpi\approx\frac{2\Omega_{i}}{\beta_{i}}\left(1-\frac{1}{4}k_{\parallel}\rho_{i}\eta_{i}\beta_{i}-\frac{3}{2}k_{\parallel}^{2}\rho_{i}^{2}\right)\,, \tag{12}\]
which is precisely that of the slow hydromagnetic wave, with first-order FLR corrections included (Foote & Kulsrud, 1979).
For \(\eta_{i}<0\) and \(k_{\parallel}\rho_{i}<|\eta_{i}|\beta_{i}/4\), the growth rate (11b) is positive:
\[\gamma\approx-\frac{4\sqrt{\pi}}{k_{\parallel}^{4}\rho_{i}^{4}}\left(\frac{\eta_{i}}{4}+\frac{k_{\parallel}\rho_{i}}{\beta_{i}}\right)\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{i}^{2}}\right)\Omega_{i}\,. \tag{13}\]
The maximum growth rate (which is exponentially small when \(|\eta_{i}|\beta_{i}/4\ll 1\)) is
\[\gamma_{\rm max}\approx\frac{8\sqrt{\pi}}{|\eta_{i}|\beta_{i}^{2}}\exp\left(-\frac{16}{|\eta_{i}|^{2}\beta_{i}^{2}}-1\right)\!\Omega_{i}\,, \tag{16}\]
achieved at the parallel wavenumber
\[k_{\parallel}\rho_{i}\approx\frac{|\eta_{i}|\beta_{i}}{4}-\frac{|\eta_{i}|^{3}\beta_{i}^{3}}{128}\,. \tag{17}\]
In the opposite limit, \(k_{\parallel}\rho_{i}\sim(|\eta_{i}|\beta_{i}/4)^{1/3}\gg 1\), we obtain
\[\varpi \approx -\left(\eta_{i}\beta_{i}\frac{1-\pi/4}{k_{\parallel}\rho_{i}}-k_{\parallel}^{2}\rho_{i}^{2}\right)\frac{\Omega_{i}}{\beta_{i}}\,, \tag{18a}\]
\[\gamma \approx -\sqrt{\pi}\left[\frac{\eta_{i}}{4}\beta_{i}\left(1-\frac{\pi-3}{k_{\parallel}^{2}\rho_{i}^{2}}\right)+k_{\parallel}\rho_{i}\right]\frac{\Omega_{i}}{\beta_{i}}\,. \tag{18b}\]
The maximum positive growth rate is
\[\gamma_{\rm max}\approx\frac{\sqrt{\pi}}{4}\left\{1-3\left[4\left(\pi-3\right)\right]^{1/3}\left(|\eta_{i}|\beta_{i}\right)^{-2/3}\right\}|\eta_{i}|\Omega_{i}\approx 0.44\left[1-2.48\left(|\eta_{i}|\beta_{i}\right)^{-2/3}\right]|\eta_{i}|\Omega_{i}\,, \tag{19}\]
realised for \(\eta_{i}<0\) at the parallel wavenumber
\[k_{\parallel}\rho_{i}\approx\left(\frac{\pi-3}{2}\right)^{1/3}\left(|\eta_{i}|\beta_{i}\right)^{1/3}\approx 0.41\left(|\eta_{i}|\beta_{i}\right)^{1/3}\,. \tag{20}\]
We note that, in contrast to the CET whistler instability, the real frequency of the fastest-growing unstable mode is smaller than its growth rate: \(\omega_{\rm peak}/\gamma_{\rm max}\approx 0.36(|\eta_{i}|\beta_{i})^{-1/3}\). The approximate expressions (12), (13), (18a), and (18b) for the frequency and growth rate in the limits \(k_{\parallel}\rho_{i}\ll 1\) and \(k_{\parallel}\rho_{i}\gg 1\) are plotted in figure 5, along with the exact results (11a) and (11b).
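As for the whistler branch, the parallel slow-wave growth rate (11b) can be evaluated directly. The sketch below (illustrative only; it assumes the transcribed form of (11b) above, with parameter values and function names of our own choosing) locates the peak growth rate numerically and compares it with the asymptotic estimates (19) and (20):

```python
import numpy as np
from scipy.special import wofz

BETA_I = 1e4  # ion plasma beta (illustrative)

def re_Z(x):
    # Real part of the plasma dispersion function Z(x) = i*sqrt(pi)*w(x)
    return np.real(1j * np.sqrt(np.pi) * wofz(x))

def gamma_slow(k, eta_beta):
    """gamma/Omega_i of parallel slow waves, from (11b) as printed above.

    k is k_par*rho_i; eta_beta = eta_i * beta_i (< 0 for instability)."""
    eta = eta_beta / BETA_I
    num = -np.sqrt(np.pi) * k**2 * (eta / 4 + k / BETA_I)
    den = ((re_Z(1 / k) + k)**2 * np.exp(1 / k**2)
           + np.pi * np.exp(-1 / k**2))
    return num / den

k = np.linspace(0.1, 10.0, 5000)
for eta_beta in (-20.0, -100.0):
    g = gamma_slow(k, eta_beta)
    i = np.argmax(g)
    x = abs(eta_beta)
    # Compare with the asymptotic peak (19) and wavenumber (20)
    print(f"eta_i*beta_i = {eta_beta:6.1f}: gamma_max/(|eta_i| Omega_i) = "
          f"{g[i] * BETA_I / x:.3f} [(19): "
          f"{0.44 * (1 - 2.48 * x**(-2/3)):.3f}], k_par*rho_i = {k[i]:.2f} "
          f"[(20): {0.41 * x**(1/3):.2f}]")
```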
As with the CET whistler instability, a general expression for the complex frequency of oblique ion CET instabilities can be derived in the form (see appendix J.4):
\[\omega=\frac{\Omega_{i}}{\beta_{i}}k_{\parallel}|\rho_{i}|\frac{-\tilde{B}_{\rm T}\pm\sqrt{\tilde{B}_{\rm T}^{2}+4\tilde{A}_{\rm T}\tilde{C}_{\rm T}}}{2\tilde{A}_{\rm T}}\,, \tag{21}\]
where \(\tilde{A}_{\rm T}=\tilde{A}_{\rm T}(k_{\parallel}\rho_{i},k_{\perp}\rho_{i},\eta_{i}\beta_{i})\), \(\tilde{B}_{\rm T}=\tilde{B}_{\rm T}(k_{\parallel}\rho_{i},k_{\perp}\rho_{i},\eta_{i}\beta_{i})\), and \(\tilde{C}_{\rm T}=\tilde{C}_{\rm T}(k_{\parallel}\rho_{i},k_{\perp}\rho_{i},\eta_{i}\beta_{i})\) are again sums and products of various special mathematical functions defined in (121). Investigating such modes by evaluating (21) numerically for a range of wavenumbers (see figure 6), we find that, for \(\eta_{i}<0\), there is one mode that is always damped and one that can be unstable. For \(-\eta_{i}\lesssim 4/\beta_{i}\), the unstable modes are restricted to quasi-parallel modes (see figure 6a); for \(-\eta_{i}\gtrsim 4/\beta_{i}\), there is a much broader spectrum of unstable modes (including oblique ones). The positive growth rates of the unstable mode are shown in figure 6b for \(\eta_{i}\beta_{i}=-8\). The typical growth rate \(\gamma\) satisfies \(\gamma\sim\Omega_{i}/\beta_{i}\sim|\eta_{i}|\Omega_{i}\), as anticipated from (21). We also observe in figure 6b the existence of an unstable mode at quasi-perpendicular wavenumbers, which is discussed in section 3.3.4. In summary, an ion temperature gradient can destabilise ion-Larmor-scale slow hydromagnetic waves via a mechanism similar to that by which an electron temperature gradient destabilises electron-Larmor-scale whistler waves. If \(\beta_{i}\gg L_{T_{i}}/\lambda_{i}\), the characteristic growth rate of these modes is \(\gamma\sim\lambda_{i}\Omega_{i}/L_{T_{i}}\). Unstable modes whose wavevector is parallel to \(\mathbf{B}_{0}\) grow most rapidly, although the growth rate of (moderately) oblique modes is only somewhat smaller. While the CET whistler instability is faster growing than the CET slow-wave instability, both instabilities grow on timescales much shorter than the characteristic hydrodynamic timescales of a strongly magnetised plasma. In any conceivable saturation mechanism, the electron mode will adjust the electron heat flux, and the ion mode the ion heat flux. Thus, it seems likely that understanding the evolution (and ultimately, the saturation) of both instabilities would be necessary to model correctly the heat transport in a classical, collisional plasma that falls foul of the \(\beta\)-stabilisation condition.

#### 3.3.4 Long-wavelength kinetic-Alfven-wave instability

The instability observed in figure 6b at wavevectors satisfying \(k_{\parallel}\rho_{i}\ll k_{\perp}\rho_{i}\sim 1\) is different in nature to the slow-hydromagnetic-wave instability: it is an ion-temperature-gradient-driven instability of long-wavelength KAWs. Like the CET slow-wave instability, it operates on account of resonant wave-particle interactions that allow free energy to be drained from the anisotropy of the ion distribution function, which itself arises from the ion temperature gradient.
However, the gyroresonances \(v_{\parallel}\approx\pm\Omega_{i}/k_{\parallel}\) operate inefficiently for modes with \(k_{\parallel}\rho_{i}\ll 1\) in a CE plasma, because there are comparatively few particles with \(v_{\parallel}\gg v_{\rm thi}\); the dominant resonance is instead the Landau resonance \(v_{\parallel}=\omega/k_{\parallel}\). More specifically, KAWs with \(k_{\perp}\rho_{i}\gtrsim 1\), which are usually subject to strong Landau and Barnes damping (that is, the damping rate of the waves is comparable to their real frequency), can be destabilised if the (ion) plasma beta is sufficiently large: \(\beta_{i}\gtrsim L_{T_{i}}/\lambda_{i}\). In figure 6b, the peak growth rate of the CET KAW instability is smaller than that of the CET slow-hydromagnetic-wave instability by an order of magnitude; as will be shown below, this is, in fact, a generic feature of the instability.

Figure 6: _Oblique CET ion-Larmor-scale instabilities_. Maximum positive growth rates of unstable ion-Larmor-scale modes whose instability is driven by the CE ion-temperature-gradient term in the CE distribution function (1.1_b_), at arbitrary wavevectors with respect to the background magnetic field. The growth rates of all modes are calculated by taking the imaginary part of (3.21), with coefficients \(\tilde{A}_{\rm T}\), \(\tilde{B}_{\rm T}\) and \(\tilde{C}_{\rm T}\) being known functions of the wavevector (see appendix J.4). The growth rates are calculated on a \(400^{2}\) grid, with logarithmic spacing in both perpendicular and parallel directions between the minimum and maximum wavenumber magnitudes. The resulting growth rates, when normalised as \(\gamma\beta_{i}/\Omega_{i}\), are functions of \(\eta_{i}\beta_{i}\). **a)** \(\eta_{i}\beta_{i}=-2.5\). **b)** \(\eta_{i}\beta_{i}=-8\). The unstable \(k_{\parallel}\rho_{i}\ll k_{\perp}\rho_{i}\sim 1\) modes appearing in b) are dealt with in section 3.3.4.

Similarly to quasi-parallel unstable modes, quasi-perpendicular ones such as unstable KAWs can be characterised analytically, allowing for a simple identification of unstable modes and their peak growth rates. It can be shown (see appendix J.4.2) that, in the limit \(k_{\parallel}\rho_{i}\ll 1\), \(k_{\perp}\rho_{i}\sim 1\), the complex frequency of the low-frequency (viz., \(\omega\ll k_{\parallel}v_{\rm thi}\)) modes in a plasma whose ion distribution function is (1.2) is
\[\frac{\omega}{k_{\parallel}v_{\rm thi}} = \frac{\eta_{i}\mathcal{G}_{i}}{2\left(1-\mathcal{F}_{i}\right)}+\frac{k_{\perp}\rho_{i}}{\beta_{i}\left(1-\mathcal{F}_{i}\right)^{2}}\Bigg{[}-\frac{\mathrm{i}\sqrt{\pi}}{2}k_{\perp}\rho_{i}\left(\mathcal{F}_{i}+\sqrt{\frac{\mu_{e}Z^{2}}{\tau}}\right) \tag{3.22}\]
\[\pm\sqrt{1-\frac{\pi}{4}\frac{k_{\perp}^{2}\rho_{i}^{2}}{\beta_{i}}\bigg{(}\mathcal{F}_{i}+\sqrt{\frac{\mu_{e}Z^{2}}{\tau}}\bigg{)}^{2}-\frac{\mathrm{i}\sqrt{\pi}\eta_{i}\beta_{i}}{4}\frac{2\mathcal{G}_{i}-\mathcal{F}_{i}\left(1-\mathcal{F}_{i}\right)}{1-\mathcal{F}_{i}}}\,\Bigg{]}\,,\]
where \(\mathcal{F}_{i}\equiv\mathcal{F}(k_{\perp}\rho_{i})\), \(\mathcal{G}_{i}\equiv\mathcal{G}(k_{\perp}\rho_{i})\), and
\[\mathcal{F}(\alpha) \equiv \exp\left(-\frac{\alpha^{2}}{2}\right)\left[I_{0}\left(\frac{\alpha^{2}}{2}\right)-I_{1}\left(\frac{\alpha^{2}}{2}\right)\right]\,, \tag{3.23}\]
\[\mathcal{G}(\alpha) \equiv 2\alpha^{2}\mathcal{F}(\alpha)-\exp\left(-\frac{\alpha^{2}}{2}\right)I_{1}\left(\frac{\alpha^{2}}{2}\right)\,.
\tag{3.24}\]
In a Maxwellian plasma (i.e., when \(\eta_{i}=0\)), (3.22) becomes
\[\frac{\omega}{k_{\parallel}v_{\rm thi}} = \frac{1}{\left(1-\mathcal{F}_{i}\right)^{2}}\Bigg{[}-\frac{\mathrm{i}\sqrt{\pi}}{2}\frac{k_{\perp}^{2}\rho_{i}^{2}}{\beta_{i}}\left(\mathcal{F}_{i}+\sqrt{\frac{\mu_{e}Z^{2}}{\tau}}\right) \tag{3.25}\]
\[\pm\sqrt{\frac{k_{\perp}^{2}\rho_{i}^{2}}{\beta_{i}^{2}}-\frac{\pi}{4}\frac{k_{\perp}^{4}\rho_{i}^{4}}{\beta_{i}^{2}}\bigg{(}\mathcal{F}_{i}+\sqrt{\frac{\mu_{e}Z^{2}}{\tau}}\bigg{)}^{2}}\Bigg{]}\,.\]
In the subsidiary limit \(k_{\perp}\rho_{i}\gg 1\), we recover \(\omega\approx\pm k_{\parallel}v_{\rm thi}k_{\perp}\rho_{i}/\beta_{i}\), which is the well-known dispersion relation of a KAW (Schekochihin _et al._, 2009; Boldyrev _et al._, 2013; Kunz _et al._, 2018). For \(\eta_{i}\neq 0\), we find that, for modes with a positive propagation direction with respect to the background magnetic field (viz., \(k_{\parallel}>0\)), there is an instability provided
\[\eta_{i}\lesssim-3.14\left(1+6.5\sqrt{\frac{\mu_{e}Z^{2}}{\tau}}\right)\beta_{i}^{-1}\,, \tag{3.26}\]
with the perpendicular wavenumber \(k_{\perp}\rho_{i}\) of the fastest-growing unstable mode at fixed \(k_{\parallel}\) just beyond this threshold being approximately given by
\[k_{\perp}\rho_{i}\approx 1.77\left(1-3.4\sqrt{\frac{\mu_{e}Z^{2}}{\tau}}\right)\,. \tag{3.27}\]
Figure 7 shows the real frequency and growth rate of such modes at three different (negative) values of \(\eta_{i}\beta_{i}\). As \(\eta_{i}\) is decreased beyond the threshold, modes over an increasingly large range of perpendicular wavenumbers are destabilised at both super- and sub-ion Larmor scales. Indeed, in the limit \(|\eta_{i}|\beta_{i}\gg 1\), the peak growth rate \(\gamma_{\rm max}\) (for a fixed \(k_{\parallel}\)) occurs at a perpendicular wavenumber \(k_{\perp}\rho_{i}<1\), which decreases as \(|\eta_{i}|\beta_{i}\) increases. Such modes are, in fact, no longer well described physically as KAWs; their analogues in a Maxwellian plasma are Barnes-damped, non-propagating slow modes. Although it is possible to characterise analytically the peak growth rate of the unstable modes (and the perpendicular wavenumber at which such growth is attained) in the limit \(k_{\parallel}\rho_{i}\ll 1\) by analysing (3.22), such estimates do not capture accurately the behaviour of the fastest-growing modes across all wavevectors, because these fastest-growing modes occur at finite values of \(k_{\parallel}\rho_{i}\); at such values, the dependence of the frequency and growth rate on \(k_{\perp}\rho_{i}\) departs somewhat from (3.22) (see figure 7). Instead, we find numerically that, for \(\eta_{i}\beta_{i}\lesssim-6\),
\[\gamma_{\rm max}\approx 0.025|\eta_{i}|\Omega_{i}\quad\mbox{at}\quad(k_{\parallel}\rho_{i})_{\rm peak}\approx 0.35\,, \tag{3.28}\]
independent of the specific value of either \(\eta_{i}\) or \(\beta_{i}\). For values of \(k_{\parallel}\rho_{i}\) that are larger than \((k_{\parallel}\rho_{i})_{\rm peak}\), the instability is quenched. It is clear that, in comparison to the slow-hydromagnetic-wave instability, the growth rate of the fastest-growing perpendicular modes is small [see (3.28)]. This difference can be attributed to the fact that, for unstable modes in the limit \(|\eta_{i}|\beta_{i}\gg 1\), \(\gamma_{\rm max}\sim|\eta_{i}|k_{\parallel}\rho_{i}\Omega_{i}\) and the value of \(k_{\parallel}\rho_{i}\) at which maximum growth is achieved is still rather small compared to unity.
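The special functions (3.23) and (3.24) are easy to implement with scaled Bessel functions, and the Maxwellian limit (3.25) then gives a quick check that the KAW dispersion relation \(\omega\approx\pm k_{\parallel}v_{\rm thi}k_{\perp}\rho_{i}/\beta_{i}\) is recovered at \(k_{\perp}\rho_{i}\gg 1\). A minimal sketch (ours, not from the paper; it assumes the transcribed form of (3.25) above, with \(\tau=Z=1\) and an illustrative \(\beta_{i}\)):

```python
import numpy as np
from scipy.special import ive  # ive(n, x) = exp(-x) * I_n(x)

def F(alpha):
    """F(alpha) from (3.23), via exponentially scaled Bessel functions."""
    a = alpha**2 / 2
    return ive(0, a) - ive(1, a)

def G(alpha):
    """G(alpha) from (3.24)."""
    a = alpha**2 / 2
    return 2 * alpha**2 * F(alpha) - ive(1, a)

def omega_kaw(kperp_rho, beta_i=100.0, mu_e=1 / 1836, Z=1.0, tau=1.0):
    """omega/(k_par*v_thi) of the two Maxwellian modes, from (3.25)."""
    Fi = F(kperp_rho)
    s = Fi + np.sqrt(mu_e * Z**2 / tau)
    damp = -0.5j * np.sqrt(np.pi) * kperp_rho**2 / beta_i * s
    disc = (kperp_rho**2 / beta_i**2
            - np.pi / 4 * kperp_rho**4 / beta_i**2 * s**2)
    root = np.sqrt(disc + 0j)
    return (damp + root) / (1 - Fi)**2, (damp - root) / (1 - Fi)**2

for kperp in (2.0, 5.0, 10.0):
    w_plus, _ = omega_kaw(kperp)
    # Weak damping enters through the sqrt(mu_e) electron-Landau term
    print(f"k_perp*rho_i = {kperp:4.1f}: omega/(k_par*v_thi) = {w_plus:.4f} "
          f"(KAW estimate: +{kperp / 100.0:.3f})")
```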
We conclude that the instability of slow hydromagnetic waves that are driven by an ion temperature gradient is likely to be more significant than the analogous instability of quasi-perpendicular/KAW modes.

Figure 7: _Quasi-perpendicular CET KAW instability._ Dispersion curves of unstable KAWs whose instability is driven by the ion-temperature-gradient term in the CE distribution function (1.1_b_), for wavevectors that are almost perpendicular to the background magnetic field (viz., \(k_{\perp}\gg k_{\parallel}\)). The frequency (blue) and growth rates (red) of unstable modes are calculated at (small) fixed values of \(k_{\parallel}\rho_{i}\) from the real and imaginary parts of (3.21); the solid curves are calculated for \(k_{\parallel}\rho_{i}=0.35\), while the dashed curves are for \(k_{\parallel}\rho_{i}=0.05\). The resulting frequencies and growth rates, when normalised as \(\gamma\beta_{i}/k_{\parallel}v_{\rm thi}\), are functions of the dimensionless quantity \(\eta_{i}\beta_{i}\); we show the dispersion curves for three different values of \(\eta_{i}\beta_{i}\). The frequency (dotted blue) and growth rate (dotted red) in the limit \(k_{\parallel}\rho_{i}\ll 1\), which are calculated by taking the real and imaginary parts of (3.22), are also plotted.

## 4 CES (Chapman-Enskog, shear-driven) microinstabilities

### Form of CE distribution function

Next, we consider the non-Maxwellian terms of the CE distribution arising from bulk-flow gradients. If we set \(\eta_{s}=0\) for both ions and electrons (viz., neglecting both temperature gradients and electron-ion drifts), the CE distribution functions (8) for both species become
\[f_{s0}(v_{\parallel},v_{\perp})=\frac{n_{s0}}{v_{\rm ths}^{3}\pi^{3/2}}\exp\left(-\tilde{v}_{s}^{2}\right)\left[1-\epsilon_{s}\left(\frac{v_{\parallel}^{2}}{v_{\rm ths}^{2}}-\frac{v_{\perp}^{2}}{2v_{\rm ths}^{2}}\right)\right], \tag{4.1}\]
where we have again chosen the isotropic functions \(C_{s}(\tilde{v}_{s})\) to be the ones that arise from the Krook collision operator (see section 2.4.2). We note that for this choice of collision operator, the constant \(\mathcal{C}_{s}\) defined by (34) is \(\mathcal{C}_{s}\approx 3/2\), and so the relationship (35) between the CE distribution functions' pressure anisotropy \(\Delta_{s}\) and the shear parameter \(\epsilon_{s}\) becomes
\[\Delta_{s}=\frac{3}{2}\epsilon_{s}\,. \tag{4.2}\]
We also observe that the CE shear terms have even parity with respect to the parallel velocity \(v_{\parallel}\), and thus for any unstable mode with positive parallel wavenumber \(k_{\parallel}>0\), there is a corresponding unstable mode with \(k_{\parallel}<0\). This conclusion has the consequence that the sign of \(\epsilon_{s}\) [which is the same as the sign of \((\hat{\boldsymbol{z}}\hat{\boldsymbol{z}}-\boldsymbol{I}/3):\boldsymbol{W}_{s}\), where \(\boldsymbol{W}_{s}\) is the rate-of-strain tensor of species \(s\) - see (2.12)] has a significant effect on possible types of CES microinstabilities. Thus, we must consider the cases \(\epsilon_{s}>0\) (positive pressure anisotropy, \(\Delta_{s}>0\)) and \(\epsilon_{s}<0\) (negative pressure anisotropy, \(\Delta_{s}<0\)) separately. For easier comparison to previous work by other authors, we will sometimes substitute \(\epsilon_{s}=2\Delta_{s}/3\), and work in terms of \(\Delta_{s}\).
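As a sanity check on (4.2), one can compute the pressure moments of (4.1) numerically and confirm that \(\Delta_{s}=p_{\perp s}/p_{\parallel s}-1\approx 3\epsilon_{s}/2\) to first order in \(\epsilon_{s}\). A minimal sketch (ours; normalised units \(n_{s0}=v_{\rm ths}=1\), grid resolution arbitrary):

```python
import numpy as np
from scipy.integrate import simpson

eps = 0.01  # shear parameter epsilon_s (small, illustrative)

# Cylindrical velocity grid (v_par, v_perp) in units of v_ths
vpar = np.linspace(-6, 6, 801)
vperp = np.linspace(0, 6, 401)
VPAR, VPERP = np.meshgrid(vpar, vperp, indexing="ij")

# CE distribution (4.1) with n_s0 = v_ths = 1
f = (np.exp(-(VPAR**2 + VPERP**2)) / np.pi**1.5
     * (1 - eps * (VPAR**2 - VPERP**2 / 2)))

def moment(integrand):
    # d^3v = 2*pi*v_perp dv_perp dv_par
    inner = simpson(integrand * 2 * np.pi * VPERP, x=vperp, axis=1)
    return simpson(inner, x=vpar)

n = moment(f)
p_par = moment(VPAR**2 * f)          # p_par / (m_s n_s0 v_ths^2)
p_perp = moment(0.5 * VPERP**2 * f)  # p_perp in the same units

print(f"density: {n:.6f} (expect 1)")
print(f"Delta = p_perp/p_par - 1 = {p_perp / p_par - 1:.6f}")
print(f"(3/2)*epsilon            = {1.5 * eps:.6f}")
```

The small residual difference between the two printed values is the \(O(\epsilon_{s}^{2})\) correction neglected in (4.2).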
As with the discussion of CET microinstabilities in section 3, in the main text we only present the main findings of our calculations: namely, the overview of the CES stability landscape (section 4.2), and the analytical characterisation of CES microinstabilities with \(\epsilon_{s}>0\) (section 4.3) and \(\epsilon_{s}<0\) (section 4.4). The methodology underlying the calculations of growth rates of CES microinstabilities is presented in appendix K.

### Stability

The stability of CE distribution functions of the form (4.1) is determined as a function of the parameters \(\epsilon_{i}\), \(\epsilon_{e}\), \(d_{e}\), \(\beta_{e}\), \(\beta_{i}\), and the velocity scale length \(L_{V}=|(\hat{\boldsymbol{z}}\hat{\boldsymbol{z}}-\tfrac{1}{3}\boldsymbol{I}):\boldsymbol{W}_{i}/V_{i}|^{-1}\) by assessing whether the maximum microinstability growth rate across all wavelengths smaller than \(\lambda_{e}\) and \(\lambda_{i}\) is negative or positive (see appendix K for the methodology underpinning this calculation). As with the temperature-gradient-driven instabilities, we report the results of stability calculations that pertain to a temperature-equilibrated hydrogen plasma; that is, the particular case in which \(\beta_{i}=\beta_{e}\) and \(\epsilon_{e}=\mu_{e}^{1/2}\epsilon_{i}\) [where we recall that the characteristic magnitude of the CE electron velocity-shear term in such a plasma is smaller than the analogous CE ion velocity-shear term by a factor of \(\mu_{e}^{1/2}=(m_{e}/m_{i})^{1/2}\)]. Because \(\epsilon_{i}\) can take both positive and negative values (see section 4.1), we do one stability calculation for each case; the results of these two calculations are shown in figures 8 and 9, respectively. The key characteristics of the stability of the CE distribution function (4.1) for ions and electrons can be shown using plots over a two-dimensional \((d_{e}/L_{V},\mathrm{Ma}\,\lambda_{e}/L_{V})\) parameter space at fixed \(\beta_{e}\) and \(\mathrm{Ma}\) - we remind the reader that \(\mathrm{Ma}\,\lambda_{e}/L_{V}=|\epsilon_{i}|\), and that the Mach number \(\mathrm{Ma}\) is assumed to satisfy \(\mathrm{Ma}\lesssim 1\) - as opposed to the five-dimensional \((\epsilon_{i},d_{e},L_{V},\beta_{e},\mathrm{Ma})\) parameter space that might naively be anticipated, because the two relevant stability thresholds are not independent functions of \(d_{e}\), \(\mathrm{Ma}\), and \(L_{V}\). The regions of stability presented in figure 8a for \(\epsilon_{i}>0\) (viz., for shear flows that drive positive pressure anisotropy) and in figure 9a for \(\epsilon_{i}<0\) (viz., for shear flows driving negative pressure anisotropy), respectively, are broadly similar to the region of stability for CET microinstabilities described in section 3.2 (and shown in figure 2a), but with one crucial difference. Once again, for \(d_{e}/L_{V}\) less than a critical value \((d_{e}/L_{V})_{\mathrm{c0}}\), stability is independent of \(d_{e}/L_{V}\), and there are no instabilities for \(\mathrm{Ma}\,\lambda_{e}\beta_{e}/L_{V}\ll 1\); for \(d_{e}/L_{V}\gtrsim(d_{e}/L_{V})_{\mathrm{c0}}\) and \(\mathrm{Ma}\,\lambda_{e}\beta_{e}/L_{V}>1\), stability is guaranteed if (and only if) \(d_{e}/L_{V}>(d_{e}/L_{V})_{\mathrm{c}}\) at fixed \(\mathrm{Ma}\,\lambda_{e}/L_{V}\), where \((d_{e}/L_{V})_{\mathrm{c}}\) is a monotonically increasing function of \(\mathrm{Ma}\,\lambda_{e}/L_{V}\).
Figure 8: _CE-distribution-function stability map for CES microinstabilities driven by positive pressure anisotropy._ Exploration of the stability of the ion and electron CE distribution functions (4.1) for different positive values of small parameters \(\epsilon_{e}\) and \(\epsilon_{i}\) (viz., electron or ion pressure anisotropies), and the ratio of the electron inertial scale \(d_{e}\) to the velocity scale length \(L_{V}\), in a temperature-equilibrated hydrogen plasma. In this plot, we chose \(\epsilon_{e}=\mu_{e}^{1/2}\epsilon_{i}\), and then show \(\mathrm{Ma}\,\lambda_{e}/L_{V}=|\epsilon_{i}|\) with equal logarithmic spacing in the range \(\left[10^{-5},10^{0}\right]\); \(d_{e}/L_{V}\) is chosen with equal logarithmic spacing in the range \(\left[10^{-15},10^{0}\right]\). The total size of the grid is \(400^{2}\). For reasons of efficiency, we calculate growth rates on a \(40^{2}\) grid in wavenumber space with logarithmic spacing for both parallel and perpendicular wavenumbers. In this plot, \(\beta_{e}=\beta_{i}=10^{4}\), and \(\mathrm{Ma}=1\). **a)** Stable (blue) and unstable (red) regions of \((d_{e}/L_{V},\mathrm{Ma}\,\lambda_{e}/L_{V})\) phase space. The theoretically anticipated collisional cutoffs [right - see (4.5)] and \(\beta\)-stabilisation thresholds (horizontal dashed lines) for the CES mirror and parallel transverse instabilities, respectively, are also shown. **b)** Maximum normalised microinstability growth rate (red) versus \(\mathrm{Ma}\,\lambda_{e}/L_{V}\) for a fixed electron inertial scale \(d_{e}/L_{V}=10^{-15}\), along with the maximum growth rate for the mirror instability (purple) in the limit \(\mathrm{Ma}\,\lambda_{e}\beta_{e}/L_{V}\gg 1\) [see (4.13)], and for the parallel transverse instability in the limit \(\mathrm{Ma}\,\lambda_{e}\beta_{e}/L_{V}\gg\mu_{e}^{-1/2}\) [see (4.31), with \(\theta=0^{\circ}\)]. **c)** Parallel wavenumber of the fastest-growing microinstability (red) versus \(\mathrm{Ma}\,\lambda_{e}/L_{V}\) for a fixed electron inertial scale \(d_{e}/L_{V}=10^{-15}\), along with the same quantity analytically predicted for the mirror instability (purple) in the limit \(\mathrm{Ma}\,\lambda_{e}\beta_{e}/L_{V}\gg 1\) [see (4.14)], and for the parallel transverse instability (blue) in the limit \(\mathrm{Ma}\,\lambda_{e}\beta_{e}/L_{V}\gg\mu_{e}^{-1/2}\) [see (4.33), with \(\theta=0^{\circ}\)]. **d)** Wavevector angle \(\theta\equiv\tan^{-1}\left(k_{\parallel}/k_{\perp}\right)\) of the fastest-growing instability over the \((d_{e}/L_{V},\mathrm{Ma}\,\lambda_{e}\beta_{e}/L_{V})\) parameter space.

As before, these two bounding thresholds correspond to the \(\beta\)-stabilisation conditions and collisional-stabilisation conditions, respectively, of CES microinstabilities. However, the dependence of \((d_{e}/L_{V})_{\mathrm{c}}\) on \(\mathrm{Ma}\,\lambda_{e}/L_{V}\) is more complicated than the analogous relationship between \(\left(d_{e}/L_{T}\right)_{\rm c}\) and \({\rm Ma}\,\lambda_{e}/L_{T}\) that was presented in figure 2a. Namely, if \({\rm Ma}\,\lambda_{e}/L_{V}\gtrsim\beta_{e}^{-1}\mu_{e}^{-1/2}\), then \(\left(d_{e}/L_{V}\right)_{\rm c}\) suddenly shifts towards a larger value, with the subsequent (power-law) relationship between \(\left(d_{e}/L_{V}\right)_{\rm c}\) and \({\rm Ma}\,\lambda_{e}/L_{V}\) being distinct from the analogous relationship when \({\rm Ma}\,\lambda_{e}/L_{V}\lesssim\beta_{e}^{-1}\mu_{e}^{-1/2}\).
Figure 9: _CE-distribution-function stability map for CES microinstabilities driven by negative pressure anisotropy_. Same as figure 8, but for negative values of the small parameters \(\epsilon_{e}\) and \(\epsilon_{i}\). **a)** Stable (blue) and unstable (red) regions of \(\left(d_{e}/L_{V},{\rm Ma}\,\lambda_{e}/L_{V}\right)\) phase space. The theoretically anticipated collisional cutoffs [right - see (4.5)] for the CES firehose and oblique transverse instabilities, respectively, and the \(\beta\)-stabilisation thresholds (horizontal dashed lines) for the CES firehose, CES electron-scale-transition (EST) and whisper instabilities are also shown. **b)** Maximum normalised microinstability growth rate (red) versus \({\rm Ma}\,\lambda_{e}/L_{V}\) for a fixed electron inertial scale \(d_{e}/L_{V}=10^{-15}\), along with the analytically predicted maximum growth rate for the firehose instability (purple) [see (4.66)], for the EST instability (green) in the limit \(\mu_{e}^{-1/2}\beta_{e}^{-5/7}\gg{\rm Ma}\,\lambda_{e}/L_{V}\gg\mu_{e}^{-1/2}\beta_{e}^{-1}\) [see (4.98)], for the whisper instability (yellow) in the limit \(\mu_{e}^{-1/2}\beta_{e}^{-1/3}\gg{\rm Ma}\,\lambda_{e}/L_{V}\gg\mu_{e}^{-1/2}\beta_{e}^{-5/7}\) [see (4.110)], and for the oblique transverse instability (blue) in the limit \({\rm Ma}\,\lambda_{e}/L_{V}\gg\mu_{e}^{-1/2}\beta_{e}^{-1}\) [see (4.101)]. **c)** Same as b), but for the parallel wavenumber of the fastest-growing microinstability. The analytical predictions of this quantity for the firehose instability (purple) [see (4.67)], for the EST instability (green) [see (4.99_b_)], and for the whisper instability (yellow) [see (4.111_b_)], respectively, are also shown. **d)** Same as b), but for the perpendicular wavenumber of the fastest-growing microinstability. The analytical predictions of this quantity for the firehose instability (purple) [see (4.67)], for the EST instability (green) [see (4.99_a_)], and for the whisper instability (yellow) [see (4.111_a_)], are also shown.

This behaviour is the result of a feature of the unstable region that is present for CES but not CET microinstabilities: different instabilities being dominant in different regions of the \((d_{e}/L_{V},{\rm Ma}\,\lambda_{e}/L_{V})\) parameter space. As we will see, this arises because CES microinstabilities on ion scales have less stringent \(\beta\)-stabilisation thresholds than those on electron scales. Although their regions of stability are qualitatively similar, the types of microinstabilities that arise when \(\epsilon_{i}>0\) or \(\epsilon_{i}<0\) are quite different, so we now discuss each case in turn.

#### 4.2.1 Positive pressure anisotropy

For \(\epsilon_{i}>0\) and \(0.5\mu_{e}^{-1/2}\beta_{e}^{-1}\gtrsim{\rm Ma}\,\lambda_{e}/L_{V}\gg\beta_{e}^{-1}\), the fastest-growing CES microinstability is the _mirror instability_: that is, a non-propagating, compressible slow mode on ion scales that is destabilised by positive ion pressure anisotropy. For \({\rm Ma}\,\lambda_{e}\beta_{e}/L_{V}\gtrsim 0.5\mu_{e}^{-1/2}\), a faster-growing CES microinstability emerges on electron Larmor scales, driven by positive electron pressure anisotropy: the _whistler (electron-cyclotron) instability_.
For fixed \(\beta_{i}\), the CES mirror instability can operate at smaller values of \({\rm Ma}\,\lambda_{e}/L_{V}\) than the CES whistler instability, because the mirror-instability threshold \(\Delta_{i}\beta_{i}=3{\rm Ma}\,\lambda_{e}\beta_{i}/2L_{V}\geq 1\) (see section 4.3.1) is a less stringent condition on \({\rm Ma}\,\lambda_{e}/L_{V}\) for fixed \(\beta_{e}\) than the threshold \(\Delta_{e}\beta_{e}=3\mu_{e}^{1/2}{\rm Ma}\,\lambda_{e}\beta_{e}/2L_{V}\gtrsim 0.5\) of the CES whistler instability (see section 4.3.2). On the other hand, once \({\rm Ma}\,\lambda_{e}\beta_{e}/L_{V}\gtrsim 0.5\mu_{e}^{-1/2}\), the maximum growth rate of the CES mirror instability \(\gamma_{\rm mirr}\sim\Delta_{i}\Omega_{i}\) is much smaller than that of the CES whistler instability: \(\gamma_{\rm whistler,S}\sim\Delta_{e}\Omega_{e}\sim\mu_{e}^{-1/2}\Delta_{i}\Omega_{i}\gg\Delta_{i}\Omega_{i}\). For \({\rm Ma}\,\lambda_{e}\beta_{e}/L_{V}\gg\mu_{e}^{-1/2}\), in addition to unstable whistler modes, modes on sub-electron-Larmor scales are also destabilised: this is the _parallel transverse instability_, a microinstability that is essentially unmagnetised (\(k\rho_{i}\gg 1\)) in character. When it can operate, the CES parallel transverse instability has a much larger growth rate than the unstable electron-Larmor-scale whistler waves, \(\gamma_{\rm trans}\sim\Delta_{e}\left(\Delta_{e}\beta_{e}\right)^{1/2}\Omega_{e}\gg\gamma_{\rm whistler,S}\sim\Delta_{e}\Omega_{e}\), so if \({\rm Ma}\,\lambda_{e}\beta_{e}/L_{V}\gg\mu_{e}^{-1/2}\), the transverse instability dominates. Numerical evidence for the dominance of the CES mirror instability when \(\mu_{e}^{-1/2}\gg{\rm Ma}\,\lambda_{e}\beta_{e}/L_{V}\gg 1\), and then the CES parallel transverse instability when \({\rm Ma}\,\lambda_{e}\beta_{e}/L_{V}\gg\mu_{e}^{-1/2}\), can be produced by isolating the maximum growth rate, the parallel wavenumber and the wavevector angle associated with peak growth for the unstable regions of the \((d_{e}/L_{V},{\rm Ma}\,\lambda_{e}/L_{V})\) parameter space. Figure 8b shows that, for fixed \(d_{e}/L_{V}\) and a range of \({\rm Ma}\,\lambda_{e}/L_{V}\), the peak microinstability growth rate is a reasonable match for that of the mirror instability [viz., (4.13)] for \(0.5\mu_{e}^{-1/2}\beta_{e}^{-1}\gtrsim{\rm Ma}\,\lambda_{e}/L_{V}\gg\beta_{e}^{-1}\), and a good match for the parallel transverse instability [viz., (4.31)] for \({\rm Ma}\,\lambda_{e}/L_{V}\gtrsim\mu_{e}^{-1/2}\beta_{e}^{-1}\). Figure 8c demonstrates that, for \(\mu_{e}^{-1/2}\beta_{e}^{-1}\gtrsim{\rm Ma}\,\lambda_{e}/L_{V}\gg\beta_{e}^{-1}\), the (non-dimensionalised) parallel wavenumber \((k_{\parallel}\rho_{e})_{\rm peak}\) of peak growth satisfies \((k_{\parallel}\rho_{e})_{\rm peak}\sim\mu_{e}^{1/2}\), in agreement with the expected parallel wavenumber of the fastest-growing mirror modes [see (4.14)]. At \({\rm Ma}\,\lambda_{e}/L_{V}\sim\mu_{e}^{-1/2}\beta_{e}^{-1}\), there is a dramatic shift in \((k_{\parallel}\rho_{e})_{\rm peak}\) to a value \((k_{\parallel}\rho_{e})_{\rm peak}\gtrsim 1\) that agrees with the expected parallel wavenumber of the parallel transverse instability [see (4.33)].
As for the peak-growth wavevector angle (figure 8d), for \(\beta_{e}^{-1}\lesssim{\rm Ma}\,\lambda_{e}/L_{V}\lesssim\mu_{e}^{-1/2}\beta_{e}^{-1}\), the dominant instability is oblique (as would be expected for the mirror instability), while for \({\rm Ma}\,\lambda_{e}/L_{V}\gtrsim 0.5\mu_{e}^{-1/2}\beta_{e}^{-1}\), it is parallel (implying that the CES whistler/parallel transverse instability dominates). We conclude that the mirror instability is indeed dominant when \(0.5\mu_{e}^{-1/2}\beta_{e}^{-1}\gtrsim{\rm Ma}\,\lambda_{e}/L_{V}\gg\beta_{e}^{-1}\), and the parallel transverse instability when \({\rm Ma}\,\lambda_{e}/L_{V}\gg\mu_{e}^{-1/2}\beta_{e}^{-1}\).

#### 4.2.2 Negative pressure anisotropy

Now considering the case when \(\epsilon_{i}<0\), i.e., the case of negative pressure anisotropy, the only CES microinstability that operates when \(\mu_{e}^{-1/2}\beta_{e}^{-1}\gtrsim\mathrm{Ma}\,\lambda_{e}/L_{V}\gg\beta_{e}^{-1}\) is the _firehose instability_: the destabilisation of Alfven waves by ion pressure anisotropies \(\Delta_{i}\lesssim-1/\beta_{i}\). If \(\mathrm{Ma}\,\lambda_{e}/L_{V}\gtrsim\mu_{e}^{-1/2}\beta_{e}^{-1}\), several electron-scale CES microinstabilities arise, all of which tend to have larger growth rates than the firehose instability. The first of these to develop (at \(\mathrm{Ma}\,\lambda_{e}/L_{V}\sim\mu_{e}^{-1/2}\beta_{e}^{-1}\)) is the _oblique electron firehose instability_: the destabilisation of oblique kinetic-Alfven waves by negative electron pressure anisotropy. For \(\mu_{e}^{-1/2}\beta_{e}^{-1}\lesssim\mathrm{Ma}\,\lambda_{e}/L_{V}\lesssim\mu_{e}^{-1/2}\beta_{e}^{-5/7}\), the _electron-scale-transition (EST) instability_ begins to operate; this is a non-propagating quasi-perpendicular mode on electron Larmor scales (\(k_{\perp}\rho_{e}\sim 1\gg k_{\parallel}\rho_{e}\)), which, while damped in a Maxwellian plasma, is unstable for sufficiently negative electron pressure anisotropies, and grows more rapidly than the oblique electron firehose instability. For \(\mu_{e}^{-1/2}\beta_{e}^{-5/7}\lesssim\mathrm{Ma}\,\lambda_{e}/L_{V}\lesssim\mu_{e}^{-1/2}\beta_{e}^{-1/3}\), the EST instability is surpassed by the _whisper instability_: the instability of a newly discovered propagating wave in a Maxwellian plasma (a _whisper wave_) whose perpendicular wavelength is on sub-electron-Larmor scales (\(k_{\perp}\rho_{e}\gg 1\)), but whose parallel wavelength is above the electron-Larmor scale (\(k_{\parallel}\rho_{e}<1\)). Finally, when \(\mathrm{Ma}\,\lambda_{e}/L_{V}\gtrsim\mu_{e}^{-1/2}\beta_{e}^{-1/3}\), the _oblique transverse instability_ comes to predominate; unlike either the oblique electron firehose, the EST, or whisper instabilities, it is unmagnetised in nature (like its parallel relative). Of these four instabilities, the oblique electron firehose and transverse instabilities have been identified previously (see references in sections 4.4.7 and 4.4.9, respectively), but not the EST or whisper instabilities.
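The hierarchy of dominant instabilities just described can be condensed into a simple decision rule. The sketch below (a schematic classifier only; the thresholds are the asymptotic boundaries quoted above with order-unity factors dropped, and the function name and parameter values are ours) returns the expected dominant CES microinstability for negative pressure anisotropy, given \(\mathrm{Ma}\,\lambda_{e}/L_{V}\) and \(\beta_{e}\); we then turn to the detailed numerical evidence of figure 9.

```python
def dominant_ces_instability(ma_lambda_e_over_LV, beta_e, mu_e=1 / 1836):
    """Expected dominant CES microinstability for epsilon_i < 0 (schematic).

    Thresholds follow section 4.2.2; all order-unity prefactors dropped."""
    x = ma_lambda_e_over_LV
    if x < beta_e**-1:
        return "stable (beta-stabilised)"
    if x < mu_e**-0.5 * beta_e**-1:
        return "firehose (ion scales)"
    if x < mu_e**-0.5 * beta_e**(-5 / 7):
        return "electron-scale-transition (EST)"
    if x < mu_e**-0.5 * beta_e**(-1 / 3):
        return "whisper"
    return "oblique transverse"

beta_e = 1e4
for x in (1e-5, 1e-3, 1e-2, 1e-1, 0.9):
    print(f"Ma*lambda_e/L_V = {x:7.0e}: {dominant_ces_instability(x, beta_e)}")
# At beta_e = 1e4 the oblique transverse branch is never reached for
# Ma*lambda_e/L_V < 1, consistent with the discussion of figure 9b below.
```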
We support these claims (in an analogous manner to the \(\epsilon_{i}>0\) case) by calculating the growth rate of the dominant microinstabilities for given points in the \((d_{e}/L_{V},\mathrm{Ma}\,\lambda_{e}/L_{V})\) parameter space. Figure 9b shows the maximum growth rate for a fixed value of \(d_{e}/L_{V}\). For \(\mu_{e}^{-1/2}\beta_{e}^{-1}\gtrsim\mathrm{Ma}\,\lambda_{e}/L_{V}\gg\beta_{e}^{-1}\), the peak growth rate follows the analytical prediction for the ion firehose instability, \(\gamma_{\mathrm{fire}}\sim|\Delta_{i}|^{1/2}\Omega_{i}/\sqrt{\log 1/|\Delta_{i}|}\), when \(\Delta_{i}\ll-2/\beta_{i}\) [see (4.66)]. For \(\mathrm{Ma}\,\lambda_{e}/L_{V}\gtrsim\mu_{e}^{-1/2}\beta_{e}^{-1}\), the peak growth rate becomes much greater than \(\gamma_{\mathrm{fire}}\); for \(\beta_{e}^{-5/7}\gtrsim\mu_{e}^{1/2}\mathrm{Ma}\,\lambda_{e}/L_{V}\gg\beta_{e}^{-1}\), it instead matches that of the EST instability, \(\gamma_{\mathrm{EST}}\sim|\Delta_{e}|\left(|\Delta_{e}|\beta_{e}\right)^{3/2}\Omega_{e}/\sqrt{\log|\Delta_{e}|\beta_{e}}\) [see (4.98)], where we remind the reader that \(|\Delta_{e}|=3\mu_{e}^{1/2}\mathrm{Ma}\,\lambda_{e}/2L_{V}\). For \(\mu_{e}^{1/2}\mathrm{Ma}\,\lambda_{e}/L_{V}\gg\beta_{e}^{-5/7}\), the observed growth rate agrees with an analytical prediction for the whisper instability, \(\gamma_{\mathrm{whisp}}\sim|\Delta_{e}|^{1/2}\left(|\Delta_{e}|\beta_{e}\right)^{1/4}\Omega_{e}/\sqrt{\log|\Delta_{e}|\beta_{e}}\) [see (4.110)]. Finally, because of the value of \(\beta_{e}\) chosen for this numerical example, the condition \(\mathrm{Ma}\,\lambda_{e}/L_{V}\gtrsim\mu_{e}^{-1/2}\beta_{e}^{-1/3}\) under which the oblique transverse instability dominates is never met for \(\mathrm{Ma}\,\lambda_{e}/L_{V}\ll 1\), and thus the numerically measured growth rate of the dominant CES microinstability is larger than the transverse instability's peak growth rate \(\gamma_{\mathrm{trans}}\sim|\Delta_{e}|\left(|\Delta_{e}|\beta_{e}\right)^{1/2}\Omega_{e}\) [see (4.101)] for the entire range of \(\mathrm{Ma}\,\lambda_{e}/L_{V}\) that we show in figure 9b (blue line). A further confirmation that the most important microinstabilities are those that we have explicitly identified is obtained by calculating the parallel and perpendicular wavenumbers associated with the dominant microinstability. Figures 9c and 9d show that, for \(\beta_{e}^{-1}\ll\mathrm{Ma}\,\lambda_{e}/L_{V}\ll\mu_{e}^{-1/2}\beta_{e}^{-1}\), \((k_{\parallel}\rho_{e})_{\mathrm{peak}}\sim(k_{\perp}\rho_{e})_{\mathrm{peak}}\sim\mu_{e}^{1/2}\). These values of \((k_{\parallel}\rho_{e})_{\mathrm{peak}}\) are consistent with the properties of the fastest-growing unstable firehose modes (see sections 4.4.1 and 4.4.4), whose parallel wavenumber (approximately) satisfies \((k_{\parallel}\rho_{i})_{\rm peak}\sim 1/\sqrt{\log 1/|\Delta_{i}|}\) when \(\Delta_{i}\ll-2/\beta_{i}\) [see (4.67)], and whose wavevector angle is \(\theta_{\rm peak}\approx 39^{\rm o}\). At \({\rm Ma}\,\lambda_{e}/L_{V}\sim\mu_{e}^{-1/2}\beta_{e}^{-1}\), the magnitudes of the parallel and perpendicular wavenumbers change abruptly, to \((k_{\parallel}\rho_{e})_{\rm peak}\sim(k_{\perp}\rho_{e})_{\rm peak}\sim 1\); this is in line with expectations from the onset of the oblique electron firehose instability when \(|\Delta_{e}|\beta_{e}\sim 1\). For \({\rm Ma}\,\lambda_{e}/L_{V}\gg\mu_{e}^{-1/2}\beta_{e}^{-1}\) (\(|\Delta_{e}|\beta_{e}\gg 1\)), the parallel scale of the fastest-growing mode remains above electron Larmor scales [\((k_{\parallel}\rho_{e})_{\rm peak}<1\)], while \((k_{\perp}\rho_{e})_{\rm peak}\) increases monotonically above unity.
Both findings match theoretical expectations concerning the evolution of the parallel and perpendicular wavenumbers of the EST and whisper instabilities as functions of increasing \(|\Delta_{e}|\beta_{e}\), and analytic formulae for these quantities are in reasonable agreement with the numerical results (see sections 4.4.8 and 4.4.10).

#### 4.2.3 Collisional stabilisation

For both \(\epsilon_{i}>0\) and \(\epsilon_{i}<0\), the shift in \((d_{e}/L_{V})_{\rm c}\) at \({\rm Ma}\,\lambda_{e}/L_{V}\sim\mu_{e}^{-1/2}\beta_{e}^{-1}\) observed in figures 8a and 9a can be explained in terms of the ion-scale and electron-scale microinstabilities having distinct collisional-stabilisation conditions of the form (2.124) (viz., \(k\lambda_{e}\sim k\lambda_{i}\lesssim 1\)), with the condition on the ion-scale instabilities being more restrictive. The wavenumbers \(k_{\rm mirr}\) and \(k_{\rm fire}\) at which maximal growth of the ion mirror and firehose instabilities occurs satisfy \(k_{\rm mirr}\rho_{i}\sim 1\) and \(k_{\rm fire}\rho_{i}\lesssim 1\), respectively, for \({\rm Ma}\,\lambda_{e}\beta_{e}/L_{V}\gg 1\), leading to the collisional-stabilisation condition
\[\frac{\lambda_{e}}{L_{V}}\lesssim\frac{\rho_{i}}{L_{V}}\sim\mu_{e}^{-1/2}\beta_{e}^{1/2}\frac{d_{e}}{L_{V}}\,. \tag{4.3}\]
For the electron-scale microinstabilities, the parallel and the oblique transverse instabilities have the largest (common) wavenumber of all such instabilities that operate when \(\epsilon_{i}>0\) and \(\epsilon_{i}<0\), respectively, and so provide the most demanding collisional-stabilisation conditions. For both transverse instabilities, the wavenumber at which peak growth occurs satisfies \(k_{\rm trans}\rho_{e}\sim(\mu_{e}^{1/2}{\rm Ma}\,\lambda_{e}\beta_{e}/L_{V})^{1/2}\) [see (4.32)], which in turn can be rearranged to give the collisional-stabilisation condition
\[\frac{\lambda_{e}}{L_{V}}\lesssim{\rm Ma}^{-1/3}\mu_{e}^{-1/6}\left(\frac{d_{e}}{L_{V}}\right)^{2/3}\,. \tag{4.4}\]
Bringing these results together, we find
\[\left(\frac{d_{e}}{L_{V}}\right)_{\rm c}=\left\{\begin{array}{ll}\mu_{e}^{1/2}\beta_{e}^{-1/2}\lambda_{e}/L_{V},&\beta_{e}^{-1}\ll{\rm Ma}\,\lambda_{e}/L_{V}<\mu_{e}^{-1/2}\beta_{e}^{-1},\\ \mu_{e}^{1/4}{\rm Ma}^{1/2}\left(\lambda_{e}/L_{V}\right)^{3/2},&{\rm Ma}\,\lambda_{e}/L_{V}\gtrsim\mu_{e}^{-1/2}\beta_{e}^{-1},\end{array}\right. \tag{4.5}\]
with \((d_{e}/L_{V})_{\rm c0}=\mu_{e}^{1/2}\beta_{e}^{-3/2}\). This matches asymptotically the numerical results shown in figures 8a and 9a. These findings confirm that, once again, the relevant collisional-stabilisation condition for microinstabilities with wavenumber \(k\) is \(k\lambda_{e}=k\lambda_{i}\ll 1\) [viz., (2.124)], as opposed to the more restrictive conditions \(\gamma\tau_{i}\gg 1\) and \(\gamma\tau_{e}\gg 1\) on the CES ion-scale and electron-scale instabilities, respectively. Similarly to the collisional-stabilisation condition on the CET whistler instability (see section 3.2), we note that the collisional-stabilisation condition on any of these microinstabilities can _never_ actually be satisfied in a strongly magnetised plasma, because \(k\lambda_{i}\gtrsim\lambda_{i}/\rho_{i}\gg 1\) for the ion-scale instabilities, and \(k\lambda_{e}\gtrsim\lambda_{e}/\rho_{e}\gg 1\) for the electron-scale instabilities.
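To illustrate the piecewise threshold (4.5), the following sketch (schematic only, with order-unity factors dropped, \(\mathrm{Ma}=1\), and illustrative parameter values of our own choosing) evaluates both branches across the transition at \({\rm Ma}\,\lambda_{e}/L_{V}\sim\mu_{e}^{-1/2}\beta_{e}^{-1}\) and exhibits the sudden jump, by a factor of \(\sim\mu_{e}^{-1/2}\), noted in section 4.2:

```python
mu_e = 1 / 1836
beta_e = 1e4
Ma = 1.0

def de_over_LV_crit(lam_e_over_LV):
    """Piecewise collisional-stabilisation threshold (4.5)."""
    x = Ma * lam_e_over_LV  # Ma * lambda_e / L_V
    if x < mu_e**-0.5 * beta_e**-1:
        # ion-scale (mirror/firehose) branch
        return mu_e**0.5 * beta_e**-0.5 * lam_e_over_LV
    # electron-scale (transverse) branch
    return mu_e**0.25 * Ma**0.5 * lam_e_over_LV**1.5

x_c = mu_e**-0.5 * beta_e**-1  # transition value of Ma*lambda_e/L_V
for x in (0.99 * x_c, 1.01 * x_c):
    print(f"Ma*lambda_e/L_V = {x:.4e}: (d_e/L_V)_c ~ {de_over_LV_crit(x):.3e}")
# The two branches differ by ~ mu_e^(-1/2) at the transition, so the
# threshold jumps to larger d_e/L_V once the electron-scale transverse
# instabilities set the most demanding condition.
print(f"branch ratio at transition ~ mu_e^(-1/2) = {mu_e**-0.5:.1f}")
```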
#### 4.2.4 Outline of the rest of this section

Further discussion about the properties and growth rates of CES microinstabilities with \(\epsilon_{s}>0\) (viz., those driven by positive pressure anisotropy) can be found in section 4.3, with the mirror, whistler and transverse instabilities discussed in sections 4.3.1, 4.3.2 and 4.3.3, respectively. In addition to these, there is another instability (the _electron mirror instability_) that can be driven by positive pressure anisotropy of CE distribution functions that we note in passing: it consists in KAWs driven unstable by the CE electron-shear term, and to some extent by the ion-shear term (section 4.3.4). The electron mirror instability does not appear to be the fastest-growing CES microinstability anywhere in the \((d_{e}/L_{V},{\rm Ma}\,\lambda_{e}/L_{V})\) parameter space; being subdominant to two other electron-scale instabilities (the whistler and transverse instabilities), it is comparatively less important. CES microinstabilities with \(\epsilon_{s}<0\) (viz., those driven by negative pressure anisotropy) are explored in section 4.4. The firehose instability is overviewed in section 4.4.1, with four subclasses of the instability (parallel, oblique, critical-line, and sub-ion-Larmor-scale) then considered in sections 4.4.2, 4.4.3, 4.4.4, and 4.4.5. The oblique electron firehose instability is discussed in section 4.4.7, the EST instability in section 4.4.8, the oblique transverse instability in section 4.4.9, and the whisper instability in section 4.4.10. We identify two additional CES microinstabilities which are never the fastest-growing microinstability in any unstable region: the parallel electron firehose instability (section 4.4.6), which (in spite of its name) has a different underlying physical mechanism from the oblique electron firehose, and the ordinary-mode instability (section 4.4.11), which only operates at very high \(\beta_{e}\) (\(\beta_{e}\gtrsim|\Delta_{e}|^{-3}\)), and is only characteristically distinct from the oblique transverse instability in a regime in which it is slower growing. Readers who do not wish to dwell on specific CES microinstabilities should proceed directly to section 5.

### CES microinstability classification: positive pressure anisotropy (\(\epsilon_{i}>0\))

#### 4.3.1 Mirror instability

The CES mirror instability consists in the destabilisation of compressive slow modes by a sufficiently large positive ion pressure anisotropy associated with the ion-shear term of the ion CE distribution function. In a high-\(\beta\) plasma with Maxwellian ion and electron distribution functions, the slow mode - which is one of the two plasma modes which exist at oblique wavevector angles \(\theta\gtrsim\beta_{i}^{-1/4}\) (the other being the shear Alfven wave), and consists of a perturbation to the magnetic field's strength - is non-propagating, being subject to strong Barnes' (equivalently, transit-time) damping (Barnes, 1966). This damping is the result of Landau-resonant interactions between the slow mode and co-moving ions with \(v_{\parallel}=\omega/k_{\parallel}\); since, for a distribution function that decreases monotonically with \(v_{\parallel}>0\), there are more ions with \(v_{\parallel}<\omega/k_{\parallel}\) than with \(v_{\parallel}>\omega/k_{\parallel}\), there is a net transfer of free energy from the slow modes to the ions (as a particle acceleration process, this is sometimes called betatron acceleration).
However, in a plasma with \(\Delta_{i}>0\), there is an increase in the relative number of ions with large pitch angles in the troughs of the slow mode's magnetic-field strength perturbation, giving rise to excess perpendicular pressure. When \(\Delta_{i}>1/\beta_{i}\), this excess pressure overbalances the magnetic pressure, leading to the mirror instability. In CE plasma with \(0<\Delta_{i}\beta_{i}-1\ll 1\), only quasi-perpendicular long-wavelength mirror modes (\(k_{\parallel}\rho_{i}\ll k_{\perp}\rho_{i}\ll 1\)) are destabilised; for larger values of \(\Delta_{i}\), a broad range of slow modes (including ion-Larmor-scale ones) become unstable. Chronologically, the earliest discussions of the mirror instability in pressure-anisotropic plasmas are due to Parker (1958) and Hasegawa (1969). Southwood & Kivelson (1993) provide a detailed and lucid discussion of the linear physics of the mirror instability (see also Kunz _et al._, 2015); various analytical (Pokhotelov _et al._, 2008; Rincon _et al._, 2015) and numerical (Hellinger _et al._, 2009; Kunz _et al._, 2014; Riquelme _et al._, 2015; Melville _et al._, 2016) studies investigating its nonlinear evolution have also been carried out. The CES mirror instability can be characterised analytically - and simple expressions derived for the maximum growth rate and the wavevector at which that growth is attained - in the limit of marginal instability. First, we define the threshold parameter \(\Gamma_{i}\equiv\beta_{i}\Delta-1\), where \(\Delta\equiv\Delta_{i}+\Delta_{e}=(1+\mu_{e}^{1/2})\Delta_{i}\), and assume that \(\Gamma_{i}\ll 1\). It can then be shown (see appendix K.3.2) that under the orderings \[k_{\parallel}\rho_{i}\sim k_{\perp}^{2}\rho_{i}^{2}\sim\Gamma_{i}\ll 1\,,\quad\frac{\gamma}{\Omega_{i}}\sim\frac{\Gamma_{i}^{2}}{\beta_{i}}\ll 1\,, \tag{4.6}\] the mirror modes have a growth rate given by \[\frac{\gamma}{\Omega_{i}}=\frac{k_{\parallel}\rho_{i}}{\sqrt{\pi}\beta_{i}}\left(\Gamma_{i}-\frac{3}{2}\frac{k_{\parallel}^{2}}{k_{\perp}^{2}}-\frac{3}{4}k_{\perp}^{2}\rho_{i}^{2}\right)\,. \tag{4.7}\] This is the same result as the growth rate of the mirror instability in a bi-Maxwellian plasma, with (the anticipated) threshold \(\Gamma_{i}>0\) (Hellinger, 2007). The peak growth rate \(\gamma_{\rm max}\) is then given by \[\gamma_{\rm max}=\frac{\Gamma_{i}^{2}}{6\sqrt{2\pi}\beta_{i}}\Omega_{i}\,, \tag{4.8}\] achieved at the wavenumber \[(k_{\parallel}\rho_{i})_{\rm peak}=\frac{\Gamma_{i}}{3\sqrt{2}}\,,\quad(k_{\perp}\rho_{i})_{\rm peak}=\frac{\Gamma_{i}^{1/2}}{\sqrt{3}}\,. \tag{4.9}\] This recovers the results of Hellinger (2007). Figure 10 illustrates the accuracy of the above predictions for \(\gamma\) (and therefore \(\gamma_{\rm max}\)), \((k_{\parallel}\rho_{i})_{\rm peak}\) and \((k_{\perp}\rho_{i})_{\rm peak}\) by comparing them with the equivalent values obtained numerically using the general method outlined in appendix K for a particular value of \(\Gamma_{i}\ll 1\). The wavenumber dependence of the numerically determined growth rate (see figure 10a) corroborates that, close to marginality, the unstable mirror modes are quasi-perpendicular; more quantitatively, the values of \(k_{\parallel}\rho_{i}\) and \(k_{\perp}\rho_{i}\) at which peak growth is obtained numerically match (4.9).
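The maximisation leading from (4.7) to (4.8) and (4.9) is elementary but fiddly, so a brute-force check is reassuring; the SciPy sketch below (ours) confirms that the quoted peak values are indeed the maximisers for the parameters of figure 10.

```python
import numpy as np
from scipy.optimize import minimize

Gamma_i, beta_i = 0.04, 1e4      # marginal case, as in figure 10

def growth(p):                   # gamma/Omega_i from (4.7)
    kpar, kperp = p              # k_par*rho_i, k_perp*rho_i
    return kpar / (np.sqrt(np.pi) * beta_i) * (
        Gamma_i - 1.5 * kpar**2 / kperp**2 - 0.75 * kperp**2)

res = minimize(lambda p: -growth(p), x0=[0.01, 0.1], method='Nelder-Mead')
print(res.x)                                                     # numerical maximiser
print(Gamma_i / (3 * np.sqrt(2)), np.sqrt(Gamma_i / 3))          # cf. (4.9)
print(-res.fun, Gamma_i**2 / (6 * np.sqrt(2 * np.pi)) / beta_i)  # cf. (4.8)
```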
Furthermore, the growth rate (4.7) agrees well with the numerical result when plotted as a function of \(k_{\parallel}\rho_{i}\) with fixed \(k_{\perp}\rho_{i}\), and also as a function of \(k_{\perp}\rho_{i}\) with fixed \(k_{\parallel}\rho_{i}\) (figure 10b). In contrast, for finite \(\Gamma_{i}\gtrsim 1\), simple expressions for \(\gamma_{\rm max}\), \((k_{\parallel}\rho_{i})_{\rm peak}\), and \((k_{\perp}\rho_{i})_{\rm peak}\) are challenging to derive analytically. Our numerical calculations indicate that, when \(\Gamma_{i}\sim 1\), a broad range of (purely growing) oblique modes becomes unstable, with maximum growth rate \(\gamma_{\rm max}\sim\Omega_{i}/\beta_{i}\sim\Delta\Omega_{i}\) attained when \(k_{\parallel}\rho_{i}\lesssim k_{\perp}\rho_{i}\sim 1\) (figure 11a). Therefore, asymptotic expansions that treat \(k_{\perp}\rho_{i}\) and \(k_{\parallel}\rho_{i}\) as small or large cannot be used to derive simplified expressions for the growth rate of the fastest-growing mirror modes. While the expressions (4.9) for the wavenumber of peak growth derived in the case of near-marginality remain qualitatively correct, they are no longer quantitatively accurate; the same conclusion applies to the expression (4.7) for the growth rate when \(k_{\parallel}\rho_{i}\sim k_{\perp}\rho_{i}\sim 1\) (figure 11b). That being said, an expression similar to (4.7) can be derived (see appendix K.3.2) for long-wavelength unstable mirror modes that satisfy the ordering \[k_{\parallel}\rho_{i}\sim k_{\perp}\rho_{i}\ll 1\,,\quad\frac{\gamma}{\Omega_{i}}\sim\frac{k_{\parallel}\rho_{i}}{\beta_{i}}\sim\Delta k_{\parallel}\rho_{i}\ll 1\,. \tag{4.10}\]

Figure 11: _Mirror instability at \(\Gamma_{i}=\Delta\beta_{i}-1\sim 1\)_. **a)** Growth rates of unstable mirror modes resulting from the CE ion-shear term in the CE distribution function (4.1) for \(\Gamma_{i}=1\) (\(\Delta\beta_{i}=2\)). The growth rates of all modes are calculated in the same way as in figure 10. The dashed white lines indicate the analytic prediction (4.9) for the parallel/perpendicular wavenumber at which peak growth is achieved, while the dotted line indicates the analytical prediction (4.12) for the perpendicular wavenumber above which long-wavelength (\(k_{\parallel}\rho_{i}\lesssim k_{\perp}\rho_{i}\ll 1\)) mirror modes become unstable. **b)** The mirror mode’s growth rate (solid line) as a function of \(k_{\parallel}\rho_{i}\) with \(k_{\perp}\rho_{i}=\Gamma_{i}^{1/2}/\sqrt{3}\) (top), and as a function of \(k_{\perp}\rho_{i}\) with \(k_{\parallel}\rho_{i}=\Gamma_{i}/3\sqrt{2}\) (bottom). The dashed lines show the analytical prediction (4.7) for this quantity.

Figure 10: _Mirror instability at \(\Gamma_{i}=\Delta\beta_{i}-1\ll 1\)_. **a)** Growth rates of unstable mirror modes resulting from the CE ion-shear term in the CE distribution function (4.1) for \(\Gamma_{i}=0.04\ll 1\) (\(\Delta\beta_{i}=1.04\)). The growth rates of all modes are calculated using the approach outlined in appendix K.3. The growth rates are calculated on a \(400^{2}\) grid, with logarithmic spacing in both perpendicular and parallel directions between the minimum and maximum wavenumber magnitudes. The resulting growth rates, when normalised as \(\gamma\beta_{i}/\Omega_{i}\), are functions of the dimensionless quantity \(\Delta\beta_{i}\). The dashed white lines indicate the analytical prediction (4.9) for the wavenumber at which peak growth is achieved.
**b)** The mirror mode’s growth rate (solid line) as a function of \(k_{\parallel}\rho_{i}\) with \(k_{\perp}\rho_{i}=\Gamma_{i}^{1/2}/\sqrt{3}\) (top), and as a function of \(k_{\perp}\rho_{i}\) with \(k_{\parallel}\rho_{i}=\Gamma_{i}/3\sqrt{2}\) (bottom). The dashed lines show the analytical prediction (4.7) for these quantities. This expression is \[\frac{\gamma}{\Omega_{i}}=\frac{k_{\parallel}\rho_{i}}{\sqrt{\pi}\beta_{i}}\left(\Gamma_{i}-\frac{\Gamma_{i}+3}{2}\frac{k_{\parallel}^{2}}{k_{\perp}^{2}}\right)\,. \tag{4.11}\] It implies that all such modes with \[k_{\perp}>\left(\frac{3+\Gamma_{i}}{2\Gamma_{i}}\right)^{1/2}k_{\parallel} \tag{4.12}\] will be unstable, a prediction that is consistent with the unstable region observed in figure 11a. When \(\Gamma_{i}\gg 1\), but \(\Gamma_{i}<(m_{i}/m_{e})^{1/2}\), the region of \((k_{\parallel},k_{\perp})\) space in which mirror modes are unstable is qualitatively similar to the \(\Gamma_{i}\sim 1\) case, albeit more extended (figure 12a). We find that in this limit, the maximum growth rate \(\gamma_{\rm max}\) becomes directly proportional to \(\Delta\) (see figure 12b), in contrast to the marginal case (4.8): \[\gamma_{\rm max}\approx 0.2\Delta\Omega_{i}\,. \tag{4.13}\] This growth is attained at parallel and perpendicular wavenumbers \[(k_{\perp}\rho_{i})_{\rm peak}\approx 1.2\,,\quad(k_{\parallel}\rho_{i})_{\rm peak}\approx 0.7\,, \tag{4.14}\] which depend only weakly on \(\Delta\beta_{i}\). Some understanding of these results can be derived by considering the dispersion relation of mirror modes on sub-ion Larmor scales. Adopting the ordering \[k_{\parallel}\rho_{i}\sim k_{\perp}\rho_{i}\sim(\Delta_{i}\beta_{i})^{1/2}\gg 1,\quad\frac{\gamma}{\Omega_{i}}\sim\Delta_{i}\,, \tag{4.15}\]

Figure 12: _Mirror instability at \(\Gamma_{i}=\Delta\beta_{i}-1\gg 1\)_. **a)** Growth rates of unstable mirror modes resulting from the CE ion-shear term in the CE distribution function (4.1) for \(\Gamma_{i}=29\gg 1\) (\(\Delta\beta_{i}=30\)). The growth rates of all modes are calculated in the same way as in figure 10. The dot-dashed white lines indicate the parallel/perpendicular wavenumbers (4.14) at which peak growth is achieved, while the dotted line indicates the analytical prediction (4.12) for the perpendicular wavenumber above which long-wavelength (\(k_{\parallel}\rho_{i}\lesssim k_{\perp}\rho_{i}\ll 1\)) mirror modes become unstable. **b)** Normalised maximum positive growth rate \(\gamma_{\rm max}/\Delta\Omega_{i}\) (solid red line) of the unstable mirror mode as a function of \(\Delta\beta_{i}\) along with the parallel (solid blue line) and perpendicular (solid yellow line) wavenumbers, \((k_{\parallel}\rho_{i})_{\rm peak}\) and \((k_{\perp}\rho_{i})_{\rm peak}\) respectively, at which that growth is attained. The analytical prediction (4.8) of \(\gamma_{\rm max}\) for marginally unstable modes, as well as the analogous predictions (4.9) for \((k_{\parallel}\rho_{i})_{\rm peak}\) and \((k_{\perp}\rho_{i})_{\rm peak}\), are shown as dashed lines.

while assuming that \(\Delta_{i}\beta_{i}\ll\mu_{e}^{-1/2}\), one finds (see appendix K.3.2) that \[\frac{\gamma}{\Omega_{i}}\approx\frac{k_{\parallel}}{k}\sqrt{\left(\frac{k^{2}\rho_{i}^{2}}{\beta_{i}}-\Delta_{i}\frac{k_{\parallel}^{2}-k_{\perp}^{2}}{k^{2}}\right)\left(\Delta_{i}\frac{k_{\parallel}^{2}}{k^{2}}-\frac{k^{2}\rho_{i}^{2}}{\beta_{i}}\right)}\,.
\tag{4.16}\] This can be re-written in terms of the wavevector angle \(\theta=\tan^{-1}\left(k_{\perp}/k_{\parallel}\right)\) as \[\frac{\gamma}{\Omega_{i}}\approx\cos\theta\sqrt{\left[\frac{k^{2}\rho_{i}^{2}}{\beta_{i}}-\Delta_{i}\left(\cos^{2}\theta-\sin^{2}\theta\right)\right]\left(\Delta_{i}\cos^{2}\theta-\frac{k^{2}\rho_{i}^{2}}{\beta_{i}}\right)}\,. \tag{4.17}\] Analysing this expression leads to three conclusions. First, for \(\theta>45^{\circ}\), there is an instability at all wavenumbers satisfying \(k\rho_{i}<(\Delta_{i}\beta_{i})^{1/2}\cos\theta\), explaining the expansion of the unstable region of \((k_{\parallel},k_{\perp})\)-space with increasing \(\Delta_{i}\beta_{i}\). For \(\theta\leq 45^{\circ}\), growth only occurs over a more limited range of wavenumbers \(\sqrt{\cos^{2}\theta-\sin^{2}\theta}<k\rho_{i}/(\Delta_{i}\beta_{i})^{1/2}<\cos\theta\). Secondly, growth in this limit is maximised when \(k\rho_{i}\ll(\Delta_{i}\beta_{i})^{1/2}\), with the maximal growth rate \[\gamma_{\rm max}=\frac{1}{3\sqrt{3}}\Delta_{i}\Omega_{i}\approx 0.19\Delta_{i}\Omega_{i} \tag{4.18}\] attained at \(\cos\theta=1/\sqrt{3}\) (\(\theta\approx 55^{\circ}\)). This expression for \(\gamma_{\rm max}\) is (surprisingly) close to the numerically measured peak growth rate (4.13). For \(k\rho_{i}\sim(\Delta_{i}\beta_{i})^{1/2}\), the maximum growth rate is smaller than (4.18) by an order-unity factor. Finally, when \(k\rho_{i}\gg(\Delta_{i}\beta_{i})^{1/2}\), viz., in a wavenumber regime where there are no unstable mirror modes, (4.16) becomes imaginary, implying that the modes have a real frequency given by \[\omega\approx\pm k_{\parallel}k_{\perp}\rho_{e}\frac{\Omega_{e}}{\beta_{i}}\,. \tag{4.19}\] This is the dispersion relation of kinetic Alfven waves (KAWs) in a high-\(\beta\) plasma1. In short, at \(\Delta_{i}\beta_{i}\gg 1\), KAWs are also destabilised by positive ion pressure anisotropy in addition to longer-wavelength mirror modes. We note that KAWs can also be destabilised by positive electron anisotropy, but the characteristic wavelength of such modes is preferentially comparable to electron Larmor scales (see section 4.3.4). Footnote 1: We note that (4.19) is also the same dispersion relation as that of oblique whistler waves (see, e.g., Galtier & Meyrand 2015). However, as was discussed in section 3.3.1, in a high-\(\beta\) plasma (\(\beta_{e}\gg\mu_{e}^{-1/2}\)), the small frequency (\(\omega\ll k_{\parallel}v_{\mathrm{th}i}\)) of these perturbations means that all but parallel perturbations interact significantly with the ions, and thus we believe that the modes are more accurately identified as KAWs.

#### 4.3.2 Whistler instability

The CES whistler instability arises when the free energy associated with positive electron-pressure anisotropy \(\Delta_{e}\) of the electron CE distribution function destabilises whistler waves, overwhelming both the electron cyclotron damping (which is the dominant stabilisation mechanism for whistler waves with \(k_{\parallel}\rho_{e}\sim 1\)) and the Landau damping due to the ion species (the dominant stabilisation mechanism for waves with \(k_{\parallel}\rho_{e}\ll 1\)). In the special case of static ions, electron cyclotron damping can be overcome by a positive electron-pressure anisotropy of any magnitude for whistler waves with sufficiently long wavelengths. Retaining mobile ions, the instability operates only if \(\Delta_{e}\) exceeds a threshold of order \((\Delta_{e})_{\rm c}\sim\beta_{e}^{-1}\).
When \(\Delta_{e}>(\Delta_{e})_{\rm c}\), gyroresonant interactions between electrons with \(v_{\parallel}=\pm\Omega_{e}/k_{\parallel}\) and whistler waves allow for free energy to pass from the former to the latter, and so an increasingly broad spectrum of unstable parallel and oblique modes emerges on electron Larmor scales. The analogue of this instability in a bi-Maxwellian plasma was found by Kennel & Petschek (1966), and it has since been studied numerically in moderately high-\(\beta\) plasma (\(\beta_{e}\sim 1\)-10) by several authors (e.g., Gary & Wang, 1996; Guo _et al._, 2014; Riquelme _et al._, 2016). Similarly to the CET whistler instability, the simplest characterisation of the CES whistler instability is for unstable parallel whistler modes (viz., \(k\approx k_{\parallel}\)). Assuming that these modes satisfy the orderings \[\tilde{\omega}_{e\parallel}=\frac{\omega}{k_{\parallel}v_{\mathrm{t}he}}\sim\Delta_{e}\sim\frac{1}{\beta_{e}}\,,\quad k_{\parallel}\rho_{e}\sim 1, \tag{4.20}\] it can be shown (see appendix K.3.3) that their real frequency \(\varpi\) and growth rate \(\gamma\) satisfy \[\frac{\varpi\beta_{e}}{\Omega_{e}} = \pm\Delta_{e}\beta_{e}\pm\frac{k_{\parallel}\rho_{e}\left[\Delta_{e}\beta_{e}\left(1+\mu_{e}^{1/2}\right)-k_{\parallel}^{2}\rho_{e}^{2}\right]\mathrm{Re}\;Z\big{(}1/k_{\parallel}\rho_{e}\big{)}}{\left[\mathrm{Re}\;Z\big{(}1/k_{\parallel}\rho_{e}\big{)}\right]^{2}+\pi\exp\left(-2/k_{\parallel}^{2}\rho_{e}^{2}\right)}, \tag{4.21a}\] \[\frac{\gamma\beta_{e}}{\Omega_{e}} = \frac{k_{\parallel}\rho_{e}\left[\exp\left(-1/k_{\parallel}^{2}\rho_{e}^{2}\right)+\mu_{e}^{1/2}\right]\left(\Delta_{e}\beta_{e}-k_{\parallel}^{2}\rho_{e}^{2}\right)+\mu_{e}^{1/2}\Delta_{e}\beta_{e}\mathrm{Re}\;Z\big{(}1/k_{\parallel}\rho_{e}\big{)}}{\left[\mathrm{Re}\;Z\big{(}1/k_{\parallel}\rho_{e}\big{)}\right]^{2}/\sqrt{\pi}+\sqrt{\pi}\exp\left(-2/k_{\parallel}^{2}\rho_{e}^{2}\right)}, \tag{4.21b}\] where the terms proportional to \(\mu_{e}^{1/2}\) are associated with the ion species1. In the limit \(\mu_{e}\to 0\), formally there is always instability provided \(\Delta_{e}\beta_{e}>0\); however, for a hydrogen plasma (\(\mu_{e}\approx 1/1836\)), it can be shown numerically that the numerator of (4.21b) only becomes positive (over a narrow interval of parallel wavenumbers around \(k_{\parallel}\rho_{e}\approx 0.60\)) for \(\Delta_{e}\beta_{e}>0.56\). The dispersion curves \(\varpi(k_{\parallel})\) and \(\gamma(k_{\parallel})\) of the unstable whistler waves in a hydrogen plasma for three different values of \(\Delta_{e}\beta_{e}\) that are above the necessary value for instability are shown in figure 13. When \(\Delta_{e}\beta_{e}\gtrsim 1\), the growth rate is positive for a range \(\Delta k_{\parallel}\sim\rho_{e}^{-1}\) around \(k_{\parallel}\rho_{e}\sim 1\), attaining a characteristic magnitude \(\gamma\sim\varpi\sim\Omega_{e}/\beta_{e}\). Footnote 1: Formally, these terms are \(\textit{O}(\mu_{e}^{1/2})\) under our assumed ordering, and so should be dropped. However, because of the exponential dependence of the other damping/growth terms on \(k_{\parallel}\rho_{e}\), these terms play an important role for moderate values of \(k_{\parallel}\rho_{e}\), viz. \(\mu_{e}^{1/2}\exp\left(1/k_{\parallel}^{2}\rho_{e}^{2}\right)\geq 1\) for \(k_{\parallel}\rho_{e}\leq\sqrt{2}/\sqrt{\log m_{i}/m_{e}}\approx 0.5\), so we retain them. As before, we characterise the growth rate for various values of \(\Delta_{e}\beta_{e}\) by taking subsidiary limits.
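Before taking those limits, it is worth noting that (4.21b) is easy to evaluate directly: the short script below (ours; it assumes SciPy, whose Dawson function \(F\) gives \(\mathrm{Re}\,Z(x)=-2F(x)\) for real \(x\)) reproduces growth-rate curves of the kind shown in figure 13, for a few illustrative values of \(\Delta_{e}\beta_{e}\).

```python
import numpy as np
from scipy.special import dawsn

mu_e = 1.0 / 1836.0                    # hydrogen mass ratio

def gamma_norm(kpar_rho, Dbeta):       # gamma*beta_e/Omega_e from (4.21b)
    x = 1.0 / kpar_rho
    ReZ = -2.0 * dawsn(x)              # Re Z(x) for real argument x
    num = (kpar_rho * (np.exp(-x**2) + mu_e**0.5) * (Dbeta - kpar_rho**2)
           + mu_e**0.5 * Dbeta * ReZ)
    den = ReZ**2 / np.sqrt(np.pi) + np.sqrt(np.pi) * np.exp(-2.0 * x**2)
    return num / den

k = np.linspace(0.2, 3.0, 1000)        # k_par * rho_e
for Dbeta in (1.0, 2.0, 4.0):          # illustrative values of Delta_e*beta_e
    g = gamma_norm(k, Dbeta)
    print(Dbeta, k[np.argmax(g)], g.max())   # peak near k_par*rho_e ~ 1 once unstable
```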
First, for \(\Delta_{e}\beta_{e}\ll 1\), a necessary (though not always sufficient) condition for positive growth is \(k_{\parallel}\rho_{e}<(\Delta_{e}\beta_{e})^{1/2}\ll 1\). We therefore expand (4.21) in \(k_{\parallel}\rho_{e}\sim(\Delta_{e}\beta_{e})^{1/2}\ll 1\), finding that \[\varpi \approx \frac{k_{\parallel}^{2}\rho_{e}^{2}}{\beta_{e}}\Omega_{e}\,, \tag{4.22a}\] \[\gamma \approx \frac{\sqrt{\pi}}{k_{\parallel}\rho_{e}}\left\{\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{e}^{2}}\right)\left(\Delta_{e}-\frac{k_{\parallel}^{2}\rho_{e}^{2}}{\beta_{e}}\right)-\mu_{e}^{1/2}\frac{k_{\parallel}^{2}\rho_{e}^{2}}{\beta_{e}}\right\}\Omega_{e}. \tag{4.22b}\] Similarly to what we showed in section 3.3.1 for the CET whistler instability, we have once again found unstable whistler waves. For comparison's sake, the approximate expressions (4.22) are plotted in figure 13 in addition to their exact analogues (4.21); it is clear that there is reasonable agreement for a moderately small value of \(\Delta_{e}\beta_{e}\), but that the approximations become less accurate for \(k_{\parallel}\rho_{e}\gtrsim 0.5\) and \(\Delta_{e}\beta_{e}>1\). In the limit \(\mu_{e}\to 0\), the expression (4.22b) for the growth rate is very similar to that of the whistler (electron-cyclotron) instability in a plasma with a bi-Maxwellian distribution and positive electron pressure anisotropy (Davidson 1983). In this case, whistler modes with \(k_{\parallel}\rho_{e}<(\Delta_{e}\beta_{e})^{1/2}\) are always unstable, although the growth rate of such modes is exponentially small in \(\Delta_{e}\beta_{e}\ll 1\) as compared to the frequency (4.22a), and so \(\gamma\ll\varpi\sim\Omega_{e}/\beta_{e}\). By contrast, with small but finite \(\mu_{e}=m_{e}/m_{i}\), it can be shown analytically that, for (4.22b) to be positive, \(\Delta_{e}>(\Delta_{e})_{\rm c}\), where \[(\Delta_{e})_{\rm c}=\frac{1}{\beta_{e}W_{\rm Lam}\left[\mu_{e}^{-1/2}\exp{(-1)}\right]}\approx\frac{1}{\beta_{e}}\frac{1}{\log{(\mu_{e}^{-1/2})}-1-\log{[\log{(\mu_{e}^{-1/2})}-1]}}\,. \tag{4.23}\] Here, \(W_{\rm Lam}(x)\) denotes the Lambert W function (Corless _et al._ 1996). Unstable modes first develop around \((k_{\parallel}\rho_{e})_{c}=(\Delta_{e})_{\rm c}^{1/2}/[(\Delta_{e})_{\rm c}+1/\beta_{e}]^{1/2}\). In a hydrogen plasma, this gives \((\Delta_{e})_{\rm c}\approx 0.49/\beta_{e}\) and \((k_{\parallel}\rho_{e})_{c}\approx 0.57\), which are similar to the instability threshold and wavenumber, respectively, determined numerically if \(\gamma\) is computed for arbitrary values of \(k_{\parallel}\rho_{e}\); the small discrepancy is due to the finite value of \(k_{\parallel}\rho_{e}\) at which instability first emerges. Formally, \((\Delta_{e})_{\rm c}\to 0\) as \(\mu_{e}\to 0\), but the limit converges only logarithmically in \(\mu_{e}\), suggesting that in an actual plasma, the CES whistler instability will generically have a threshold at a finite value of \(\Delta_{e}\beta_{e}\). Let us now turn to the opposite subsidiary limit \(\Delta_{e}\beta_{e}\gg 1\). We find from (4.21) that maximal growth occurs at \(k_{\parallel}\rho_{e}\sim(\Delta_{e}\beta_{e})^{1/2}\gg 1\): \[\varpi \approx\frac{1}{\pi}\left[\Delta_{e}\left(\pi-2\right)+\frac{k_{\parallel}^{2}\rho_{e}^{2}}{\beta_{e}}\right]\Omega_{e}\,, \tag{4.24a}\] \[\gamma \approx\frac{k_{\parallel}\rho_{e}}{\sqrt{\pi}}\left(\Delta_{e}-\frac{k_{\parallel}^{2}\rho_{e}^{2}}{\beta_{e}}\right)\Omega_{e}\,.
\tag{4.24b}\]

Figure 13: _Parallel CES whistler instability._ Dispersion curves of unstable whistler modes whose instability is driven by the electron-shear term in CE distribution function (4.1), for wavevectors that are co-parallel with the background magnetic field (viz., \(\mathbf{k}=k_{\parallel}\hat{\mathbf{z}}\)). The frequency (solid blue) and growth rate (solid red) of the modes are calculated using (4.21a) and (4.21b), respectively. The resulting frequencies and growth rates, when normalised as \(\gamma\beta_{e}/\Omega_{e}\), are functions of the dimensionless quantity \(\Delta_{e}\beta_{e}\); we show the dispersion curves for three different values of \(\Delta_{e}\beta_{e}\). The approximations (4.22a) and (4.22b) for the frequency (dotted blue) and growth rate (dotted red) in the limit \(k_{\parallel}\rho_{e}\ll 1\) are also plotted, as are the approximations (4.24a) and (4.24b) for the frequency (dashed blue) and growth rate (dashed red) in the limit \(k_{\parallel}\rho_{e}\gg 1\).

Alongside the \(k_{\parallel}\rho_{e}\ll 1\) approximations, the expressions (4.24) are plotted in figure 13, and agree well with the numerical results for \(\Delta_{e}\beta_{e}\gtrsim 3\) and \(k_{\parallel}\rho_{e}\gtrsim 2\). The maximum growth rate \[\gamma_{\rm max}=\frac{2}{3\sqrt{3\pi}}\Delta_{e}(\Delta_{e}\beta_{e})^{1/2}\Omega_{e}\approx 0.22\Delta_{e}(\Delta_{e}\beta_{e})^{1/2}\Omega_{e} \tag{4.25}\] is attained at the parallel wavenumber \[(k_{\parallel}\rho_{e})_{\rm peak}=(\Delta_{e}\beta_{e}/3)^{1/2}. \tag{4.26}\] A notable feature of the CES whistler instability in this subsidiary limit is that the fastest-growing modes are on sub-electron-Larmor scales; thus, such modes are arguably better conceptualised not as whistler modes, but as unstable, unmagnetised plasma modes (see section 4.3.3). Similarly to the CET whistler instability, analytical expressions for the frequency and growth rate of unstable modes that have an oblique wavevector angle are much less simple than the analogous expressions for parallel whistler modes. It can be shown (see appendix K.2) that the complex frequency of such modes is given by \[\omega=\frac{\Omega_{e}}{\beta_{e}}k_{\parallel}\rho_{e}\frac{-{\rm i}B_{\rm S}\pm\sqrt{-B_{\rm S}^{2}+4A_{\rm S}C_{\rm S}}}{2A_{\rm S}}\,, \tag{4.27}\] where the functions \(A_{\rm S}=A_{\rm S}(k_{\parallel}\rho_{e},k_{\perp}\rho_{e},\Delta_{e}\beta_{e})\), \(B_{\rm S}=B_{\rm S}(k_{\parallel}\rho_{e},k_{\perp}\rho_{e},\Delta_{e}\beta_{e})\), and \(C_{\rm S}=C_{\rm S}(k_{\parallel}\rho_{e},k_{\perp}\rho_{e},\Delta_{e}\beta_{e})\) are composed of the sums and products of special mathematical functions. When \(\Delta_{e}\beta_{e}\sim 1\), (4.27) implies that if there is an instability, its growth rate will be of order \(\gamma\sim\Omega_{e}/\beta_{e}\) at \(k_{\parallel}\rho_{e},k_{\perp}\rho_{e}\sim 1\). To confirm this expectation, in figure 14 we plot the maximum growth rate (obtained numerically) of oblique modes across the \((k_{\parallel},k_{\perp})\)-plane for two of the values of \(\Delta_{e}\beta_{e}\) used in figure 13. For \(\Delta_{e}\beta_{e}\) not far beyond the threshold of the CES whistler instability (figure 14a), the unstable modes are quasi-parallel and have growth rates comparable to those of the corresponding parallel modes (cf. figure 13, left panel).

Figure 14: _Oblique unstable modes at \(\Delta_{e}\beta_{e}\sim 1\)_**:** **a)**_\(\Delta_{e}\beta_{e}=0.75\). **b)**_\(\Delta_{e}\beta_{e}=3\).
Maximum positive growth rates of linear perturbations resulting from CE ion- and electron-shear terms in the CE distribution function (4.1) for \(\Delta_{e}\beta_{e}\sim 1\). Here, a temperature-equilibrated hydrogen plasma is considered, viz. \(\Delta_{e}=\mu_{e}^{1/2}\Delta_{i}\), and \(\beta_{i}=\beta_{e}\). The growth rates of all modes are calculated using the approach outlined in appendix K.3. The growth rates are calculated on a \(400^{2}\) grid, with logarithmic spacing between wavenumbers in both perpendicular and parallel directions. The resulting growth rates, when normalised as \(\gamma\beta_{e}/\Omega_{e}\), are functions of \(\Delta_{e}\beta_{e}\), or, equivalently, \(\epsilon_{e}\beta_{e}\). The vertical dashed lines indicate \(k_{\parallel}\rho_{i}=1\) and \(k_{\parallel}\rho_{e}=1\), respectively, while the horizontal ones indicate \(k_{\perp}\rho_{i}=1\) and \(k_{\perp}\rho_{e}=1\).

For \(\Delta_{e}\beta_{e}\gtrsim 1\), a broader spectrum of wavenumbers becomes unstable (figure 14b). The parallel mode remains the fastest growing in this case; however, oblique modes with \(k_{\perp}\lesssim k_{\parallel}/2\) also have growth rates of comparable magnitude: e.g., the fastest-growing mode with wavevector angle \(\theta=10^{\circ}\) has \(\gamma_{\rm max}/\gamma_{\rm max}(k_{\perp}=0)\approx 0.93\), and for a wavevector angle \(\theta=20^{\circ}\), \(\gamma_{\rm max}/\gamma_{\rm max}(k_{\perp}=0)\approx 0.76\). For more oblique angles, the growth rate is reduced significantly: e.g., for \(\theta=30^{\circ}\), \(\gamma_{\rm max}/\gamma_{\rm max}(k_{\perp}=0)\approx 0.22\). Thus, we conclude that a spectrum of oblique modes in addition to parallel ones is indeed destabilised, with \(\gamma\sim\Omega_{e}/\beta_{e}\lesssim\gamma(k_{\perp}=0)\). We note that, in addition to oblique CES whistler modes, whose characteristic wavenumber domain is \(k_{\perp}\rho_{e}\lesssim k_{\parallel}\rho_{e}\sim 1\), we observe two other unstable modes in figure 14 with different characteristic values of \(k_{\parallel}\) and \(k_{\perp}\). The first of these, which exists on ion scales, is the CES mirror instability, which we already discussed in section 4.3.1. The second is the CES electron mirror instability - we shall consider this instability in section 4.3.4.

#### 4.3.3 Parallel transverse instability

As was shown in section 4.2, in the limit \(\Delta_{e}\beta_{e}\gg 1\), the fastest-growing CES microinstability is essentially unmagnetised, and is a variant of the so-called transverse instability (Kahn, 1962, 1964; Albright, 1970b). This instability is also sometimes referred to as the resonant (electron) Weibel instability, or the Weibel instability at small anisotropy (Weibel, 1959; Fried, 1959). Both the linear theory of this instability and its physical mechanism have been explored extensively for bi-Maxwellian plasmas (see, e.g., Lazar _et al._, 2009; Ibscher _et al._, 2012), and various studies (both analytical and numerical) of its nonlinear evolution have also been performed (Albright, 1970a; Davidson _et al._, 1972; Lemons _et al._, 1979; Califano _et al._, 1998, 2002; Kato, 2005; Pokhotelov & Amariutei, 2011; Ruyer _et al._, 2015). For the small anisotropy case that is relevant to CE plasma, the mechanism of the instability is somewhat subtle, involving both non-resonant and Landau-resonant wave-particle interactions.
In a Maxwellian plasma, transverse modes are non-propagating and Landau-damped by electrons with velocities \(v_{\parallel}\approx\omega/k_{\parallel}\). However, this damping can be reversed by the free energy associated with positive electron-pressure anisotropy at wavenumbers that satisfy \(kd_{e}\lesssim\Delta_{e}^{1/2}\); the electron Landau damping increases more rapidly with \(k\) than the instability's drive, which in turn sets the wavenumber at which peak growth occurs. The requirement for the corresponding scale to be well below the electron Larmor scale - and thus for the plasma to be quasi-unmagnetised with respect to the transverse modes - sets the restriction \(\Delta_{e}\beta_{e}\gg 1\) on the instability's operation. In general, transverse modes whose wavevectors are co-parallel to the velocity-space direction along which the temperature is smallest are the fastest growing; in the case of a CE electron distribution function of the form (4.1) with \(\Delta_{e}>0\), these modes' wavevectors are parallel to the magnetic field. However, a broad spectrum of oblique transverse modes is also destabilised when \(\Delta_{e}>0\). To characterise the transverse instability's growth analytically, we first assume \(\Delta_{e}\beta_{e}\gg 1\), and then take directly the unmagnetised limit of the full CES dispersion relation (see appendix K.3.4) under the orderings \[k_{\perp}\rho_{e}\sim k_{\parallel}\rho_{e}\sim\left(\Delta_{e}\beta_{e}\right)^{1/2}\gg 1\,,\quad\tilde{\omega}_{e\parallel}=\frac{\omega}{k_{\parallel}v_{\rm the}}\sim\Delta_{e}\,. \tag{4.28}\] We obtain two non-propagating modes (real frequency \(\varpi=0\)) that have growth rates \[\gamma_{1}=\frac{kv_{\rm the}}{\sqrt{\pi}}\left(\Delta_{e}\frac{k_{\parallel}^{2}-k_{\perp}^{2}}{k^{2}}-\frac{k^{2}\rho_{e}^{2}}{\beta_{e}}\right)\,, \tag{4.29a}\] \[\gamma_{2}=\frac{kv_{\rm the}}{\sqrt{\pi}}\left(\Delta_{e}\frac{k_{\parallel}^{2}}{k^{2}}-\frac{k^{2}\rho_{e}^{2}}{\beta_{e}}\right)\,. \tag{4.29b}\] For \(\Delta_{e}>0\), the growth rate of the second mode is always positive and larger than that of the first mode; the first mode only has a positive growth rate provided \(k_{\perp}<k_{\parallel}\). Now taking the subsidiary limit \(k_{\parallel}\rho_{e}\gg k_{\perp}\rho_{e}\gg 1\), we find that both roots have the same growth rate: \[\gamma\approx\frac{k_{\parallel}v_{\rm the}}{\sqrt{\pi}}\left(\Delta_{e}-\frac{k_{\parallel}^{2}\rho_{e}^{2}}{\beta_{e}}\right)\,, \tag{4.29c}\] which is identical to (4.24b). We note by comparison with (4.24a) that the unmagnetised limit fails to recover the non-zero real frequencies of the \(k_{\parallel}\rho_{e}\gg 1\) whistler modes; this is because the ratio of these modes' real frequency \(\varpi\) to their growth rate \(\gamma\) is \(\varpi/\gamma\sim 1/k_{\parallel}\rho_{e}\ll 1\). The maximum growth rate \(\gamma_{\rm max}\) of the second mode (4.29b) for an oblique wavevector with angle \(\theta\) is \[\gamma_{\rm max}=\frac{2}{3\sqrt{3\pi}}\cos^{3}\theta\,\Delta_{e}(\Delta_{e}\beta_{e})^{1/2}\Omega_{e}, \tag{4.30}\] attained at the (total) wavenumber \[(k\rho_{e})_{\rm peak}=\cos\theta\,(\Delta_{e}\beta_{e}/3)^{1/2}. \tag{4.31}\] The parallel and perpendicular wavenumbers of this maximum growth are then \[(k_{\parallel}\rho_{e})_{\rm peak}=\cos^{2}\theta\,(\Delta_{e}\beta_{e}/3)^{1/2},\quad(k_{\perp}\rho_{e})_{\rm peak}=\cos\theta\sin\theta\,(\Delta_{e}\beta_{e}/3)^{1/2}.
\tag{4.32}\] In the special case of parallel modes (\(\theta=0^{\circ}\)), this recovers the peak growth rate (4.25) of the CES whistler instability in the limit \(\Delta_{e}\beta_{e}\gg 1\). In figure 15, we demonstrate that the fastest-growing unstable modes in the limit \(\Delta_{e}\beta_{e}\gg 1\) are indeed transverse ones. This figure shows the numerically determined growth rate as a function of \(k_{\parallel}\) and \(k_{\perp}\), for a particular large value of \(\Delta_{e}\beta_{e}\). A broad range of sub-electron-Larmor scale modes are unstable (figure 15a), with the parallel wavenumber of the fastest-growing ones closely agreeing with the analytical prediction (4.32). The analytical expression (4.29b) for the transverse instability's growth rate also agrees well with the numerical result as a function of both \(k_{\parallel}\) and \(k_{\perp}\) (figure 15b).

#### 4.3.4 Electron mirror instability

The oblique microinstability evident in figure 14b at sub-ion-Larmor scales is the CES electron mirror instability: the destabilisation of KAWs by excess perpendicular electron pressure (viz., \(\Delta_{e}>0\)) associated with the CE electron-shear term. The instability (which has also been referred to as the field-swelling instability - see Basu & Coppi 1984) is perhaps confusingly named, given that its physical mechanism is rather different to that of the (ion-scale) mirror instability: non-resonant interactions between the anisotropic distribution of electrons and the KAWs cause the restoring force underpinning the latter's characteristic oscillation to be negated if \(\Delta_{e}>1/\beta_{e}\). The electron mirror instability has been extensively explored in \(\beta_{e}\sim 1\) plasma (see Hellinger & Stverak 2018, and references therein); in plasmas with \(\beta_{e}\gg 1\), it has been analytically characterised and its physical mechanism elucidated in the quasi-perpendicular (\(k_{\parallel}\ll k_{\perp}\)) limit of gyrokinetics (Kunz _et al._ 2018). Here, we find that once its marginality condition (\(\Delta_{e}=1/\beta_{e}\)) is surpassed sufficiently, oblique modes with \(k_{\parallel}\lesssim k_{\perp}\) are also destabilised. As with the mirror instability, a simple analytic characterisation of the CES electron mirror instability can be performed in the case of marginal instability. We define the marginality parameter \(\Gamma_{e}\equiv\Delta_{e}\beta_{e}-1\), and adopt the ordering \[k_{\perp}^{2}\rho_{e}^{2}\sim k_{\parallel}\rho_{e}\sim\tilde{\omega}_{e}\beta_{e}\sim\Gamma_{e}\ll 1, \tag{4.34}\] with the additional assumption that \(\Gamma_{e}\gg\mu_{e}^{1/2}\) in order that the effect of ion pressure anisotropy can be neglected. Then, it can be shown (see appendix K.3.5) that the growth rate is \[\frac{\gamma}{\Omega_{e}}=\frac{k_{\parallel}\rho_{e}}{\beta_{e}}\left[-\frac{3\sqrt{\pi}}{4}k_{\perp}^{2}\rho_{e}^{2}+\sqrt{\frac{3}{2}\Gamma_{e}k_{\perp}^{2}\rho_{e}^{2}-\frac{9}{4}k_{\parallel}^{2}\rho_{e}^{2}+\frac{9}{16}\left(\pi-2\right)k_{\perp}^{4}\rho_{e}^{4}}\right].
\tag{4.35}\] It follows that the maximum growth rate is \[\gamma_{\rm max}=\frac{\left[\pi-8+\sqrt{\pi\left(16+\pi\right)}\right]^{3/2}}{48\left(\pi-2\right)}\left[\sqrt{\frac{\pi+4+\sqrt{\pi\left(16+\pi\right)}}{\pi-8+\sqrt{\pi\left(16+\pi\right)}}}-\sqrt{\frac{\pi}{\pi-2}}\right]\frac{\Gamma_{e}^{2}}{\beta_{e}}\Omega_{e}\approx 0.055\frac{\Gamma_{e}^{2}}{\beta_{e}}\Omega_{e}, \tag{4.36}\] attained at \[(k_{\parallel}\rho_{e})_{\rm peak}=\sqrt{\frac{\pi-8+\sqrt{\pi\left(16+\pi\right)}}{36\left(\pi-2\right)}}\Gamma_{e}\approx 0.27\Gamma_{e}, \tag{4.37a}\] \[(k_{\perp}\rho_{e})_{\rm peak}=\sqrt{\frac{\pi-8+\sqrt{\pi\left(16+\pi\right)}}{6\left(\pi-2\right)}}\Gamma_{e}^{1/2}\approx 0.65\Gamma_{e}^{1/2}. \tag{4.37b}\] Figure 16 demonstrates that these predictions are accurate by comparing them to numerical results for a particular (small) value of \(\Gamma_{e}\). More specifically, figure 16a shows that the location in the \((k_{\parallel},k_{\perp})\) plane at which the maximum growth of the electron mirror instability is attained closely matches the analytical prediction (4.37), while figure 16b confirms that the wavenumber dependence of the growth rate agrees with (4.35) for \(k_{\perp}\rho_{e}\gtrsim\mu_{e}^{1/4}\). We note that, in addition to the electron mirror, another instability, operating at smaller characteristic values of \(k_{\perp}\rho_{e}\), is evident in figure 16: the \(k_{\perp}\rho_{i}\gtrsim 1\) mirror modes driven unstable by the CE ion-shear term that were discussed in section 4.3.1. For \(1\ll k\rho_{i}\ll\mu_{e}^{-1/4}\), the ion-pressure anisotropy associated with the CE ion-shear terms remains a greater free-energy source for KAW instabilities than the CE electron-shear term, even when \(\Delta_{e}>1/\beta_{e}\). For \(\Gamma_{e}\gtrsim 1\), our near-marginal theory anticipates that peak growth occurs at electron Larmor scales (\(k_{\parallel}\rho_{e}\lesssim k_{\perp}\rho_{e}\sim 1\)), with \(\gamma_{\rm max}\sim\Omega_{e}/\beta_{e}\). These expectations are indeed realised numerically, as shown in figure 17 (see also figure 14). The expression (4.35) for the growth rate as a function of wavenumber that was derived in the case of \(\Gamma_{e}\ll 1\) remains qualitatively - but not quantitatively - accurate (see figure 17b). Figure 18 shows that a similar conclusion holds for the expression (4.36) for the peak growth rate, and also for the expressions (4.37a) and (4.37b) of the parallel and perpendicular wavenumbers at which that growth occurs. To confirm our prior claim in section 4.2 that the CES parallel whistler instability is faster growing than the electron mirror instability, we show the former's numerically computed growth rate on figure 18 (left panel); as it approaches the asymptotic value (4.25) that is valid in the limit \(\Delta_{e}\beta_{e}\gg 1\), we observe that the electron mirror's growth rate is a factor of \(\sim\)3 smaller (cf. figure 15a). The parallel wavenumber at which peak growth of the whistler instability occurs is also larger than the analogous quantity for the electron mirror by an order-unity factor. While we cannot derive a simple analytic expression for the growth rate of the dominant electron mirror modes when \(\Gamma_{e}\gtrsim 1\), we can calculate this quantity for long-wavelength (viz., \(k\rho_{e}\ll 1\)) modes.
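As with the mirror instability, the peak values (4.36) and (4.37) can be cross-checked by brute-force maximisation of (4.35); the SciPy sketch below (ours) does this for the \(\Gamma_{e}=1/3\) case of figure 16.

```python
import numpy as np
from scipy.optimize import minimize

Gam, beta_e = 1.0 / 3.0, 1e4          # Gamma_e = Delta_e*beta_e - 1, as in figure 16

def growth(p):                        # gamma/Omega_e from (4.35)
    kpar, kperp = p                   # k_par*rho_e, k_perp*rho_e
    rad = (1.5 * Gam * kperp**2 - 2.25 * kpar**2
           + (9.0 / 16.0) * (np.pi - 2.0) * kperp**4)
    if rad < 0.0:
        return 0.0                    # outside the unstable region
    return kpar / beta_e * (-0.75 * np.sqrt(np.pi) * kperp**2 + np.sqrt(rad))

res = minimize(lambda p: -growth(p), x0=[0.2 * Gam, 0.5 * Gam**0.5],
               method='Nelder-Mead')
print(res.x, [0.27 * Gam, 0.65 * Gam**0.5])   # cf. (4.37a) and (4.37b)
print(-res.fun * beta_e, 0.055 * Gam**2)      # cf. (4.36), in units of Omega_e/beta_e
```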
Figure 16: _Electron mirror instability at \(\Gamma_{e}=\Delta_{e}\beta_{e}-1\ll 1\)._ **a)** Growth rates of unstable electron mirror modes associated with the CE distribution function (4.1) for \(\Gamma_{e}=1/3\) (\(\Delta_{e}\beta_{e}=4/3\)). The growth rates of all modes are calculated in the same way as figure 14. The dashed white lines indicate the analytical prediction (4.37) for the parallel/perpendicular wavenumber at which peak growth is achieved. **b)** Plot of the electron mirror mode’s growth rate (solid line) as a function of \(k_{\parallel}\rho_{e}\) with \(k_{\perp}\rho_{e}=0.65\Gamma_{e}^{1/2}\) (top), and as a function of \(k_{\perp}\rho_{e}\) with \(k_{\parallel}\rho_{e}=0.27\Gamma_{e}\) (bottom). The dashed lines show the analytical prediction (4.35) for this quantity.

Figure 17: _Electron mirror instability at \(\Gamma_{e}=\Delta_{e}\beta_{e}-1\sim 1\)._ **a)** Growth rates of unstable electron mirror modes associated with the CE distribution function (4.1) for \(\Gamma_{e}=1\) (\(\Delta_{e}\beta_{e}=2\)). The growth rates of all modes are calculated in the same way as figure 14. The dashed white lines indicate the analytical prediction (4.37) for the parallel/perpendicular wavenumber at which peak growth is achieved, while the dotted line indicates the analytical prediction (4.43) for the total wavenumber below which oblique long-wavelength (\(k_{\parallel}\rho_{e}<k_{\perp}\rho_{e}\ll 1\)) electron mirror modes become unstable. **b)** The electron mirror mode’s growth rate (solid line) as a function of \(k_{\parallel}\rho_{e}\) with \(k_{\perp}\rho_{e}=0.65\Gamma_{e}^{1/2}\) (top), and as a function of \(k_{\perp}\rho_{e}\) with \(k_{\parallel}\rho_{e}=0.27\Gamma_{e}\) (bottom). The dashed lines show the analytical prediction (4.35) for this quantity.

Figure 18: _The maximum growth of the electron mirror instability._ The maximum normalised growth rate \(\gamma_{\rm max}\beta_{e}/\Omega_{e}\) (left panel, solid red line) of unstable electron mirror modes as a function of \(\Delta_{e}\beta_{e}\), as well as parallel (middle panel, solid blue line) and perpendicular (right panel, solid yellow line) wavenumbers, \((k_{\parallel}\rho_{e})_{\mathrm{peak}}\) and \((k_{\perp}\rho_{e})_{\mathrm{peak}}\), respectively, at which that growth is attained. The analytical prediction (4.36) of \(\gamma_{\mathrm{max}}\) for marginally unstable electron mirror modes, as well as the analogous predictions (4.37) for \((k_{\parallel}\rho_{e})_{\mathrm{peak}}\) and \((k_{\perp}\rho_{e})_{\mathrm{peak}}\), are shown as dashed lines. The dotted lines are the maximum growth rate and (parallel) wavenumber of peak growth for the CES parallel whistler instability (see section 4.3.2) as functions of \(\Delta_{e}\beta_{e}\).

For this calculation, we assume that \(k\rho_{e}\sim\mu_{e}^{1/4}\ll 1\), \(k_{\perp}\sim k_{\parallel}\), and the ordering \[\tilde{\omega}_{e\parallel}=\frac{\omega}{k_{\parallel}v_{\mathrm{th}e}}\sim\frac{k\rho_{e}}{\beta_{e}}\sim|\Delta_{e}|k\rho_{e}\,. \tag{4.38}\] Under these assumptions, we obtain (see appendix K.3.5) two modes whose complex
frequencies \(\omega\) are given by \[\omega\approx\pm k_{\parallel}\rho_{e}\Omega_{e}\Bigg{\{}\left[\frac{1}{\beta_{e}}+\Delta_{e}\left(\frac{1}{2}-\mu_{e}^{1/2}\frac{k_{\parallel}^{2}\rho_{e}^{2}-k_{\perp}^{2}\rho_{e}^{2}}{k^{4}\rho_{e}^{4}}\right)\right]\times\left[\frac{k^{2}\rho_{e}^{2}}{\beta_{e}}-\Delta_{e}\left(k_{\perp}^{2}\rho_{e}^{2}+\mu_{e}^{1/2}\frac{k_{\parallel}^{2}}{k^{2}}-\frac{1}{2}k_{\parallel}^{2}\rho_{e}^{2}\right)\right]\Bigg{\}}^{1/2}. \tag{4.39}\] The terms proportional to \(\mu_{e}^{1/2}\Delta_{e}\) are associated with the CE ion-shear term, which plays a non-negligible role for \(k\rho_{e}\lesssim\mu_{e}^{1/4}\). In the subsidiary limit \(k\rho_{e}\ll\mu_{e}^{1/4}\), (4.39) becomes the dispersion relation (4.16) obtained in section 4.3.1 for unstable mirror modes in the limit \(\Delta_{i}\beta_{i}\gg 1\). In the opposite subsidiary limit \(k\rho_{e}\gg\mu_{e}^{1/4}\) (but \(k\rho_{e}\ll 1\)), (4.39) simplifies to \[\omega\approx\pm k_{\parallel}\rho_{e}\Omega_{e}\sqrt{\left(\frac{1}{\beta_{e}}+\frac{\Delta_{e}}{2}\right)\left[\frac{k^{2}\rho_{e}^{2}}{\beta_{e}}-\Delta_{e}\left(k_{\perp}^{2}\rho_{e}^{2}-\frac{1}{2}k_{\parallel}^{2}\rho_{e}^{2}\right)\right]}\,. \tag{4.40}\] For \(k_{\parallel}\ll k_{\perp}\), this recovers the high-\(\beta\) limit of the dispersion relation for unstable KAWs previously derived in the gyrokinetic calculations of Kunz _et al._ (2018); our calculations show that this dispersion relation also applies to oblique (\(k_{\parallel}\lesssim k_{\perp}\)) electron mirror modes. For \(\Delta_{e}>0\), we (as expected) have an unstable root if and only if \[\Delta_{e}>\frac{1}{\beta_{e}}, \tag{4.41}\] with the unstable mode's growth rate being \[\gamma\approx k_{\parallel}\rho_{e}\Omega_{e}\sqrt{\left(\frac{1}{\beta_{e}}+\frac{\Delta_{e}}{2}\right)\left[\Delta_{e}\left(k_{\perp}^{2}\rho_{e}^{2}-\frac{1}{2}k_{\parallel}^{2}\rho_{e}^{2}\right)-\frac{k^{2}\rho_{e}^{2}}{\beta_{e}}\right]}\,. \tag{4.42}\] We can now provide an analytical demonstration that a broad spectrum of electron mirror modes is unstable if \(\Gamma_{e}\gtrsim 1\). It follows directly from (4.39) that instability arises for all modes with \(k_{\perp}>k_{\parallel}\) if the following constraint on the total wavenumber \(k\) is satisfied: \[k\rho_{i}<\sqrt{\frac{2\mu_{e}^{1/2}\left(\Gamma_{e}+1\right)\cos^{2}\theta}{\left(\Gamma_{e}+3\right)\cos^{2}\theta-2\Gamma_{e}\sin^{2}\theta}}\,, \tag{4.43}\] where \(\theta=\tan^{-1}\left(k_{\perp}/k_{\parallel}\right)\) is, as normal, the wavevector angle. The validity of this bound is illustrated in figure 17a. Equation (4.43) is particularly simple to interpret in the subsidiary limit \(k\rho_{e}\gg\mu_{e}^{1/4}\), yielding a lower bound on \(\theta\) alone: \[\theta>\tan^{-1}\sqrt{\frac{\Gamma_{e}+3}{2\Gamma_{e}}}. \tag{4.44}\] For \(\Gamma_{e}\ll 1\) (but \(\Gamma_{e}>0\)), this implies that the only unstable electron mirror modes are quasi-perpendicular, as anticipated from our calculations pertaining to the marginal state of the instability. On the other hand, for \(\Gamma_{e}\gtrsim 1\), modes with a wide range of wavevector angles will be destabilised.

### CES microinstability classification: negative pressure anisotropy (\(\epsilon_{i}<0\))

#### 4.4.1 Firehose instability

The best-known instability to be triggered by either negative ion or electron pressure anisotropy associated with the CE ion- and electron-shear terms, respectively, is the CES firehose instability.
The linear theory of the firehose (or garden-hose) instability in high-\(\beta\) plasma, the first studies of which were completed over half a century ago (Rosenbluth, 1956; Parker, 1958; Chandrasekhar _et al._, 1958; Vedenov & Sagdeev, 1958), has previously been explored in the contexts of plasmas with bi-Maxwellian distributions (e.g., Kennel & Sagdeev, 1967; Davidson & Volk, 1968; Yoon _et al._, 1993; Hellinger & Matsumoto, 2000), CE distributions (e.g., Schekochihin _et al._, 2005), and even characterisations that are independent of the ion distribution function (e.g., Schekochihin _et al._, 2010; Kunz _et al._, 2015). Its physical mechanism is well established: negative pressure anisotropies reduce the elasticity of magnetic-field lines that gives rise to Alfven waves, and can completely reverse it when \(\Delta_{i}\) is negative enough. The long-wavelength 'fluid' firehose instability (whose mechanism is independent of the particular ion distribution function) is non-resonant in nature; however, resonant damping mechanisms such as Barnes damping or cyclotron damping play an important role in regulating the growth of modes on scales comparable to the ion-Larmor scale, and thereby set the scale of peak firehose growth. Beyond linear theory, nonlinear analytical studies of the parallel firehose instability in high-\(\beta\) plasma have been completed (e.g., Rosin _et al._, 2011), as well as numerical ones (e.g., Kunz _et al._, 2014; Melville _et al._, 2016; Riquelme _et al._, 2018). While there is much in common between firehose modes across all wavevector angles, there are certain differences that, on account of their significance for determining the fastest-growing firehose mode, are important to highlight. Based on these differences, firehose modes can be categorised into three different types: _quasi-parallel_, _oblique_, and _critical-line_ firehose modes. Quasi-parallel firehose modes, which are destabilised left-handed and/or right-handed high-\(\beta\) Alfven waves (Kennel & Sagdeev, 1967; Davidson & Volk, 1968), exist inside a narrow cone of wavevector angles \(\theta\lesssim\beta_{i}^{-1/4}\) (Achterberg, 2013). The peak wavenumber of their growth (\(k_{\parallel}\rho_{i}\sim|\Delta_{i}+2/\beta_{i}|^{1/2}\)) is determined by gyroviscosity, an FLR effect (Schekochihin _et al._, 2010). For \(\theta\gtrsim\beta_{i}^{-1/4}\), the characteristic low-frequency (viz., \(\omega\ll\Omega_{i}\)) waves that exist above ion-Larmor scales in high-\(\beta\) plasma are shear-Alfven waves and (compressible) slow modes; the former remains susceptible to firehose instability, but, on account of its FLR coupling to the slow mode, its instability proceeds quite differently at sufficiently large wavenumbers (\(k\rho_{i}\gtrsim|\Delta_{i}+2/\beta_{i}|^{1/2}\)), with peak growth occurring at smaller scales (\(k_{\parallel}\rho_{i}\sim|\Delta_{i}+2/\beta_{i}|^{1/4}\ll 1\)). Finally, along a 'critical line' in the (\(k_{\parallel},k_{\perp}\)) plane (\(k_{\perp}\approx\sqrt{2/3}k_{\parallel}\), \(\theta\approx 39^{\circ}\)), the FLR coupling between the slow mode and shear-Alfven wave becomes anomalously weak due to two opposing FLR effects cancelling each other out. This results in much weaker collisionless damping on critical-line firehose modes, and so they can exist on scales that are close to (though, as we prove here for the first time, not strictly at) the ion-Larmor scale.
Thus critical-line firehose modes are generically the fastest-growing ones in high-\(\beta\) plasma (Schekochihin _et al._, 2005). We support this claim with figure 19, which shows the maximum growth rate of the firehose-unstable modes as a function of both \(k_{\parallel}\) and \(k_{\perp}\) for two different (unstable) values of \(\Delta_{i}\beta_{i}\) (and with the same value of \(\beta_{i}\) as was used to calculate the stability maps presented in section 4.2). Both examples confirm that, although a broad spectrum of unstable parallel and oblique firehose modes emerges when \(\Delta_{i}\beta_{i}+2\lesssim-1\), it is the critical-line firehose modes that are the fastest growing. The value of \(\Delta_{i}\) required to trigger the CES firehose instability is, as with the case of the firehose instability in a plasma with a bi-Maxwellian ion distribution, dependent on the scale of the unstable firehose modes. For long-wavelength firehose modes (i.e. those with \(k\rho_{i}\ll 1\)), the threshold is \(\Delta_{i}<(\Delta_{i})_{\rm c}=-2/\beta_{i}\); it can be shown that this result is independent of the particular form of the ion distribution function (Schekochihin _et al._, 2010). However, our numerical solutions for the wavenumber-dependent growth rate of firehose modes in CE plasma when \(\Delta_{i}>-2/\beta_{i}\) (see figure 20a) suggest that oblique ion-Larmor-scale firehose modes can be destabilised at less negative pressure anisotropies. This is consistent with the findings of previous studies of the oblique firehose in \(\beta\sim 1\) plasma (Hellinger & Matsumoto, 2000; Hellinger & Travnicek, 2008; Astfalk & Jenko, 2016), although this finding has not until now been comprehensively studied in plasma with \(\beta\gg 1\).

Figure 19: _CES firehose instability when \(\Delta_{i}\beta_{i}+2\lesssim-1\)_. Maximum positive growth rates of linear perturbations resulting from the CE ion-shear term in the CE distribution function (4.1) with \(\Delta_{i}\) negative enough to surpass the long-wavelength firehose-instability threshold \(\Delta_{i}=-2/\beta_{i}\) by at least an order-unity factor. The growth rates of all modes are calculated using the approach outlined in appendix K.3. The growth rates are calculated on a \(400^{2}\) grid, with logarithmic spacing in both perpendicular and parallel directions between wavenumbers. The resulting growth rates, when normalised as \(\gamma\beta_{i}/\Omega_{i}\), are functions of two dimensionless parameters: \(\Delta_{i}\beta_{i}\) and \(\beta_{i}\). The dashed white lines indicate the analytical predictions (4.67) for the parallel/perpendicular wavenumber at which peak growth is achieved, while the dotted line indicates the critical line \(k_{\perp}=k_{\parallel}\sqrt{2/3}\) along which the firehose growth rate is predicted to be maximal. **a)**\(\Delta_{i}\beta_{i}=-3\). **b)**\(\Delta_{i}\beta_{i}=-30\). In both cases, \(\beta_{i}=10^{4}\).

Figure 20: _Onset of the CES firehose instability._ **a)** Maximum positive growth rates of linear perturbations resulting from the CE ion-shear term in the CE distribution function (4.1) with \(\beta_{i}=10^{4}\) and \(\Delta_{i}=-1.7/\beta_{i}\) (which does not surpass the long-wavelength firehose instability threshold \(\Delta_{i}=-2/\beta_{i}\)). The growth rates of all modes are calculated in the same way as figure 19. **b)** Threshold value \((\Delta_{i}\beta_{i})_{c}\) of \(\Delta_{i}\beta_{i}\) at which modes with parallel and perpendicular wavenumber \(k_{\parallel}\) and \(k_{\perp}\), respectively, become firehose unstable.
Regions of \((k_{\parallel},k_{\perp})\) that are shaded black are stable. We can, in fact, calculate the threshold semi-analytically for the CES firehose instability as a function of wavenumber (see appendix K.2.2); the results, which are shown in figure 20b, show that oblique firehose modes with \(k_{\parallel}\rho_{i}\approx 0.45\), \(k_{\perp}\rho_{i}\approx 0.3\) become unstable when \(\Delta_{i}\approx-1.35/\beta_{i}\). The reduced threshold of ion-Larmor-scale firehose modes, which can be shown to depend only on fourth- and higher-order moments of the ion distribution function, is considered in greater depth in Bott _et al._ (2023, in prep.). The growth of the three different sub-categories of unstable CES firehose modes (quasi-parallel, oblique, and critical-line firehoses) can be described analytically. However, the relative orderings of \(\tilde{\omega}_{i\parallel}\), \(k_{\parallel}\rho_{i}\), \(k_{\perp}\rho_{i}\), \(\beta_{i}\) and \(|\Delta_{i}|\) for these sub-categories are different, so it is necessary to treat them separately.

#### 4.4.2 Quasi-parallel firehose instability

The relevant ordering of parameters for quasi-parallel firehose modes is \[\tilde{\omega}_{i\parallel}=\frac{\omega}{k_{\parallel}v_{\rm thi}}\sim\beta_{i}^{-1/2}\sim|\Delta_{i}|^{1/2}\sim k_{\parallel}\rho_{i}\,, \tag{4.45}\] with the additional small wavenumber-angle condition \[k_{\perp}\rho_{i}\ll\beta_{i}^{-1/4}k_{\parallel}\rho_{i}\sim\beta_{i}^{-3/4}. \tag{4.46}\] Under the ordering (4.45), we find (see appendix K.3.6) that there are four modes with complex frequencies given by \[\frac{\omega}{\Omega_{i}}=\pm k_{\parallel}\rho_{i}\left(\frac{1}{4}k_{\parallel}\rho_{i}\pm\sqrt{\frac{1}{16}k_{\parallel}^{2}\rho_{i}^{2}+\frac{1}{\beta_{i}}+\frac{\Delta_{i}}{2}}\right)\,, \tag{4.47}\] where the \(\pm\) signs can be chosen independently. This is the standard parallel firehose dispersion relation (Kennel & Sagdeev, 1967; Davidson & Volk, 1968; Schekochihin _et al._, 2010). To (re-)identify the modes that are destabilised by the negative ion-pressure anisotropy, we set \(\Delta_{i}=0\): the resulting dispersion relation agrees with Foote & Kulsrud (1979), recovering the dispersion relation of Alfven waves in the limit \(k_{\parallel}\rho_{i}\ll\beta_{i}^{-1/2}\) [see their eqn. (19)] and the dispersion relation of the slow and fast hydromagnetic waves in the limit \(k_{\parallel}\rho_{i}\gg\beta_{i}^{-1/2}\) [see their eqn. (20)]. The growth rate of the unstable parallel firehose modes that follows from (4.47) is shown in figure 21 for several different values of \(\Delta_{i}\) and \(\beta_{i}\); the results closely match the analogous result determined numerically1. Footnote 1: An inquisitive reader might wonder why the numerical solution suggests that, in addition to the long-wavelength parallel firehose modes, parallel ion-Larmor scale modes are also unstable in some cases (see figure 21, middle panel), albeit with a much smaller growth rate. This instability is the _CES resonant parallel firehose instability_, so named because of its mediation via gyroresonant interactions between ions and ion-Larmor-scale modes (Yoon _et al._, 1993). In a \(\beta_{i}\sim 1\) plasma, this instability can have a growth rate comparable to (or even larger than) the longer-wavelength non-resonant firehose modes; however, because of the exponential dependence of the resonant parallel firehose instability's growth rate on \(|\Delta_{i}|^{-1}\sim\beta_{i}\), the instability is generically much weaker than the non-resonant firehose in plasma with \(\beta_{i}\gg 1\) (see Bott _et al._, in prep.). In the language of section 2.3.4, resonant parallel firehose modes are quasi-cold in CE plasma. We therefore do not consider this instability further in this paper.
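Before specialising, the growing root of (4.47) can be scanned numerically in a few lines; the NumPy sketch below (ours, with parameters mirroring figure 19a) anticipates the instability condition, peak growth rate and peak wavenumber derived immediately below.

```python
import numpy as np

beta_i, Dbeta = 1e4, -3.0                 # Delta_i*beta_i = -3, as in figure 19a
Delta_i = Dbeta / beta_i

kpar = np.linspace(1e-3, 0.1, 5000)       # k_par * rho_i
S = kpar**2 / 16.0 + 1.0 / beta_i + Delta_i / 2.0
gamma = kpar * np.sqrt(np.maximum(-S, 0.0))   # growing root of (4.47): S < 0 required

i = np.argmax(gamma)
print(kpar[i], 2.0 * abs(2.0 / beta_i + Delta_i)**0.5)   # peak wavenumber, cf. (4.50)
print(gamma[i], abs(2.0 / beta_i + Delta_i))             # gamma_max/Omega_i, cf. (4.49)
```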
In a \(\beta_{i}\sim 1\) plasma, this instability can have a growth rate comparable to (or even larger than) the longer-wavelength non-resonant firehose modes; however, because of the exponential dependence of the resonant parallel firehose instability's growth rate on \(|\Delta_{i}|^{-1}\sim\beta_{i}\), the instability is generically much weaker than the non-resonant firehose in plasma with \(\beta_{i}\gg 1\) (see Bott _et al._, in prep.). In the language of section 2.3.4, resonant parallel firehose modes are quasi-cold in CE plasma. We therefore do not consider this instability further in this paper.

For non-zero \(\Delta_{i}\) and fixed \(k_{\parallel}\rho_{i}\), (4.47) implies that we have instability provided

\[|\Delta_{i}|>\frac{2}{\beta_{i}}+\frac{1}{8}k_{\parallel}^{2}\rho_{i}^{2}\,. \tag{4.48}\]

The fastest-growing mode

\[\frac{\gamma_{\rm max}}{\Omega_{i}}=\left|\frac{2}{\beta_{i}}+\Delta_{i}\right| \tag{4.49}\]

occurs at the characteristic wavenumber

\[(k_{\parallel}\rho_{i})_{\rm peak}=2\left|\frac{2}{\beta_{i}}+\Delta_{i}\right|^{1/2}\,. \tag{4.50}\]

For \(k_{\parallel}\rho_{i}>2\sqrt{2}\left|2\beta_{i}^{-1}+\Delta_{i}\right|^{1/2}\), the unstable mode is stabilised. This agrees with previous analytical characterisations of the firehose instability (Rosin _et al._, 2011).

Figure 21: _Parallel CES firehose instability_. Growth rates of Alfvén waves whose instability is driven by the CE ion-shear term in the CE distribution function (4.1), for wavevectors co-parallel with the background magnetic field (viz., \(\mathbf{k}=k_{\parallel}\mathbf{\hat{z}}\)). The growth rates (solid lines) of all modes are calculated in the same way as figure 19. We show the growth rates for a selection of different values of \(\Delta_{i}\beta_{i}\) and \(\beta_{i}\). The approximation (4.47) for the growth rate (dashed red) in the limit \(k_{\parallel}\rho_{i}\ll 1\) is also plotted.

#### 4.4.3 Oblique firehose instability

In this case, we order

\[\tilde{\omega}_{i\parallel}\sim\frac{1}{\beta_{i}^{1/2}}\sim|\Delta_{i}|^{1/2}\sim k_{\parallel}^{2}\rho_{i}^{2}\sim k_{\perp}^{2}\rho_{i}^{2}\,. \tag{4.51}\]

Aside from the finite propagation angle of oblique modes, the key difference between the oblique and quasi-parallel cases is the larger magnitude of the typical wavenumber \(k\rho_{i}\sim\beta_{i}^{-1/4}\). The unstable oblique firehose modes have the complex frequency (see appendix K.3.7)

\[\frac{\omega}{\Omega_{i}} =-k_{\parallel}\rho_{i}\Biggl{[}\frac{\mathrm{i}}{8\sqrt{\pi}k_{\perp}^{2}\rho_{i}^{2}}\left(k_{\parallel}^{2}\rho_{i}^{2}-\frac{3}{2}k_{\perp}^{2}\rho_{i}^{2}\right)^{2}\]
\[\qquad\pm\sqrt{\frac{1}{\beta_{i}}+\frac{\Delta_{i}}{2}-\frac{1}{64\pi k_{\perp}^{4}\rho_{i}^{4}}\left(k_{\parallel}^{2}\rho_{i}^{2}-\frac{3}{2}k_{\perp}^{2}\rho_{i}^{2}\right)^{4}}\Biggr{]}\,. \tag{4.52}\]

Setting \(\Delta_{i}=0\), and considering the subsidiary limit \(k\rho_{i}\ll\beta_{i}^{-1/4}\), we recover the dispersion relation of the shear Alfvén mode (Foote & Kulsrud, 1979). Similarly to the quasi-parallel firehose instability, the instability condition is still

\[\Delta_{i}<-\frac{2}{\beta_{i}}\,. \tag{4.53}\]
If this condition is met, the maximum growth rate of the instability is

\[\frac{\gamma_{\rm max}}{\Omega_{i}}\approx\left(\frac{8\uppi}{27}\right)^{1/4}\left|\frac{2}{\beta_{i}}+\Delta_{i}\right|^{3/4}\tan\theta\left[1-\frac{3}{2}\tan^{2}\theta\right]^{-1}\,, \tag{4.54}\]

and is attained at (parallel) wavenumber

\[(k_{\parallel}\rho_{i})_{\rm peak}\approx\left(\frac{32\uppi}{3}\right)^{1/4}\left|\frac{2}{\beta_{i}}+\Delta_{i}\right|^{1/4}\tan\theta\left[1-\frac{3}{2}\tan^{2}\theta\right]^{-1}\,, \tag{4.55}\]

where \(\theta=\tan^{-1}(k_{\perp}/k_{\parallel})\) is (again) the wavevector angle with respect to the magnetic field. In contrast to the quasi-parallel case, if the condition (4.53) is met, the instability persists for all wavenumbers satisfying \(k\rho_{i}\lesssim 1\), albeit with a decreasing growth rate beyond the parallel wavenumber given by (4.55). We notice that along the critical line \(k_{\perp}=k_{\parallel}\sqrt{2/3}\) (\(\theta\approx 39^{\circ}\)), the maximum growth rate (4.54) of the oblique firehose diverges. This divergence is mathematically the result of failing to take into account higher-order terms in the \(k\rho_{i}\ll 1\) expansion, but, as was discussed earlier in this section, it is indicative of a physical effect (viz., much faster growth of firehose modes with \(k_{\perp}=k_{\parallel}\sqrt{2/3}\)). The degree to which the growth rate of unstable modes determined from (4.52) follows a numerical solution for a particular choice of \(\theta\) is demonstrated in figure 22. The agreement is reasonable, although an increasingly large discrepancy develops as \(k\rho_{i}\) approaches unity due to FLR effects.

Figure 22: _Oblique CES firehose instability._ Growth rates of the shear Alfvén mode whose instability is driven by the CE ion-shear term in the CE distribution function (4.1), for wavevectors at an angle \(\theta=60^{\circ}\) with the background magnetic field (viz., \(k_{\perp}=\sqrt{3}k_{\parallel}\)). The growth rates (solid lines) of all modes are calculated in the same way as figure 19. We show the growth rates for a selection of different values of \(\Delta_{i}\beta_{i}\) and \(\beta_{i}\). The approximation (4.52) for the growth rate (dashed red) in the limit \(k_{\parallel}\rho_{i}\ll 1\) is also plotted.

#### 4.4.4 Critical-line firehose instability

In this third and final case, we set \(k_{\perp}=k_{\parallel}\sqrt{2/3}\). The FLR coupling between the shear Alfvén mode and the Barnes'-damped slow mode then vanishes to leading order in \(k\rho_{i}\ll 1\), and next-order FLR terms must be considered. Depending on the value of \(\beta_{i}\), we find two sub-cases. First, for \(\beta_{i}\sim|\Delta_{i}|^{-1}\gg 10^{6}\) - a numerical bound that we will justify _a posteriori_ following our calculations - the FLR term responsible for setting the wavenumber of the fastest-growing mode is the second-order correction to the FLR coupling between the shear Alfvén and slow modes. The appropriate ordering to adopt then depends on the relative magnitude of \(\Delta_{i}\) and \(\beta_{i}^{-1}\). For \(\Delta_{i}\beta_{i}+2\lesssim-1\), we use the ordering

\[\tilde{\omega}_{i\parallel}\sim\frac{1}{\beta_{i}^{1/2}}\sim|\Delta_{i}|^{1/2}\sim k_{\parallel}^{6}\rho_{i}^{6}\,.
\tag{4.56}\]

In this case, we find (see appendix K.3.8) that the frequency of the two shear Alfvén modes is given by

\[\frac{\omega}{\Omega_{i}}=-k_{\parallel}\rho_{i}\Bigg{[}\frac{6889\mathrm{i}k_{\parallel}^{6}\rho_{i}^{6}}{27648\sqrt{\pi}}\pm\sqrt{\left(\frac{1}{\beta_{i}}+\frac{\Delta_{i}}{2}\right)-\frac{6889^{2}}{27648^{2}\pi}k_{\parallel}^{12}\rho_{i}^{12}}\Bigg{]}\,. \tag{4.57}\]

The wavelength at which the growth rate is maximised scales with an extraordinarily low power of \(\left|2\beta_{i}^{-1}+\Delta_{i}\right|\):

\[(k_{\parallel}\rho_{i})_{\mathrm{peak}}\approx\frac{2^{19/12}3^{1/2}\pi^{1/12}}{83^{1/3}35^{1/12}}\left|\frac{2}{\beta_{i}}+\Delta_{i}\right|^{1/12}\approx 0.97\left|\frac{2}{\beta_{i}}+\Delta_{i}\right|^{1/12}\,, \tag{4.58}\]

with associated maximum growth rate

\[\frac{\gamma_{\mathrm{max}}}{\Omega_{i}}\approx\frac{2^{13/12}3^{1/2}\pi^{1/12}}{83^{1/3}35^{1/12}}\left|\frac{2}{\beta_{i}}+\Delta_{i}\right|^{7/12}\approx 0.58\left|\frac{2}{\beta_{i}}+\Delta_{i}\right|^{7/12}\,. \tag{4.59}\]

As discussed in section 4.4.1, the instability threshold for critical-line firehose modes is not (4.53), but is a less stringent value. We can demonstrate this analytically by showing that, for \(\Delta_{i}\simeq-2/\beta_{i}\), critical-line firehose modes are still unstable. Adopting the ordering

\[\tilde{\omega}_{i\parallel}\sim\frac{1}{\beta_{i}^{3/5}}\sim k_{\parallel}^{6}\rho_{i}^{6}\,, \tag{4.60}\]

it follows (see appendix K.3.8) that the growth rate of the critical-line firehose modes is

\[\frac{\gamma}{\Omega_{i}}=-k_{\parallel}\rho_{i}\Bigg{[}\frac{6889k_{\parallel}^{6}\rho_{i}^{6}}{27648\sqrt{\pi}}\pm\sqrt{\frac{5}{4\beta_{i}}k_{\parallel}^{2}\rho_{i}^{2}+\frac{6889^{2}}{27648^{2}\pi}k_{\parallel}^{12}\rho_{i}^{12}}\Bigg{]}\,. \tag{4.61}\]

The maximum growth rate of such modes is then given by

\[\frac{\gamma_{\mathrm{max}}}{\Omega_{i}}\approx\frac{2^{3}5^{7/10}3^{3/2}\pi^{1/5}}{83^{4/5}7^{7/10}}\beta_{i}^{-7/10}\approx 1.2\beta_{i}^{-7/10}\,, \tag{4.62}\]

obtained at parallel wavenumber

\[(k_{\parallel}\rho_{i})_{\mathrm{peak}}\approx\frac{2\cdot 5^{1/10}3^{1/2}\pi^{1/10}}{83^{2/5}7^{1/10}}\beta_{i}^{-1/10}\approx 0.64\beta_{i}^{-1/10}\,. \tag{4.63}\]

When \(\beta_{i}\sim|\Delta_{i}|^{-1}\ll 10^{6}\), the fastest-growing critical-line firehose modes have a sufficiently large wavenumber that the effect of FLR coupling between shear Alfvén and slow modes is sub-dominant to the effect of cyclotron damping. Assuming that \(\Delta_{i}\beta_{i}+2\lesssim-1\) and adopting the ordering

\[\tilde{\omega}_{i\parallel}\sim\frac{1}{\beta_{i}^{1/2}}\sim|\Delta_{i}|^{1/2}\,,\quad k_{\parallel}\rho_{i}\sim\frac{1}{\sqrt{\log 1/\left|\beta_{i}^{-1}+\Delta_{i}/2\right|}}, \tag{4.64}\]

we show in appendix K.3.8 that the frequency of the shear Alfvén modes becomes

\[\frac{\omega}{\Omega_{i}}=-\frac{\mathrm{i}\sqrt{\pi}}{2k_{\parallel}\rho_{i}}\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{i}^{2}}\right)\pm k_{\parallel}\rho_{i}\sqrt{\left(\frac{1}{\beta_{i}}+\frac{\Delta_{i}}{2}\right)-\frac{\pi}{4k_{\parallel}^{4}\rho_{i}^{4}}\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{i}^{2}}\right)}\,.
\tag{4.65}\]

In this case, the maximum growth rate

\[\frac{\gamma_{\rm max}}{\Omega_{i}}\approx(k_{\parallel}\rho_{i})_{\rm peak}\left|\frac{1}{\beta_{i}}+\frac{\Delta_{i}}{2}\right|^{1/2} \tag{4.66}\]

is attained at

\[(k_{\parallel}\rho_{i})_{\rm peak}\approx\frac{\sqrt{2}}{\sqrt{\log 1/\left|\beta_{i}^{-1}+\Delta_{i}/2\right|}}\left[1-\frac{4\log\left(\log 1/\sqrt{\left|\beta_{i}^{-1}+\Delta_{i}/2\right|}\right)}{\log 1/\left|\beta_{i}^{-1}+\Delta_{i}/2\right|}\right]\,. \tag{4.67}\]

Figure 19 corroborates that the analytical approximation (4.67) provides a reasonable estimate of the parallel wavenumber at which peak growth occurs. Similarly to the \(\beta_{i}\gg 10^{6}\) regime, when \(\beta_{i}\ll 10^{6}\), critical-line firehose modes still grow when \(\Delta_{i}\approx-2/\beta_{i}\). Their growth rate as a function of wavenumber is given by

\[\frac{\gamma}{\Omega_{i}}=-\frac{\sqrt{\pi}}{2k_{\parallel}\rho_{i}}\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{i}^{2}}\right)\pm k_{\parallel}\rho_{i}\sqrt{\frac{5}{4\beta_{i}}k_{\parallel}^{2}\rho_{i}^{2}+\frac{\pi}{4k_{\parallel}^{4}\rho_{i}^{4}}\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{i}^{2}}\right)}\,. \tag{4.68}\]

The maximum of (4.68),

\[\frac{\gamma_{\rm max}}{\Omega_{i}}\approx\frac{\sqrt{5}}{2}(k_{\parallel}\rho_{i})_{\rm peak}^{2}\beta_{i}^{-1/2}, \tag{4.69}\]

is achieved at

\[(k_{\parallel}\rho_{i})_{\rm peak}\approx\frac{\sqrt{2}}{\sqrt{\log\left(\pi\beta_{i}/20\right)}}\left\{1-\frac{3\log\left[\log\left(\pi\beta_{i}/20\right)/2\right]}{\log\left(\pi\beta_{i}/20\right)}\right\}\,. \tag{4.70}\]

By comparing the expressions (4.57) and (4.65) for the complex frequency of shear Alfvén modes - specifically, the ratio of their damping terms - the dependence on \(\beta_{i}\) (equivalently, \(\Delta_{i}\)) of the relative importance of FLR slow-mode coupling and cyclotron damping can be determined. This ratio is \(\sim 0.16k_{\parallel}^{8}\rho_{i}^{8}\exp\left(1/k_{\parallel}^{2}\rho_{i}^{2}\right)\), and equals unity when \(k_{\parallel}\rho_{i}\approx 0.3\). Using (4.58) to estimate the value of \(\left|2\beta_{i}^{-1}+\Delta_{i}\right|\) at which this value of \(k_{\parallel}\rho_{i}\) is achieved, we find that \(\left|2\beta_{i}^{-1}+\Delta_{i}\right|\approx 8\times 10^{-7}\). Assuming \(|\Delta_{i}\beta_{i}+2|\sim 1\), we conclude that, for \(\beta_{i}\lesssim 10^{6}\), cyclotron damping will determine the wavenumber cutoff, with this transition value of \(\beta_{i}\) proportional to the value of \(|\Delta_{i}|\beta_{i}\). This estimate can be validated numerically by comparing (4.57) and (4.65) with the numerically determined growth rate (see figure 23). We indeed find that, for \(\beta_{i}\sim|\Delta_{i}|^{-1}\ll 10^{6}\), the effect of cyclotron damping sets the wavenumber of peak growth, while FLR slow-mode coupling does so for \(\beta_{i}\sim|\Delta_{i}|^{-1}\gg 10^{6}\). In both cases, the better of the two analytic approximations closely matches the numerical growth rate. These results suggest that, for very large \(\beta_{i}\), the wavenumber of the maximum growth of the firehose instability satisfies \(k\rho_{i}\ll 1\), rather than \(k\rho_{i}\sim 1\).
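The arithmetic behind this transition estimate is compact enough to check directly. The following Python sketch (an illustrative check, not part of the original calculation) balances the two damping terms and then inverts the peak-wavenumber scaling (4.58); the 0.16 prefactor and the 0.97 coefficient are those quoted above.

```python
# Locate the crossover between FLR slow-mode coupling and cyclotron damping.
# The damping-term ratio ~0.16 k^8 exp(1/k^2), with k = k_par * rho_i, follows
# from comparing (4.57) and (4.65); 0.16 ~ (6889/27648)*(2/pi).
import numpy as np
from scipy.optimize import brentq

prefactor = (6889 / 27648) * (2 / np.pi)

def damping_ratio(k):
    return prefactor * k**8 * np.exp(1 / k**2)

# The ratio is large at small k and passes through unity near k ~ 0.3:
k_eq = brentq(lambda k: damping_ratio(k) - 1.0, 0.1, 0.5)
print(f"damping terms balance at k_par rho_i ~ {k_eq:.2f}")

# Invert (4.58), (k_par rho_i)_peak ~ 0.97 |2/beta_i + Delta_i|^(1/12):
eps = (k_eq / 0.97) ** 12
print(f"|2/beta_i + Delta_i| ~ {eps:.0e} at the crossover")
print(f"so beta_i ~ {1 / eps:.0e} when |Delta_i beta_i + 2| ~ 1")
```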
This result might seem to contradict previous authors who claim to have found numerical evidence that the fastest growth rates of the firehose instability occur at \(k\rho_{i}\sim 1\) (Yoon _et al._, 1993; Schekochihin _et al._, 2005; Kunz _et al._, 2014); however, given the logarithmic dependence of the characteristic wavenumber (4.67), we conclude that it would take simulations at very high \(\beta_{i}\) to be able to distinguish between \(k\rho_{i}\sim 1\) and \(k\rho_{i}\sim\beta_{i}^{-1/12}\ll 1\). In addition, the results presented in figure 20b indicate that firehose modes with \(k\rho_{i}\sim 1\) have a less stringent instability threshold on \(\Delta_{i}\) than (4.53), providing an opportunity for such modes to grow significantly before longer-wavelength modes can do so. In short, it seems reasonable to assume for all practical purposes that the dominant firehose modes occur at \(k\rho_{i}\sim 1\), provided \(\beta_{i}\) is not extremely large.

#### 4.4.5 Sub-ion-Larmor-scale firehose instability

Figure 19b also suggests that, once \(|\Delta_{i}|\beta_{i}\gg 1\), firehose modes on sub-ion-Larmor scales develop - albeit with a smaller growth rate than the critical-line ones. Similarly to sub-ion-Larmor-scale mirror modes (see the end of section 4.3.1), we can characterise these modes analytically by adopting the ordering

\[k_{\parallel}\rho_{i}\sim k_{\perp}\rho_{i}\sim(|\Delta_{i}|\beta_{i})^{1/2}\gg 1,\quad\frac{\gamma}{\Omega_{i}}\sim\Delta_{i}\,. \tag{4.71}\]

If we also assume that \(|\Delta_{i}|\beta_{i}\ll\mu_{e}^{-1/2}\), it is shown in appendix K.3.2 that the growth rate of these modes is given by

\[\frac{\gamma}{\Omega_{i}} \approx \frac{k_{\parallel}}{k}\sqrt{\left(-\Delta_{i}\frac{k_{\perp}^{2}-k_{\parallel}^{2}}{k^{2}}-\frac{k^{2}\rho_{i}^{2}}{\beta_{i}}\right)\left(\frac{k^{2}\rho_{i}^{2}}{\beta_{i}}-\Delta_{i}\frac{k_{\parallel}^{2}}{k^{2}}\right)} \tag{4.72}\]
\[= \cos\theta\sqrt{\left[-\Delta_{i}\left(\sin^{2}\theta-\cos^{2}\theta\right)-\frac{k^{2}\rho_{i}^{2}}{\beta_{i}}\right]\left(\frac{k^{2}\rho_{i}^{2}}{\beta_{i}}-\Delta_{i}\cos^{2}\theta\right)}\,.\]

If \(\Delta_{i}<0\), we have an instability for all modes with \(\theta>45^{\circ}\) whose total wavenumber satisfies

\[k\rho_{i}<\sqrt{|\Delta_{i}|\beta_{i}\left(\sin^{2}\theta-\cos^{2}\theta\right)}\,. \tag{4.73}\]

Analogously to the sub-ion-Larmor-scale mirror modes (cf. the corresponding result at the end of section 4.3.1), the growth is maximised when \(k\rho_{i}\ll(|\Delta_{i}|\beta_{i})^{1/2}\) and \(\theta\approx 55^{\circ}\), with

\[\gamma_{\rm max}=\frac{1}{3\sqrt{3}}|\Delta_{i}|\Omega_{i}\approx 0.19|\Delta_{i}|\Omega_{i}\,. \tag{4.74}\]

In contrast to the case of the mirror instability, this growth rate is asymptotically small in \(|\Delta_{i}|\ll 1\) compared to the peak growth rate of the critical-line firehose modes [cf. (4.59) and (4.66)], and thus the instability of sub-ion-Larmor-scale firehose modes is always subdominant.

Figure 23: _Critical-line CES firehose instability._ Growth rates of shear Alfvén modes whose instability is driven by the CE ion-shear term in the CE distribution function (4.1), for wavevectors at an angle \(\theta\approx 39^{\circ}\) with the background magnetic field (viz., \(k_{\perp}=\sqrt{2/3}k_{\parallel}\)). The growth rates (solid lines) of all modes are calculated in the same way as figure 19. We show the growth rates for a selection of different values of \(\Delta_{i}\beta_{i}\) and \(\beta_{i}\). The approximations (4.57) and (4.65) for the growth rate (dashed and dotted red, respectively) in the limit \(k_{\parallel}\rho_{i}\ll 1\) are also plotted.
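As a consistency check on (4.72)-(4.74), the maximisation over wavenumber and propagation angle can be done by brute force. The sketch below (with illustrative values \(\Delta_{i}=-10^{-3}\), \(\beta_{i}=10^{4}\), which are assumptions of this check rather than values taken from the figures) recovers \(\gamma_{\rm max}\approx 0.19|\Delta_{i}|\Omega_{i}\) at \(\theta\approx 55^{\circ}\).

```python
# Brute-force maximisation of the sub-ion-Larmor-scale firehose growth rate
# (4.72) over propagation angle theta and total wavenumber k*rho_i.
import numpy as np

Delta_i, beta_i = -1e-3, 1e4                       # illustrative values
theta = np.linspace(0.01, np.pi / 2 - 0.01, 1500)
k_rho = np.linspace(1e-3, np.sqrt(abs(Delta_i) * beta_i), 1500)
TH, K = np.meshgrid(theta, k_rho)

term1 = -Delta_i * (np.sin(TH)**2 - np.cos(TH)**2) - K**2 / beta_i
term2 = K**2 / beta_i - Delta_i * np.cos(TH)**2
gamma = np.cos(TH) * np.sqrt(np.clip(term1 * term2, 0.0, None))  # units of Omega_i

i, j = np.unravel_index(np.argmax(gamma), gamma.shape)
print(f"gamma_max / (|Delta_i| Omega_i) = {gamma[i, j] / abs(Delta_i):.2f}")  # ~0.19
print(f"theta at peak = {np.degrees(TH[i, j]):.0f} degrees")                  # ~55
print(f"k rho_i at peak = {K[i, j]:.3f}  (<< (|Delta_i| beta_i)^(1/2))")
```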
For completeness, we note that, once \(|\Delta_{i}|\beta_{i}\sim\mu_{e}^{-1/2}\), the electron-pressure anisotropy associated with the CE electron-shear term begins to play a comparable role to the ion-pressure anisotropy for modes with \(k\rho_{i}\sim(|\Delta_{i}|\beta_{i})^{1/2}\). In this case, the expression for the growth rate becomes

\[\frac{\gamma}{\Omega_{i}} \approx \frac{k_{\parallel}}{k}\Bigg{\{}\left[-\Delta_{i}\frac{k_{\perp}^{2}-k_{\parallel}^{2}}{k^{2}}-k^{2}\rho_{i}^{2}\left(\frac{1}{\beta_{i}}+\frac{\mu_{e}^{1/2}\Delta_{i}}{2}\right)\right] \tag{4.75}\]
\[\qquad\times\left[\frac{k^{2}\rho_{i}^{2}}{\beta_{i}}-\Delta_{i}\left(\mu_{e}^{1/2}k_{\perp}^{2}\rho_{i}^{2}-\frac{1}{2}\mu_{e}^{1/2}k_{\parallel}^{2}\rho_{i}^{2}+\frac{k_{\parallel}^{2}}{k^{2}}\right)\right]\Bigg{\}}^{1/2}\,.\]

The bound (4.73) on the total wavenumber required for the instability of modes with \(k_{\perp}>k_{\parallel}\) is then

\[k\rho_{i}<\sqrt{\frac{|\Delta_{i}|\beta_{i}\left(\sin^{2}\theta-\cos^{2}\theta\right)}{1+\mu_{e}^{1/2}\Delta_{i}\beta_{i}/2}}\,. \tag{4.76}\]

Because the denominator tends to zero as \(\Delta_{i}\rightarrow-2\mu_{e}^{-1/2}\beta_{i}^{-1}\), the bound becomes increasingly weak, and so the region of \((k_{\parallel},k_{\perp})\)-space in which there is instability extends significantly towards electron Larmor scales. This extension precedes the onset of the oblique electron firehose instability (see section 4.4.7).

#### 4.4.6 Parallel electron firehose instability

The CES parallel electron firehose instability arises when the negative electron-pressure anisotropy (\(\Delta_{e}<0\)) associated with the CE electron-shear term becomes a sufficiently large free-energy source to overcome the relatively weak collisionless damping mechanisms that act on long-wavelength (\(k_{\parallel}\rho_{e}\ll 1\)) quasi-parallel whistler waves, by changing their handedness from right- to left-handed. More specifically, whistler waves with quasi-parallel wavevectors do not have a component of electric field parallel to \(\mathbf{B}_{0}\), and so are not subject to electron Landau damping. Electron cyclotron damping does occur, but is very inefficient for \(k_{\parallel}\rho_{e}\ll 1\). The resonant interaction primarily responsible for damping is that between the whistler waves and Maxwellian ions in the CE plasma streaming along field lines with \(v_{\parallel}\ll v_{\rm thi}\). When the handedness of the whistler waves changes, this interaction instead leads to the waves' growth. Because the resonant interaction driving the instability involves the plasma's ions, the CES parallel electron firehose instability has a rather small growth rate compared to other CES electron-scale microinstabilities, with growth disappearing entirely in the special case of cold ions. The parallel wavenumber of peak growth, which is a small but finite fraction of the electron Larmor scale, viz., \((k_{\parallel}\rho_{e})_{\rm peak}\approx 0.4\) for \(\Delta_{e}\lesssim-2/\beta_{e}\), is set by electron cyclotron damping, which prevents shorter-wavelength modes from becoming unstable.
The CES parallel electron firehose instability was first identified by Hollweg & Volk (1970) and has been studied subsequently using theory and simulations in plasma with \(\beta_{e}\sim 1\)-20 by a number of authors (e.g., Paesold & Benz 1999; Li & Habbal 2000; Messmer 2002; Gary & Nishimura 2003; Camporeale & Burgess 2008; Camporeale & Burgess 2010; Riquelme _et al._ 2018). To characterise the parallel electron firehose instability analytically, we can simply use the expressions (4.21_a_) and (4.21_b_) given in section 4.3.2 for the real frequency \(\varpi\) and growth rate \(\gamma\), respectively, of the parallel whistler waves that satisfy the ordering

\[\tilde{\omega}_{e\parallel}=\frac{\omega}{k_{\parallel}v_{\mathrm{the}}}\sim\Delta_{e}\sim\frac{1}{\beta_{e}}\,, \tag{4.77}\]

and have \(k_{\parallel}\rho_{e}\sim 1\), but this time with \(\Delta_{e}\beta_{e}<0\). Plots of the dispersion curves \(\varpi(k_{\parallel})\) and \(\gamma(k_{\parallel})\) of CES parallel electron firehose modes are then shown in figure 24 for a selection of different (negative) values of \(\Delta_{e}\beta_{e}\). In a hydrogen plasma, we find an instability for \(\Delta_{e}<(\Delta_{e})_{\mathrm{c}}\approx-1.7/\beta_{e}\). For \(\Delta_{e}\lesssim-2/\beta_{e}\), modes with \(k_{\parallel}\rho_{e}\lesssim 0.4\) become unstable. Figure 24 also shows that parallel electron firehose modes generically have a real frequency that is much greater than their growth rate (\(\varpi\sim\Omega_{e}/\beta_{e}\gg\gamma\)); however, this frequency changes sign at a wavenumber which, when \(\Delta_{e}\lesssim-2/\beta_{e}\), is comparable to the wavenumber \((k_{\parallel}\rho_{e})_{\mathrm{peak}}\) at which peak growth occurs.

These results can be elucidated by considering the expressions (4.21) in the subsidiary limit

\[k_{\parallel}\rho_{e}\sim\frac{1}{\sqrt{\log\left(2\mu_{e}^{-1/2}|1+2/\Delta_{e}\beta_{e}|\right)}}\ll 1\,. \tag{4.78}\]

Then (4.21) simplifies to

\[\varpi =\pm\left[\left(1+\frac{\Delta_{e}\beta_{e}}{2}\right)k_{\parallel}^{2}\rho_{e}^{2}-\mu_{e}^{1/2}\Delta_{e}\beta_{e}\right]\frac{\Omega_{e}}{\beta_{e}}, \tag{4.79a}\]
\[\gamma =\frac{\sqrt{\pi}}{k_{\parallel}\rho_{e}}\left[\Delta_{e}\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{e}^{2}}\right)-\left(\frac{\Delta_{e}}{2}+\frac{1}{\beta_{e}}\right)\mu_{e}^{1/2}k_{\parallel}^{2}\rho_{e}^{2}\right]\Omega_{e}. \tag{4.79b}\]

These approximations are plotted alongside (4.21) in figure 24; the agreement is qualitative rather than quantitative for \(\Delta_{e}\sim-2/\beta_{e}\), but becomes increasingly good as \(\Delta_{e}\) is decreased further. Using these simplified expressions, we can derive approximate analytical expressions for the instability's threshold \((\Delta_{e})_{\rm c}\), as well as its peak growth rate and the wavenumber at which that growth occurs.

Figure 24: _Parallel CES electron firehose instability_. Dispersion curves of unstable whistler modes, whose instability is driven by the negative electron-pressure anisotropy associated with the electron-shear term in the CE distribution function (4.1), for wavevectors that are co-parallel with the background magnetic field (viz., \(\mathbf{k}=k_{\parallel}\mathbf{\hat{z}}\)). The frequency (solid blue) and growth rates (solid red) of the modes are calculated using (4.21_a_) and (4.21_b_), respectively. The resulting frequencies and growth rates, when normalised as \(\gamma\beta_{e}/\Omega_{e}\), are functions of the dimensionless quantity \(\Delta_{e}\beta_{e}\); we show the dispersion curves for three different values of \(\Delta_{e}\beta_{e}\). The \(k_{\parallel}\rho_{e}\ll 1\) approximations (4.79_a_) for the frequency (dotted blue) and (4.79_b_) for the growth rate (dotted red) are also plotted.
First considering the sign of (4.79), it is easy to show that there exists a range of wavenumbers \(k_{\parallel}\) at which \(\gamma>0\) if and only if \(\Delta_{e}<-2/\beta_{e}\), so \((\Delta_{e})_{\rm c}\approx-2/\beta_{e}\). This is somewhat more stringent than the numerically observed threshold, a discrepancy attributable to FLR effects, not taken into account by the approximation (4.79_b_). When \(\Delta_{e}<-2/\beta_{e}\), it can be proven that the growth rate (4.79_b_) is maximised at

\[(k_{\parallel}\rho_{e})_{\rm peak}\approx\frac{1}{\sqrt{\log\left(\mu_{e}^{-1/2}|1/2+1/\Delta_{e}\beta_{e}|\right)}}\left\{1-\frac{\log\left[\sqrt{2}\log\left(\mu_{e}^{-1/2}|1/2+1/\Delta_{e}\beta_{e}|\right)\right]}{\log\left(\mu_{e}^{-1/2}|1/2+1/\Delta_{e}\beta_{e}|\right)}\right\}\,, \tag{4.80}\]

attaining the value

\[\gamma_{\rm max}=\sqrt{\pi}\mu_{e}^{1/2}(k_{\parallel}\rho_{e})_{\rm peak}\left|\frac{\Delta_{e}}{2}+\frac{1}{\beta_{e}}\right|\Omega_{e}\,. \tag{4.81}\]

Comparing (4.81) with the characteristic magnitude of \(\varpi\) evaluated using (4.79_a_) at \(k_{\parallel}\rho_{e}=(k_{\parallel}\rho_{e})_{\rm peak}\) (and assuming that \((k_{\parallel}\rho_{e})_{\rm peak}\gtrsim\mu_{e}^{1/4}\)), we conclude that \(\gamma\lesssim\mu_{e}^{1/4}\varpi\), thereby explaining our previous observation that the growth rate of parallel electron firehose modes is generically much smaller than the real frequency of those modes. We can also show that the one exception to this occurs when \((k_{\parallel}\rho_{e})_{\rm peak}\approx\mu_{e}^{1/4}[2\Delta_{e}\beta_{e}/(1+2\Delta_{e}\beta_{e})]^{1/2}\), an approximate expression for the wavenumber below which \(\varpi\) changes sign. As we will see, the characteristic growth rate of the CES parallel electron firehose is typically much smaller than that of its oblique relative in high-\(\beta\) plasma (see section 4.4.7), a conclusion that also applies in \(\beta_{e}\sim 1\) plasmas with bi-Maxwellian distributions (see Li & Habbal, 2000).
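The peak expressions (4.80) and (4.81) can be verified against a direct numerical maximisation of (4.79_b_); the short sketch below does so for the assumed illustrative parameters \(\Delta_{e}\beta_{e}=-4\), \(\beta_{e}=10^{4}\) and a hydrogen mass ratio.

```python
# Maximise the parallel electron firehose growth rate (4.79b) over k_par rho_e
# and compare with the analytic peak (4.80)-(4.81).
import numpy as np

mu_e = 1.0 / 1836.0
beta_e = 1e4
Delta_e = -4.0 / beta_e                      # illustrative: Delta_e beta_e = -4

k = np.linspace(0.05, 1.5, 5000)             # k_par rho_e
gamma = (np.sqrt(np.pi) / k) * (
    Delta_e * np.exp(-1.0 / k**2)
    - (Delta_e / 2 + 1.0 / beta_e) * np.sqrt(mu_e) * k**2
)                                            # (4.79b), units of Omega_e

j = np.argmax(gamma)
print(f"numerical:  k_peak = {k[j]:.2f}, "
      f"gamma_max*beta_e/Omega_e = {gamma[j] * beta_e:.4f}")

L = np.log(abs(0.5 + 1.0 / (Delta_e * beta_e)) / np.sqrt(mu_e))
k_pk = (1.0 - np.log(np.sqrt(2.0) * L) / L) / np.sqrt(L)               # (4.80)
g_pk = np.sqrt(np.pi * mu_e) * k_pk * abs(Delta_e / 2 + 1.0 / beta_e)  # (4.81)
print(f"analytical: k_peak = {k_pk:.2f}, "
      f"gamma_max*beta_e/Omega_e = {g_pk * beta_e:.4f}")
```

For these parameters the two estimates of the peak wavenumber and growth rate agree to within roughly ten per cent, consistent with the level of agreement visible in figure 24.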
#### 4.4.7 Oblique electron firehose instability

In spite of its similar name, the CES oblique electron firehose instability is quite distinct from its parallel cousin: it is a non-propagating mode that arises from the destabilisation of oblique KAWs by a sufficiently negative electron pressure anisotropy. The linear theory of the analogous instability in \(\beta_{e}\sim 1\) plasma with bi-Maxwellian electrons was first presented by Li & Habbal (2000), with a number of simulation studies of this instability having been conducted subsequently (Gary & Nishimura, 2003; Camporeale & Burgess, 2008; Camporeale & Burgess, 2010; Riquelme _et al._, 2018). The high-\(\beta\) variant of the (linear) instability for general anisotropic electron distribution functions was studied in the \(k_{\parallel}\ll k_{\perp}\) limit of gyrokinetics by Kunz _et al._ (2018).

In contrast to the findings of Gary & Nishimura (2003), who showed that the oblique electron firehose instability in a bi-Maxwellian plasma at \(\beta_{e}\sim 1\) involves gyroresonant wave-particle interactions between electrons and the unstable modes, instability of CES oblique electron firehose modes at \(\beta_{e}\gg 1\) is essentially non-resonant, with sufficiently large negative electron pressure anisotropies negating the restoring force that underpins the oscillation of high-\(\beta\) KAWs. Similarly to the parallel electron firehose instability, the CES oblique electron firehose instability is triggered when \(\Delta_{e}\lesssim-2/\beta_{e}\). The precise value of the threshold depends on the wavevector of the mode being destabilised. Analogously to the parallel electron firehose, long-wavelength oblique electron firehose modes are unstable when \(\Delta_{e}<(\Delta_{e})_{\rm c}=-2/\beta_{e}\). However, figure 25a shows that there is positive growth of \(k\rho_{e}\sim 1\) oblique electron firehose modes for less negative values of \(\Delta_{e}\), illustrating that the threshold is less stringent for such modes. This phenomenon is reminiscent of the ion firehose instability (see figure 20): ion-Larmor-scale oblique firehose modes also have a less stringent threshold than longer-wavelength modes. In addition to the \(k\rho_{e}\sim 1\) modes, a region of unstable KAWs with characteristic wavenumbers \(\mu_{e}^{1/2}\ll k\rho_{e}\ll\mu_{e}^{1/4}\), \(k_{\perp}\sim k_{\parallel}\), is evident in figure 25a. These modes, which were discussed at the end of section 4.4.1, are destabilised by negative ion pressure anisotropy; the extent of this region closely matches the analytic prediction (4.76).

Using a similar semi-analytic approach to that employed for the case of the ion firehose instability (see appendix K.2.2), we can determine the approximate threshold for the oblique electron firehose instability as a function of \(k_{\parallel}\rho_{e}\) and \(k_{\perp}\rho_{e}\). The results are shown in figure 25b; modes with \(k_{\parallel}\rho_{e}\sim 0.5\), \(k_{\perp}\rho_{e}\sim 0.4\) have the least stringent threshold (\(\Delta_{e}\approx-1.4/\beta_{e}\)).

Well into the unstable regime, i.e., when \(\Delta_{e}\beta_{e}+2\lesssim-1\), electron firehose modes across a broad range of wavevectors are destabilised (see figure 26a). The fastest-growing electron firehose modes are oblique and occur at electron Larmor scales (\(k_{\perp}\rho_{e}\sim 1>k_{\parallel}\rho_{e}\)), with characteristic growth rate \(\gamma\sim|\Delta_{e}|\Omega_{e}\sim\Omega_{e}/\beta_{e}\). This growth rate is much larger than the peak growth rate of the parallel electron firehose instability (4.81). Similarly to the electron mirror instability, a simple analytic expression for the growth rate of the fastest-growing electron firehose modes when \(\Delta_{e}\beta_{e}+2\lesssim-1\) is challenging to establish. We can, however, characterise the growth of two particular classes of electron firehose modes analytically.

The first of these are long-wavelength (viz., \(k\rho_{e}\ll 1\)) electron firehose modes. For these, we adopt the same ordering (4.38) as was considered when characterising long-wavelength electron mirror modes:

\[k_{\parallel}\rho_{e}\sim k_{\perp}\rho_{e}\sim\mu_{e}^{1/4}\ll 1,\qquad\tilde{\omega}_{e\parallel}=\frac{\omega}{k_{\parallel}v_{\rm the}}\sim\frac{k\rho_{e}}{\beta_{e}}\sim|\Delta_{e}|k\rho_{e}\,. \tag{4.82}\]

We then obtain a closed-form expression [cf. (4.39), and also (4.75)] for the complex frequencies of the electron firehose modes:
\[\omega\approx\pm k_{\parallel}\rho_{e}\Omega_{e} \Bigg{\{}\left[\frac{1}{\beta_{e}}+\Delta_{e}\left(\frac{1}{2}-\mu_{e}^{1/2}\frac{k_{\parallel}^{2}\rho_{e}^{2}-k_{\perp}^{2}\rho_{e}^{2}}{k^{4}\rho_{e}^{4}}\right)\right]\times\left[\frac{k_{\parallel}^{2}\rho_{e}^{2}}{\beta_{e}}-\Delta_{e}\left(k_{\perp}^{2}\rho_{e}^{2}+\mu_{e}^{1/2}\frac{k_{\parallel}^{2}}{k^{2}}-\frac{1}{2}k_{\parallel}^{2}\rho_{e}^{2}\right)\right]\Bigg{\}}^{1/2}. \tag{4.83}\]

If \(\Delta_{e}<-2/\beta_{e}\), the right-hand side of (4.83) is purely imaginary for \(k_{\perp}>k_{\parallel}\), and so we have positive growth for all long-wavelength electron firehose modes with \(\theta>45^{\circ}\)¹. This approximation should be compared with the numerically determined growth rate in figure 26b.

Footnote 1: In fact, this condition is stronger than necessary to guarantee instability - but the exact condition is somewhat complicated, so we omit discussion of it.

If it is further assumed that \(\mu_{e}^{1/4}\ll k\rho_{e}\ll 1\), \(k_{\perp}\sim k_{\parallel}\), it is shown in section 4.3.4 that (4.83) simplifies to an analogue of (4.40), viz.,

\[\omega\approx\pm k_{\parallel}\rho_{e}\Omega_{e}\sqrt{\left(\frac{1}{\beta_{e}}+\frac{\Delta_{e}}{2}\right)\left(k^{2}\rho_{e}^{2}\frac{1}{\beta_{e}}+\frac{\Delta_{e}}{2}\left[k_{\parallel}^{2}\rho_{e}^{2}-2k_{\perp}^{2}\rho_{e}^{2}\right]\right)}\,. \tag{4.84}\]

This result is again in agreement with the gyrokinetic calculations of Kunz _et al._ (2018). Extrapolating (4.84) to \(k_{\parallel}\rho_{e}\sim k_{\perp}\rho_{e}\sim 1\), we recover that \(\gamma\sim\Omega_{e}/\beta_{e}\) when \(|\Delta_{e}\beta_{e}+2|\gtrsim 1\).

Figure 25: _Onset of the CES oblique electron firehose instability._ **a)** Maximum positive growth rates of linear perturbations resulting from both the CE ion- and electron-shear terms in the CE distribution function (4.1) with \(\beta_{i}=10^{4}\) and \(\Delta_{e}=-1.7/\beta_{e}\) (which is less negative than the long-wavelength oblique electron-firehose instability threshold \(\Delta_{e}=-2/\beta_{e}\)). The growth rates of all modes are calculated in the same way as figure 19. The resulting growth rates, when normalised as \(\gamma\beta_{e}/\Omega_{e}\), are functions of the dimensionless parameter \(\Delta_{e}\beta_{e}\). The dotted line denotes the instability boundary (4.76) on KAWs driven unstable by ion pressure anisotropy of the CE ion-shear term. **b)** Threshold value of \(\Delta_{e}\beta_{e}\) at which modes with parallel and perpendicular wavenumber \(k_{\parallel}\) and \(k_{\perp}\), respectively, become unstable. Regions of \((k_{\parallel},k_{\perp})\) that are shaded black are stable.

A second sub-category of electron firehose modes that can be described analytically are quasi-perpendicular ones. For any fixed \(k_{\parallel}\rho_{e}\ll 1\), the most rapidly growing modes are strongly anisotropic: they occur when the perpendicular wavelength is comparable to the electron Larmor radius, \(k_{\perp}\rho_{e}\sim 1\). These modes can therefore be elucidated analytically by considering their dispersion relation under the ordering

\[\tilde{\omega}_{e\parallel}\sim|\Delta_{e}|\sim\frac{1}{\beta_{e}} \tag{4.85}\]
in the wavenumber domain \(\mu_{e}^{1/2}\ll k_{\parallel}\rho_{e}\ll k_{\perp}\rho_{e}\sim 1\). We solve the dispersion relation (see appendix K.3.10) to find

\[\frac{\omega}{\Omega_{e}}=\frac{k_{\parallel}\rho_{e}}{\mathcal{F}(k_{\perp}\rho_{e})}\Bigg{\{}-\mathrm{i}\frac{\sqrt{\pi}}{2}\left[\frac{k_{\perp}^{2}\rho_{e}^{2}}{\beta_{e}}+\Delta_{e}\mathcal{H}(k_{\perp}\rho_{e})\right]\pm\sqrt{\mathfrak{D}\left(k_{\perp}\rho_{e},\beta_{e},\Delta_{e}\right)}\Bigg{\}}\,, \tag{4.86}\]

where the discriminant is

\[\mathfrak{D}\left(k_{\perp}\rho_{e},\beta_{e},\Delta_{e}\right) \equiv \left[\frac{k_{\perp}^{2}\rho_{e}^{2}}{\beta_{e}}+\Delta_{e}\mathcal{H}(k_{\perp}\rho_{e})\right]\times\left\{\frac{1}{\beta_{e}}\left(1-\frac{\pi}{4}k_{\perp}^{2}\rho_{e}^{2}\right)-\Delta_{e}\left[\frac{\pi}{4}\mathcal{H}(k_{\perp}\rho_{e})+\mathcal{F}(k_{\perp}\rho_{e})\right]\right\}\,, \tag{4.87}\]

and the two auxiliary functions are [cf. (3.24)]

\[\mathcal{F}(\alpha) \equiv \exp\left(-\frac{\alpha^{2}}{2}\right)\left[I_{0}\left(\frac{\alpha^{2}}{2}\right)-I_{1}\left(\frac{\alpha^{2}}{2}\right)\right]\,, \tag{4.88}\]
\[\mathcal{H}(\alpha) \equiv 1-\exp\left(-\frac{\alpha^{2}}{2}\right)I_{0}\bigg{(}\frac{\alpha^{2}}{2}\bigg{)}\,. \tag{4.89}\]

As a sanity check, we observe that in the subsidiary limit \(k_{\perp}\rho_{e}\ll 1\), (4.86) becomes

\[\omega\approx\pm k_{\perp}k_{\parallel}\rho_{e}^{2}\Omega_{e}\sqrt{\left(\frac{1}{\beta_{e}}+\frac{\Delta_{e}}{2}\right)\left(\frac{1}{\beta_{e}}-\Delta_{e}\right)}\,, \tag{4.90}\]

returning us to the dispersion relation (4.84) of unstable kinetic Alfvén waves taken in the limit \(k_{\parallel}\ll k_{\perp}\). In the case when \(\Delta_{e}<-2\beta_{e}^{-1}\), one of the modes described by (4.86) can be destabilised by sufficiently negative pressure anisotropy, and become purely growing. The wavenumbers susceptible to this instability are those satisfying

\[k_{\perp}^{2}\rho_{e}^{2}\left[1-\exp\left(-\frac{k_{\perp}^{2}\rho_{e}^{2}}{2}\right)I_{0}\bigg{(}\frac{k_{\perp}^{2}\rho_{e}^{2}}{2}\bigg{)}\right]^{-1}<|\Delta_{e}|\beta_{e}. \tag{4.91}\]

Provided \(\Delta_{e}<-2\beta_{e}^{-1}\) and \(|\Delta_{e}|\beta_{e}\sim 1\), this gives a range of unstable perpendicular wavenumbers \(k_{\perp}\rho_{e}\lesssim 1\).

Figure 26: _Oblique electron firehose instability at \(\Delta_{e}\beta_{e}+2\lesssim-1\)_. **a)** Maximum positive growth rates of linear perturbations resulting from CE ion- and electron-shear terms in the CE distribution function (4.1) for \(\Delta_{e}\beta_{e}=-3\). Here, a temperature-equilibrated hydrogen plasma is considered, viz., \(\Delta_{e}=\mu_{e}^{1/2}\Delta_{i}\), and \(\beta_{i}=\beta_{e}\). The growth rates of all modes are calculated in the same way as figure 25. **b)** Plots of the oblique electron firehose mode growth rate (solid line) as a function of \(k_{\parallel}\rho_{e}\) with \(k_{\perp}\rho_{e}=0.2\) (top), and as a function of \(k_{\perp}\rho_{e}\) with \(k_{\parallel}\rho_{e}=0.2\) (bottom). The dotted and dashed lines show the analytical predictions (4.83) and (4.86), respectively.
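The condition (4.91) is straightforward to evaluate numerically. The sketch below (a minimal illustration using only \(\mathcal{H}\) from (4.89)) confirms that its left-hand side tends to 2 at long wavelengths - recovering the threshold \(\Delta_{e}<-2/\beta_{e}\) - and yields the band of unstable \(k_{\perp}\rho_{e}\) for a given \(|\Delta_{e}|\beta_{e}\).

```python
# Evaluate the oblique electron firehose instability condition (4.91).
# scipy.special.ive(0, x) returns exp(-x) * I_0(x).
import numpy as np
from scipy.special import ive
from scipy.optimize import brentq

def lhs(alpha):
    """k_perp^2 rho_e^2 / H(k_perp rho_e), with H defined in (4.89)."""
    x = alpha**2 / 2
    return alpha**2 / (1.0 - ive(0, x))

# lhs -> 2 as k_perp rho_e -> 0, recovering Delta_e < -2/beta_e:
print(f"lhs({1e-4}) = {lhs(1e-4):.3f}")

# Unstable band of k_perp rho_e for |Delta_e| beta_e = 3:
alpha_max = brentq(lambda a: lhs(a) - 3.0, 1e-2, 10.0)
print(f"|Delta_e| beta_e = 3: unstable for k_perp rho_e < {alpha_max:.2f}")
# ~1.1, i.e. comparable to 2/sqrt(pi); cf. the EST threshold estimate in 4.4.8.
```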
That these wavenumbers are indeed unstable follows immediately from the observation that if (4.91) holds, then the discriminant (4.87) satisfies

\[\mathfrak{D}\left(k_{\perp}\rho_{e},\beta_{e},\Delta_{e}\right) = -\pi\left[\Delta_{e}\mathcal{H}(k_{\perp}\rho_{e})-\frac{k_{\perp}^{2}\rho_{e}^{2}}{\beta_{e}}\right]\left[\Delta_{e}\mathcal{H}(k_{\perp}\rho_{e})-\frac{k_{\perp}^{2}\rho_{e}^{2}}{\beta_{e}}+\frac{1}{\beta_{e}}+|\Delta_{e}|\mathcal{F}(k_{\perp}\rho_{e})\right] \tag{4.92}\]
\[< \pi\left[\Delta_{e}\mathcal{H}(k_{\perp}\rho_{e})-\frac{k_{\perp}^{2}\rho_{e}^{2}}{\beta_{e}}\right]^{2}\,,\]

from which it follows that the imaginary part of (4.86) for the '+' root is positive. When \(|\Delta_{e}\beta_{e}+2|\sim 1\), the characteristic growth rate of the instability is

\[\gamma_{\mathrm{max}}\sim k_{\parallel}\rho_{e}|\Delta_{e}|\Omega_{e}\,, \tag{4.93}\]

which is consistent with the numerical findings shown in figure 26a. Indeed, (4.86) agrees reasonably with the numerically determined growth rate for small values of \(k_{\parallel}\rho_{e}\) (see figure 26b).

One particularly interesting subsidiary limit of (4.86) is \(|\Delta_{e}|\beta_{e}\gg 1\), in which it can be shown that, under the ordering \(k_{\perp}\rho_{e}\sim(|\Delta_{e}|\beta_{e})^{1/2}\gg 1\), the growth rate is

\[\gamma\approx\pi k_{\parallel}k_{\perp}^{3}\rho_{e}^{4}\left(|\Delta_{e}|-\frac{k_{\perp}^{2}\rho_{e}^{2}}{\beta_{e}}\right)\Omega_{e}\,. \tag{4.94}\]

This implies that the perpendicular wavelength of peak growth transitions smoothly to values below the electron Larmor radius as \(|\Delta_{e}|\beta_{e}\) is increased beyond order-unity values. As we shall discuss in the next section, these unstable sub-electron-Larmor-scale modes are best regarded as a distinct instability from the electron firehose, and so we introduce it properly in a new section.

#### 4.4.8 Electron-scale-transition (EST) instability

When \(|\Delta_{e}|\beta_{e}\) is increased significantly past unity, the fastest-growing microinstability changes character from that of a destabilised KAW, and instead becomes a destabilised non-propagating mode. The authors of this paper are not aware of this instability having been identified previously; we call it the _electron-scale-transition (EST)_ instability, on account of it providing a smooth transition between unstable KAWs with \(k_{\perp}\rho_{e}\ll 1\), and microinstabilities on sub-electron scales (\(k_{\perp}\rho_{e}\gtrsim 1\)). Unstable EST modes are quasi-perpendicular (\(k_{\parallel}\rho_{e}<1\lesssim k_{\perp}\rho_{e}\lesssim\beta_{e}^{1/7}\)), with the parallel wavenumber of the fastest-growing modes determined by a balance between the instability's drive and the electron-cyclotron damping that arises at sufficiently large \(k_{\parallel}\rho_{e}\). In contrast to the oblique electron firehose instability, Landau-resonant electrons with \(v_{\parallel}\approx\omega/k_{\parallel}\) also play a role in the EST instability's physical mechanism.

To demonstrate that the EST modes are not unstable KAWs, we consider the expression (4.87) in a Maxwellian plasma (viz., \(\Delta_{e}=0\)). It is easy to show that in this case, \(\mathfrak{D}\left(k_{\perp}\rho_{e},\beta_{e},\Delta_{e}\right)\leqslant 0\) if and only if

\[k_{\perp}\rho_{e}\geqslant\frac{2}{\sqrt{\pi}}\,. \tag{4.95}\]

Thus, for sufficiently large values of \(k_{\perp}\rho_{e}\), KAWs cease to be able to propagate, and we obtain two purely damped non-propagating modes. Thus, any microinstabilities for \(\Delta_{e}<0\) associated with these modes can no longer be considered to be unstable KAWs. Substituting (4.95) into the threshold condition (4.91), we estimate that EST modes first become unstable when \(\Delta_{e}<(\Delta_{e})_{\rm c}\approx-3/\beta_{e}\). As \(\Delta_{e}\) is decreased below \((\Delta_{e})_{\rm c}\), the EST modes quickly acquire a faster growth rate than all the other CES microinstabilities that can operate for such values of \(\Delta_{e}\). We illustrate this numerically in figure 27a by showing the maximum growth rate of all CES microinstabilities as a function of \((k_{\parallel},k_{\perp})\) for a particular value of \(\Delta_{e}<0\). The EST modes with \(k_{\parallel}\rho_{e}\), \(k_{\perp}\rho_{e}>1\) are the fastest growing, with \(\gamma\gg\Omega_{e}/\beta_{e}\).

In the limit \(|\Delta_{e}|\beta_{e}\gg 1\) (but \(|\Delta_{e}|\beta_{e}\ll\beta_{e}^{2/7}\)), the maximum growth rate of the EST instability can be estimated analytically. Adopting the orderings

\[k_{\parallel}\rho_{e}\sim\frac{1}{\sqrt{\log|\Delta_{e}|\beta_{e}}},\quad k_{\perp}\rho_{e}\sim(|\Delta_{e}|\beta_{e})^{1/2},\quad\frac{\omega}{k_{\parallel}v_{\rm the}}\sim|\Delta_{e}|^{5/2}\beta_{e}^{3/2}, \tag{4.96}\]

it can be shown (see appendix K.3.11) that the EST mode has the growth rate

\[\frac{\gamma}{\Omega_{e}}=\pi k_{\parallel}k_{\perp}^{3}\rho_{e}^{4}\left(|\Delta_{e}|-\frac{k_{\perp}^{2}\rho_{e}^{2}}{\beta_{e}}\right)\left\{1+\frac{\pi k_{\perp}^{2}\rho_{e}^{2}}{k_{\parallel}^{2}\rho_{e}^{2}}\left[4\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{e}^{2}}\right)+\sqrt{\pi}\mu_{e}^{1/2}k_{\parallel}^{3}\rho_{e}^{3}\right]\right\}^{-1}, \tag{4.97}\]

where the term proportional to \(\mu_{e}^{1/2}\) is associated with Landau damping on the ion species.
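Before taking limits of (4.97), it is instructive to locate its maximum numerically; the sketch below grid-maximises (4.97) and compares the result with the analytic peak wavenumbers derived below. The parameter values (\(\Delta_{e}\beta_{e}=-10\), \(\beta_{e}=10^{4}\), hydrogen mass ratio, matching figure 27) are assumptions of this illustration.

```python
# Grid-maximise the EST growth rate (4.97) over (k_par rho_e, k_perp rho_e).
import numpy as np

mu_e = 1.0 / 1836.0
beta_e = 1e4
Delta_e = -10.0 / beta_e                  # Delta_e beta_e = -10, as in figure 27
D = abs(Delta_e)

kpar = np.linspace(0.05, 1.0, 600)
kperp = np.linspace(0.5, np.sqrt(D * beta_e), 600)
KP, KT = np.meshgrid(kpar, kperp)

drive = np.pi * KP * KT**3 * (D - KT**2 / beta_e)
correction = 1.0 + (np.pi * KT**2 / KP**2) * (
    4.0 * np.exp(-1.0 / KP**2) + np.sqrt(np.pi) * np.sqrt(mu_e) * KP**3
)
gamma = drive / correction                # (4.97), units of Omega_e

i, j = np.unravel_index(np.argmax(gamma), gamma.shape)
print(f"numerical:  gamma_max/Omega_e = {gamma[i, j]:.2e} at "
      f"k_par rho_e = {KP[i, j]:.2f}, k_perp rho_e = {KT[i, j]:.2f}")

# Analytic peak wavenumbers, cf. (4.99a,b) below:
L = np.log(24 * np.pi * D * beta_e / 5)
print(f"analytical: k_par rho_e ~ {(1 - np.log(L) / L) / np.sqrt(L):.2f}, "
      f"k_perp rho_e ~ {np.sqrt(3 * D * beta_e / 5):.2f}")
```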
Thus, any microinstabilities for \(\Delta_{e}<0\) associated with these modes can no longer be considered to be unstable KAWs. Substituting (105) into the threshold condition (101), we estimate that EST modes first become unstable when \(\Delta_{e}<(\Delta_{e})_{c}\approx-3/\beta_{e}\). As \(\Delta_{e}\) is decreased below \((\Delta_{e})_{c}\), the EST modes quickly acquire a faster growth rate than all the other CES microinstabilities that can operate for such values of \(\Delta_{e}\). We illustrate this numerically in figure 27a by showing the maximum growth rate of all CES microinstabilities as a function of \((k_{\parallel},k_{\perp})\) for a particular value of \(\Delta_{e}<0\). The EST modes with \(k_{\parallel}\rho_{e}\), \(k_{\perp}\rho_{e}>1\) are the fastest growing, with \(\gamma\gg\Omega_{e}/\beta_{e}\). In the limit \(|\Delta_{e}|\beta_{e}\gg 1\) (but \(|\Delta_{e}|\beta_{e}\ll\beta_{e}^{2/7}\)), the maximum growth rate of the EST instability can be estimated analytically. Adopting the orderings \[k_{\parallel}\rho_{e}\sim\frac{1}{\sqrt{\log|\Delta_{e}|\beta_{e}}},\quad k_{ \perp}\rho_{e}\sim(|\Delta_{e}|\beta_{e})^{1/2},\quad\frac{\omega}{k_{\parallel }v_{\rm the}}\sim|\Delta_{e}|^{5/2}\beta_{e}^{3/2}, \tag{106}\] it can be shown (see appendix K.3.11) that the EST mode has the growth rate \[\frac{\gamma}{\Omega_{e}}=\pi k_{\parallel}k_{\perp}^{3}\rho_{e}^{4}\left(| \Delta_{e}|-\frac{k_{\perp}^{2}\rho_{e}^{2}}{\beta_{e}}\right)\left\{1+\frac{ \pi k_{\perp}^{2}\rho_{e}^{2}}{k_{\parallel}^{2}\rho_{e}^{2}}\left[4\exp\left( -\frac{1}{k_{\parallel}^{2}\rho_{e}^{2}}\right)+\sqrt{\pi}\mu_{e}^{1/2}k_{ \parallel}^{3}\rho_{e}^{3}\right]\right\}^{-1}, \tag{107}\] where the term proportional to \(\mu_{e}^{1/2}\) is associated with Landau damping on the ion species. Taking the subsidiary limit \(k_{\parallel}\rho_{e}\ll 1/\sqrt{\log|\Delta_{e}|\beta_{e}}\), we recover (4.94). The EST mode's growth rate is, therefore, anticipated to be positive provided \(k_{\perp}\rho_{e}<\left(|\Delta_{e}|\beta_{e}\right)^{1/2}\). It can then be shown that (4.97) has the approximate maximum value \[\gamma_{\max}\approx\frac{6\sqrt{3}\pi}{25\sqrt{5}}(k_{\parallel}\rho_{e})_{ \rm peak}\left[1-\frac{3\pi^{3/2}}{5}\mu_{e}^{1/2}(k_{\parallel}\rho_{e})_{\rm peak }|\Delta_{e}|\beta_{e}\right]|\Delta_{e}|\left(|\Delta_{e}|\beta_{e}\right)^{3/ 2}\Omega_{e}\,, \tag{4.98}\] at the wavenumbers \[(k_{\perp}\rho_{e})_{\rm peak} =\left(\frac{3|\Delta_{e}|\beta_{e}}{5}\right)^{1/2}, \tag{4.99a}\] \[(k_{\parallel}\rho_{e})_{\rm peak} =\frac{1}{\sqrt{\log\left(24\pi|\Delta_{e}|\beta_{e}/5\right)}} \left[1-\frac{\log\log\left(24\pi|\Delta_{e}|\beta_{e}/5\right)}{\log 24\pi| \Delta_{e}|\beta/5}\right]\,. \tag{4.99b}\] The growth rate (4.97) is plotted in figure 27b along with the numerically determined growth rate; reasonable agreement is found. We note that, for perpendicular wavenumbers \(k_{\perp}\rho_{e}\gtrsim\beta_{e}^{1/7}\), the characteristic quasi-perpendicular plasma modes in a Maxwellian plasma are not EST modes, but are instead whisper waves (see section 4.4.10). Therefore, when \(|\Delta_{e}|\beta_{e}\gtrsim\beta_{e}^{2/7}\) [see (4.106)], the expressions (4.98) and (4.99a) for the EST mode's maximum growth rate and the perpendicular wavenumber at which that growth is attained are no longer valid. 
Instead, when \(|\Delta_{e}|\beta_{e}\gtrsim\beta_{e}^{2/7}\), the fastest-growing EST modes (which coexist with faster-growing unstable whisper waves) are those close to the scale \(k_{\perp}\rho_{e}\sim|\Delta_{e}|^{-1/5}\); extrapolating from (4.97), we find that \(\gamma_{\max}\sim|\Delta_{e}|^{2/5}\Omega_{e}/\sqrt{\log|\Delta_{e}|\beta_{e}}\).

Figure 27: _CES electron-scale-transition (EST) instability._ **a)** Maximum positive growth rates of linear perturbations resulting from CE ion- and electron-shear terms in the CE distribution function (4.1) for \(\Delta_{e}\beta_{e}=-10\) and \(\beta_{e}=10^{4}\). Here, a temperature-equilibrated hydrogen plasma is considered, viz., \(\Delta_{e}=\mu_{e}^{1/2}\Delta_{i}\), and \(\beta_{i}=\beta_{e}\). The growth rates of all modes are calculated in the same way as figure 25. **b)** Plot of the EST mode growth rate (solid line) as a function of \(k_{\parallel}\rho_{e}\) with \(k_{\perp}\rho_{e}=(3|\Delta_{e}|\beta_{e}/5)^{1/2}\) (top), and as a function of \(k_{\perp}\rho_{e}\) with \(k_{\parallel}\rho_{e}=(k_{\parallel}\rho_{e})_{\rm peak}\) (bottom), where \((k_{\parallel}\rho_{e})_{\rm peak}\) is given by (4.99b). The dotted and dashed lines show the analytical prediction (4.97).

#### 4.4.9 Oblique transverse instability

The transverse instability (whose physical mechanism was discussed in section 4.3.3) can be excited for sufficiently large negative electron pressure anisotropies as well as positive ones; however, when \(\Delta_{e}<0\), the fastest-growing modes are highly oblique with respect to the background magnetic field as opposed to parallel to it. In contrast to the \(\Delta_{e}>0\) case, the oblique transverse instability does not become the fastest-growing CES microinstability for all \(\Delta_{e}\ll-\beta_{e}^{-1}\), only becoming so once its maximum growth rate exceeds the electron Larmor frequency (which occurs when \(\Delta_{e}\lesssim-\beta_{e}^{-1/3}\)). While \(\Delta_{e}>-\beta_{e}^{-1/3}\), the fastest-growing oblique transverse modes, which have \(k_{\perp}\rho_{e}\sim(|\Delta_{e}|\beta_{e})^{1/2}\), are confined to the parallel wavenumbers satisfying \(k_{\parallel}\rho_{e}\gtrsim 1\). Their growth is outcompeted by the EST and whisper instabilities (see sections 4.4.8 and 4.4.10, respectively), which have \(k_{\parallel}\rho_{e}<1\); this is illustrated numerically in figure 28a for a particular large, negative value of \(\Delta_{e}\beta_{e}\).

As for their analytical characterisation, transverse modes have identical growth rates to those obtained in the \(\Delta_{e}>0\) case, given by (4.29\(a\),_b_). For \(\Delta_{e}<0\), only the first mode can have positive growth, and such growth is only realised if \(k_{\perp}>k_{\parallel}\). Now taking the quasi-perpendicular unmagnetised limit \(k_{\perp}\rho_{e}\gg k_{\parallel}\rho_{e}\gg 1\), we find that this mode has the growth rate

\[\gamma\approx\frac{k_{\perp}v_{\text{the}}}{\sqrt{\pi}}\left(-\Delta_{e}-\frac{k_{\perp}^{2}\rho_{e}^{2}}{\beta_{e}}\right)\,. \tag{4.100}\]

This expression is mathematically identical to the parallel transverse instability (4.30) (section 4.3.3), except with the substitution \(k_{\parallel}\to k_{\perp}\); the maximum growth rate of the oblique transverse instability is, therefore,

\[\gamma_{\text{max}}=\frac{2}{3\sqrt{3\pi}}(|\Delta_{e}|\beta_{e})^{1/2}|\Delta_{e}|\Omega_{e} \tag{4.101}\]

at the (perpendicular) wavenumber

\[(k_{\perp}\rho_{e})_{\text{peak}}=(|\Delta_{e}|\beta_{e}/3)^{1/2}\,.
\tag{4.102}\]

The approximation (4.100) is compared with the numerically determined growth rate in figure 28b; we find that the approximation is excellent provided \(k_{\parallel}\rho_{e}\gtrsim 1\). We note that, based on our analysis, the oblique transverse mode is anticipated always to have a smaller growth rate than the EST instability (4.98) when \(1\ll|\Delta_{e}|\beta_{e}\lesssim\beta_{e}^{2/7}\):

\[\frac{\gamma_{\rm EST}}{\gamma_{\rm trans}}\sim\frac{|\Delta_{e}|\beta_{e}}{\sqrt{\log|\Delta_{e}|\beta_{e}}}\gg 1\,. \tag{4.103}\]

Figure 28: _CES oblique transverse instability._ **a)** Maximum positive growth rates of linear perturbations resulting from CE ion- and electron-shear terms in the CE distribution function (4.1) for \(\Delta_{e}\beta_{e}=-100\) and \(\beta_{e}=10^{4}\). Here, a temperature-equilibrated hydrogen plasma is considered, viz., \(\Delta_{e}=\mu_{e}^{1/2}\Delta_{i}\), and \(\beta_{i}=\beta_{e}\). The growth rates of all modes are calculated in the same way as figure 25. **b)** Plot of the oblique transverse mode's growth rate (solid line) as a function of \(k_{\parallel}\rho_{e}\) with \(k_{\perp}\rho_{e}=(|\Delta_{e}|\beta_{e}/3)^{1/2}\) (top), and as a function of \(k_{\perp}\rho_{e}\) with \(k_{\parallel}\rho_{e}=2\) (bottom). The dotted and dashed lines show the analytical prediction (4.100).

#### 4.4.10 Whisper instability

When \(\Delta_{e}\lesssim-\beta_{e}^{-5/7}\) (but \(\Delta_{e}\gg-\beta_{e}^{-1/3}\)), the dominant CES microinstability is the CES whisper instability. The instability is so named because it consists in the destabilisation of the whisper wave, a plasma wave whose existence has not previously been identified: it is therefore of some interest. The likely reason for its previous neglect relates to the somewhat esoteric regime in which such a wave exists - a magnetised plasma with \(\beta_{e}\gg 1\) that might naively be expected to support essentially unmagnetised perturbations at \(k\rho_{e}\gg 1\). The energetically dominant magnetic component of the wave is perpendicular to both \(\mathbf{k}\) and \(\mathbf{B}_{0}\) (viz., \(\delta B_{y}\)), and the wave itself has no electron-number-density perturbation unless \(\beta_{e}\) is extremely large. Its operation (and also the operation of its instability in a CE plasma) involves both resonant and non-resonant interactions between electrons and the wave. More specifically, it is the non-resonant interaction of electrons at the edge of their Larmor orbits with the parallel electric field associated with the whisper wave that gives rise to the phase-shifted current perturbation necessary for wave propagation, while the primary damping mechanisms (Landau and Barnes' damping, respectively) of whisper waves are mediated by resonant wave-particle interactions. The physical mechanism of this wave and its instability (which is most clearly explored within the quasi-perpendicular limit of gyrokinetics) will be discussed further in a future paper.

We characterise the whisper instability's growth analytically in the limits \(\mu_{e}^{1/2}\ll k_{\parallel}\rho_{e}\ll 1\), \(k_{\perp}\rho_{e}\gg 1\) and \(|\Delta_{e}|\beta_{e}\gg 1\) under the orderings

\[\tilde{\omega}_{e\parallel}=\frac{\omega}{k_{\parallel}v_{\rm the}}\sim\frac{1}{\beta_{e}^{2/7}}\sim\frac{1}{k_{\perp}^{2}\rho_{e}^{2}}\sim\frac{1}{|\Delta_{e}|\beta_{e}},\quad k_{\parallel}\rho_{e}\sim\frac{1}{\sqrt{\log|\Delta_{e}|\beta_{e}}}\ll 1\,.
\tag{4.104}\]

It can be shown (see appendix K.3.12) that such modes have complex frequencies

\[\frac{\omega}{\Omega_{e}} = -{\rm i}\left[\frac{\sqrt{\pi}}{2k_{\parallel}\rho_{e}}\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{e}^{2}}\right)+\frac{k_{\parallel}\rho_{e}}{8\sqrt{\pi}k_{\perp}^{2}\rho_{e}^{2}}\right] \tag{4.105}\]
\[\pm k_{\parallel}\rho_{e}\sqrt{\frac{\sqrt{\pi}}{4}k_{\perp}\rho_{e}\left(\frac{k_{\perp}^{2}\rho_{e}^{2}}{\beta_{e}}+\Delta_{e}\right)-\left[\frac{\sqrt{\pi}}{2k_{\parallel}^{2}\rho_{e}^{2}}\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{e}^{2}}\right)+\frac{1}{8\sqrt{\pi}k_{\perp}^{2}\rho_{e}^{2}}\right]^{2}}.\]

It is a simple matter to ascertain that the right-hand side of (4.105) is either purely real or purely imaginary, and thus modes are approximately either non-propagating with growth rate \(\gamma\) or purely oscillating with frequency \(\varpi\). The dispersion curves \(\varpi(k_{\perp})\) and \(\gamma(k_{\perp})\) are plotted in figure 29.

To interpret (4.105), we take subsidiary limits. We first consider \(1\ll k_{\perp}\rho_{e}\sim(|\Delta_{e}|\beta_{e})^{1/2}\ll\beta_{e}^{1/7}\): in this case, the expression for the '+' root simplifies to the dispersion relation (4.97) of the EST instability. However, when \(k_{\perp}\rho_{e}\gtrsim\beta_{e}^{1/7}/2^{4/7}\pi^{1/7}\approx 0.57\beta_{e}^{1/7}\), this simplification is no longer justifiable, and so when

\[|\Delta_{e}|\beta_{e}\gtrsim\frac{5^{6/7}}{2^{10/7}3^{4/7}\pi^{3/7}}\beta_{e}^{2/7}\approx 0.79\beta_{e}^{2/7}, \tag{4.106}\]

the perpendicular wavenumber (4.99a) of the EST instability's peak growth derived from (4.97) is so large that (4.97) is no longer, in fact, a valid description of the EST mode's growth rate. Now considering the subsidiary limit \(k_{\perp}\rho_{e}\sim(|\Delta_{e}|\beta_{e})^{1/2}\gg\beta_{e}^{1/7}\) and \(k_{\parallel}\rho_{e}\ll 1/\sqrt{\log|\Delta_{e}|\beta_{e}}\) of (4.105), we find two propagating modes:

\[\frac{\omega}{\Omega_{e}}\approx\pm\frac{\pi^{1/4}}{2}k_{\parallel}\rho_{e}\sqrt{k_{\perp}\rho_{e}\left(\frac{k_{\perp}^{2}\rho_{e}^{2}}{\beta_{e}}+\Delta_{e}\right)}\,. \tag{4.107}\]

If we set \(\Delta_{e}=0\) in order to identify the underlying Maxwellian mode, this reduces to

\[\frac{\omega}{\Omega_{e}}\approx\pm\frac{\pi^{1/4}}{2}k_{\parallel}\rho_{e}\frac{(k_{\perp}\rho_{e})^{3/2}}{\beta_{e}^{1/2}}\,. \tag{4.108}\]

This dispersion relation, which does not coincide with any previously identified plasma wave, is that of the whisper wave. The presence of this wave in the case of \(\Delta_{e}<0\) results in a purely unstable mode at finite \(k_{\parallel}\rho_{e}\), provided \(\beta_{e}^{1/7}\lesssim k_{\perp}\rho_{e}<(|\Delta_{e}|\beta_{e})^{1/2}\). In this subsidiary limit, the growth rate of the instability is

\[\frac{\gamma}{\Omega_{e}} = -\frac{\sqrt{\pi}}{2k_{\parallel}\rho_{e}}\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{e}^{2}}\right) \tag{4.109}\]
\[\pm k_{\parallel}\rho_{e}\sqrt{\frac{\sqrt{\pi}}{4}k_{\perp}\rho_{e}\left(|\Delta_{e}|-\frac{k_{\perp}^{2}\rho_{e}^{2}}{\beta_{e}}\right)+\frac{\pi}{2k_{\parallel}^{4}\rho_{e}^{4}}\exp\left(-\frac{2}{k_{\parallel}^{2}\rho_{e}^{2}}\right)}\,.\]

This has the maximum value

\[\gamma_{\rm max}\approx\frac{\pi^{1/4}}{\sqrt{2}}(k_{\parallel}\rho_{e})_{\rm peak}\left(|\Delta_{e}|\beta_{e}\right)^{1/4}|\Delta_{e}|^{1/2}\Omega_{e}\,, \tag{4.110}\]

at the wavenumbers

\[(k_{\perp}\rho_{e})_{\rm peak}=\left(\frac{|\Delta_{e}|\beta_{e}}{3}\right)^{1/2}, \tag{4.111a}\]
\[(k_{\parallel}\rho_{e})_{\rm peak}=\frac{2}{\sqrt{3\log|\Delta_{e}|\beta_{e}}}\left[1-\frac{4\log\left[3\log\left(|\Delta_{e}|\beta_{e}\right)/4\right]}{3\log|\Delta_{e}|\beta_{e}}\right]\,. \tag{4.111b}\]

Thus, the maximum growth rate of the whisper instability has different scalings with \(|\Delta_{e}|\) and \(\beta_{e}\) than either the EST instability (4.98) or the oblique transverse instability (4.101). When \(|\Delta_{e}|\beta_{e}\gtrsim\beta_{e}^{2/7}\), (4.105) implies that the growth rate \(\gamma\) continues to increase beyond the maximum value of \(k_{\perp}\rho_{e}\) at which the EST modes can exist, and thus the whisper instability, if it is operating, is always dominant over the EST instability.

Figure 29: _CES EST vs. whisper instability_. Growth rates of EST and whisper modes whose instability is driven by the CE electron-shear term in the CE distribution function (4.1), for quasi-perpendicular (\(k_{\parallel}\ll k_{\perp}\)) wavevectors with respect to the background magnetic field. The growth rates (solid lines) of all modes are calculated in the same way as figure 25 for a selection of different values of \(\Delta_{e}\beta_{e}\) and \(\beta_{e}\), and \(k_{\parallel}\rho_{e}=0.35\). The approximations (4.97), (4.105), and (4.109) for the real frequency (dotted, dot-dashed and dashed blue, respectively) and growth rate (dotted, dot-dashed and dashed red, respectively) in the limit \(k_{\parallel}\rho_{e}\ll 1\), \(k_{\perp}\rho_{e}\gg 1\), are also plotted.

Whether it is also dominant over the oblique transverse instability depends on the choice of \(\beta_{e}\) and \(\Delta_{e}\). We can quantify this explicitly by considering the ratio of the oblique transverse instability's growth rate (4.101) to that of the whisper instability:

\[\frac{\gamma_{\rm trans}}{\gamma_{\rm whisper}}\sim\sqrt{\log\left(|\Delta_{e}|\beta_{e}\right)}\left(|\Delta_{e}|\beta_{e}\right)^{1/4}|\Delta_{e}|^{1/2}\,. \tag{4.112}\]

We see that for \(|\Delta_{e}|^{3}\beta_{e}\ll 1\), \(\gamma_{\rm trans}\ll\gamma_{\rm whisper}\). Thus for \(|\Delta_{e}|^{-7/5}\ll\beta_{e}\ll|\Delta_{e}|^{-3}\), the whisper instability dominates. This condition certainly holds for the particular value of \(\Delta_{e}\) considered in figure 28; to support our claim, in figure 30a we plot the analytical approximation (4.109) along with the numerically determined growth rate for the fixed values of \(k_{\perp}\rho_{e}\) and \(k_{\parallel}\rho_{e}\), respectively, at which the whisper instability is predicted to achieve its maximum growth. The growth rate of the whisper instability, which is correctly captured by our analytic approximation, does indeed exceed that of the transverse instability by an appreciable factor.

For \(\beta_{e}\gtrsim|\Delta_{e}|^{-3}\), (4.110) implies that, in fact, \(\gamma/k_{\parallel}v_{\rm the}\sim 1\). This violates the condition of validity of the method that we have generally used to evaluate CES microinstability growth rates numerically (see section 2.5.8, and also appendix K). The divergence of the true growth rates (calculated by solving the full hot-plasma dispersion relation numerically) from those arising from the solution of the low-frequency (\(\omega\ll k_{\parallel}v_{\rm the}\)) dispersion relation for increasing \(\beta_{e}\) is illustrated in figure 30b.
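The window \(|\Delta_{e}|^{-7/5}\ll\beta_{e}\ll|\Delta_{e}|^{-3}\) discussed above can be illustrated with a few lines of Python. The sketch below evaluates the transverse and whisper peak growth rates, (4.101) and (4.110) with (4.111b), at an assumed illustrative anisotropy \(\Delta_{e}=-10^{-2}\), and shows the ratio passing through unity near \(\beta_{e}\sim|\Delta_{e}|^{-3}=10^{6}\).

```python
# Compare the oblique-transverse and whisper peak growth rates across beta_e.
import numpy as np

D = 1e-2                                   # |Delta_e|, illustrative

def gamma_transverse(beta_e):              # (4.101), units of Omega_e
    return 2.0 / (3.0 * np.sqrt(3.0 * np.pi)) * np.sqrt(D * beta_e) * D

def gamma_whisper(beta_e):                 # (4.110) with k_par rho_e from (4.111b)
    L = np.log(D * beta_e)
    kpar_pk = 2.0 / np.sqrt(3.0 * L) * (1.0 - 4.0 * np.log(3.0 * L / 4.0) / (3.0 * L))
    return np.pi**0.25 / np.sqrt(2.0) * kpar_pk * (D * beta_e)**0.25 * np.sqrt(D)

for beta_e in [1e3, 1e4, 1e5, 1e6, 1e7]:
    ratio = gamma_transverse(beta_e) / gamma_whisper(beta_e)
    print(f"beta_e = {beta_e:8.0e}:  gamma_trans / gamma_whisper = {ratio:.2f}")
# The ratio crosses unity near beta_e ~ 1e6 = |Delta_e|^-3, as predicted by (4.112).
```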
For \(\gamma\gtrsim\Omega_{e}\), we find that the distinction between \(k_{\parallel}\rho_{e}<1\) modes and \(k_{\parallel}\rho_{e}>1\) modes vanishes; furthermore, all modes (including the modes with \(k_{\parallel}=0\)) come to resemble the transverse instability when \(\beta_{e}\gg|\Delta_{e}|^{-3}\); this feature, which indicates the emergence of yet another distinct CES instability, is discussed in the next section.

#### 4.4.11 Ordinary-mode instability

The final instability we consider in this paper is the CES ordinary-mode (electromagnetic) instability: the destabilisation of the ordinary mode at sub-electron-Larmor scales by negative electron pressure anisotropy. The bi-Maxwellian equivalent of the instability was first identified by Davidson & Wu (1970); for a more recent linear study of the instability, see Ibscher _et al._ (2012). For the characteristically small electron pressure anisotropies that are associated with the CE electron-shear term, this instability can only arise at very large values of \(\beta_{e}\). For purely perpendicular modes (\(k_{\parallel}=0\)) in a magnetised plasma, resonant wave-particle interactions cannot arise, and so the ordinary mode's instability mechanism is non-resonant.

The CES ordinary-mode instability is most simply characterised by considering modes that are exactly perpendicular to the guide magnetic field (viz., \(k_{\parallel}=0\)). In this case, it can be shown (see appendix K.3.13) that, if the ordinary mode is destabilised, its growth rate is given by the equation

\[\sum_{n=1}^{\infty}\frac{2\gamma^{2}}{\gamma^{2}+n^{2}\Omega_{e}^{2}}\exp\left(-\frac{k_{\perp}^{2}\rho_{e}^{2}}{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2}\rho_{e}^{2}}{2}\right)=-\Delta_{e}-k_{\perp}^{2}d_{e}^{2}-\exp\left(-\frac{k_{\perp}^{2}\rho_{e}^{2}}{2}\right)I_{0}\!\left(\frac{k_{\perp}^{2}\rho_{e}^{2}}{2}\right)\,. \tag{4.113}\]

This dispersion relation is very similar to that derived by Davidson & Wu (1970) for the ordinary-mode instability in the case of a bi-Maxwellian distribution. If the electron pressure anisotropy is insufficient to destabilise the ordinary mode, the mode is undamped, and its real frequency satisfies

\[\sum_{n=1}^{\infty}\frac{2\varpi^{2}}{n^{2}\Omega_{e}^{2}-\varpi^{2}}\exp\left(-\frac{k_{\perp}^{2}\rho_{e}^{2}}{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2}\rho_{e}^{2}}{2}\right)=\Delta_{e}+k_{\perp}^{2}d_{e}^{2}+\exp\left(-\frac{k_{\perp}^{2}\rho_{e}^{2}}{2}\right)I_{0}\!\left(\frac{k_{\perp}^{2}\rho_{e}^{2}}{2}\right)\,. \tag{4.114}\]

The dispersion curves \(\varpi(k_{\perp})\) and \(\gamma(k_{\perp})\) for a selection of different values of \(\beta_{e}\) and at fixed \(\Delta_{e}\) are shown in figure 31.

We can use the ordinary-mode dispersion relation (4.113) to derive the threshold for this instability at exactly perpendicular wavevectors. We note that the left-hand side of (4.113) is strictly positive; thus for solutions to exist, it is required that there exist a range of perpendicular wavenumbers over which the right-hand side of (4.113) is also positive. For \(k_{\perp}\rho_{e}\lesssim 1\), the right-hand side is always negative because \(|\Delta_{e}|\ll 1\). We therefore consider the limit \(k_{\perp}\rho_{e}\gg 1\) (assuming \(\gamma\sim\Omega_{e}\)), for which

\[\frac{1}{\sqrt{\pi}k_{\perp}\rho_{e}}\sum_{n=1}^{\infty}\frac{2\gamma^{2}}{\gamma^{2}+n^{2}\Omega_{e}^{2}}\approx|\Delta_{e}|-\frac{k_{\perp}^{2}\rho_{e}^{2}}{\beta_{e}}-\frac{1}{\sqrt{\pi}k_{\perp}\rho_{e}}\,.
The right-hand side of (4.115) is maximal when

\[k_{\perp}\rho_{e}=\left(\frac{\beta_{e}}{2\sqrt{\pi}}\right)^{1/3}\,, \tag{4.116}\]

and, when maximal, also greater than zero if and only if

\[|\Delta_{e}|^{3}\beta_{e}>\frac{27}{4\uppi}\,. \tag{4.117}\]

Therefore the threshold (4.117) is a necessary condition for a purely perpendicular instability to exist. It is also a sufficient condition, because the left-hand side of (4.115) becomes arbitrarily small for small \(\gamma\). Comparing the threshold (4.117) to figure 30b, we conclude that the emergence of an instability with a purely perpendicular wavevector at around \(\beta_{e}\sim|\Delta_{e}|^{-3}\) is consistent with numerical expectations. One can also show analytically that for \(\gamma\gg\Omega_{e}\), the ordinary-mode instability becomes identical to the oblique transverse instability (section 4.4.9). Motivated by the fact that \(\gamma\ll k_{\perp}v_{\rm the}\) for the oblique transverse instability, or, equivalently, \(\gamma/\Omega_{e}\ll k_{\perp}\rho_{e}\), we first consider (4.113) in the limit \(k_{\perp}\rho_{e}\gg\gamma/\Omega_{e}\sim 1\); we will subsequently take the subsidiary limit \(\gamma/\Omega_{e}\gg 1\). The relevant dispersion relation is (4.115), which can be rewritten as

\[\frac{1}{\sqrt{\uppi}k_{\perp}\rho_{e}}\left[\frac{\gamma\uppi}{\Omega_{e}}\coth\left(\frac{\gamma\uppi}{\Omega_{e}}\right)-1\right]\approx-\Delta_{e}-\frac{k_{\perp}^{2}\rho_{e}^{2}}{\beta_{e}}-\frac{1}{\sqrt{\uppi}k_{\perp}\rho_{e}} \tag{4.118}\]

using the summation identity

\[\sum_{n=1}^{\infty}\frac{2\gamma^{2}}{\gamma^{2}+n^{2}\Omega_{e}^{2}}=\frac{\gamma\uppi}{\Omega_{e}}\coth\left(\frac{\gamma\uppi}{\Omega_{e}}\right)-1\,. \tag{4.119}\]

Now assuming \(\gamma\gg\Omega_{e}\) and using \(\coth x\approx 1\) for \(x\gg 1\), we deduce

\[\frac{\gamma}{\Omega_{e}}=-\frac{k_{\perp}\rho_{e}}{\sqrt{\uppi}}\left(\Delta_{e}+\frac{k_{\perp}^{2}\rho_{e}^{2}}{\beta_{e}}\right)\,, \tag{4.120}\]

which is equivalent to (4.100). Since \(|\Delta_{e}|\ll 1\), our result is consistent with our initial assumption \(\gamma/\Omega_{e}\ll k_{\perp}\rho_{e}\). Thus, we conclude that, when \(\beta_{e}\gg|\Delta_{e}|^{-3}\), the CES ordinary-mode instability is the dominant CES microinstability, but that in this limit, the instability is essentially identical to the unmagnetised oblique transverse instability already described in section 4.4.9.

Figure 31: _CES ordinary-mode instability_. Growth rates of ordinary modes whose instability is driven by the CE electron-shear term in the CE distribution function (4.1), for perpendicular (\(k_{\parallel}=0\)) wavevectors with respect to the background magnetic field. The growth rates (solid lines) of the modes are calculated using (4.113) and (4.114). We show the growth rates for a selection of different values of \(\beta_{e}\) and for \(\Delta_{e}=0.01\). The approximation (4.120) of the growth rate in the limit \(k_{\perp}\rho_{e}\gg\gamma/\Omega_{e}\gg 1\) is also plotted.

## 5 Discussion and conclusions

In this paper, we have shown that the Chapman-Enskog description of classical, collisional plasma is valid for a wide range of plasma conditions. Microinstabilities are stabilised in such plasmas by one of two effects: collisional damping of the instabilities, or \(\beta\)-dependent thresholds arising from a non-zero macroscopic magnetic field.
By identifying the stable region for the leading-order corrections in the Chapman-Enskog expansion, we have _de facto_ identified the stable region for corrections to arbitrary order: if one of the above effects is enough to maintain stability, any perturbations arising from smaller corrections will be unable to overcome the same effect. However, we have also demonstrated that for \(\beta\gg 1\) there exists a significant region of the \((d_{e}/L,\,\lambda/L)\) parameter space in which fast, small-scale instabilities are both possible and, in fact, generic. Indeed, in the strongly magnetised plasmas (that is, \(\rho_{s}\ll\lambda_{s}\) for both electrons and ions) on which we have focused our investigation, it transpires that collisional damping is never able to prevent the most important kinetic instabilities, and thus strongly magnetised, high-\(\beta\) plasmas cannot be modelled by standard Chapman-Enskog theory if \(\lambda/L\gtrsim 1/\beta\). This finding has significant implications for our understanding of various plasma environments, including those found in astrophysical contexts and also those created in laser-plasma experiments on high-energy laser facilities. When kinetic instabilities do arise in a Chapman-Enskog plasma, we have characterised all of them systematically, deriving simple expressions for their thresholds and growth rates in terms of basic parameters such as \(\beta\), \(\lambda/L\) and the mass ratio \(\mu_{e}=m_{e}/m_{i}\) using a novel analytical approach. Three of the instabilities - the CET whistler instability (section 3.3.1), the CET slow-wave instability (section 3.3.3), and the CET long-wavelength kinetic-Alfven wave (KAW) instability (section 3.3.4) - are driven by heat fluxes in a Chapman-Enskog plasma, while the remaining ten - the CES mirror instability (section 4.3.1), the CES whistler instability (section 4.3.2), the CES transverse instability (sections 4.3.3 and 4.4.9), the CES electron mirror instability (section 4.3.4), the CES firehose instability (sections 4.4.2, 4.4.3, 4.4.4, and 4.4.5), the CES parallel and oblique electron firehose instabilities (sections 4.4.6 and 4.4.7, respectively), the CES electron-scale-transition (EST) instability (section 4.4.8), the CES whisper instability (section 4.4.10), and the CES ordinary-mode instability (section 4.4.11) - are driven by ion- and/or electron-velocity shears. While many of these instabilities, or versions thereof, had been considered previously, four of them (the CET slow-wave, CET long-wavelength KAW, CES EST and CES whisper instabilities) are new; the whisper instability in particular seems to be of some interest both conceptually and practically, because it is associated with a newly discovered plasma wave (the whisper wave), and the instability is much faster than its competitors over quite a wide range of values of \(\lambda/L\) and \(\beta\). An important question to address is that of the dominant microinstability overall: in a given plasma (with fixed \(d_{e}/L,\,\lambda/L,\) and \(\beta\)), amongst the many instabilities that we have found, which is the dominant one? As we explained in section 2.2.3, the answer to this question depends on assumptions about the relative magnitude of temperature- and velocity-gradient scale lengths \(L_{T}\) and \(L_{V}\).
Assuming the scalings (2.55) in section 2.2.3 for a Chapman-Enskog plasma whose largest-scale fluid motions are sonic (in other words, \(\mathrm{Ma}\lesssim 1\)), we find that, assuming also \(\mathrm{Ma}\,\lambda/L_{V}\) to be large enough to trigger all of the aforementioned instabilities, the three most competitive ones are on electron scales: the CET whistler, CES whisper, and transverse instabilities. These have growth rates [see (3.10), (4.31) and (4.110), respectively]

\[\gamma_{\mathrm{whistler,T}}\sim\eta_{e}\Omega_{e}\sim\mu_{e}^{1/4}\mathrm{Ma}\frac{\lambda_{i}}{L_{V}}\Omega_{e}\,, \tag{5.1a}\]

\[\gamma_{\rm whisper} \sim \frac{|\epsilon_{e}|^{3/4}\beta_{e}^{1/4}}{\left[\log|\epsilon_{e}|\beta_{e}\right]^{1/2}}\Omega_{e}\sim\left(\frac{\lambda_{i}}{L_{V}}\right)^{3/4}\frac{\mu_{e}^{3/8}{\rm Ma}^{3/4}\beta_{e}^{1/4}}{\left[\log\left(\mu_{e}^{1/2}\beta_{e}{\rm Ma}\,\lambda_{i}/L_{V}\right)\right]^{1/2}}\Omega_{e}\,, \tag{5.1b}\]

\[\gamma_{\rm trans} \sim \epsilon_{e}^{3/2}\beta_{e}^{1/2}\Omega_{e}\sim\mu_{e}^{3/4}\left(\frac{\lambda_{i}}{L_{V}}\right)^{3/2}\beta_{e}^{1/2}\Omega_{e}\,. \tag{5.1c}\]

Although the threshold for the CET whistler instability is less restrictive than for the whisper instability, at the whisper instability threshold \(|\epsilon_{e}|\beta_{e}\sim\beta_{e}^{2/7}\sim|\epsilon_{e}|^{-2/5}\) it follows that

\[\frac{\gamma_{\rm whistler,T}}{\gamma_{\rm whisper}}\sim\frac{\eta_{e}\left[\log|\epsilon_{e}|\beta_{e}\right]^{1/2}}{|\epsilon_{e}|^{2/5}}\sim\mu_{e}^{1/20}\left(\frac{\lambda_{i}}{L_{V}}\right)^{3/5}\left[\log\left(\mu_{e}^{1/2}\beta_{e}{\rm Ma}\,\lambda_{i}/L_{V}\right)\right]^{1/2}\ll 1. \tag{5.2}\]

Thus, the fact that CE plasmas typically support fluid motions on smaller scales than temperature gradients (see section 2.2.3) implies that CES microinstabilities are more potent at sufficiently high plasma \(\beta_{e}\). Yet, for \(\beta_{e}\lesssim\mu_{e}^{-1/2}{\rm Ma}^{-1}\,L_{V}/\lambda_{i}\), the CET whistler instability is the most rapidly growing microinstability. Finally, for \(\beta_{e}\lesssim\mu_{e}^{-1/4}{\rm Ma}^{-1}L_{V}/\lambda_{i}\), none of these electron-scale instabilities is triggered at all, with only the ion-scale firehose and mirror instabilities operating. In short, the dominant microinstability is a complicated function of the parameter regime. For reference, in table 2 of section 1 we show the (approximate) growth rates for all of the instabilities considered in this paper if the scalings (2.55) are adopted, and figure 1 shows a schematic stability map for the same case\({}^{1}\).

Footnote 1: A note of caution is warranted: if a Chapman-Enskog plasma is unstable to microinstabilities, then the heat fluxes and rate-of-strain tensors will be modified, potentially altering both \(L_{T}\) and \(L_{V}\). There is no _a priori_ reason to think that such a plasma will obey Braginskii-type scalings of the form (2.55) – and so using this ordering to estimate microinstability growth rates is incorrect in kinetically unstable Chapman-Enskog plasmas.

We believe that our study - which is the first systematic investigation of the kinetic stability of a classical, collisional, magnetised plasma - provides a significant step forward towards a comprehensive understanding of this state of matter. It is perhaps inevitable, however, given the conceptual extent of the problem, that there remain a number of questions concerning the stability of the Chapman-Enskog distribution function that we have not addressed here.
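Before turning to those open questions, we note that comparisons such as (5.1) and (5.2) are simple to automate; a minimal sketch (Python with NumPy assumed; parameter values are illustrative, order-unity prefactors are dropped, and instability thresholds are not checked) evaluates the three electron-scale estimates and reports the largest:

```python
import numpy as np

def electron_scale_growth_rates(beta_e, lam_i_over_LV, Ma=1.0, mu_e=1.0 / 1836.0):
    """Growth-rate estimates (5.1a)-(5.1c), in units of Omega_e."""
    drive = Ma * lam_i_over_LV
    log_arg = np.sqrt(mu_e) * beta_e * drive   # argument of the log in (5.1b)
    return {
        "CET whistler":   mu_e**0.25 * drive,
        "CES whisper":    drive**0.75 * mu_e**0.375 * beta_e**0.25
                          / np.sqrt(np.log(log_arg)),
        "CES transverse": mu_e**0.75 * lam_i_over_LV**1.5 * beta_e**0.5,
    }

rates = electron_scale_growth_rates(beta_e=1e4, lam_i_over_LV=1e-2)
print(max(rates, key=rates.get), rates)   # whisper dominates at this beta_e
```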
In terms of linear theory, a numerical study using a better collision operator to find the exact stability boundaries could be usefully carried out - although we do not anticipate that this would lead to an alteration of the basic scalings of those boundaries derived in this paper. Another issue not addressed by this work is that of linear coupling between CET and CES microinstabilities; it is not immediately obvious to what extent microinstabilities with similar growth rates might aid each other's growth. The analysis could also be extended to two-species plasmas not in thermal equilibrium, as well as high-\(Z\) plasmas (with important applications in laser-plasma physics). Perhaps the most interesting future development of this work would be the determination of transport coefficients for plasmas falling into the unstable regimes. This requires a quasi-linear or nonlinear treatment. Nonetheless, the results presented here can be seen as both a guide and a warning to those wishing to address this fundamental question. They are a guide in the sense that a correct characterisation of transport coefficients requires knowledge of the fastest-growing linear modes, which our study provides. But they are also a warning, in that an isolated treatment of one type of microinstability without reference to the full range of possible others could lead to a mischaracterisation of transport properties. The best hope for a correct calculation of transport in a weakly collisional, high-\(\beta\) plasma is, therefore, the following programme: for a plasma with particular conditions, identify the fastest microinstability, calculate the saturated magnitude of the fluctuations produced by it, determine the anomalous transport coefficients with those fluctuations present, re-calculate the stability of this plasma, and so on, until a self-consistent picture emerges. It is likely that such a picture will involve a distribution function whose underlying nature depends on macroscopic motions, and hence transport coefficients that are themselves properties of flow shear, temperature gradients, and large-scale magnetic fields. Carrying out such calculations is a non-trivial task, but not impossible. To carry out this research, AFAB was supported by DOE awards DE-SC0019046 and DE-SC0019047 through the NSF/DOE Partnership in Basic Plasma Science and Engineering, and also by UKRI (grant number MR/W006723/1). The work of AAS was supported in part by grants from STFC (ST/N000919/1 and ST/W000903/1) and EPSRC (EP/M022331/1 and EP/R034737/1), as well as by the Simons Foundation via a Simons Investigator award. This research was in part funded by Plan S funders (UKRI, STFC and EPSRC); for the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. AFAB would like to express his deepest gratitude to Matthew Kunz and Eliot Quataert for many helpful discussions about the paper generally, for highlighting some important considerations pertaining to the linear theory of the firehose instability, and for their ongoing support, without which the paper would never have been completed. All the authors would also like to thank Anatoly Spitkovsky for bringing several key early papers on the topic of this paper to the authors' attention.

## Appendix A Glossary of notation used in the paper

As an aid to reading, we provide a glossary of the notation that we use in our paper in tables 5 and 6 of this appendix.
## Appendix B Derivation of the Chapman-Enskog distribution function

### B.1 The Chapman-Enskog expansion in strongly magnetised plasma

There exist a number of lucid explanations of how the CE distribution functions (8) arise in a collisional, strongly magnetised two-species electron-ion plasma (\(\rho_{s}\ll\lambda_{s}\) for \(s=i,e\)) - the monograph of Braginskii (1965), but also (for example) Helander & Sigmar (2005), Chapter 4. For that reason, we do not provide a full derivation of (8). However, in this appendix, we describe a calculation that allows for a direct derivation of the CE distribution function for a strongly magnetised collisional plasma, without first having to perform the CE expansion for arbitrary values of \(\rho_{s}/\lambda_{s}\). The first part of the calculation is the same as in Helander & Sigmar (2005), pp. 76-78. For the reader's convenience, we present a summarised version. We consider the Maxwell-Vlasov-Landau equation (1) of species \(s\) in a frame co-moving with the fluid rest frame of that species. Defining the peculiar velocity variable \(\boldsymbol{v}_{s}^{\prime}=\boldsymbol{v}-\boldsymbol{V}_{s}\) in the fluid rest frame, (1) becomes

\[\frac{\mathrm{D}f_{s}}{\mathrm{D}t}+\boldsymbol{v}_{s}^{\prime}\boldsymbol{\cdot}\boldsymbol{\nabla}f_{s}+\left[\frac{Z_{s}e}{m_{s}}\left(\boldsymbol{E}^{\prime}+\frac{\boldsymbol{v}_{s}^{\prime}\times\boldsymbol{B}}{c}\right)-\frac{\mathrm{D}\boldsymbol{V}_{s}}{\mathrm{D}t}\right]\boldsymbol{\cdot}\frac{\partial f_{s}}{\partial\boldsymbol{v}_{s}^{\prime}}-\boldsymbol{v}_{s}^{\prime}\boldsymbol{\cdot}\left(\boldsymbol{\nabla}\boldsymbol{V}_{s}\right)\boldsymbol{\cdot}\frac{\partial f_{s}}{\partial\boldsymbol{v}_{s}^{\prime}}=\sum_{s^{\prime}}\mathfrak{C}(f_{s},f_{s^{\prime}}), \tag{100}\]

\begin{table} \begin{tabular}{c|c} Notation & Quantity \\ \hline \(\mathbf{r}\) & Spatial position \\ \(t\) & Time \\ \(e\) & Elementary charge \\ \(c\) & Speed of light \\ \(Z_{s}\)\((Z)\) & Charge of species \(s\) (\(s=i\) in two-species plasma) in units of \(e\) \\ \(m_{s}\) & Mass of a particle of species \(s\) \\ \(\mu_{e}=m_{e}/m_{i}\) & Electron-to-ion mass ratio \\ \(\mathbf{E}\) & Electric field \\ \(\mathbf{B}\) & Magnetic field \\ \(\mathbf{B}_{0}\) & Macroscopic magnetic field \\ \(\hat{\mathbf{z}}\) & Direction vector of the macroscopic magnetic field \\ \(\mathbf{v}\)\((\mathbf{v}_{\perp})\) & Particle velocity (in the direction perpendicular to \(\mathbf{B}_{0}\)) \\ \(v\)\((v_{\perp})\) & Particle speed (in the direction perpendicular to \(\mathbf{B}_{0}\)) \\ \(v_{\parallel}\) & Particle velocity in the direction parallel to \(\mathbf{B}_{0}\) \\ \(\phi\) & Gyrophase angle \\ \(f_{s}(\mathbf{r},\mathbf{v},t)\) & Distribution function of particles of species \(s\) \\ \(\mathfrak{C}(f_{s},f_{s^{\prime}})\) & Collision operator for interactions between species \(s\) and \(s^{\prime}\) \\ \(n_{s}\) & Density of particles of species \(s\)\((2.3a)\) \\ \(\mathbf{V}_{s}\) & Bulk fluid velocity of particles of species \(s\)\((2.3b)\) \\ \(T_{s}\) & Temperature of particles of species \(s\)\((2.3c)\) \\ \(p_{s}\)\((p_{s\parallel}/p_{s\perp})\) & (Parallel/perpendicular) pressure of particles of species \(s\)\((2.6a)\), \((2.33)\) \\ \(\mathbf{\pi}_{s}\) & Viscosity tensor of particles of species \(s\)\((2.6b)\) \\ \(\Delta_{s}=(p_{s\perp}-p_{s\parallel})/p_{s}\) & Pressure anisotropy of particles of species \(s\)\((2.35)\) \\ \(\mathbf{q}_{s}\)\((q_{s\parallel})\) & (Parallel) heat flux of particles of species \(s\)\((2.6c)\), \((2.16)\) \\
\(\mathbf{R}_{s}\)\((R_{s\parallel})\) & (Parallel) frictional force on species \(s\) due to collisions \((2.6d)\) \\ \(\mathbf{Q}_{s}\) & Heating rate due to inter-species collisions \((2.6e)\) \\ \(\mathbf{u}_{ei}\)\((u_{ei\parallel})\) & (Parallel) relative electron-ion drift \\ \(v_{\mathrm{th}s}\) & Thermal speed of particles of species \(s\) \\ \(\boldsymbol{v}_{s}^{\prime}\)\((v_{s\parallel}^{\prime},v_{s\perp}^{\prime})\) & Peculiar (parallel, perpendicular) velocity of particles of species \(s\) \\ \(\tilde{\boldsymbol{v}}_{s}=(\boldsymbol{v}-\boldsymbol{V}_{i})/v_{\mathrm{th}s}\)\((\tilde{v}_{s\parallel})\) & Non-dimensionalised (parallel) particle velocity, ion-fluid rest frame \\ \(\tilde{v}_{s}\)\((\tilde{v}_{s\perp})\) & Non-dim. (perpendicular) particle speed, ion-fluid rest frame \\ \(\lambda_{s}\) & Mean free path of species \(s\) \\ \(\rho_{s}\)\((\tilde{\rho}_{s})\) & (Signed) Larmor radius of species \(s\) \\ \(\tau_{s}\) & Collision time of species \(s\) \\ \(\Omega_{s}\)\((\tilde{\Omega}_{s})\) & (Signed) Larmor frequency of species \(s\) \\ \(L\) & Macroscopic length scale of variation in the direction parallel to \(\mathbf{B}_{0}\)\((2.13a,b)\) \\ \(L_{T}\)\((L_{T_{i}})\) & Electron-(ion)-temperature length scale parallel to \(\mathbf{B}_{0}\)\((2.13c,d)\) \\ \(L_{V}\)\((L_{V_{e}})\) & Ion-(electron)-bulk-flow length scale parallel to \(\mathbf{B}_{0}\)\((2.13c,d)\) \\ \(\tau_{L}\) & Macroscopic time scale over which CE distribution varies \\ \(\eta_{e}=\eta_{e}^{T}\) & Small parameter \((2.11a)\)\(\propto\) CE electron-temperature-gradient term \\ \(\eta_{e}^{R}\) & Small parameter \((2.11b)\)\(\propto\) CE electron-friction term \\ \(\eta_{e}^{u}\) & Small parameter \((2.11c)\)\(\propto\) CE electron-ion-drift term \\ \(\eta_{i}\) & Small parameter \((2.11d)\)\(\propto\) CE ion-temperature-gradient term \\ \(\epsilon_{e}\) & Small parameter \((2.11e)\)\(\propto\) CE electron-shear term \\ \(\epsilon_{i}\) & Small parameter \((2.11f)\)\(\propto\) CE ion-shear term \\ \(A_{e}^{T}(\tilde{v}_{e})\) & Function arising in CE electron-temperature-gradient term \\ \(A_{e}^{R}(\tilde{v}_{e})\) & Function arising in CE electron-friction term \\ \(A_{e}^{u}(\tilde{v}_{e})\) & Function arising in CE electron-ion-drift term \\ \(A_{i}(\tilde{v}_{i})\) & Function arising in CE ion-temperature-gradient term \\ \(C_{e}(\tilde{v}_{e})\) & Function arising in CE electron-shear term \\ \(C_{i}(\tilde{v}_{i})\) & Function arising in CE ion-shear term \\ \end{tabular} \end{table} Table 5: **Glossary of notation I.**

Table 6: **Glossary of notation II.**
where \(\mathbf{E}^{\prime}\equiv\mathbf{E}+\mathbf{V}_{s}\times\mathbf{B}/c\) is the electric field measured in the moving frame, and

\[\frac{\mathrm{D}}{\mathrm{D}t}\equiv\frac{\partial}{\partial t}+\mathbf{V}_{s}\mathbf{\cdot}\mathbf{\nabla} \tag{110}\]

is the convective derivative. Initially ordering \(\lambda_{s}\sim\rho_{s}\), and assuming the plasma is collisional (\(\lambda_{s}/L\ll 1\)), we rearrange (100) so that the largest terms are grouped together (on the left-hand side):

\[\sum_{s^{\prime}}\mathfrak{C}(f_{s},f_{s^{\prime}})-\frac{Z_{s}e}{m_{s}c}\left(\mathbf{v}_{s}^{\prime}\times\mathbf{B}\right)\mathbf{\cdot}\frac{\partial f_{s}}{\partial\mathbf{v}_{s}^{\prime}}=\frac{\mathrm{D}f_{s}}{\mathrm{D}t}+\mathbf{v}_{s}^{\prime}\mathbf{\cdot}\mathbf{\nabla}f_{s}+\left(\frac{Z_{s}e}{m_{s}}\mathbf{E}^{\prime}-\frac{\mathrm{D}\mathbf{V}_{s}}{\mathrm{D}t}\right)\mathbf{\cdot}\frac{\partial f_{s}}{\partial\mathbf{v}_{s}^{\prime}}-\mathbf{v}_{s}^{\prime}\mathbf{\cdot}\left(\mathbf{\nabla}\mathbf{V}_{s}\right)\mathbf{\cdot}\frac{\partial f_{s}}{\partial\mathbf{v}_{s}^{\prime}}. \tag{111}\]

We then expand the distribution functions \(f_{s}\) in the small parameter \(\lambda_{s}/L\ll 1\):

\[f_{s}=f_{s}^{(0)}+f_{s}^{(1)}+\ldots, \tag{112}\]

and solve (111) order by order in \(\lambda_{s}/L\) for \(f_{s}^{(0)}\) and \(f_{s}^{(1)}\). The subsequent treatment of the collision operator for the electron distribution function is a little different from that of the ion distribution function, so we treat each case individually.

#### B.1.1 Electrons

For the electrons, we can rewrite the total collision operator in a convenient form if we assume that \(T_{i}\sim T_{e}\) and \(\mathbf{V}_{i}\sim v_{\mathrm{th}i}\):

\[\sum_{s^{\prime}}\mathfrak{C}(f_{e},f_{s^{\prime}})=\mathfrak{C}_{ee}(f_{e})+\mathfrak{C}_{ei}^{(0)}(f_{e})+\mathfrak{C}_{ei}^{(1)}(f_{e})\,, \tag{113}\]

where the electron-electron collision operator \(\mathfrak{C}_{ee}(f_{e})\) and electron-ion collision operators \(\mathfrak{C}_{ei}^{(0)}(f_{e})\) and \(\mathfrak{C}_{ei}^{(1)}(f_{e})\) are

\[\mathfrak{C}_{ee}(f_{e})\equiv\mathfrak{C}(f_{e},f_{e}), \tag{114a}\]
\[\mathfrak{C}_{ei}^{(0)}(f_{e})\equiv\nu_{ei}(v)v^{3}\frac{\partial}{\partial\mathbf{v}}\mathbf{\cdot}\left[\frac{1}{v}\left(\mathbf{I}-\hat{\mathbf{v}}\hat{\mathbf{v}}\right)\mathbf{\cdot}\frac{\partial f_{e}}{\partial\mathbf{v}}\right], \tag{114b}\]
\[\mathfrak{C}_{ei}^{(1)}(f_{e})\equiv\nu_{ei}(v)\frac{m_{e}\mathbf{v}_{e}^{\prime}\mathbf{\cdot}\mathbf{u}_{ei}}{T_{e}}\frac{n_{e}}{\pi^{3/2}v_{\mathrm{th}e}^{3}}\exp\left(-\tilde{v}_{e}^{2}\right). \tag{114c}\]

Here \(\nu_{ei}(v)\) is the velocity-dependent collision frequency

\[\nu_{ei}(v)\equiv\frac{3\sqrt{\pi}}{4\tau_{e}}\left(\frac{v_{\mathrm{th}e}}{v}\right)^{3}, \tag{115}\]

and the total electron-ion collision operator \(\mathfrak{C}(f_{e},f_{i})\) is given by \(\mathfrak{C}(f_{e},f_{i})=\mathfrak{C}_{ei}^{(0)}(f_{e})+\mathfrak{C}_{ei}^{(1)}(f_{e})\). This reformulation of the electron-ion collision operator is possible because the assumptions \(T_{i}\sim T_{e}\) and \(\mathbf{V}_{i}\sim v_{\mathrm{th}i}\) mean that, from the perspective of the electrons, the ion distribution is sharply peaked around the ion fluid velocity: in other words, \(f_{i}\approx n_{i}\delta(\mathbf{v}-\mathbf{V}_{i})\).
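A quick order-of-magnitude check also shows why \(\mathfrak{C}_{ei}^{(1)}\) is subdominant: with \(T_{i}\sim T_{e}\) and \(\mathbf{V}_{i}\sim v_{\mathrm{th}i}\), the relative drift satisfies \(u_{ei}\lesssim v_{\mathrm{th}i}\), so that, comparing (114c) with (114b) at \(v\sim v_{\mathrm{th}e}\),

\[\frac{\mathfrak{C}_{ei}^{(1)}(f_{e})}{\mathfrak{C}_{ei}^{(0)}(f_{e})}\sim\frac{m_{e}v_{\mathrm{th}e}u_{ei}}{T_{e}}\sim\frac{u_{ei}}{v_{\mathrm{th}e}}\lesssim\frac{v_{\mathrm{th}i}}{v_{\mathrm{th}e}}\sim\mu_{e}^{1/2}\,,\]

an ordering that is used explicitly below.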
Furthermore, the reformulation is convenient because the total electron collision operator (113) becomes independent of the ion distribution function. Thus, the asymptotic expansion (112) for the electron distribution function is decoupled from the ions. Substituting (113), the ordered kinetic equation (111) for the electron distribution becomes

\[\mathfrak{C}_{ee}(f_{e})+\mathfrak{C}_{ei}^{(0)}(f_{e})+\frac{e}{m_{e}c}\left(\mathbf{v}_{e}^{\prime}\times\mathbf{B}\right)\mathbf{\cdot}\frac{\partial f_{e}}{\partial\mathbf{v}_{e}^{\prime}}=\frac{\mathrm{D}f_{e}}{\mathrm{D}t}+\mathbf{v}_{e}^{\prime}\mathbf{\cdot}\mathbf{\nabla}f_{e}-\left(\frac{e}{m_{e}}\mathbf{E}^{\prime}+\frac{\mathrm{D}\mathbf{V}_{e}}{\mathrm{D}t}\right)\mathbf{\cdot}\frac{\partial f_{e}}{\partial\mathbf{v}_{e}^{\prime}}-\mathbf{v}_{e}^{\prime}\mathbf{\cdot}(\mathbf{\nabla}\mathbf{V}_{e})\mathbf{\cdot}\frac{\partial f_{e}}{\partial\mathbf{v}_{e}^{\prime}}-\mathfrak{C}_{ei}^{(1)}(f_{e}), \tag{10a}\]

where we note that, under the assumptions \(T_{i}\sim T_{e}\) and \(\mathbf{V}_{i}\sim v_{\mathrm{th}i}\), \(\mathfrak{C}_{ei}^{(1)}(f_{e})\sim\mu_{e}^{1/2}\mathfrak{C}_{ei}^{(0)}(f_{e})\) is much smaller than \(\mathfrak{C}_{ei}^{(0)}(f_{e})\). Then applying expansion (112) with \(s=e\) gives

\[\mathfrak{C}_{ee}(f_{e}^{(0)})+\mathfrak{C}_{ei}^{(0)}(f_{e}^{(0)})+\frac{e}{m_{e}c}\left(\mathbf{v}_{e}^{\prime}\times\mathbf{B}\right)\mathbf{\cdot}\frac{\partial f_{e}^{(0)}}{\partial\mathbf{v}_{e}^{\prime}}=0\,. \tag{10b}\]

It can be shown (Helander & Sigmar, 2005) that the only solution of (10b) is (as expected) a Maxwellian distribution:

\[f_{e}^{(0)}=\frac{n_{e}}{\pi^{3/2}v_{\mathrm{the}}^{3}}\exp\left(-\frac{|\mathbf{v}_{e}^{\prime}|^{2}}{v_{\mathrm{the}}^{2}}\right)\,. \tag{10c}\]

After some algebraic manipulation, it can also be shown that the leading-order perturbed electron distribution function \(f_{e}^{(1)}(\mathbf{v})\) satisfies

\[\mathfrak{C}_{ee}(f_{e}^{(1)})+\mathfrak{C}_{ei}^{(0)}(f_{e}^{(1)})+\frac{e}{m_{e}c}\left(\mathbf{v}_{e}^{\prime}\times\mathbf{B}\right)\mathbf{\cdot}\frac{\partial f_{e}^{(1)}}{\partial\mathbf{v}_{e}^{\prime}}=\Bigg{\{}\left(\frac{|\mathbf{v}_{e}^{\prime}|^{2}}{v_{\mathrm{the}}^{2}}-\frac{5}{2}\right)\mathbf{v}_{e}^{\prime}\mathbf{\cdot}\nabla\log T_{e}+\mathbf{v}_{e}^{\prime}\mathbf{\cdot}\left[\frac{\mathbf{R}_{e}}{p_{e}}+\frac{m_{e}\mathbf{u}_{ei}\nu_{ei}(v)}{T_{e}}\right]+\frac{m_{e}}{2T_{e}}\left(\mathbf{v}_{e}^{\prime}\mathbf{v}_{e}^{\prime}-\frac{|\mathbf{v}_{e}^{\prime}|^{2}}{3}\mathbf{I}\right)\mathbf{:}\mathbf{W}_{e}\Bigg{\}}f_{e}^{(0)}, \tag{10d}\]

where \(\mathbf{R}_{e}\) and so on are defined in the main text, in equations (12).

#### B.1.2 Electrons in strongly magnetised limit

We now solve for \(f_{e}^{(1)}\) in a strongly magnetised plasma, i.e., \(\rho_{e}\ll\lambda_{e}\). In this subsidiary limit, both the collision integrals on the left-hand side of (10d) and the terms on its right-hand side are much smaller than the term proportional to the magnetic field; in other words,

\[\left(\mathbf{v}_{e}^{\prime}\times\mathbf{B}\right)\mathbf{\cdot}\frac{\partial f_{e}^{(1)}}{\partial\mathbf{v}_{e}^{\prime}}\approx 0\,.
\tag{10e}\]

We then define the coordinate system \(\left\{v_{e\parallel}^{\prime},v_{e\perp}^{\prime},\phi^{\prime}\right\}\) by \(v_{e\parallel}^{\prime}\equiv\hat{\mathbf{z}}\mathbf{\cdot}\mathbf{v}_{e}^{\prime}\), \(\mathbf{v}_{e\perp}^{\prime}=\mathbf{v}_{e}^{\prime}-v_{e\parallel}^{\prime}\hat{\mathbf{z}}\), \(v_{e\perp}^{\prime}=|\mathbf{v}_{e\perp}^{\prime}|\) and \(\phi^{\prime}=\phi\), where \(\hat{\mathbf{z}}=\mathbf{B}/|\mathbf{B}|\) and \(\phi\) is the gyrophase angle. The velocity gradient operator in this system is

\[\frac{\partial f_{e}^{(1)}}{\partial\mathbf{v}_{e}^{\prime}}=\hat{\mathbf{z}}\frac{\partial f_{e}^{(1)}}{\partial v_{e\parallel}^{\prime}}+\frac{\mathbf{v}_{e\perp}^{\prime}}{v_{e\perp}^{\prime}}\frac{\partial f_{e}^{(1)}}{\partial v_{e\perp}^{\prime}}+\frac{1}{v_{e\perp}^{\prime 2}}\mathbf{v}_{e}^{\prime}\times\hat{\mathbf{z}}\frac{\partial f_{e}^{(1)}}{\partial\phi^{\prime}}\,. \tag{10f}\]

This, when combined with (10e), implies that \(f_{e}^{(1)}\) is approximately gyrotropic:

\[f_{e}^{(1)}(\mathbf{v}_{e}^{\prime})\approx\langle f_{e}^{(1)}\rangle_{\phi^{\prime}}(v_{e\parallel}^{\prime},v_{e\perp}^{\prime}), \tag{10g}\]

where we have defined the gyro-average \(\langle f_{e}^{(1)}\rangle_{\phi^{\prime}}\) of the electron distribution function by

\[\langle f_{e}^{(1)}\rangle_{\phi^{\prime}}\equiv\frac{1}{2\pi}\int_{0}^{2\pi}\mathrm{d}\phi^{\prime}\,f_{e}^{(1)}\,. \tag{10h}\]

Now gyro-averaging (10d), we obtain

\[\mathfrak{C}_{ee}\left(\langle f_{e}^{(1)}\rangle_{\phi^{\prime}}\right)+\mathfrak{C}_{ei}^{(0)}\left(\langle f_{e}^{(1)}\rangle_{\phi^{\prime}}\right)=\Bigg{\{}\left[\left(\frac{|\boldsymbol{v}_{e}^{\prime}|^{2}}{v_{\mathrm{the}}^{2}}-\frac{5}{2}\right)\nabla_{\parallel}\log T_{e}+\frac{R_{e\parallel}}{p_{e}}+\frac{m_{e}u_{ei\parallel}\nu_{ei}(v)}{T_{e}}\right]v_{e\parallel}^{\prime}+\left(\hat{\boldsymbol{z}}\hat{\boldsymbol{z}}-\frac{1}{3}\boldsymbol{I}\right)\boldsymbol{:W}_{e}\left(\frac{v_{e\parallel}^{\prime 2}}{v_{\mathrm{the}}^{2}}-\frac{v_{e\perp}^{\prime 2}}{2v_{\mathrm{the}}^{2}}\right)\Bigg{\}}f_{e}^{(0)}\,, \tag{117}\]

where we have used the gyrophase isotropy of the collision operators to commute gyro-averaging with the collision operators on the left-hand side.
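The velocity dependence of the final term in (117) comes from the exact gyro-average \(\langle\boldsymbol{v}_{e}^{\prime}\boldsymbol{v}_{e}^{\prime}\rangle_{\phi^{\prime}}=v_{e\parallel}^{\prime 2}\hat{\boldsymbol{z}}\hat{\boldsymbol{z}}+\tfrac{1}{2}v_{e\perp}^{\prime 2}(\boldsymbol{I}-\hat{\boldsymbol{z}}\hat{\boldsymbol{z}})\), which gives

\[\left\langle\boldsymbol{v}_{e}^{\prime}\boldsymbol{v}_{e}^{\prime}-\frac{|\boldsymbol{v}_{e}^{\prime}|^{2}}{3}\boldsymbol{I}\right\rangle_{\phi^{\prime}}=\left(v_{e\parallel}^{\prime 2}-\frac{v_{e\perp}^{\prime 2}}{2}\right)\left(\hat{\boldsymbol{z}}\hat{\boldsymbol{z}}-\frac{1}{3}\boldsymbol{I}\right)\]

when contracted with the traceless tensor \(\boldsymbol{W}_{e}\) in (10d).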
Equation (117) is a linear equation for \(\langle f_{e}^{(1)}\rangle_{\phi^{\prime}}\), so by tensor invariance, it must have a solution of the form

\[\langle f_{e}^{(1)}\rangle_{\phi^{\prime}}=\tau_{e}\Bigg{\{}\bigg{[}A_{e}^{T}\bigg{(}\frac{|\boldsymbol{v}_{e}^{\prime}|}{v_{\mathrm{th}e}}\bigg{)}\,\nabla_{\parallel}\log T_{e}+A_{e}^{R}\bigg{(}\frac{|\boldsymbol{v}_{e}^{\prime}|}{v_{\mathrm{th}e}}\bigg{)}\,\frac{R_{e\parallel}}{p_{e}}+\left(A_{e}^{u}\bigg{(}\frac{|\boldsymbol{v}_{e}^{\prime}|}{v_{\mathrm{th}e}}\bigg{)}-1\right)\frac{m_{e}u_{ei\parallel}}{T_{e}\tau_{e}}\bigg{]}v_{e\parallel}^{\prime}+C_{e}\bigg{(}\frac{|\boldsymbol{v}_{e}^{\prime}|}{v_{\mathrm{th}e}}\bigg{)}\left(\hat{\boldsymbol{z}}\hat{\boldsymbol{z}}-\frac{1}{3}\boldsymbol{I}\right)\boldsymbol{:W}_{e}\left(\frac{v_{e\parallel}^{\prime 2}}{v_{\mathrm{the}}^{2}}-\frac{v_{e\perp}^{\prime 2}}{2v_{\mathrm{the}}^{2}}\right)\Bigg{\}}f_{e}^{(0)}\,,\]

where \(\tau_{e}\) is defined by equation (151) in the main text, and the isotropic functions \(A_{e}^{T}(|\boldsymbol{v}_{e}^{\prime}|/v_{\mathrm{th}e})\), \(A_{e}^{R}(|\boldsymbol{v}_{e}^{\prime}|/v_{\mathrm{th}e})\), \(A_{e}^{u}(|\boldsymbol{v}_{e}^{\prime}|/v_{\mathrm{th}e})\) and \(C_{e}(|\boldsymbol{v}_{e}^{\prime}|/v_{\mathrm{th}e})\) are determined by inverting the collision operators (see appendix B.2 for an example of how this calculation is done for a simple choice of collision operator). The total electron CE distribution function becomes

\[f_{e}(v_{e\parallel}^{\prime},v_{e\perp}^{\prime})=\Bigg{\{}1+\tau_{e}\bigg{[}A_{e}^{T}\bigg{(}\frac{|\boldsymbol{v}_{e}^{\prime}|}{v_{\mathrm{th}e}}\bigg{)}\,\nabla_{\parallel}\log T_{e}+A_{e}^{R}\bigg{(}\frac{|\boldsymbol{v}_{e}^{\prime}|}{v_{\mathrm{th}e}}\bigg{)}\,\frac{R_{e\parallel}}{p_{e}}+\left(A_{e}^{u}\bigg{(}\frac{|\boldsymbol{v}_{e}^{\prime}|}{v_{\mathrm{th}e}}\bigg{)}-1\right)\frac{m_{e}u_{ei\parallel}}{T_{e}\tau_{e}}\bigg{]}v_{e\parallel}^{\prime}+\tau_{e}C_{e}\bigg{(}\frac{|\boldsymbol{v}_{e}^{\prime}|}{v_{\mathrm{th}e}}\bigg{)}\left(\hat{\boldsymbol{z}}\hat{\boldsymbol{z}}-\frac{1}{3}\boldsymbol{I}\right)\boldsymbol{:W}_{e}\left(\frac{v_{e\parallel}^{\prime 2}}{v_{\mathrm{the}}^{2}}-\frac{v_{e\perp}^{\prime 2}}{2v_{\mathrm{the}}^{2}}\right)\Bigg{\}}f_{e}^{(0)}\,. \tag{116}\]

We emphasise that this quantity is expressed in the rest frame of the electron fluid\({}^{1}\).

Footnote 1: Reintroducing the parameters \(\eta_{e}^{T}\), \(\eta_{e}^{R}\), \(\eta_{e}^{u}\) and \(\epsilon_{e}\) into (116) gives the expression (17) that is quoted in section 2.2.2.

Finally, we recover (8a) by transforming (116) into the frame co-moving with the ion fluid. Since \(u_{ei\parallel}\sim\lambda_{e}v_{\mathrm{the}}/L\ll v_{\mathrm{the}}\), this transformation applied to the non-Maxwellian component \(f_{e}^{(1)}\) of the electron distribution function only produces corrections of magnitude \(\sim(\lambda_{e}/L)f_{e}^{(1)}\), and thus any correction terms are negligible. The only important contribution is from the shifted Maxwellian:

\[\exp\left(-\frac{|\boldsymbol{v}_{e}^{\prime}|^{2}}{v_{\mathrm{the}}^{2}}\right)\approx\exp\left(-\tilde{v}_{e}^{2}\right)\left[1+2\tilde{v}_{e\parallel}\frac{u_{ei\parallel}}{v_{\mathrm{the}}}+\ldots\right], \tag{118}\]

where \(\tilde{\boldsymbol{v}}_{e}=(\boldsymbol{v}-\boldsymbol{V}_{i})/v_{\mathrm{th}e}\).
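The expansion (118) is simply a completion of the square: writing \(\boldsymbol{v}_{e}^{\prime}=\boldsymbol{v}-\boldsymbol{V}_{e}=v_{\mathrm{th}e}\tilde{\boldsymbol{v}}_{e}-\boldsymbol{u}_{ei}\) (the relative drift here being parallel to the magnetic field) and expanding in \(u_{ei\parallel}/v_{\mathrm{th}e}\ll 1\),

\[\frac{|\boldsymbol{v}_{e}^{\prime}|^{2}}{v_{\mathrm{th}e}^{2}}=\left|\tilde{\boldsymbol{v}}_{e}-\frac{\boldsymbol{u}_{ei}}{v_{\mathrm{th}e}}\right|^{2}=\tilde{v}_{e}^{2}-2\tilde{v}_{e\parallel}\frac{u_{ei\parallel}}{v_{\mathrm{th}e}}+\mathit{O}\!\left(\frac{u_{ei}^{2}}{v_{\mathrm{th}e}^{2}}\right).\]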
Combining (118) with (116), we deduce

\[f_{e}(\tilde{v}_{e\parallel},\tilde{v}_{e\perp})=\Bigg{\{}1+\bigg{[}A_{e}^{T}(\tilde{v}_{e})\,\lambda_{e}\nabla_{\parallel}\log T_{e}+A_{e}^{R}(\tilde{v}_{e})\,\lambda_{e}\frac{R_{e\parallel}}{p_{e}}+A_{e}^{u}(\tilde{v}_{e})\,\lambda_{e}\frac{m_{e}u_{ei\parallel}}{T_{e}\tau_{e}}\bigg{]}\,\tilde{v}_{e\parallel}+\tau_{e}C_{e}(\tilde{v}_{e})\left(\hat{\boldsymbol{z}}\hat{\boldsymbol{z}}-\frac{1}{3}\boldsymbol{I}\right)\boldsymbol{:W}_{e}\left(\tilde{v}_{e\parallel}^{2}-\frac{\tilde{v}_{e\perp}^{2}}{2}\right)\Bigg{\}}f_{e}^{(0)}\,. \tag{119}\]

Introducing the parameters \(\eta_{e}^{T}\), \(\eta_{e}^{R}\), \(\eta_{e}^{u}\) and \(\epsilon_{e}\) defined by equations (11a), (11b), (11c) and (11e) gives the final result (8a).

#### B.1.3 Ions

The derivation of the equivalent result (8b) for the ion distribution is mostly similar, but with one key difference: the total ion collision operator is dominated by the ion-ion collision operator \(\mathfrak{C}_{ii}(f_{i})\equiv\mathfrak{C}(f_{i},f_{i})\):

\[\sum_{s^{\prime}}\mathfrak{C}(f_{i},f_{s^{\prime}})=\mathfrak{C}_{ii}(f_{i})+\mathfrak{C}(f_{i},f_{e})\approx\mathfrak{C}_{ii}(f_{i})\,. \tag{13a}\]

This is because ion-electron collisions are small in the mass ratio compared to ion-ion collisions. After some algebra, it can be shown that the equivalent of (10d) for the perturbed ion distribution \(f_{i}^{(1)}\) is

\[\mathfrak{C}_{ii}(f_{i}^{(1)})-\frac{Z_{i}e}{m_{i}c}\left(\mathbf{v}_{i}^{\prime}\times\mathbf{B}\right)\mathbf{\cdot}\,\frac{\partial f_{i}^{(1)}}{\partial\mathbf{v}_{i}^{\prime}}=\Bigg{[}\left(\frac{|\mathbf{v}_{i}^{\prime}|^{2}}{v_{\mathrm{th}i}^{2}}-\frac{5}{2}\right)\mathbf{v}_{i}^{\prime}\mathbf{\cdot}\nabla\log T_{i}+\frac{m_{i}}{2T_{i}}\left(\mathbf{v}_{i}^{\prime}\mathbf{v}_{i}^{\prime}-\frac{|\mathbf{v}_{i}^{\prime}|^{2}}{3}\mathbf{I}\right)\mathbf{:}\mathbf{W}_{i}\Bigg{]}f_{i}^{(0)}\,, \tag{13b}\]

where the lowest-order distribution is Maxwellian:

\[f_{i}^{(0)}(\mathbf{v})=\frac{n_{i}}{\pi^{3/2}v_{\mathrm{th}i}^{3}}\exp\left(-\frac{|\mathbf{v}_{i}^{\prime}|^{2}}{v_{\mathrm{th}i}^{2}}\right)\,. \tag{13c}\]

We emphasise that the main differences between (10d) and (13b) are the presence of only one collision operator on the left-hand side of (13b) and the absence of any term proportional to the ion-electron friction force \(\mathbf{R}_{ie}\) on the right-hand side of (13b). Once (13b) has been written down, the method for obtaining the ion CE distribution function (8b) in a strongly magnetised plasma is near-identical to that of the electron distribution function.
Gyro-averaging gives

\[\mathfrak{C}_{ii}(f_{i}^{(1)})=\Bigg{[}\left(\frac{|\mathbf{v}_{i}^{\prime}|^{2}}{v_{\mathrm{th}i}^{2}}-\frac{5}{2}\right)v_{i\parallel}^{\prime}\nabla_{\parallel}\log T_{i}+\left(\hat{\mathbf{z}}\hat{\mathbf{z}}-\frac{1}{3}\mathbf{I}\right)\mathbf{:}\mathbf{W}_{i}\left(\frac{v_{i\parallel}^{\prime 2}}{v_{\mathrm{th}i}^{2}}-\frac{v_{i\perp}^{\prime 2}}{2v_{\mathrm{th}i}^{2}}\right)\Bigg{]}f_{i}^{(0)}\,, \tag{13d}\]

from which it follows that

\[f_{i}(v_{i\parallel}^{\prime},v_{i\perp}^{\prime})=\Bigg{[}1+\tau_{i}A_{i}\bigg{(}\frac{|\mathbf{v}_{i}^{\prime}|}{v_{\mathrm{th}i}}\bigg{)}\,v_{i\parallel}^{\prime}\nabla_{\parallel}\log T_{i}+\tau_{i}C_{i}\bigg{(}\frac{|\mathbf{v}_{i}^{\prime}|}{v_{\mathrm{th}i}}\bigg{)}\left(\hat{\mathbf{z}}\hat{\mathbf{z}}-\frac{1}{3}\mathbf{I}\right)\mathbf{:}\mathbf{W}_{i}\left(\frac{v_{i\parallel}^{\prime 2}}{v_{\mathrm{th}i}^{2}}-\frac{v_{i\perp}^{\prime 2}}{2v_{\mathrm{th}i}^{2}}\right)\Bigg{]}f_{i}^{(0)}\,. \tag{13e}\]

On substituting for parameters \(\eta_{i}\) and \(\epsilon_{i}\) defined by (11d) and (11f), respectively, we obtain (8b).

### B.2 Deriving isotropic functions of velocity for the CE solution

In this appendix, we illustrate how to calculate the isotropic functions \(A_{e}^{T}(\tilde{v}_{e})\), \(A_{e}^{R}(\tilde{v}_{e})\), \(A_{e}^{u}(\tilde{v}_{e})\), \(A_{i}(\tilde{v}_{i})\), \(C_{e}(\tilde{v}_{e})\) and \(C_{i}(\tilde{v}_{i})\) arising in the electron and ion CE distribution functions for the particular cases of two simplified collision operators: the Krook collision operator and the Lorentz collision operator.

#### B.2.1 Krook collision operator

The Krook collision operator (Bhatnagar _et al._, 1954) for species \(s\) is given by

\[\mathfrak{C}_{K}(f_{s})\equiv-\frac{1}{\tau_{s}}\left(f_{s}-f_{s}^{(0)}\right), \tag{141}\]

where \(\tau_{s}\) is the collision time of species \(s\) (assumed velocity-independent), and

\[f_{s}^{(0)}=\frac{n_{s}}{\uppi^{3/2}v_{\rm ths}^{3}}\exp\left(-\frac{|\mathbf{v}_{s}^{\prime}|^{2}}{v_{\rm ths}^{2}}\right) \tag{142}\]

is a Maxwellian distribution with density \(n_{s}\), mean velocity \(\mathbf{V}_{s}\) and temperature \(T_{s}\) determined from \(f_{s}\) via (3). For this choice of collision operator, i.e., assuming

\[\sum_{s^{\prime}}\mathfrak{C}(f_{s},f_{s^{\prime}})=\mathfrak{C}_{K}(f_{s}) \tag{143}\]

for all particle species, calculating the CE distribution function is particularly simple. Substituting equation (116) for the electron CE distribution function into the electron Krook collision operator, we find

\[\mathfrak{C}_{K}(f_{e})=-\Bigg{\{}\bigg{[}A_{e}^{T}\bigg{(}\frac{|\mathbf{v}_{e}^{\prime}|}{v_{\rm the}}\bigg{)}\,\nabla_{\parallel}\log T_{e}+A_{e}^{R}\bigg{(}\frac{|\mathbf{v}_{e}^{\prime}|}{v_{\rm the}}\bigg{)}\,\frac{R_{e\parallel}}{p_{e}}+\bigg{(}A_{e}^{u}\bigg{(}\frac{|\mathbf{v}_{e}^{\prime}|}{v_{\rm the}}\bigg{)}-1\bigg{)}\,\frac{m_{e}u_{ei\parallel}}{T_{e}\tau_{e}}\bigg{]}v_{e\parallel}^{\prime}+C_{e}\bigg{(}\frac{|\mathbf{v}_{e}^{\prime}|}{v_{\rm the}}\bigg{)}\left(\hat{\mathbf{z}}\hat{\mathbf{z}}-\frac{1}{3}\mathbf{I}\right)\mathbf{:W}_{e}\left(\frac{v_{e\parallel}^{\prime 2}}{v_{\rm the}^{2}}-\frac{v_{e\perp}^{\prime 2}}{2v_{\rm the}^{2}}\right)\Bigg{\}}f_{e}^{(0)}\,. \tag{144}\]
By comparison to (117), which, on substituting the Krook operator, becomes

\[\mathfrak{C}_{K}(f_{e}^{(1)})=\Bigg{\{}\left[\left(\frac{|\mathbf{v}_{e}^{\prime}|^{2}}{v_{\rm the}^{2}}-\frac{5}{2}\right)\nabla_{\parallel}\log T_{e}+\frac{R_{e\parallel}}{p_{e}}+\frac{m_{e}u_{ei\parallel}}{T_{e}\tau_{e}}\right]v_{e\parallel}^{\prime}+\left(\hat{\mathbf{z}}\hat{\mathbf{z}}-\frac{1}{3}\mathbf{I}\right)\mathbf{:W}_{e}\left(\frac{v_{e\parallel}^{\prime 2}}{v_{\rm the}^{2}}-\frac{v_{e\perp}^{\prime 2}}{2v_{\rm the}^{2}}\right)\Bigg{\}}f_{e}^{(0)}\,, \tag{145}\]

we can immediately deduce that

\[A_{e}^{T}(\tilde{v}_{e})=-\left(\tilde{v}_{e}^{2}-\frac{5}{2}\right)\,, \tag{146a}\]
\[A_{e}^{R}(\tilde{v}_{e})=-1\,, \tag{146b}\]
\[A_{e}^{u}(\tilde{v}_{e})=0\,, \tag{146c}\]
\[C_{e}(\tilde{v}_{e})=-1\,. \tag{146d}\]

The CE electron-ion-drift term vanishes for a Krook operator because the operator neglects inter-species collisions; by the same token, neither \(T_{i}\) and \(T_{e}\) nor \(\mathbf{V}_{i}\) and \(\mathbf{V}_{e}\) will equilibrate. For the ion CE distribution, it follows from (13e) substituted into (141) that

\[\mathfrak{C}_{K}(f_{i})=-\Bigg{[}A_{i}\bigg{(}\frac{|\mathbf{v}_{i}^{\prime}|}{v_{\rm thi}}\bigg{)}\,v_{i\parallel}^{\prime}\nabla_{\parallel}\log T_{i}+C_{i}\bigg{(}\frac{|\mathbf{v}_{i}^{\prime}|}{v_{\rm thi}}\bigg{)}\left(\hat{\mathbf{z}}\hat{\mathbf{z}}-\frac{1}{3}\mathbf{I}\right)\mathbf{:W}_{i}\left(\frac{v_{i\parallel}^{\prime 2}}{v_{\rm thi}^{2}}-\frac{v_{i\perp}^{\prime 2}}{2v_{\rm thi}^{2}}\right)\Bigg{]}f_{i}^{(0)}, \tag{147}\]

which gives, on comparison with (13d), that

\[A_{i}(\tilde{v}_{i})=-\left(\tilde{v}_{i}^{2}-\frac{5}{2}\right)\,, \tag{148a}\]
Using the identities \[\frac{\partial}{\partial\mathbf{v}}\mathbf{\cdot}\left[\frac{1}{v}\left( \mathbf{l}-\hat{\mathbf{v}}\hat{\mathbf{v}}\right)\mathbf{\cdot}\frac{\partial}{\partial\mathbf{ v}}\left(\mathbf{a}\mathbf{\cdot}\mathbf{v}\right)\right] = -\frac{2\mathbf{a}\mathbf{\cdot}\mathbf{v}}{v^{3}}\,, \tag{121a}\] \[\frac{\partial}{\partial\mathbf{v}}\mathbf{\cdot}\left[\frac{1}{v}\left( \mathbf{l}-\hat{\mathbf{v}}\hat{\mathbf{v}}\right)\mathbf{\cdot}\frac{\partial}{\partial\mathbf{ v}}\left(\mathbf{v}\mathbf{\cdot}\mathbf{\mathbf{A}}\mathbf{\cdot}\mathbf{v}\right)\right] = -\frac{6\mathbf{v}\mathbf{\cdot}\mathbf{\mathbf{A}}\mathbf{\cdot}\mathbf{v}}{v^{3}} \tag{121b}\] for any constant vector \(\mathbf{a}\) and any symmetric, traceless, constant matrix \(\mathbf{\mathbf{A}}\), it follows that \[\mathfrak{C}_{L}(f_{e})=-\hat{\nu}_{e}(\tilde{v}_{e})\Bigg{\{} \bigg{[}2A_{e}^{T}\bigg{(}\frac{|\mathbf{v}_{e}^{\prime}|}{v_{\rm th\/}}\bigg{)} \,\nabla_{\parallel}\log T_{e}+2A_{e}^{R}\bigg{(}\frac{|\mathbf{v}_{e}^{\prime}|} {v_{\rm th\/}e}\bigg{)}\,\frac{R_{e\parallel}}{p_{e}}\] \[+2\left(A_{e}^{u}\bigg{(}\frac{|\mathbf{v}_{e}^{\prime}|}{v_{\rm th \/}e}\bigg{)}-1\right)\frac{m_{e}u_{e\parallel}}{T_{e}\tau_{e}}\Bigg{]}v_{e \parallel}^{\prime}\] \[+6C_{e}\bigg{(}\frac{|\mathbf{v}_{e}^{\prime}|}{v_{\rm th\/}e}\bigg{)} \left(\hat{\mathbf{z}}\hat{\mathbf{z}}-\frac{1}{3}\mathbf{l}\right)\mathbf{\cdot}\mathbf{W}_{e} \left(\frac{v_{e\parallel}^{\prime 2}}{v_{\rm th\/}e}-\frac{v_{e\perp}^{ \prime 2}}{2v_{\rm th\/}^{2}}\right)\Bigg{\}}f_{e}^{(0)}\,, \tag{122}\] where \(\hat{\nu}_{s}\equiv\nu_{s}(\tilde{v}_{s})\tau_{s}\) is the non-dimensionalised collision rate for species \(s\). As with the Krook operator, we compare (122) to (120), substituting a Lorentz collision operator for the latter, viz., \[\mathfrak{C}_{L}(f_{e}^{(1)})=\Bigg{\{} \bigg{[}\bigg{(}\frac{|\mathbf{v}_{e}^{\prime}|^{2}}{v_{\rm th\/}^{2}}- \frac{5}{2}\bigg{)}\,\nabla_{\parallel}\log T_{e}+\frac{R_{e\parallel}}{p_{e }}+\frac{m_{e}u_{e\parallel}\nu_{e}(\tilde{v}_{e})}{T_{e}}\bigg{]}\,v_{e \parallel}^{\prime}\] \[+\bigg{(}\hat{\mathbf{z}}\hat{\mathbf{z}}-\frac{1}{3}\mathbf{l}\Big{)}\mathbf{ \cdot}\mathbf{W}_{e}\left(\frac{v_{e\parallel}^{\prime 2}}{v_{\rm th\/}^{2}}-\frac{v_{e \perp}^{\prime 2}}{2v_{\rm th\/}^{2}}\right)\Bigg{\}}f_{e}^{(0)}\,. \tag{123}\] We deduce from the comparison that \[A_{e}^{T}(\tilde{v}_{e})=-\frac{1}{2\hat{\nu}_{e}(\tilde{v}_{e})} \left(\tilde{v}_{e}^{2}-\frac{5}{2}\right)\,, \tag{124a}\] \[A_{e}^{R}(\tilde{v}_{e})=-\frac{1}{2\hat{\nu}_{e}(\tilde{v}_{e})}\,,\] (124b) \[A_{e}^{u}(\tilde{v}_{e})=\frac{1}{2}\,, \tag{124c}\] \[C_{e}(\tilde{v}_{e})=-\frac{1}{6\hat{\nu}_{e}(\tilde{v}_{e})}\,. \tag{110}\] The isotropic functions \(A_{i}(\tilde{v}_{i})\) and \(C_{i}(\tilde{v}_{i})\), which are given by \[A_{i}(\tilde{v}_{i}) =-\frac{1}{2\nu_{i}(\tilde{v}_{i})\tau_{i}}\left(\tilde{v}_{i}^{2 }-\frac{5}{2}\right)\,, \tag{111a}\] \[C_{i}(\tilde{v}_{i}) =-\frac{1}{6\nu_{i}(\tilde{v}_{i})\tau_{i}}\,. \tag{111b}\] can be deduced in an analogous manner. Appendix C Derivation of hot, magnetised plasma dispersion relation for arbitrary distribution functions In this appendix we re-derive the hot-plasma dispersion relation, given by (74) in section 2.4.1 (see also Davidson, 1983; Parra, 2017, the latter of whose approaches we follow). 
Our derivation also introduces a (simplified) collision operator in order to show that substitution (123) stated in section 2.5.7 provides a simple technique for including the effect of collisions on linear electromagnetic perturbations. Consider a kinetic, magnetised plasma in equilibrium composed of one electron species and multiple ions species, with (assumed constant) background magnetic field \(\mathbf{B}_{0}\). As in section 2.4.1, we denote the (gyrotropic) equilibrium distribution function of species \(s\) as \(f_{s0}=f_{s0}(v_{\parallel},v_{\perp})\). and then consider a collisionless, linear perturbation \(\delta f_{s}\) to this equilibrium state, with wavevector \(\mathbf{k}\) and complex frequency \(\omega\): \[\delta f_{s}=\widehat{\delta f}_{s}\exp\left\{\mathrm{i}\left(\mathbf{k}\mathbf{\cdot }\mathbf{r}-\omega t\right)\right\}\,. \tag{112}\] The electromagnetic perturbations associated with the perturbed distribution functions have the forms given in (68), viz., \[\delta\mathbf{E} =\widehat{\delta\mathbf{E}}\exp\left\{\mathrm{i}\left(\mathbf{k}\mathbf{\cdot }\mathbf{r}-\omega t\right)\right\}, \tag{113a}\] \[\delta\mathbf{B} =\widehat{\delta\mathbf{B}}\exp\left\{\mathrm{i}\left(\mathbf{k}\mathbf{\cdot }\mathbf{r}-\omega t\right)\right\}, \tag{113b}\] and satisfy Faraday's law and the Maxwell-Ampere's law: \[\frac{\partial\delta\mathbf{B}}{\partial t} =-c\mathbf{\nabla}\times\delta\mathbf{E}, \tag{114a}\] \[\mathbf{\nabla}\times\delta\mathbf{B} =\frac{4\pi}{c}\delta\mathbf{j}+\frac{1}{c}\frac{\partial\delta\mathbf{E }}{\partial t}, \tag{114b}\] where the current perturbation is \[\delta\mathbf{j}=\widehat{\delta\mathbf{j}}\exp\left\{\mathrm{i}\left(\mathbf{k}\mathbf{\cdot }\mathbf{r}-\omega t\right)\right\}=\sum_{s}Z_{s}e\int\mathrm{d}^{3}\mathbf{v}\,\mathbf{v }\,\delta f_{s}\,. \tag{115}\] To close these equations, we relate \(\delta f_{s}\) to the electromagnetic field perturbations by linearising the Maxwell-Vlasov-Landau equation (1). The linearisation \(f_{s}=f_{s0}+\delta f_{s}\) then gives that the perturbed distribution function of species \(s\) satisfies \[\frac{\partial\delta f_{s}}{\partial t}+\mathbf{v}\mathbf{\cdot}\nabla\delta f_{s}+ \frac{Z_{s}e}{m_{s}c}\left(\mathbf{v}\times\mathbf{B}_{0}\right)\mathbf{\cdot}\frac{ \partial\delta f_{s}}{\partial\mathbf{v}}=-\frac{Z_{s}e}{m_{s}}\left(\delta\mathbf{E}+ \frac{\mathbf{v}\times\delta\mathbf{B}}{c}\right)\mathbf{\cdot}\frac{\partial f_{s0}}{ \partial\mathbf{v}}-\nu_{s}\delta f_{s}\,, \tag{116}\] where we have replaced the full linearised collision operator with a simplified Krook collision operator with constant collision frequency \(\nu_{s}=\tau_{s}^{-1}\) for species \(s\). For any particular equilibrium distribution function, (11a), (11b), (11c) and (11d) are a closed set of governing equations. 
We now write these equations in terms of \(\mathbf{k}\) and \(\omega\) using (112), (113a) and (113b):

\[-\mathrm{i}\omega\widehat{\delta\mathbf{B}}=-\mathrm{i}c\mathbf{k}\times\widehat{\delta\mathbf{E}}, \tag{11a}\]
\[\mathrm{i}\mathbf{k}\times\widehat{\delta\mathbf{B}}=\frac{4\uppi}{c}\widehat{\delta\mathbf{j}}-\frac{\mathrm{i}\omega}{c}\widehat{\delta\mathbf{E}}, \tag{11b}\]
\[\widehat{\delta\mathbf{j}}=\sum_{s}Z_{s}e\int\mathrm{d}^{3}\mathbf{v}\,\mathbf{v}\,\widehat{\delta f_{s}}, \tag{11c}\]
\[\left(-\mathrm{i}\hat{\omega}_{s}+\mathrm{i}\mathbf{k}\mathbf{\cdot}\mathbf{v}+\tilde{\Omega}_{s}\frac{\partial}{\partial\phi}\right)\widehat{\delta f_{s}}=-\frac{Z_{s}e}{m_{s}}\left(\widehat{\delta\mathbf{E}}+\frac{\mathbf{v}\times\widehat{\delta\mathbf{B}}}{c}\right)\mathbf{\cdot}\,\frac{\partial f_{s0}}{\partial\mathbf{v}}\,, \tag{11d}\]

where we have defined the (signed) Larmor frequency of species \(s\) as

\[\tilde{\Omega}_{s}\equiv\frac{Z_{s}eB_{0}}{m_{s}c}=\frac{Z_{s}}{|Z_{s}|}\Omega_{s}, \tag{11e}\]

and introduced the modified complex frequency \(\hat{\omega}_{s}\equiv\omega+\mathrm{i}\nu_{s}\). Note that \(Z_{e}=-1\), so that \(\tilde{\Omega}_{e}<0\). We then eliminate \(\widehat{\delta\mathbf{B}}\) in (11b) and (11d) using (11a) to give

\[\frac{k^{2}c^{2}}{\omega^{2}}\left[\widehat{\delta\mathbf{E}}-\hat{\mathbf{k}}\left(\hat{\mathbf{k}}\cdot\widehat{\delta\mathbf{E}}\right)\right]=\frac{4\uppi\mathrm{i}}{\omega}\widehat{\delta\mathbf{j}}-\widehat{\delta\mathbf{E}}, \tag{12a}\]
\[\widehat{\delta\mathbf{j}}=\sum_{s}Z_{s}e\int\mathrm{d}^{3}\mathbf{v}\,\mathbf{v}\,\widehat{\delta f_{s}}, \tag{12b}\]
\[\left(-\mathrm{i}\hat{\omega}_{s}+\mathrm{i}\mathbf{k}\mathbf{\cdot}\mathbf{v}+\tilde{\Omega}_{s}\frac{\partial}{\partial\phi}\right)\widehat{\delta f_{s}}=-\frac{Z_{s}e}{m_{s}}\left[\widehat{\delta\mathbf{E}}+\frac{k}{\omega}\mathbf{v}\times\left(\hat{\mathbf{k}}\times\widehat{\delta\mathbf{E}}\right)\right]\mathbf{\cdot}\,\frac{\partial f_{s0}}{\partial\mathbf{v}}\,. \tag{12c}\]

Next, we derive an expression for \(\widehat{\delta f}_{s}\) in terms of \(\widehat{\delta\mathbf{E}}\). For arbitrary wavelengths compared to the Larmor radius \(\rho_{s}\) of species \(s\), expressing \(\widehat{\delta f}_{s}\) in terms of the equilibrium distribution function and \(\widehat{\delta\mathbf{E}}\) requires inversion of the gyrophase-angle derivative in (12c). This can be done for any \(f_{s0}\) in an orthonormal coordinate system with basis vectors \(\{\hat{\mathbf{x}},\hat{\mathbf{y}},\hat{\mathbf{z}}\}\) defined by equations (75).
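The gyrophase dependence of \(\widehat{\delta f}_{s}\) enters through factors of \(\exp(-\mathrm{i}k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp}\sin\phi)\), which are converted into the Bessel series below via the Jacobi-Anger expansion \(\exp(\mathrm{i}a\sin\phi)=\sum_{n}J_{n}(a)\,\mathrm{e}^{\mathrm{i}n\phi}\); a quick numerical check of this identity (Python with SciPy assumed; values illustrative):

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_n

a, phi = 2.7, 0.9                       # illustrative values
n = np.arange(-60, 61)                  # truncated sum over harmonics
lhs = np.exp(1j * a * np.sin(phi))
rhs = np.sum(jv(n, a) * np.exp(1j * n * phi))
print(np.allclose(lhs, rhs))            # True
```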
By Fourier transforming \(\widehat{\delta f}_{s}\) in \(\phi\), it can then be shown that

\[\widehat{\delta f_{s}}=-\frac{Z_{s}e\mathrm{i}}{m_{s}\omega}\left(\frac{\partial f_{s0}}{\partial v_{\parallel}}-\frac{v_{\parallel}}{v_{\perp}}\frac{\partial f_{s0}}{\partial v_{\perp}}\right)\hat{\mathbf{z}}\mathbf{\cdot}\,\widehat{\delta\mathbf{E}}+\exp\left(-\mathrm{i}k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp}\sin\phi\right)\sum_{n=-\infty}^{\infty}\widehat{\delta f}_{s,n}\exp\left(\mathrm{i}n\phi\right)\,, \tag{12d}\]

where the series coefficients are given by

\[\widehat{\delta f}_{s,n}=-\frac{Z_{s}e\mathrm{i}}{m_{s}}\frac{1}{\hat{\omega}_{s}-k_{\parallel}v_{\parallel}-n\tilde{\Omega}_{s}}\left[\frac{\partial f_{s0}}{\partial v_{\perp}}+\frac{k_{\parallel}}{\omega}\left(v_{\perp}\frac{\partial f_{s0}}{\partial v_{\parallel}}-v_{\parallel}\frac{\partial f_{s0}}{\partial v_{\perp}}\right)\right]\mathbf{u}_{n}^{*}\mathbf{\cdot}\,\widehat{\delta\mathbf{E}}\,, \tag{12e}\]

and the vector \(\mathbf{u}_{n}\) in the basis \(\{\hat{\mathbf{x}},\hat{\mathbf{y}},\hat{\mathbf{z}}\}\) is

\[\mathbf{u}_{n}=\frac{v_{\parallel}}{v_{\perp}}\,J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})\hat{\mathbf{z}}+\frac{nJ_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})}{k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp}}\hat{\mathbf{x}}-\mathrm{i}J_{n}^{\prime}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})\hat{\mathbf{y}}\,, \tag{12f}\]

\(J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})\) denoting the \(n\)-th order Bessel function of the first kind. We can then take advantage of the independence of \(f_{s0}\) of the gyrophase angle to show that the current perturbation is

\[\widehat{\delta\mathbf{j}}=-\sum_{s}\frac{2\uppi Z_{s}^{2}e^{2}\mathrm{i}}{m_{s}\omega}\int_{-\infty}^{\infty}\mathrm{d}v_{\parallel}\int_{0}^{\infty}\mathrm{d}v_{\perp}\left(v_{\perp}\frac{\partial f_{s0}}{\partial v_{\parallel}}-v_{\parallel}\frac{\partial f_{s0}}{\partial v_{\perp}}\right)v_{\parallel}\hat{\mathbf{z}}\left(\hat{\mathbf{z}}\cdot\widehat{\delta\mathbf{E}}\right)+\sum_{s}2\uppi Z_{s}e\int_{C_{L}}\mathrm{d}v_{\parallel}\int_{0}^{\infty}\mathrm{d}v_{\perp}v_{\perp}^{2}\sum_{n=-\infty}^{\infty}\widehat{\delta f}_{s,n}\mathbf{u}_{n}\,, \tag{12g}\]

where \(C_{L}\) denotes the usual Landau contour. This can be written as Ohm's law:

\[\widehat{\delta\mathbf{j}}=\mathbf{\sigma}\cdot\widehat{\delta\mathbf{E}}, \tag{12h}\]

where \(\mathbf{\sigma}\) is the conductivity tensor. In the absence of collisions (\(\nu_{s}=0\)), this is given by (76). If the collision frequency \(\nu_{s}\) is non-zero, then

\[\frac{\hat{\omega}_{s}}{|k_{\parallel}|v_{\mathrm{th}s}}=\tilde{\omega}_{\parallel s}+\frac{\mathrm{i}}{|k_{\parallel}|\tau_{s}v_{\mathrm{th}s}}=\tilde{\omega}_{\parallel s}+\frac{\mathrm{i}}{|k_{\parallel}|\lambda_{s}}, \tag{12i}\]

from which the substitution (123) proposed in section 2.5.7 follows. Substituting Ohm's law (12h) into Ampere's law (12a) gives the singular nonlinear eigenvalue equation

\[\left[\frac{c^{2}k^{2}}{\omega^{2}}\left(\hat{\mathbf{k}}\hat{\mathbf{k}}-\mathbf{I}\right)+\mathfrak{E}\right]\cdot\widehat{\delta\mathbf{E}}=0, \tag{12j}\]

where

\[\mathfrak{E}\equiv\mathbf{I}+\frac{4\uppi\mathrm{i}}{\omega}\mathbf{\sigma} \tag{12k}\]

is the plasma dielectric tensor (73). Taking the determinant of (12j) gives the desired result (74).
## Appendix D Electrostatic instabilities of CE plasma

In this appendix, we calculate the electrostatic hot-plasma dispersion relation for arbitrary distribution functions (appendix D.1). We then show (appendix D.2) that for frequencies \(\omega\) such that \(\tilde{\omega}_{s\parallel}=\omega/k_{\parallel}v_{\mathrm{th}s}\ll 1\), the dominant contribution to the longitudinal conductivity \(\hat{\mathbf{k}}\cdot\mathbf{\sigma}\cdot\hat{\mathbf{k}}\) is from the Maxwellian component, and is strictly positive; the small \(\mathit{O}(\eta_{s},\epsilon_{s})\) non-Maxwellian distortion associated with the CE distribution function results in only an \(\mathit{O}(\eta_{s},\epsilon_{s})\) distortion to \(\hat{\mathbf{k}}\cdot\mathbf{\sigma}\cdot\hat{\mathbf{k}}\). We then illustrate the possibility of electrostatic instabilities associated with the CE distribution function by calculating the growth rate of the parallel CE bump-on-tail instability (appendix D.3). Finally, in appendix D.4, we show that the only electrostatic instabilities that can occur have growth rates that are exponentially small in the dimensionless parameters \(\eta_{s}\) and \(\epsilon_{s}\), for arbitrary frequencies. Thus, it follows that electrostatic instabilities generally have a small growth rate in comparison to electromagnetic instabilities for a CE plasma.

### D.1 The electrostatic hot-plasma dispersion relation

Beginning from the singular eigenvalue equation (72), viz.,

\[\left[\frac{c^{2}k^{2}}{\omega^{2}}\left(\hat{\mathbf{k}}\hat{\mathbf{k}}-\mathbf{I}\right)+\mathfrak{E}\right]\cdot\widehat{\delta\mathbf{E}}=0, \tag{127}\]

we consider the electrostatic modes, for which \(\widehat{\delta\mathbf{E}}=(\hat{\mathbf{k}}\cdot\widehat{\delta\mathbf{E}})\hat{\mathbf{k}}\). For them, the hot-plasma dispersion relation becomes

\[\mathfrak{E}_{33}=k^{2}+\frac{4\pi\mathrm{i}}{\omega}\hat{\mathbf{k}}\cdot\mathbf{\sigma}\cdot\hat{\mathbf{k}}=0\,. \tag{45}\]
Employing the expression (76) for the conductivity tensor, we calculate the longitudinal conductivity:

\[\hat{\mathbf{k}}\cdot\mathbf{\sigma}\cdot\hat{\mathbf{k}}=-\frac{\mathrm{i}}{4\pi\omega}\sum_{s}\omega_{ps}^{2}\bigg{[}\frac{2}{\sqrt{\pi}}\frac{k_{\parallel}^{2}}{k^{2}}\int_{-\infty}^{\infty}\mathrm{d}\tilde{v}_{s\parallel}\,\tilde{v}_{s\parallel}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\varLambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})+\tilde{\omega}_{s\parallel}\frac{2}{\sqrt{\pi}}\int_{C_{L}}\mathrm{d}\tilde{v}_{s\parallel}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\tilde{v}_{s\perp}^{2}\varXi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\sum_{n=-\infty}^{\infty}\frac{\hat{\mathbf{k}}\cdot\mathbf{\varOmega}_{sn}\cdot\hat{\mathbf{k}}}{\zeta_{sn}-\tilde{v}_{s\parallel}}\bigg{]}\,, \tag{46}\]

where

\[\hat{\mathbf{k}}\cdot\mathbf{\varOmega}_{sn}\cdot\hat{\mathbf{k}}=\frac{J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}}{k^{2}\tilde{\rho}_{s}^{2}\tilde{v}_{s\perp}^{2}}\left(n^{2}+2nk_{\parallel}\tilde{\rho}_{s}\tilde{v}_{s\parallel}+k_{\parallel}^{2}\tilde{\rho}_{s}^{2}\tilde{v}_{s\parallel}^{2}\right)=\frac{k_{\parallel}^{2}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}}{k^{2}\tilde{v}_{s\perp}^{2}}\left(\frac{n}{k_{\parallel}\tilde{\rho}_{s}}+\tilde{v}_{s\parallel}\right)^{2}\,. \tag{47}\]

By way of the identity

\[\sum_{n=-\infty}^{\infty}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}\frac{\left(n/k_{\parallel}\tilde{\rho}_{s}+\tilde{v}_{s\parallel}\right)^{2}}{\zeta_{sn}-\tilde{v}_{s\parallel}}=-\tilde{v}_{s\parallel}+\tilde{\omega}_{s\parallel}\sum_{n=-\infty}^{\infty}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}\frac{n/k_{\parallel}\tilde{\rho}_{s}+\tilde{v}_{s\parallel}}{\zeta_{sn}-\tilde{v}_{s\parallel}}\,, \tag{48}\]

which follows directly from the Bessel function identity

\[\sum_{n=-\infty}^{\infty}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}=1, \tag{49}\]

it follows that

\[\tilde{\omega}_{s\parallel}\frac{2}{\sqrt{\pi}}\int_{C_{L}}\mathrm{d}\tilde{v}_{s\parallel}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\tilde{v}_{s\perp}^{2}\varXi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\sum_{n=-\infty}^{\infty}\frac{\hat{\mathbf{k}}\cdot\mathbf{\varOmega}_{sn}\cdot\hat{\mathbf{k}}}{\zeta_{sn}-\tilde{v}_{s\parallel}}=-\tilde{\omega}_{s\parallel}\frac{2}{\sqrt{\pi}}\frac{k_{\parallel}^{2}}{k^{2}}\int_{C_{L}}\mathrm{d}\tilde{v}_{s\parallel}\,\tilde{v}_{s\parallel}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\left[\frac{\partial\tilde{f}_{s0}}{\partial\tilde{v}_{s\perp}}+\frac{\varLambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})}{\tilde{\omega}_{s\parallel}}\right]+\tilde{\omega}_{s\parallel}\frac{2}{\sqrt{\pi}}\frac{k_{\parallel}^{2}}{k^{2}}\int_{C_{L}}\mathrm{d}\tilde{v}_{s\parallel}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\varLambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\sum_{n=-\infty}^{\infty}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}\frac{n/k_{\parallel}\tilde{\rho}_{s}+\tilde{v}_{s\parallel}}{\zeta_{sn}-\tilde{v}_{s\parallel}}+\tilde{\omega}_{s\parallel}^{2}\frac{2}{\sqrt{\pi}}\frac{k_{\parallel}^{2}}{k^{2}}\int_{C_{L}}\mathrm{d}\tilde{v}_{s\parallel}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\frac{\partial\tilde{f}_{s0}}{\partial\tilde{v}_{s\perp}}\sum_{n=-\infty}^{\infty}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}\frac{n/k_{\parallel}\tilde{\rho}_{s}+\tilde{v}_{s\parallel}}{\zeta_{sn}-\tilde{v}_{s\parallel}}
\parallel}}{\zeta_{sn}-\tilde{v}_{s\parallel}}\] \[= -\frac{2}{\sqrt{\pi}}\frac{k_{\parallel}^{2}}{k^{2}}\int_{-\infty} ^{\infty}\mathrm{d}\tilde{v}_{s\parallel}\,\tilde{v}_{s\parallel}\int_{0}^{ \infty}\mathrm{d}\tilde{v}_{s\perp}\varLambda_{s}(\tilde{v}_{s\parallel}, \tilde{v}_{s\perp})\] (51) \[+\frac{2\tilde{\omega}_{s}^{2}}{\sqrt{\pi}}\int_{C_{L}}\mathrm{d} \tilde{v}_{s\parallel}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\sum_{n=- \infty}^{\infty}\left(\tilde{v}_{s\perp}\frac{\partial\tilde{f}_{s0}}{\partial \tilde{v}_{s\parallel}}+\frac{n}{k_{\parallel}\tilde{\rho}_{s}}\frac{\partial \tilde{f}_{s0}}{\partial\tilde{v}_{s\perp}}\right)\frac{J_{n}(k_{\perp}\tilde{ \rho}_{s}\tilde{v}_{s\perp})^{2}}{\zeta_{sn}-\tilde{v}_{s\parallel}}\,,\] where \(\tilde{\omega}_{s}\equiv\omega/kv_{\rm ths}\). We conclude that \[\hat{\mathbf{k}}\mathbf{\cdot}\mathbf{\sigma}\cdot\hat{\mathbf{k}}=-\frac{\rm i}{4\pi\omega}\sum_ {s}\omega_{\rm ps}^{2}\left[\tilde{\omega}_{s}^{2}\frac{2}{\sqrt{\pi}}\int_{C_{L }}{\rm d}\tilde{v}_{s\parallel}\int_{0}^{\infty}{\rm d}\tilde{v}_{s\perp}\sum_ {n=-\infty}^{\infty}\Pi_{n}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\frac{J _{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}}{\zeta_{sn}-\tilde{v}_{s \parallel}}\right]\,, \tag{103}\] where \[\Pi_{n}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\equiv\tilde{v}_{s\perp} \frac{\partial\tilde{f}_{s0}}{\partial\tilde{v}_{s\parallel}}+\frac{n}{k_{ \parallel}\tilde{\rho}_{s}}\frac{\partial\tilde{f}_{s0}}{\partial\tilde{v}_{s \perp}}\,. \tag{104}\] The electrostatic component of the dielectric tensor is then \[\mathfrak{E}_{33}=k^{2}+\sum_{s}k_{\rm Ds}^{2}\left[\frac{1}{\sqrt{\pi}}\int_{ C_{L}}{\rm d}\tilde{v}_{s\parallel}\int_{0}^{\infty}{\rm d}\tilde{v}_{s\perp} \sum_{n=-\infty}^{\infty}\Pi_{n}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp}) \frac{J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}}{\zeta_{sn}- \tilde{v}_{s\parallel}}\right], \tag{105}\] and the electrostatic hot-plasma dispersion relation (100) becomes \[k^{2}+\sum_{s}k_{\rm Ds}^{2}\left[\frac{1}{\sqrt{\pi}}\int_{C_{L}}{\rm d} \tilde{v}_{s\parallel}\int_{0}^{\infty}{\rm d}\tilde{v}_{s\perp}\sum_{n=- \infty}^{\infty}\Pi_{n}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\frac{J_{n} (k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}}{\zeta_{sn}-\tilde{v}_{s \parallel}}\right]=0, \tag{106}\] where the Debye wavenumber \(k_{\rm Ds}\) of species \(s\) is defined by \[k_{\rm Ds}\equiv\frac{\sqrt{2}\omega_{\rm ps}}{v_{\rm ths}}\,. \tag{107}\] ### The electrostatic dielectric response at low frequencies In this appendix, we perform a Taylor expansion of the electrostatic component \(\mathfrak{E}_{33}\) of the dielectric tensor (105) in \(\tilde{\omega}_{s\parallel}\ll 1\). 
Before carrying out the expansion, we first substitute the identity \[\Pi_{n}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})=\tilde{\omega}_{s\parallel }\Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})+\left(\tilde{v}_{s \parallel}-\zeta_{sn}\right)\frac{\partial\tilde{f}_{s0}}{\partial\tilde{v}_{ s\perp}} \tag{108}\] into (105), which then becomes \[\mathfrak{E}_{33} = k^{2}-\sum_{s}k_{\rm Ds}^{2}\frac{1}{\sqrt{\pi}}\int_{-\infty}^ {\infty}{\rm d}\tilde{v}_{s\parallel}\int_{0}^{\infty}{\rm d}\tilde{v}_{s \perp}\frac{\partial\tilde{f}_{s0}}{\partial\tilde{v}_{s\perp}} \tag{109}\] \[+\sum_{s}k_{\rm Ds}^{2}\left[\frac{\tilde{\omega}_{s\parallel}}{ \sqrt{\pi}}\int_{C_{L}}{\rm d}\tilde{v}_{s\parallel}\int_{0}^{\infty}{\rm d} \tilde{v}_{s\perp}\Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\sum_{n=- \infty}^{\infty}\frac{J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}} {\zeta_{sn}-\tilde{v}_{s\parallel}}\right].\] Now carrying out the Taylor expansion in \(\tilde{\omega}_{s\parallel}\ll 1\), we see that, to the leading order in this expansion, \[\mathfrak{E}_{33}\approx k^{2}+\sum_{s}k_{\rm Ds}^{2}\frac{1}{\sqrt{\pi}}\int_ {-\infty}^{\infty}{\rm d}\tilde{v}_{s\parallel}\tilde{f}_{s0}(\tilde{v}_{s \parallel},0)\,. \tag{110}\] For the CE distribution \[\tilde{f}_{s0}(\tilde{v}_{s\parallel},0)=\exp\left(-\tilde{v}_{s\parallel}^{2} \right)\left\{1+\eta_{s}A_{s}(\tilde{v}_{s\parallel})\tilde{v}_{s\parallel}+ \epsilon_{s}C_{s}(\tilde{v}_{s\parallel})\tilde{v}_{s\parallel}^{2}\right\}, \tag{111}\] we have \[\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}{\rm d}\tilde{v}_{s\parallel}\tilde {f}_{s0}(\tilde{v}_{s\parallel},0)=1+\frac{\epsilon_{s}}{2\sqrt{\pi}}\int_{0}^{ \infty}{\rm d}\tilde{v}_{s\parallel}\tilde{v}_{s\parallel}^{2}C_{s}(\tilde{v} _{s\parallel})\exp\left(-\tilde{v}_{s\parallel}^{2}\right), \tag{112}\] where the term in the CE distribution function proportional to \(\eta_{s}\) has vanished on account of having odd parity with respect to \(\tilde{v}_{s\parallel}\). We conclude that the non-Maxwellian contribution to (46) is \(\mathit{O}(\eta_{s},\epsilon_{s})\) in comparison to the Maxwellian contribution, and so the electrostatic component of the dielectric tensor for low-frequency fluctuations is just \[\mathfrak{E}_{33}\approx k^{2}+\sum_{s}k_{\mathrm{D}s}^{2}\,, \tag{47}\] or, writing (47) explictly in terms of \(\tilde{\omega}_{s\parallel}\) and the plasma frequency \(\omega_{\mathrm{p}s}\) of species \(s\), \[\mathfrak{E}_{33}\approx k^{2}+\sum_{s}\frac{\omega_{\mathrm{p}s}^{2}}{\omega ^{2}}\frac{2k_{\parallel}^{2}}{k^{2}}\tilde{\omega}_{s\parallel}^{2}\,. \tag{48}\] It follows that \(\mathfrak{E}_{33}^{(0)}\) and \(\mathfrak{E}_{33}^{(1)}\) defined by (98) are given by \[\mathfrak{E}_{33}^{(0)} =0, \tag{49a}\] \[\mathfrak{E}_{33}^{(1)} =\frac{\omega_{\mathrm{p}e}^{2}}{\omega^{2}}\sum_{s}\frac{Z_{s}T_ {e}}{T_{s}}\frac{2k_{\parallel}^{2}}{k^{2}}. \tag{49b}\] where we have neglected the displacement current term (\(k\ll k_{\mathrm{D}e}\)), and the temperature of species \(s\) is denoted by \(T_{s}\). ### Existence of electrostatic instabilities for a CE plasma That electrostatic instabilities can exist is most simply shown in the limit of purely parallel, high-frequency fluctuations: \(k_{\perp}=0\), \(k_{\parallel}=k\), \(\tilde{\omega}_{s}=\tilde{\omega}_{s\parallel}\gg 1\), and \[\varpi\equiv\mathrm{Re}\;\omega\gg\mathrm{Im}\;\omega\equiv\gamma\,. 
\tag{50}\] For purely parallel modes, the only non-zero term in the sum of Bessel functions in the electrostatic hot-plasma dispersion relation (46) is the \(n=0\) term; thus, (46) simplifies to \[\mathfrak{E}_{33}=k^{2}+\sum_{s}k_{\mathrm{D}s}^{2}\left(\frac{1}{\sqrt{\pi} }\int_{C_{L}}\mathrm{d}\tilde{v}_{s\parallel}\int_{0}^{\infty}\mathrm{d} \tilde{v}_{s\perp}\tilde{v}_{s\perp}\frac{\partial\tilde{f}_{s0}}{\partial \tilde{v}_{s\parallel}}\frac{1}{\tilde{\omega}_{s}-\tilde{v}_{s\parallel}} \right)=0. \tag{51}\] Next, we expand (50) around the real frequency \(\varpi\), using (49); this gives \[\mathfrak{E}_{33}(\omega,k)\approx\mathfrak{E}_{33}(\varpi,k)+\mathrm{i} \gamma\frac{\partial\mathfrak{E}_{33}}{\partial\omega}(\varpi,k). \tag{52}\] Taking the imaginary part of (52) allows for an expression for \(\gamma\) to be derived in terms of \(\varpi\): \[\gamma\approx-\left[\frac{\partial\,\mathrm{Re}\,\mathfrak{E}_{33}}{\partial \omega}(\varpi,k)\right]^{-1}\mathrm{Im}\,\mathfrak{E}_{33}(\varpi,k)\,. \tag{53}\] To calculate \(\gamma\), we use \[\mathrm{Re}\,\mathfrak{E}_{33}(\varpi,k) =k^{2}+\sum_{s}k_{\mathrm{D}s}^{2}\left(\frac{1}{\sqrt{\pi}}P \!\int\mathrm{d}\tilde{v}_{s\parallel}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s \perp}\tilde{v}_{s\perp}\frac{\partial\tilde{f}_{s0}}{\partial\tilde{v}_{s \parallel}}\frac{1}{\tilde{\omega}_{s}-\tilde{v}_{s\parallel}}\right)\,, \tag{54a}\] \[\mathrm{Im}\,\mathfrak{E}_{33}(\varpi,k) =-\sqrt{\pi}k^{2}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp} \tilde{v}_{s\perp}\frac{\partial\tilde{f}_{s0}}{\partial\tilde{v}_{s\parallel }}(\tilde{\omega}_{s},\tilde{v}_{s\perp})\, \tag{54b}\] where, to the leading order, \(\tilde{\omega}_{s}\approx\varpi/kv_{\rm ths}\). Now expanding (D 25\(a\)) in \(\tilde{\omega}_{s}\gg 1\), we find that \[{\rm Re}\,\mathfrak{E}_{33}(\varpi,k)\approx k^{2}-\sum_{s}\frac{k_{\rm Ds}^{2} }{\tilde{\omega}_{s}^{2}}\approx k^{2}\left(1-\frac{\omega_{\rm pe}^{2}}{ \varpi^{2}}\right)\,,\] (D.26) where we have integrated (D 25\(a\)) by parts, used identity \[\int_{-\infty}^{\infty}{\rm d}\tilde{v}_{s\parallel}\int_{0}^{\infty}{\rm d} \tilde{v}_{s\perp}\,\tilde{v}_{s\perp}\tilde{f}_{s0}\big{(}\tilde{v}_{s\parallel },\tilde{v}_{s\perp}\big{)}=\sqrt{\pi}\,,\] (D.27) and neglected the small ion contribution to the dielectric tensor. We conclude that - as expected - the real frequency of such modes is simply the plasma frequency: \(\varpi\approx\pm\omega_{\rm pe}\). This in turn implies that \[\tilde{\omega}_{e}=\frac{k_{\rm De}}{\sqrt{2}k}\gg 1\,.\] (D.28) In other words, electrostatic modes in this limit are simply plasma oscillations with wavelengths much greater than the Debye length. 
We immediately deduce that if \(\varpi\approx\omega_{\rm pe}\) (without loss of generality, we can consider the mode with \(\varpi>0\)), then \[\frac{\partial\,{\rm Re}\,\mathfrak{E}_{33}}{\partial\omega}(\varpi,k)\approx \frac{2k^{2}}{\omega_{\rm pe}}\,,\] (D.29) which in turn implies that \(\gamma\) is positive if and only if, for some \(k\), \[{\rm Im}\,\mathfrak{E}_{33}(\omega_{\rm pe},k)>0.\] (D.30) For the electron CE distribution function (2.71), we have \[\frac{\partial\tilde{f}_{e0}}{\partial\tilde{v}_{e\parallel}}=- \exp\left(-\tilde{v}_{e}^{2}\right)\left\{2\tilde{v}_{e\parallel}+\eta_{e} \left[\left(2\tilde{v}_{e\parallel}^{2}-1\right)A_{e}(\tilde{v}_{e})-\frac{ \tilde{v}_{e\parallel}^{2}}{\tilde{v}_{e}}A_{e}^{\prime}(\tilde{v}_{e})\right]\right.\] \[\left.+\epsilon_{e}\left[2\tilde{v}_{e\parallel}C_{e}(\tilde{v}_{ e})\left(\tilde{v}_{e\parallel}^{2}-\frac{\tilde{v}_{e\perp}^{2}}{2}-1 \right)-\frac{\tilde{v}_{e\parallel}}{\tilde{v}_{e}}\left(\tilde{v}_{e\parallel }^{2}-\frac{\tilde{v}_{e\perp}^{2}}{2}\right)C_{e}^{\prime}(\tilde{v}_{e}) \right]\right\}\!.\] (D.31) As shown in appendix B.2.1, for a Krook collision operator it follows that (assuming \(\eta_{e}^{R}=\eta_{e}^{u}=0\)) \[A_{e}(\tilde{v}_{e})=-\left(\tilde{v}_{e}^{2}-\frac{5}{2}\right)\,,\] (D.32a) \[C_{e}(\tilde{v}_{e})=-1\,.\] (D.32b) We then see that \[{\rm Im}\,\mathfrak{E}_{33}(\omega_{\rm pe},k)=\sqrt{\pi}k^{2} \Bigg{[}\frac{k_{\rm De}}{\sqrt{2}k}-\eta_{e}\left(\frac{k_{\rm De}^{2}}{4k^{ 2}}-\frac{3}{4}\right)\left(\frac{k_{\rm De}^{2}}{k^{2}}-1\right)\] \[-\epsilon_{e}\frac{k_{\rm De}}{\sqrt{2}k}\left(\frac{k_{\rm De}^{ 2}}{2k^{2}}-\frac{3}{2}\right)\Bigg{]}\exp\left(-\frac{k_{\rm De}^{2}}{2k^{2} }\right)\,.\] (D.33) This expression changes sign from negative to positive when \(k\lesssim\eta_{e}^{1/3}k_{\rm De}\), or \(k\lesssim\epsilon_{e}^{1/2}k_{\rm De}\); thus, plasma waves with sufficiently long wavelengths are driven unstable by the non-Maxwellian component of the CE distribution function. Physically, this is the bump-on-tail instability; this arises because the distribution function is no longer monotonically decreasing at (parallel) particle velocities \(v_{\parallel}\gtrsim\eta_{e}^{-1/3}v_{\rm th\/e}\), or \(v_{\parallel}\gtrsim\eta_{e}^{-1/3}v_{\rm th\/e}\), and so plasma waves can extract energy from particles via the Landau resonance. Substituting (45) into (46), the growth rate of instabilities satisfying \(k\ll k_{\rm De}\) becomes \[\gamma\approx\omega_{\rm pe}\frac{\sqrt{\pi}}{2\sqrt{2}}\frac{k_{\rm De}}{k} \left(1-\eta_{e}\frac{k_{\rm De}^{3}}{2\sqrt{2}k^{3}}-\epsilon_{e}\frac{k_{\rm De }^{2}}{2k^{2}}\right)\exp\left(-\frac{k_{\rm De}^{2}}{2k^{2}}\right). \tag{47}\] Maximising this expression with respect to \(k\), it can then be shown that the peak growth rate for CE electron-temperature-gradient-driven microinstabilities (\(\epsilon_{e}=0\)) is \[\gamma_{\rm max}\approx\frac{3\sqrt{\pi}}{4}\eta_{e}^{1/3}\exp\left(-\eta_{e} ^{-2/3}-1\right)\omega_{\rm pe} \tag{48}\] at the wavenumber \[k_{\rm peak}\approx\frac{\eta_{e}^{1/3}}{\sqrt{2}}\left[1-\frac{\eta_{e}^{2/3 }}{2}\right]k_{\rm De}\,, \tag{49}\] whereas for CE electron-shear-driven microinstabilities (\(\eta_{e}=0\)), \[\gamma_{\rm max}\approx\frac{\sqrt{\pi}}{2}\epsilon_{e}^{1/2}\exp\left(- \epsilon_{e}^{-1}-1\right)\omega_{\rm pe} \tag{50}\] at the wavenumber \[k_{\rm peak}\approx\frac{\epsilon_{e}^{1/2}}{\sqrt{2}}\left[1-\frac{\epsilon_{e }}{2}\right]k_{\rm De}\,. 
\tag{51}\] ### Impossibility of electrostatic instabilities with 'fast' growth rates The existence of electrostatic instabilities was demonstrated in appendix (45); however, the growth rates of the exemplified instabilities were shown to be exponentially small in the parameters \(\eta_{e}\) or \(\epsilon_{e}\). In this appendix, we provide a proof that there cannot exist electrostatic instabilities whose growth rate scales algebraically with \(\eta_{s}\) or \(\epsilon_{s}\). To substantiate this claim properly, it is necessary to consider perturbations with frequencies \(\omega\) satisfying \(\omega\ll k_{\parallel}v_{\rm th\/s}\) and \(\omega\gtrsim k_{\parallel}v_{\rm th\/s}\) separately. #### d.4.1 Low-frequency electrostatic modes: \(\omega\ll k_{\parallel}v_{\rm th\/s}\) The impossibility of low-frequency electrostatic instabilities follows immediately from equation (44), which shows that the leading-order term in the \(\tilde{\omega}_{s\parallel}\ll 1\) expansion of the electrostatic component of the dielectric tensor is non-zero. It follows that the electrostatic component of the dielectric tensor is strictly positive at low frequencies. Since the electrostatic component of the dielectric tensor must vanish in order for the electrostatic dispersion relation (43) to be satisfied, we conclude that there do not exist electrostatic modes with \(\omega\ll k_{\parallel}v_{\rm th\/s}\), let alone instabilities. #### d.4.2 Other electrostatic modes: \(\omega\gtrsim k_{\parallel}v_{\rm th\/s}\) For all other electrostatic perturbations, we suppose that there exist microinstabilities with growth rates which scale algebraically with \(\eta_{s}\), \(\epsilon_{s}\), and then prove that that such an supposition is incompatible with the hot-plasma electrostatic dispersion relation. Consider some unstable perturbation satisfying the electrostatic dispersion relation (43), with complex frequency \(\omega=\varpi+{\rm i}\gamma\), and \(\gamma>0\). We then define \[\tilde{\varpi}_{s\parallel} \equiv \frac{\varpi}{k_{\parallel}v_{\rm th\/s}}, \tag{52a}\] \[\tilde{\gamma}_{s\parallel} \equiv \frac{\gamma}{k_{\parallel}v_{\rm th\/s}}, \tag{52b}\] so that \(\tilde{\omega}_{s\parallel}=\tilde{\varpi}_{s\parallel}+{\rm i}\tilde{\gamma}_{s \parallel}\). 
For unstable perturbations satisfying (46), it follows from the real and imaginary parts of the dispersion relation that \[0=k^{2}-\sum_{s}k_{\rm Ds}^{2}\Bigg{\{}\frac{1}{\sqrt{\pi}}\int_{ -\infty}^{\infty}{\rm d}\tilde{v}_{s\parallel}\int_{0}^{\infty}{\rm d}\tilde{v }_{s\perp}\sum_{n=-\infty}^{\infty}\Bigg{[}\Pi_{n}(\tilde{v}_{s\parallel}, \tilde{v}_{s\perp})\] \[\times\frac{\left(\tilde{v}_{s\parallel}-\tilde{\varpi}_{s \parallel}\right)+n/k_{\parallel}\tilde{\rho}_{s}\big{)}\,J_{n}(k_{\perp} \tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}}{\left(\tilde{v}_{s\parallel}-\tilde{ \varpi}_{s\parallel}+n/k_{\parallel}\tilde{\rho}_{s}\right)^{2}+\tilde{\gamma }_{s\parallel}^{2}}\Bigg{]}\Bigg{\}}, \tag{47a}\] \[0=\gamma\sum_{s}k_{\rm Ds}^{2}\mu_{s}^{-1/2}\Bigg{\{}\frac{1}{ \sqrt{\pi}}\int_{-\infty}^{\infty}{\rm d}\tilde{v}_{s\parallel}\int_{0}^{ \infty}{\rm d}\tilde{v}_{s\perp}\sum_{n=-\infty}^{\infty}\Bigg{[}\Pi_{n}( \tilde{v}_{s\parallel},\tilde{v}_{s\perp})\] \[\times\frac{J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s \perp})^{2}}{\left(\tilde{v}_{s\parallel}-\tilde{\varpi}_{s\parallel}+n/k_{ \parallel}\tilde{\rho}_{s}\right)^{2}+\tilde{\gamma}_{s\parallel}^{2}}\Bigg{]} \Bigg{\}}, \tag{47b}\] where \(\mu_{s}\equiv m_{e}/m_{s}\), and we have utilised the fact that the Landau contour simplifies to the real line for unstable perturbations. Using (47b), we can eliminate part of (47a) to give \[0=k^{2}-\sum_{s}k_{\rm Ds}^{2}\Bigg{\{}\frac{1}{\sqrt{\pi}}\int_ {-\infty}^{\infty}{\rm d}\tilde{v}_{s\parallel}\int_{0}^{\infty}{\rm d} \tilde{v}_{s\perp}\sum_{n=-\infty}^{\infty}\Bigg{[}\Pi_{n}(\tilde{v}_{s \parallel},\tilde{v}_{s\perp})\] \[\times\frac{\left(\tilde{v}_{s\parallel}+n/k_{\parallel}\tilde{ \rho}_{s}\right)J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}}{\left( \tilde{v}_{s\parallel}-\tilde{\varpi}_{s\parallel}+n/k_{\parallel}\tilde{\rho} _{s}\right)^{2}+\tilde{\gamma}_{s\parallel}^{2}}\Bigg{]}\Bigg{\}}. \tag{47c}\] Next, we substitute for \(\Pi_{n}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\) using \[\Pi_{n}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})=\Lambda_{s}(\tilde{v}_{s \parallel},\tilde{v}_{s\perp})+\left(\tilde{v}_{s\parallel}+\frac{n}{k_{ \parallel}\tilde{\rho}_{s}}\right)\frac{\partial\tilde{f}_{s0}}{\partial \tilde{v}_{s\perp}}, \tag{47d}\] to give \[0=k^{2}-\sum_{s}k_{\rm Ds}^{2}\Bigg{\{}\frac{1}{\sqrt{\pi}}\int_ {-\infty}^{\infty}{\rm d}\tilde{v}_{s\parallel}\int_{0}^{\infty}{\rm d} \tilde{v}_{s\perp}\sum_{n=-\infty}^{\infty}\Bigg{[}\frac{\partial\tilde{f}_{s 0}}{\partial\tilde{v}_{s\perp}}\frac{\left(\tilde{v}_{s\parallel}+n/k_{ \parallel}\tilde{\rho}_{s}\right)^{2}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{ s\perp})^{2}}{\left(\tilde{v}_{s\parallel}-\tilde{\varpi}_{s \parallel}+n/k_{\parallel}\tilde{\rho}_{s}\right)^{2}+\tilde{\gamma}_{s \parallel}^{2}}\] \[+\Lambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\frac{\left( \tilde{v}_{s\parallel}+n/k_{\parallel}\tilde{\rho}_{s}\right)J_{n}(k_{\perp} \tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}}{\left(\tilde{v}_{s\parallel}-\tilde{ \varpi}_{s\parallel}+n/k_{\parallel}\tilde{\rho}_{s}\right)^{2}+\tilde{ \gamma}_{s\parallel}^{2}}\Bigg{]}\Bigg{\}}. \tag{47e}\] This expression is very helpful for contradicting the premise of the existence of unstable electrostatic modes. We illustrate this claim with a simple example - a pure Maxwellian distribution function - before considering the CE distribution. 
For a Maxwellian distribution for which \(\Lambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})=0\), and \[\frac{\partial\tilde{f}_{s0}}{\partial\tilde{v}_{s\perp}}=-2\tilde{v}_{s\perp} \exp\left(-\tilde{v}_{s}^{2}\right), \tag{47f}\] (47i) becomes \[0=k^{2}+\sum_{s}k_{\rm Ds}^{2}\Bigg{[}\frac{2}{\sqrt{\pi}}\int_ {-\infty}^{\infty}{\rm d}\tilde{v}_{s\parallel}\int_{0}^{\infty}{\rm d} \tilde{v}_{s\perp}\tilde{v}_{s\perp}\exp\left(-\tilde{v}_{s}^{2}\right)\] \[\times\sum_{n=-\infty}^{\infty}\frac{\left(\tilde{v}_{s \parallel}+n/k_{\parallel}\tilde{\rho}_{s}\right)^{2}J_{n}(k_{\perp}\tilde{\rho }_{s}\tilde{v}_{s\perp})^{2}}{\left(\tilde{v}_{s\parallel}-\tilde{\varpi}_{s \parallel}+n/k_{\parallel}\tilde{\rho}_{s}\right)^{2}+\tilde{\gamma}_{s \parallel}^{2}}\Bigg{]}. \tag{47j}\] The integrand on the right-hand-side of (109) is strictly positive - a contradiction. Therefore, we recover the standard result that there cannot exist unstable perturbations if the underlying distribution is Maxwellian. We now consider the CE distribution (108). In order for an instability to arise, it is clear that the integrand on the right-hand-side of (109) has to be positive - and further, the contribution of the integrand from that interval has to dominate all other (negative) contributions to the total integral. To prove that these conditions cannot be satisfied for the CE distribution function, we consider the two terms in the integrand on the right-hand-side of (109) separately. For the first term, \[\frac{\partial\tilde{f}_{s0}}{\partial\tilde{v}_{s\perp}}\,\frac{\left(\tilde{ v}_{s\parallel}+n/k_{\parallel}\tilde{\rho}_{s}\right)^{2}J_{n}(k_{\perp} \tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}}{\left(\tilde{v}_{s\parallel}-\tilde{ \varpi}_{s\parallel}+n/k_{\parallel}\tilde{\rho}_{s}\right)^{2}+\tilde{\gamma }_{s\parallel}^{2}}>0 \tag{110}\] if and only if \[\frac{\partial\tilde{f}_{s0}}{\partial\tilde{v}_{s\perp}}<0. \tag{111}\] For the CE distribution function (108), \[\frac{\partial\tilde{f}_{s0}}{\partial\tilde{v}_{s\perp}} = -\tilde{v}_{s\perp}\exp\left(-\tilde{v}_{s}^{2}\right)\left\{2+ \eta_{s}\left[2\tilde{v}_{s\parallel}A_{s}(\tilde{v}_{s})-\frac{\tilde{v}_{s \parallel}}{\tilde{v}_{s}}A_{s}^{\prime}(\tilde{v}_{s})\right]\right. \tag{112}\] \[\left.+\epsilon_{s}\left[2C_{s}(\tilde{v}_{s})\left(\tilde{v}_{s \parallel}^{2}-\frac{\tilde{v}_{s\perp}^{2}}{2}+\frac{1}{2}\right)-\frac{1}{ \tilde{v}_{s}}\left(\tilde{v}_{s\parallel}^{2}-\frac{\tilde{v}_{s\perp}^{2}} {2}\right)C_{s}^{\prime}(\tilde{v}_{s})\right]\Bigg{\}}\,.\] Thus, for \(\tilde{v}_{s\perp}\lesssim 1\) and \(\tilde{v}_{s\parallel}\lesssim 1\), we see that \(\partial\tilde{f}_{s0}/\partial\tilde{v}_{s\perp}<0\), because \(\eta_{s},\epsilon_{s}\ll 1\). The only values of \(\tilde{v}_{s}\) where this inequality could be reversed are large: \(\tilde{v}_{s}\gg 1\). Assuming that \(A_{s}(\tilde{v}_{s})\sim\tilde{v}_{s}^{\iota_{\eta}}\) and \(C_{s}(\tilde{v}_{s})\sim\tilde{v}_{s}^{\iota_{s}}\) for \(\tilde{v}_{s}\gg 1\), where \(\iota_{\eta}\) and \(\iota_{\epsilon}\) are constants, it follows that for \[\tilde{v}_{s}\gtrsim\eta_{s}^{-1/(\iota_{\eta}+1)},\epsilon_{s}^{-1/(\iota_{ \epsilon}+2)}\,, \tag{113}\] the non-Maxwellian terms are comparable to the Maxwellian ones. 
However, for such \(\tilde{v}_{s}\), \[\frac{\partial\tilde{f}_{s0}}{\partial\tilde{v}_{s\perp}}\sim\eta_{s}^{-1/( \iota_{\eta}+1)}\exp\left(-\eta_{s}^{-2/(\iota_{\eta}+1)}\right),\epsilon_{s} ^{-1/(\iota_{\epsilon}+1)}\exp\left(-\epsilon_{s}^{-2/(\iota_{\epsilon}+1)} \right), \tag{114}\] while \[\frac{\left(\tilde{v}_{s\parallel}+n/k_{\parallel}\tilde{\rho}_{s}\right)^{2} J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}}{\left(\tilde{v}_{s \parallel}-\tilde{\varpi}_{s\parallel}+n/k_{\parallel}\tilde{\rho}_{s}\right) ^{2}+\tilde{\gamma}_{s\parallel}^{2}}\lesssim\frac{\tilde{\varpi}_{s \parallel}^{2}}{\tilde{\gamma}_{s\parallel}^{2}} \tag{115}\] if it is assumed that \(|\varpi|\gg|\gamma|\). Since we assumed that \(\tilde{\gamma}_{s\parallel}\) is only algebraically small in \(\epsilon_{s}\) and/or \(\eta_{s}\), we conclude that the contribution to the integrand on the right-hand-side of (109) from \(\tilde{v}_{s}\) satisfying (111) is asymptotically small compared to other contributions, and thus cannot change the sign of the total integral. For the second term, we consider the \(n\)th term of the sum independently. Recalling from (107) that \[\Lambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})=-\tilde{v}_{s\perp}\exp \left(-\tilde{v}_{s}^{2}\right)\left[\eta_{s}A_{s}(\tilde{v}_{s})-3\epsilon_{ s}C_{s}(\tilde{v}_{s})\tilde{v}_{s\parallel}\right], \tag{116}\] it follows that for \(\tilde{v}_{s}\sim 1\), \[\frac{\Lambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})}{\partial\tilde{f} _{s0}/\partial\tilde{v}_{s\perp}}\sim\frac{\eta_{s}}{\tilde{v}_{s\parallel}+n /k_{\parallel}\tilde{\rho}_{s}}\,,\,\frac{\epsilon_{s}}{\tilde{v}_{s\parallel}+n /k_{\parallel}\tilde{\rho}_{s}}\,. \tag{117}\] Thus, for \(\tilde{v}_{s}\sim 1\), the non-Maxwellian term is only comparable to the Maxwellian one for \(|\tilde{v}_{s\parallel}+n/k_{\parallel}\tilde{\rho}_{s}|\lesssim\eta_{s},\epsilon_ {s}\). However, this non-Maxwellian contribution is in fact always smaller that other non-Maxwellian contributions, which by (D.53) are in turn smaller than the equivalent Maxwellian contributions. Depending on the magnitude of \(|n/k_{\parallel}\tilde{\rho}_{s}|\), this claim is justified in two different ways. \(\bullet\)\(|n/k_{\parallel}\tilde{\rho}_{s}|\lesssim 1\): in this case, let the interval of non-dimensionalised parallel velocities \(\tilde{\overline{v}}_{s\parallel}\) satisfying \(|\tilde{v}_{s\parallel}+n/k_{\parallel}\tilde{\rho}_{s}|\lesssim\eta_{s}, \epsilon_{s}\) be denoted by \(\mathcal{I}\). Then, there exists another finite interval of \(\tilde{v}_{s\parallel}\sim 1\) such that \(|\tilde{v}_{s\parallel}+n/k_{\parallel}\tilde{\rho}_{s}|\sim 1\). 
It therefore follows that \[\int_{\mathcal{I}}\mathrm{d}\tilde{v}_{s\parallel} \Lambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\frac{ \left(\tilde{v}_{s\parallel}+n/k_{\parallel}\tilde{\rho}_{s}\right)J_{n}(k_{ \perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}}{\left(\tilde{v}_{s\parallel}- \tilde{\varpi}_{s\parallel}+n/k_{\parallel}\tilde{\rho}_{s}\right)^{2}+ \tilde{\gamma}_{s\parallel}^{2}}\] (D.54) \[\sim \eta_{s}^{2}\int_{-\infty}^{\infty}\mathrm{d}\tilde{v}_{s \parallel}\Lambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\frac{\left( \tilde{v}_{s\parallel}+n/k_{\parallel}\tilde{\rho}_{s}\right)J_{n}(k_{\perp} \tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}}{\left(\tilde{v}_{s\parallel}-\tilde{ \varpi}_{s\parallel}+n/k_{\parallel}\tilde{\rho}_{s}\right)^{2}+\tilde{\gamma }_{s\parallel}^{2}},\] where we have assumed that \(|\tilde{\varpi}_{s\parallel}|\gg|\tilde{\gamma}_{s\parallel}|\) (and also \(|\tilde{\varpi}_{s\parallel}|\gtrsim 1\)). The claim immediately follows. \(\bullet\)\(|n/k_{\parallel}\tilde{\rho}_{s}|\gg 1\): in this case, it follows immediately that \(|\tilde{v}_{s\parallel}+n/k_{\parallel}\tilde{\rho}_{s}|\lesssim\eta_{s}, \epsilon_{s}\) if and only if \(\tilde{v}_{s\parallel}\gg 1\). Via a similar argument to that presented for large \(\tilde{v}_{s\parallel}\) for the first term in the integrand on the right-hand-side of (D.43), contributions to the total integral will be exponentially small in \(\eta_{s},\epsilon_{s}\), and thus are unable to reverse the sign of the total integral. Thus, we have confirmed that there cannot exist electrostatic instabilities with growth rates which are algebraic in small parameters \(\eta_{s},\epsilon_{s}\). ## Appendix E Weak growth of high-frequency perturbations In this appendix, we present an argument that all perturbations in a CE plasma with complex frequency \(\omega=\varpi+\mathrm{i}\gamma\) satisfying the 'high-frequency' conditions \(|\omega|\gtrsim k_{\parallel}v_{\mathrm{th}}\) and \(|\varpi|\gg|\gamma|\) for all particle species have a growth rate that is at most exponentially small in \(\eta_{s}\), and \(\epsilon_{s}\). This argument does not prove that all perturbations satisfying \(|\omega|\gtrsim k_{\parallel}v_{\mathrm{th}s}\) in a CE plasma are stable, in that it does not apply to perturbations whose damping or growth rate is not small compared to their frequency. ### Deriving conditions for stability We begin with the result that for any linear electromagnetic perturbation with real frequency \(\varpi>0\), growth rate \(\gamma\), wavevector \(\mathbf{k}\), and electric-field perturbation \[\delta\mathbf{E}=\widehat{\delta\mathbf{E}}\exp\left\{\mathrm{i}\left(\mathbf{k} \mathbf{\cdot}\mathbf{r}-\varpi t\right)+\gamma t\right\},\] (E.1) the dissipation rate \(\mathfrak{Q}\) of the perturbation is related to the anti-Hermitian part of the plasma dielectric tensor evaluated at the perturbation's real frequency (Pitaevskii & Lifshitz 1981): \[\mathfrak{Q}=\mathrm{i}\varpi\widehat{\delta\mathbf{E}}^{*}\mathbf{\cdot}\mathbf{\cdot}\mathbf{ \cdot}\mathbf{\cdot}\mathbf{\cdot}\mathbf{\cdot}\mathbf{\cdot}\widehat{\delta\mathbf{E}}\,,\] (E.2) where the anti-Hermitian part \(\mathbf{\cdot}\mathbf{\cdot}^{A}\) is defined by \[\mathbf{\cdot}\mathbf{\cdot}^{A}=\frac{1}{2}\left(\mathbf{\cdot}\mathbf{\cdot}-\mathbf{\cdot}\mathbf{ \cdot}^{\dagger}\right),\] (E.3) with \(\mathbf{\cdot}\mathbf{\cdot}^{\dagger}\) representing the conjugate transpose of \(\mathbf{\cdot}\mathbf{\cdot}\). 
If the mode is damped, then the dissipation rate is positive: \(\mathfrak{Q}>0\). Since \(\mathbf{\cdot}^{A}\) is anti-Hermitian, it is diagonalisable in some orthonormal basis \(\{\hat{\mathbf{e}}_{a},\hat{\mathbf{e}}_{b},\hat{\mathbf{e}}_{c}\}\), with imaginary eigenvalues \((-\mathrm{i}\varsigma_{a},-\mathrm{i}\varsigma_{b},-\mathrm{i}\varsigma_{c})\), where \(\varsigma_{a}\), \(\varsigma_{b}\), and \(\varsigma_{c}\) are real numbers. The dissipation rate \(\mathfrak{Q}\) can be written in terms of these eigenvectors as \[\mathfrak{Q}=\varpi\left(\varsigma_{a}\left|\hat{\boldsymbol{e}}_{a}\boldsymbol{ \cdot}\widehat{\delta\boldsymbol{E}}\right|^{2}+\varsigma_{b}\left|\hat{ \boldsymbol{e}}_{b}\boldsymbol{\cdot}\widehat{\delta\boldsymbol{E}}\right|^{2 }+\varsigma_{c}\left|\hat{\boldsymbol{e}}_{c}\boldsymbol{\cdot}\widehat{\delta \boldsymbol{E}}\right|^{2}\right)\,. \tag{100}\] Thus, for unstable perturbations to exist, it must be the case that at least one of the numbers \(\varsigma_{a}\), \(\varsigma_{b}\), and \(\varsigma_{c}\) has to be negative (without loss of generality, we will assume \(\varsigma_{a}<0\)); if this is the case, then the dissipation rate (and hence the growth rate) is a linear function of \(\varsigma_{a}\). We will show that if \(|\omega|\gtrsim k_{\parallel}v_{\mathrm{th}s}\), \(\varsigma_{a}\), \(\varsigma_{b}\), and \(\varsigma_{c}\) can only be negative if they are exponentially small in \(\eta_{s}\) and \(\epsilon_{s}\). To prove this, consider the characteristic polynomial \[\varrho(\varsigma)\equiv\det\left[\boldsymbol{\mathfrak{E}}^{A}(\boldsymbol{k },\varpi)-\varsigma\boldsymbol{I}\right] \tag{101}\] of \(\boldsymbol{\mathfrak{E}}^{A}\) evaluated at the real frequency \(\varpi\) and wavevector \(\boldsymbol{k}\); it is a cubic, and thus can be written \[\varrho(\varsigma)=-\varsigma^{3}-\mathrm{i}\varrho_{2}\varsigma^{2}+\varrho _{1}\varsigma+\mathrm{i}\varrho_{0}\,, \tag{102}\] where \(\varrho_{0}\), \(\varrho_{1}\), and \(\varrho_{2}\) depend on \(\boldsymbol{\mathfrak{E}}^{A}\). Since \(\boldsymbol{\mathfrak{E}}^{A}\) has eigenvalues \((-\mathrm{i}\varsigma_{a},-\mathrm{i}\varsigma_{b},-\mathrm{i}\varsigma_{c})\), it follows that \[\varrho(\varsigma) =-\left(\varsigma+\mathrm{i}\varsigma_{a}\right)\left(\varsigma+ \mathrm{i}\varsigma_{b}\right)\left(\varsigma+\mathrm{i}\varsigma_{c}\right)\] \[=-\varsigma^{3}-\mathrm{i}\varsigma^{2}\left(\varsigma_{a}+ \varsigma_{b}+\varsigma_{c}\right)+\varsigma\left(\varsigma_{a}\varsigma_{b}+ \varsigma_{b}\varsigma_{c}+\varsigma_{c}\varsigma_{a}\right)+\mathrm{i}\varsigma _{a}\varsigma_{b}\varsigma_{c}, \tag{103}\] and so \[\varrho_{0} =\varsigma_{a}\varsigma_{b}\varsigma_{c}, \tag{104a}\] \[\varrho_{1} =\varsigma_{a}\varsigma_{b}+\varsigma_{b}\varsigma_{c}+\varsigma_{ c}\varsigma_{a},\] (104b) \[\varrho_{2} =\varsigma_{a}+\varsigma_{b}+\varsigma_{c}. \tag{104c}\] This implies that \(\varsigma_{a}\), \(\varsigma_{b}\), and \(\varsigma_{c}\) are positive if \(\varrho_{0}\), \(\varrho_{1}\), and \(\varrho_{2}\) are positive. Furthermore, \(\varrho_{0}\), \(\varrho_{1}\), and \(\varrho_{2}\) can be used to provide bounds for \(\varsigma_{a}\), \(\varsigma_{b}\), and \(\varsigma_{c}\) using an inequality discovered by Laguerre (1880): \[\varsigma_{-}\leq\varsigma_{a},\varsigma_{b},\varsigma_{c}\leq\varsigma_{+}, \tag{105}\] where \[\varsigma_{\pm}=-\frac{\varrho_{2}}{3}\pm\frac{2}{3}\sqrt{\varrho_{2}^{2}-3 \varrho_{1}^{2}}. 
\tag{106}\] In particular, the expression (106) for the root bounds implies that if \(\varrho_{1}\) and \(\varrho_{2}\) are exponentially small in \(\eta_{s}\) and \(\epsilon_{s}\), then so are \(\varsigma_{a}\), \(\varsigma_{b}\), and \(\varsigma_{c}\). We can also evaluate \(\varrho(\varsigma)\) in terms of the components of \(\boldsymbol{\mathfrak{E}}^{A}\) in the coordinate basis \(\{\hat{\boldsymbol{x}},\hat{\boldsymbol{y}},\hat{\boldsymbol{z}}\}\): \[\varrho(\varsigma) =-\varsigma^{3}+\varsigma^{2}\left(\boldsymbol{\mathfrak{E}}_{xx }^{A}+\boldsymbol{\mathfrak{E}}_{yy}^{A}+\boldsymbol{\mathfrak{E}}_{zz}^{A}\right)\] \[\quad-\varsigma\left(\boldsymbol{\mathfrak{E}}_{xx}^{A} \boldsymbol{\mathfrak{E}}_{yy}^{A}+\boldsymbol{\mathfrak{E}}_{yy}^{A} \boldsymbol{\mathfrak{E}}_{zz}^{A}+\boldsymbol{\mathfrak{E}}_{zz}^{A} \boldsymbol{\mathfrak{E}}_{xx}^{A}+(\boldsymbol{\mathfrak{E}}_{xy}^{A})^{2}+( \boldsymbol{\mathfrak{E}}_{yz}^{A})^{2}+(\boldsymbol{\mathfrak{E}}_{xz}^{A})^{ 2}\right)+\det\boldsymbol{\mathfrak{E}}^{A}, \tag{107}\] where we have used the symmetries (86) of the dielectric tensor to give \(\varrho(\varsigma)\) in terms of only the (six) independent components of \(\boldsymbol{\mathfrak{E}}^{A}\). (107) gives \[\varrho_{0} =-\mathrm{idet}\,\boldsymbol{\mathfrak{E}}^{A}, \tag{108a}\] \[\varrho_{1} =-\boldsymbol{\mathfrak{E}}_{xx}^{A}\boldsymbol{\mathfrak{E}}_{yy} ^{A}-\boldsymbol{\mathfrak{E}}_{yy}^{A}\boldsymbol{\mathfrak{E}}_{zz}^{A}- \boldsymbol{\mathfrak{E}}_{zz}^{A}\boldsymbol{\mathfrak{E}}_{xx}^{A}-( \boldsymbol{\mathfrak{E}}_{xy}^{A})^{2}-(\boldsymbol{\mathfrak{E}}_{yz}^{A})^{2} -(\boldsymbol{\mathfrak{E}}_{xz}^{A})^{2},\] (108b) \[\varrho_{2} =-\mathrm{i}\left(\boldsymbol{\mathfrak{E}}_{xx}^{A}+ \boldsymbol{\mathfrak{E}}_{yy}^{A}+\boldsymbol{\mathfrak{E}}_{zz}^{A}\right). \tag{108c}\] The anti-Hermiticity of \(\mathfrak{E}^{A}\) implies that \(\mathrm{Im}\,\mathfrak{E}^{A}_{xx}=-\mathrm{i}\mathfrak{E}^{A}_{xx}\), \(\mathrm{Im}\,\mathfrak{E}^{A}_{yy}=-\mathrm{i}\mathfrak{E}^{A}_{yy}\), \(\mathrm{Im}\,\mathfrak{E}^{A}_{zz}=-\mathrm{i}\mathfrak{E}^{A}_{zz}\), and \(\mathrm{Im}\,\mathfrak{E}^{A}_{xz}=-\mathrm{i}\mathfrak{E}^{A}_{xz}\), while \(\mathrm{Re}\,\mathfrak{E}^{A}_{xy}=\mathfrak{E}^{A}_{xy}\) and \(\mathrm{Re}\,\mathfrak{E}^{A}_{yz}=\mathfrak{E}^{A}_{yz}\), as is indeed necessary for \(\varrho_{0}\), \(\varrho_{1}\), and \(\varrho_{2}\) to be real numbers. Thus, in order to establish stability it is sufficient for our purposes to show that \[\mathrm{i}\det\mathfrak{E}^{A} < 0, \tag{13a}\] \[\mathfrak{E}^{A}_{xx}\mathfrak{E}^{A}_{yy}+\mathfrak{E}^{A}_{yy} \mathfrak{E}^{A}_{zz}+\mathfrak{E}^{A}_{zz}\mathfrak{E}^{A}_{xx}+(\mathfrak{E} ^{A}_{xy})^{2}+(\mathfrak{E}^{A}_{yz})^{2}+(\mathfrak{E}^{A}_{xz})^{2}<0,\] (13b) \[\mathrm{i}\left(\mathfrak{E}^{A}_{xx}+\mathfrak{E}^{A}_{yy}+ \mathfrak{E}^{A}_{zz}\right) < 0. \tag{13c}\] When these inequalities are not strictly satisfied, then we can instead estimate the magnitude of (13b) and (13c) to determine bounds for \(\varsigma_{a}\), \(\varsigma_{b}\), and \(\varsigma_{c}\). 
### Evaluating conditions for stability Combining equations (73) with (76) gives an expression for the general plasma dielectric tensor (assuming \(k_{\parallel}>0\) without loss of generality): \[\mathfrak{E}=\boldsymbol{I}+\sum_{s}\frac{\omega_{ps}^{2}}{ \omega^{2}}\biggl{[}\frac{2}{\sqrt{\pi}}\int_{-\infty}^{\infty}\mathrm{d} \tilde{v}_{s\parallel}\,\tilde{v}_{s\parallel}\int_{0}^{\infty}\mathrm{d} \tilde{v}_{s\perp}\Lambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp}) \hat{\boldsymbol{z}}\hat{\boldsymbol{z}}\] \[\quad+\tilde{\omega}_{s\parallel}\frac{2}{\sqrt{\pi}}\int_{C_{L} }\mathrm{d}\tilde{v}_{s\parallel}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp }\tilde{v}_{s\perp}^{2}\Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp}) \sum_{n=-\infty}^{\infty}\frac{\boldsymbol{R}_{sn}}{\zeta_{sn}-\tilde{v}_{s \parallel}}\biggr{]}\,, \tag{13d}\] where all salient quantities are defined in section 2.4.1. Now evaluating the anti-Hermitian part of (13d) for \(\omega=\varpi\), \(\tilde{\omega}_{s\parallel}=\tilde{\varpi}_{s\parallel}\), we find \[\mathfrak{E}^{A}=-\mathrm{i}\sum_{s}\frac{\omega_{ps}^{2}}{\varpi^{2}}\biggl{[} 2\sqrt{\pi}\tilde{\varpi}_{s\parallel}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{ s\perp}\tilde{v}_{s\perp}^{2}\sum_{n=-\infty}^{\infty}\Xi_{s}(\zeta_{sn}, \tilde{v}_{s\perp})\boldsymbol{R}_{sn}(\zeta_{sn},\tilde{v}_{s\perp})\biggr{]}\,. \tag{13e}\] We now consider stability conditions (13b) in turn. First evaluating (13c), it can be shown that \[\mathrm{i}\left(\mathfrak{E}^{A}_{xx}\right. + \left.\mathfrak{E}^{A}_{yy}\right.+\left.\mathfrak{E}^{A}_{zz} \right)=2\sqrt{\pi}\sum_{s}\frac{\omega_{ps}^{2}}{\varpi^{2}}\tilde{\varpi}_{ s\parallel}\sum_{n=-\infty}^{\infty}\bigg{\{}\int_{0}^{\infty}\mathrm{d} \tilde{v}_{s\perp}\tilde{v}_{s\perp}^{2}\Xi_{s}(\zeta_{sn},\tilde{v}_{s\perp}) \tag{13f}\] \[\times \left[\frac{n^{2}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s \perp})^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}\tilde{v}_{s\perp}^{2}}+J_{n}^{ \prime}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}+\frac{\zeta_{sn}^{ 2}}{\tilde{v}_{s\perp}^{2}}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp}) ^{2}\right]\bigg{\}}\,.\] It is clear that the right-hand-side (13f) is negative if \[\Xi_{s}(\zeta_{sn},\tilde{v}_{s\perp})<0\,. \tag{13g}\] For a Maxwellian distribution, \[\Xi_{s}(\zeta_{sn},\tilde{v}_{s\perp})=\frac{\partial\tilde{f}_{s0}}{ \partial\tilde{v}_{s\perp}}(\zeta_{sn},\tilde{v}_{s\perp})=-2\tilde{v}_{s \perp}\exp\left(-\tilde{v}_{s\perp}^{2}\right)\exp\left(-\zeta_{sn}^{2}\right) <0\,, \tag{13h}\] and thus \(\mathrm{i}\left(\mathfrak{E}^{A}_{xx}+\mathfrak{E}^{A}_{yy}+\mathfrak{E}^{A}_{ zz}\right)<0\), as required. For the CE distribution (71), \[\Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})=-\tilde{v}_{s \perp}\exp\left(-\tilde{v}_{s}^{2}\right)\left\{2+\eta_{s}\left[2\tilde{v}_{s \parallel}A_{s}(\tilde{v}_{s})-\frac{\tilde{v}_{s\parallel}}{\tilde{v}_{s}}A_{ s}^{\prime}(\tilde{v}_{s})\right]\right.\] \[\left.+\epsilon_{s}\left[2C_{s}(\tilde{v}_{s})\left(\tilde{v}_{s \parallel}^{2}-\frac{\tilde{v}_{s\perp}^{2}}{2}+\frac{1}{2}\right)-\frac{1}{ \tilde{v}_{s}}\left(\tilde{v}_{s\parallel}^{2}-\frac{\tilde{v}_{s\perp}^{2}}{ 2}\right)C_{s}^{\prime}(\tilde{v}_{s})\right]\right\}\] \[-\frac{\tilde{v}_{s\perp}}{\tilde{\omega}_{s\parallel}}\exp\left(-\tilde{v}_{s}^{2} \right)\left[\eta_{s}A_{s}(\tilde{v}_{s})-3\epsilon_{s}C_{s}(\tilde{v}_{s}) \tilde{v}_{s\parallel}\right]\,. 
\tag{119}\] For \(|\tilde{\omega}_{s\parallel}|\gtrsim 1\), it is clear for \(\tilde{v}_{s}\lesssim 1\) that the largest contribution to \(\Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\) comes from the Maxwellian term; the non-Maxwellian terms are \(\,O(\eta_{s},\epsilon_{s})\). Thus, for \(\zeta_{sn},\tilde{v}_{s\perp}\lesssim 1\), \(\Xi_{s}(\zeta_{sn},\tilde{v}_{s\perp})<0\). As discussed in appendix (D.4.2), for \(\zeta_{sn}\gg 1\), the sign of \(\Xi_{s}(\zeta_{sn},\tilde{v}_{s\perp})<0\) can in principle be reversed. However, the magnitude of \(\Xi_{s}(\zeta_{sn},\tilde{v}_{s\perp})\) is exponentially small for such \(\zeta_{sn}\), and thus so is \(\varrho_{2}\). The remaining conditions (101_a_) and (101_b_) are much more tedious to treat; thus for simplicity, we explicitly consider only the case when a single particle species provides the dominant contribution to the dielectric tensor. Under this assumption, it can be shown that \[{\mathfrak{E}}^{A}_{xx}{\mathfrak{E}}^{A}_{yy} + {\mathfrak{E}}^{A}_{yy}{\mathfrak{E}}^{A}_{zz}\,+\,{\mathfrak{E}} ^{A}_{zz}{\mathfrak{E}}^{A}_{xx}+({\mathfrak{E}}^{A}_{xy})^{2}+({\mathfrak{E} }^{A}_{yz})^{2}+({\mathfrak{E}}^{A}_{xz})^{2} \tag{120}\] \[= 2\pi\frac{\omega_{ps}^{4}}{\varpi^{4}}\widetilde{\omega}_{s \parallel}^{2}\sum_{m=-\infty}^{\infty}\sum_{n=-\infty}^{\infty}\bigg{\{}\int_ {0}^{\infty}{\rm d}\tilde{v}_{s\perp}^{(1)}\int_{0}^{\infty}{\rm d}\tilde{v}_{ s\perp}^{(2)}\,\tilde{v}_{s\perp}^{(1)}\tilde{v}_{s\perp}^{(2)}\] \[\times\bigg{[}\Xi_{s}(\zeta_{sm},\tilde{v}_{s\perp}^{(1)})\Xi_{s} (\zeta_{sn},\tilde{v}_{s\perp}^{(2)}){\mathfrak{A}}\,_{mn}\left(\alpha_{s}, \tilde{v}_{s\perp}^{(1)},\tilde{v}_{s\perp}^{(2)}\right)\biggr{]}\bigg{\}}\,,\] where \(\alpha_{s}\equiv k_{\perp}\tilde{\rho}_{s}\) and \[{\mathfrak{A}}\,_{mn}\left(\alpha_{s},\tilde{v}_{s\perp}^{(1)}, \tilde{v}_{s\perp}^{(2)}\right) \tag{121}\] \[\equiv \frac{1}{\alpha_{s}^{2}}\left[m\tilde{v}_{s\perp}^{(2)}J_{m}( \alpha_{s}\tilde{v}_{s\perp}^{(1)})J^{\prime}_{n}(\alpha_{s}\tilde{v}_{s\perp }^{(2)})-n\tilde{v}_{s\perp}^{(1)}J^{\prime}_{m}(\alpha_{s}\tilde{v}_{s\perp }^{(1)})J_{n}(\alpha_{s}\tilde{v}_{s\perp}^{(2)})\right]^{2}\] \[+\frac{1}{\alpha_{s}^{2}}\left[m\zeta_{sn}\tilde{v}_{s\perp}^{(2) }J_{m}(\alpha_{s}\tilde{v}_{s\perp}^{(1)})J^{\prime}_{n}(\alpha_{s}\tilde{v}_{ s\perp}^{(2)})-n\zeta_{sm}\tilde{v}_{s\perp}^{(1)}J^{\prime}_{m}(\alpha_{s} \tilde{v}_{s\perp}^{(1)})J_{n}(\alpha_{s}\tilde{v}_{s\perp}^{(2)})\right]^{2}\] \[+\left[\zeta_{sn}\tilde{v}_{s\perp}^{(2)}J_{m}(\alpha_{s}\tilde{ v}_{s\perp}^{(1)})J^{\prime}_{n}(\alpha_{s}\tilde{v}_{s\perp}^{(2)})-\zeta_{sm} \tilde{v}_{s\perp}^{(1)}J^{\prime}_{m}(\alpha_{s}\tilde{v}_{s\perp}^{(1)})J_{ n}(\alpha_{s}\tilde{v}_{s\perp}^{(2)})\right]^{2}\,.\] Being a sum of positive terms, \({\mathfrak{A}}\,_{mn}\) is positive for all \(n\) and \(m\), and thus we again conclude that the integrand on the right-hand side of (120) is negative if \(\,\Xi_{s}(\zeta_{sm},\tilde{v}_{s\perp})<0\) and \(\Xi_{s}(\zeta_{sn},\tilde{v}_{s\perp})<0\). Via similar reasoning to that applied to \(\varrho_{2}\) in the previous paragraph, it follows that for the CE distribution function, the only way in which this condition can be violated is for either \(\zeta_{sm}\gg 1\) or \(\zeta_{sn}\gg 1\) - both of which give rise to exponentially small terms. Thus, either \(\varrho_{1}>0\) or \(\varrho_{1}\) is exponentially small in \(\eta_{s}\) and \(\epsilon_{s}\). 
Finally, for (101_a_), it is necessary to evaluate \(\det{\mathfrak{E}}^{A}\); this becomes (after much tedious algebra) \[\det {\mathfrak{E}}^{A}=-\frac{4}{3}{\rm i}\pi^{3/2}\frac{\omega_{ps}^{ 6}}{\varpi^{6}}\,\widetilde{\varpi}_{s\parallel}^{3} \tag{122}\] \[\times \sum_{m=-\infty}^{\infty}\sum_{n=-\infty}^{\infty}\sum_{l=-\infty }^{\infty}\bigg{\{}\int_{0}^{\infty}{\rm d}\tilde{v}_{s\perp}^{(1)}\int_{0}^{ \infty}{\rm d}\tilde{v}_{s\perp}^{(2)}\int_{0}^{\infty}{\rm d}\tilde{v}_{s \perp}^{(3)}\,\tilde{v}_{s\perp}^{(1)}\tilde{v}_{s\perp}^{(2)}\tilde{v}_{s \perp}^{(3)}\] \[\times \left[\Xi_{s}(\zeta_{sm},\tilde{v}_{s\perp}^{(1)})\Xi_{s}(\zeta_ {sn},\tilde{v}_{s\perp}^{(2)})\Xi_{s}(\zeta_{sl},\tilde{v}_{s\perp}^{(3)}){ \mathfrak{B}}\,_{mnl}\left(\alpha_{s},\tilde{v}_{s\perp}^{(1)},\tilde{v}_{s \perp}^{(2)},\tilde{v}_{s\perp}^{(3)}\right)\right]\bigg{\}}\,,\] where \[{\mathfrak{B}}\,_{mnl}\left(\alpha_{s},\tilde{v}_{s\perp}^{(1)}, \tilde{v}_{s\perp}^{(2)},\tilde{v}_{s\perp}^{(3)}\right) \tag{123}\] \[\equiv \bigg{\{}mJ_{m}(\alpha_{s}\tilde{v}_{s\perp}^{(1)})\left[\tilde{v}_ {s\perp}^{(1)}\zeta_{sn}J_{n}(\alpha_{s}\tilde{v}_{s\perp}^{(2)})J^{\prime}_{l}( \alpha_{s}\tilde{v}_{s\perp}^{(3)})-\tilde{v}_{s\perp}^{(3)}\zeta_{sl}J^{ \prime}_{n}(\alpha_{s}\tilde{v}_{s\perp}^{(2)})J_{l}(\alpha_{s}\tilde{v}_{s \perp}^{(3)})\right]\] \[+nJ_{n}(\alpha_{s}\tilde{v}_{s\perp}^{(1)})\left[\tilde{v}_{s\perp}^{(2)} \zeta_{sl}J_{l}(\alpha_{s}\tilde{v}_{s\perp}^{(2)})J_{m}^{\prime}(\alpha_{s} \tilde{v}_{s\perp}^{(3)})-\tilde{v}_{s\perp}^{(1)}\zeta_{sm}J_{l}^{\prime}( \alpha_{s}\tilde{v}_{s\perp}^{(2)})J_{m}(\alpha_{s}\tilde{v}_{s\perp}^{(3)})\right]\] \[+lJ_{l}(\alpha_{s}\tilde{v}_{s\perp}^{(1)})\left[\tilde{v}_{s\perp }^{(3)}\zeta_{sm}J_{m}(\alpha_{s}\tilde{v}_{s\perp}^{(2)})J_{n}^{\prime}(\alpha _{s}\tilde{v}_{s\perp}^{(3)})\right.\] \[\left.-\tilde{v}_{s\perp}^{(2)}\zeta_{sn}J_{m}^{\prime}(\alpha_{s} \tilde{v}_{s\perp}^{(2)})J_{n}(\alpha_{s}\tilde{v}_{s\perp}^{(3)})\right] \bigg{\}}^{2}\,. \tag{100}\] Similarly to \(\mathfrak{A}_{mn}\), \(\mathfrak{B}_{mnl}\) is strictly positive for all \(m\), \(n\) and \(l\), meaning that the integrand on the right-hand side of (100) is negative if \(\Xi_{s}(\zeta_{sm},\tilde{v}_{s\perp})<0\), \(\Xi_{s}(\zeta_{sn},\tilde{v}_{s\perp})<0\), and \(\Xi_{s}(\zeta_{sl},\tilde{v}_{s\perp})<0\). For the CE distribution, exactly the same argument as before applies to show that either \(\varrho_{0}>0\) or it is exponentially small. In summary, we have now verified that the only situation in which the stability conditions (101) are not satisfied are those for which \(\varrho_{0}\), \(\varrho_{1}\) and \(\varrho_{2}\) are exponentially small in \(\eta_{s}\) and \(\epsilon_{s}\). In the latter case, considerations of bounds (100) and (102) implies that \(\varsigma_{a}\), \(\varsigma_{b}\), and \(\varsigma_{c}\) are also all exponentially small in \(\eta_{s}\) and \(\epsilon_{s}\). The claim of the appendix follows. Appendix F Properties of leading-order expansion \(\mathfrak{E}^{(0)}\) of dielectric tensor (73) in \(\tilde{\omega}_{s\parallel}\ll 1\) for a weakly anisotropic distribution function Symmetries of \(\mathfrak{E}^{(0)}_{s}\) in coordinate basis \(\{\hat{\boldsymbol{x}},\hat{\boldsymbol{y}},\hat{\boldsymbol{z}}\}\) In this appendix, we show that the leading-order expansion \(\mathfrak{E}^{(0)}_{s}\) [cf. (99_a_)] of the dielectric tensor \(\mathfrak{E}_{s}\) of species \(s\) [cf. 
(95)] in \(\tilde{\omega}_{s\parallel}\ll 1\) arising in a non-relativistic plasma with only weak anisotropy of its particle distribution function obeys additional symmetries (100), viz., \[(\mathfrak{E}^{(0)}_{s})_{xz} = -\frac{k_{\perp}}{k_{\parallel}}(\mathfrak{E}^{(0)}_{s})_{xx}\,, \tag{101a}\] \[(\mathfrak{E}^{(0)}_{s})_{yz} = \frac{k_{\perp}}{k_{\parallel}}(\mathfrak{E}^{(0)}_{s})_{xy}\,,\] (101b) \[(\mathfrak{E}^{(0)}_{s})_{zz} = \frac{k_{\perp}^{2}}{k_{\parallel}^{2}}(\mathfrak{E}^{(0)}_{s})_{ xx}\,. \tag{101c}\] when \(k\rho_{s}\sim 1\). The term 'weak anisotropy' means that the magnitude of angular anisotropy - mathematically represented by the function \(\Lambda_{s}\) defined by (83) - satisfies \(\Lambda_{s}\lesssim\tilde{\omega}_{s\parallel}\) for all particle species when \(\tilde{v}_{s}\sim 1\). We begin the proof by substituting (76) into (95) to give \[\mathfrak{E}_{s} \equiv \frac{\omega_{\rm ps}^{2}}{\omega^{2}}\bigg{[}\frac{2}{\sqrt{\pi} }\frac{k_{\parallel}}{|k_{\parallel}|}\int_{-\infty}^{\infty}{\rm d}\tilde{v}_ {s\parallel}\,\tilde{v}_{s\parallel}\int_{0}^{\infty}{\rm d}\tilde{v}_{s\perp }\,\Lambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\hat{\boldsymbol{z}} \hat{\boldsymbol{z}} \tag{102}\] \[+\tilde{\omega}_{s\parallel}\frac{2}{\sqrt{\pi}}\int_{C_{L}}{\rm d }\tilde{v}_{s\parallel}\int_{0}^{\infty}{\rm d}\tilde{v}_{s\perp}\tilde{v}_{s \perp}^{2}\,\boldsymbol{\Xi}_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp}) \sum_{n=-\infty}^{\infty}\frac{\boldsymbol{\mathcal{R}}_{sn}}{\zeta_{sn}- \tilde{v}_{s\parallel}}\bigg{]}\,.\] Then, under the assumed ordering \(\tilde{\omega}_{s\parallel}\sim\Lambda_{s}\), the function \(\Xi_{s}\) defined by (84) satisfies \(\Xi_{s}\sim 1\) for \(\tilde{v}_{s}\sim 1\); therefore, \(\mathfrak{E}_{s}\) has order-unity elements as \(\tilde{\omega}_{s\parallel}\to 0\). 
Let us expand \(\mathfrak{E}_{s}\) in a Taylor series around \(\tilde{\omega}_{s\parallel}=0\): \[\mathfrak{E}_{s}=\tilde{\omega}_{s\parallel}\mathfrak{E}^{(0)}_{s}+\delta \mathfrak{E}_{s}, \tag{103}\] where \(\delta\mathfrak{E}_{s}=\textit{O}(\tilde{\omega}_{s\parallel}^{2})\), and the matrix elements of \(\mathfrak{E}_{s}^{(0)}\) are given below: \[(\mathfrak{E}_{s}^{(0)})_{xx}\equiv-\frac{2\omega_{ps}^{2}}{\sqrt{ \pi}\omega^{2}}\sum_{n=-\infty}^{\infty}\Bigg{[}\frac{n^{2}}{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}\int_{C_{L}}\frac{\mathrm{d}\tilde{v}_{s\parallel}}{ \tilde{v}_{s\parallel}+n/|k_{\parallel}|\tilde{\rho}_{s}}\] \[\times\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\Xi_{s}( \tilde{v}_{s\parallel},\tilde{v}_{s\perp})J_{n}(k_{\perp}\tilde{\rho}_{s} \tilde{v}_{s\perp})^{2}\Bigg{]}, \tag{11a}\] \[(\mathfrak{E}_{s}^{(0)})_{xy}\equiv-\frac{2\mathrm{i}\omega_{ps}^{2}}{\sqrt{ \pi}\omega^{2}}\sum_{n=-\infty}^{\infty}\Bigg{[}\frac{n}{k_{\perp}\tilde{\rho }_{s}}\int_{C_{L}}\frac{\mathrm{d}\tilde{v}_{s\parallel}}{\tilde{v}_{s \parallel}+n/|k_{\parallel}|\tilde{\rho}_{s}}\] \[\times\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\,\tilde{v}_{ s\perp}\Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})J_{n}(k_{\perp}\tilde{ \rho}_{s}\tilde{v}_{s\perp})J_{n}^{\prime}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{ s\perp})\Bigg{]}\,,\] (11b) \[(\mathfrak{E}_{s}^{(0)})_{xx}\equiv-\frac{2\omega_{ps}^{2}}{\sqrt{ \pi}\omega^{2}}\sum_{n=-\infty}^{\infty}\Bigg{[}\frac{n}{k_{\perp}\tilde{\rho }_{s}}\int_{C_{L}}\frac{\tilde{v}_{s\parallel}\mathrm{d}\tilde{v}_{s\parallel} }{\tilde{v}_{s\parallel}+n/|k_{\parallel}|\tilde{\rho}_{s}}\] \[\times\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\Xi_{s}(\tilde{ v}_{s\parallel},\tilde{v}_{s\perp})J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s \perp})^{2}\Bigg{]},\] (11c) \[(\mathfrak{E}_{s}^{(0)})_{yx}\equiv-(\mathfrak{E}_{s}^{(0)})_{xy},\] (11d) \[(\mathfrak{E}_{s}^{(0)})_{yy}\equiv-\frac{2\omega_{ps}^{2}}{\sqrt {\pi}\omega^{2}}\sum_{n=-\infty}^{\infty}\Bigg{[}\int_{C_{L}}\frac{\mathrm{d} \tilde{v}_{s\parallel}}{\tilde{v}_{s\parallel}+n/|k_{\parallel}|\tilde{\rho}_{ s}}\] \[\times\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\,\tilde{v}_{ s\perp}^{2}\Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})J_{n}^{\prime}(k_{ \perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}\Bigg{]},\] (11e) \[(\mathfrak{E}_{s}^{(0)})_{yz}\equiv-\frac{2\mathrm{i}\omega_{ps}^{2}}{ \sqrt{\pi}\omega^{2}}\sum_{n=-\infty}^{\infty}\Bigg{[}\int_{C_{L}}\frac{\tilde {v}_{s\parallel}\mathrm{d}\tilde{v}_{s\parallel}}{\tilde{v}_{s\parallel}+n/|k_ {\parallel}|\tilde{\rho}_{s}}\] \[\times\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\,\tilde{v}_{ s\perp}\Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})J_{n}(k_{\perp}\tilde{ \rho}_{s}\tilde{v}_{s\perp})J_{n}^{\prime}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_ {s\perp})\Bigg{]}\,,\] (11f) \[(\mathfrak{E}_{s}^{(0)})_{zx}\equiv(\mathfrak{E}_{s}^{(0)})_{xz},\] (11g) \[(\mathfrak{E}_{s}^{(0)})_{zy}\equiv-(\mathfrak{E}_{s}^{(0)})_{yz},\] (11h) \[(\mathfrak{E}_{s}^{(0)})_{zz}\equiv\frac{2\omega_{ps}^{2}}{\sqrt {\pi}\omega_{s\parallel}\omega^{2}}\int_{-\infty}^{\infty}\mathrm{d}\tilde{v}_ {s\parallel}\tilde{v}_{s\parallel}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp} \Lambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\] \[-\frac{2\omega_{ps}^{2}}{\sqrt{\pi}\omega^{2}}\sum_{n=-\infty}^{ \infty}\int_{C_{L}}\frac{\tilde{v}_{s\parallel}^{2}\mathrm{d}\tilde{v}_{s \parallel}}{\tilde{v}_{s\parallel}+n/|k_{\parallel}|\tilde{\rho}_{s}}\int_{0}^{ 
\infty}\mathrm{d}\tilde{v}_{s\perp}\Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_ {s\perp})J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}. \tag{11i}\] Next, noting that \[\frac{\tilde{v}_{s\parallel}}{\tilde{v}_{s\parallel}+n/|k_{\parallel}|\tilde{ \rho}_{s}}=1-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\frac{\tilde{v}_{s \parallel}}{\tilde{v}_{s\parallel}+n/|k_{\parallel}|\tilde{\rho}_{s}}, \tag{11j}\] as well as \[\sum_{n=-\infty}^{\infty}\frac{n}{k_{\perp}\tilde{\rho}_{s}}\int_{C_{L}} \mathrm{d}\tilde{v}_{s\parallel}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp} \Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})J_{n}(k_{\perp}\tilde{\rho}_{s }\tilde{v}_{s\perp})^{2}=0, \tag{11k}\] we see that the double integral in (F 4\(c\)) can be rearranged to give \[(\mathfrak{E}_{s}^{(0)})_{xz} = \frac{2\omega_{\rm ps}^{2}}{\sqrt{\pi}\omega^{2}}\sum_{n=-\infty}^{ \infty}\Bigg{[}\frac{n^{2}}{|k_{\parallel}|k_{\perp}\tilde{\rho}_{s}^{2}}\int_{ C_{L}}\frac{{\rm d}\tilde{v}_{s\parallel}}{\tilde{v}_{s\parallel}+n/|k_{ \parallel}|\tilde{\rho}_{s}}\] \[\qquad\qquad\qquad\times\int_{0}^{\infty}{\rm d}\tilde{v}_{s \perp}\Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})J_{n}(k_{\perp}\tilde{ \rho}_{s}\tilde{v}_{s\perp})^{2}\Bigg{]},\] \[= -\frac{k_{\perp}}{|k_{\parallel}|}(\mathfrak{E}_{s}^{(0)})_{xx}. \tag{100}\] Similarly, it can be shown that \[(\mathfrak{E}_{s}^{(0)})_{yz} = \frac{2{\rm i}\omega_{\rm ps}^{2}}{\sqrt{\pi}\omega^{2}}\sum_{n=- \infty}^{\infty}\Bigg{[}\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\int_{C_{L}} \frac{{\rm d}\tilde{v}_{s\parallel}}{\tilde{v}_{s\parallel}+n/|k_{\parallel}| \tilde{\rho}_{s}} \tag{101}\] \[\qquad\qquad\qquad\times\int_{0}^{\infty}{\rm d}\tilde{v}_{s \perp}\,\tilde{v}_{s\perp}\Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})J _{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})J_{n}^{\prime}(k_{\perp}\tilde {\rho}_{s}\tilde{v}_{s\perp})\Bigg{]}\,,\] \[= \frac{k_{\perp}}{|k_{\parallel}|}(\mathfrak{E}_{s}^{(0)})_{xy}.\] Finally, \((\mathfrak{E}_{s}^{(0)})_{zz}\) can also be written in terms of \((\mathfrak{E}_{s}^{(0)})_{xx}\): because \[\frac{\tilde{v}_{s\parallel}^{2}}{\tilde{v}_{s\parallel}+n/|k_{ \parallel}|\tilde{\rho}_{s}}=\tilde{v}_{s\parallel}-\frac{n}{|k_{\parallel}| \tilde{\rho}_{s}}+\frac{n^{2}}{|k_{\parallel}|^{2}\tilde{\rho}_{s}^{2}}\frac{ 1}{\tilde{v}_{s\parallel}+n/|k_{\parallel}|\tilde{\rho}_{s}}, \tag{102}\] it follows that \[(\mathfrak{E}_{s}^{(0)})_{zz} = \frac{k_{\perp}^{2}}{k_{\parallel}^{2}}(\mathfrak{E}_{s}^{(0)})_ {xx}+\frac{2\omega_{\rm ps}^{2}}{\sqrt{\pi}\tilde{\omega}_{s\parallel}\omega ^{2}}\int_{-\infty}^{\infty}{\rm d}\tilde{v}_{s\parallel}\tilde{v}_{s \parallel}\int_{0}^{\infty}{\rm d}\tilde{v}_{s\perp}\Lambda_{s}(\tilde{v}_{s \parallel},\tilde{v}_{s\perp}) \tag{103}\] \[-\frac{2\omega_{\rm ps}^{2}}{\sqrt{\pi}\omega^{2}}\sum_{n=- \infty}^{\infty}\int_{-\infty}^{\infty}\tilde{v}_{s\parallel}{\rm d}\tilde{v }_{s\parallel}\int_{0}^{\infty}{\rm d}\tilde{v}_{s\perp}\Xi_{s}(\tilde{v}_{s \parallel},\tilde{v}_{s\perp})J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s \perp})^{2}\] \[+\frac{2\omega_{\rm ps}^{2}}{\sqrt{\pi}\omega^{2}}\sum_{n=- \infty}^{\infty}\frac{n}{k_{\perp}\tilde{\rho}_{s}}\int_{-\infty}^{\infty}{ \rm d}\tilde{v}_{s\parallel}\int_{0}^{\infty}{\rm d}\tilde{v}_{s\perp}\Xi_{s} (\tilde{v}_{s\parallel},\tilde{v}_{s\perp})J_{n}(k_{\perp}\tilde{\rho}_{s} \tilde{v}_{s\perp})^{2}\] \[= \frac{k_{\perp}^{2}}{k_{\parallel}^{2}}(\mathfrak{E}_{s}^{(0)})_ {xx}-\frac{2\omega_{\rm ps}^{2}}{\sqrt{\pi}\omega^{2}}\int_{-\infty}^{\infty} {\rm 
d}\tilde{v}_{s\parallel}\int_{0}^{\infty}{\rm d}\tilde{v}_{s\perp}\, \tilde{v}_{s\parallel}\frac{\partial\tilde{f}_{s0}}{\partial\tilde{v}_{s \perp}}\] \[+\frac{2\omega_{\rm ps}^{2}}{\sqrt{\pi}\tilde{\omega}_{s\parallel} \omega^{2}}\int_{-\infty}^{\infty}{\rm d}\tilde{v}_{s\parallel}\tilde{v}_{s \parallel}\int_{0}^{\infty}{\rm d}\tilde{v}_{s\perp}\Lambda_{s}(\tilde{v}_{s \parallel},\tilde{v}_{s\perp})\left[1-\sum_{n=-\infty}^{\infty}J_{n}(k_{\perp} \tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}\right]\] \[= \frac{k_{\perp}^{2}}{k_{\parallel}^{2}}(\mathfrak{E}_{s}^{(0)})_ {xx}+\frac{2\omega_{\rm ps}^{2}}{\sqrt{\pi}\omega^{2}}\int_{-\infty}^{\infty} {\rm d}\tilde{v}_{s\parallel}\int_{0}^{\infty}{\rm d}\tilde{v}_{s\perp}\Lambda_{ s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp}),\] where we have used the identity \[\sum_{n=-\infty}^{\infty}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s \perp})^{2}=1. \tag{104}\] Thus, we conclude that since the anisotropy is assumed small, \[(\mathfrak{E}_{s}^{(0)})_{zz}=\frac{k_{\perp}^{2}}{k_{\parallel}^{2}}( \mathfrak{E}_{s}^{(0)})_{xx}+\textit{O}(\tilde{\omega}_{s\parallel}), \tag{105}\] completing the proof. ### Evaluating the dielectric tensor in coordinate basis \(\{\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\}\) To demonstrate that the components of the dielectric tensor \(\mathbf{\mathfrak{C}}_{s}^{(0)}\) are given by (103), viz., \[(\mathbf{\mathfrak{C}}_{s}^{(0)})_{11} =\frac{k^{2}}{k_{\parallel}^{2}}(\mathbf{\mathfrak{C}}_{s}^{(0)})_{xx}\,, \tag{13a}\] \[(\mathbf{\mathfrak{C}}_{s}^{(0)})_{12} =-(\mathbf{\mathfrak{C}}_{s}^{(0)})_{21}=\frac{k}{k_{\parallel}}(\bm {\mathfrak{C}}_{s}^{(0)})_{xy}\,,\] (13b) \[(\mathbf{\mathfrak{C}}_{s}^{(0)})_{22} =(\mathbf{\mathfrak{C}}_{s}^{(0)})_{yy}\,, \tag{13c}\] we use (126) to express \(\mathbf{\mathfrak{C}}_{s}^{(0)}\) in the form \[\mathbf{\mathfrak{C}}_{s}^{(0)} =(\mathbf{\mathfrak{C}}_{s}^{(0)})_{xx}\hat{\mathbf{x}}\hat{\mathbf{x}}+( \mathbf{\mathfrak{C}}_{s}^{(0)})_{xy}\,(\hat{\mathbf{x}}\hat{\mathbf{y}}-\hat{\mathbf{y}}\hat {\mathbf{x}})+(\mathbf{\mathfrak{C}}_{s}^{(0)})_{yy}\hat{\mathbf{y}}\hat{\mathbf{y}}\] \[\quad-\frac{k_{\perp}}{|k_{\parallel}|}(\mathbf{\mathfrak{C}}_{s}^{(0 )})_{xx}\,(\hat{\mathbf{x}}\hat{\mathbf{z}}+\hat{\mathbf{z}}\hat{\mathbf{x}})+\frac{k_{\perp}} {|k_{\parallel}|}(\mathbf{\mathfrak{C}}_{s}^{(0)})_{xy}\,(\hat{\mathbf{y}}\hat{\mathbf{z}} -\hat{\mathbf{z}}\hat{\mathbf{y}})+\frac{k_{\perp}^{2}}{k_{\parallel}^{2}}(\mathbf{ \mathfrak{C}}_{s}^{(0)})_{xx}\hat{\mathbf{z}}\hat{\mathbf{z}}\,. 
\tag{13d}\] Noting that \[\hat{\mathbf{k}} =\frac{k_{\perp}}{k}\hat{\mathbf{x}}+\frac{k_{\parallel}}{k}\hat{\bm {z}}, \tag{13e}\] \[\hat{\mathbf{y}}\times\hat{\mathbf{k}} =\frac{k_{\parallel}}{k}\hat{\mathbf{x}}-\frac{k_{\perp}}{k}\hat{\bm {z}}, \tag{13e}\] we can rewrite (13d) as \[\mathbf{\mathfrak{C}}_{s}^{(0)} =\frac{k^{2}}{k_{\parallel}^{2}}(\mathbf{\mathfrak{C}}_{s}^{(0)})_{xx }\left(\hat{\mathbf{y}}\times\hat{\mathbf{k}}\right)\left(\hat{\mathbf{y}}\times\hat{\mathbf{ k}}\right)\] \[\quad+\frac{k}{|k_{\parallel}|}(\mathbf{\mathfrak{C}}_{s}^{(0)})_{xy} \left[\left(\hat{\mathbf{y}}\times\hat{\mathbf{k}}\right)\hat{\mathbf{y}}-\hat{\mathbf{y}} \left(\hat{\mathbf{y}}\times\hat{\mathbf{k}}\right)\right]+(\mathbf{\mathfrak{C}}_{s}^{(0 )})_{yy}\hat{\mathbf{y}}\hat{\mathbf{y}} \tag{13f}\] \[=\frac{k^{2}}{k_{\parallel}^{2}}(\mathbf{\mathfrak{C}}_{s}^{(0)})_{xx }\mathbf{e}_{1}\mathbf{e}_{1}+\frac{k}{|k_{\parallel}|}(\mathbf{\mathfrak{C}}_{s}^{(0)})_ {xy}\,(\mathbf{e}_{1}\mathbf{e}_{2}-\mathbf{e}_{2}\mathbf{e}_{1})+(\mathbf{\mathfrak{C}}_{s}^{(0) })_{yy}\mathbf{e}_{2}\mathbf{e}_{2}\,, \tag{13f}\] leading to the desired results (13d). In addition, we see that \(\mathbf{\mathfrak{C}}_{s}^{(0)}\cdot\hat{\mathbf{k}}=0\); thus, the results (104) claiming that certain components of \(\mathbf{\mathfrak{C}}_{s}\) are small in \(\tilde{\omega}_{s\parallel}\) are justified. ## Appendix G Dielectric tensor components for the CE distribution function (8) In this appendix, we calculate the components of the dielectric tensor arising from the CE distribution function (8), with isotropic functions \(A_{e}^{T}(\tilde{v}_{e})\), \(A_{e}^{R}(\tilde{v}_{e})\), \(A_{e}^{u}(\tilde{v}_{e})\), \(C_{e}(\tilde{v}_{e})\), \(A_{i}(\tilde{v}_{i})\) and \(C_{i}(\tilde{v}_{i})\) chosen as appropriate for a Krook collision operator (see appendix B.2.1), viz., \[A_{e}^{T}(\tilde{v}_{e}) =-\left(\tilde{v}_{e}^{2}-\frac{5}{2}\right)\,, \tag{13a}\] \[A_{e}^{R}(\tilde{v}_{e}) =-1\,,\] (13b) \[A_{e}^{u}(\tilde{v}_{e}) =0\,, \tag{13c}\] \[A_{i}(\tilde{v}_{i}) = -\left(\tilde{v}_{i}^{2}-\frac{5}{2}\right)\,, \tag{124}\] \[C_{e}(\tilde{v}_{e}) = -1\,,\] (125) \[C_{i}(\tilde{v}_{i}) = -1\,. \tag{126}\] This, via (107), allows for the dielectric tensor \(\mathfrak{E}_{s}\) to be calculated order by order in \(\tilde{\omega}_{s\parallel}\). We carry out these calculations in the case of non-relativistic fluctuations, and so \[\mathfrak{E}\approx\frac{4\pi\mathrm{i}}{\omega}\boldsymbol{\sigma}=\sum_{s} \mathfrak{E}_{s}, \tag{127}\] where we remind the reader that [cf. 
(126)] \[\mathfrak{E}_{s} = \frac{\omega_{\mathrm{ps}}^{2}}{\omega^{2}}\bigg{[}\frac{2}{ \sqrt{\pi}}\frac{k_{\parallel}}{|k_{\parallel}|}\int_{-\infty}^{\infty} \mathrm{d}\tilde{v}_{s\parallel}\,\tilde{v}_{s\parallel}\int_{0}^{\infty} \mathrm{d}\tilde{v}_{s\perp}\Lambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s \perp})\hat{z}\hat{z} \tag{128}\] \[+\tilde{\omega}_{s\parallel}\frac{2}{\sqrt{\pi}}\int_{C_{L}} \mathrm{d}\tilde{v}_{s\parallel}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp }\tilde{v}_{s\perp}^{2}\Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\sum _{n=-\infty}^{\infty}\frac{\boldsymbol{R}_{sn}}{\zeta_{sn}-\tilde{v}_{s \parallel}}\bigg{]}\,,\] \[\zeta_{sn}\equiv\tilde{\omega}_{s\parallel}-\frac{n}{|k_{\parallel}|\tilde{ \rho}_{s}}, \tag{129}\] \[\tilde{f}_{s0}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\equiv\frac{\pi^{3/2 }v_{\mathrm{ths}}^{3}}{n_{s0}}f_{s0}\left(\frac{k_{\parallel}}{|k_{\parallel} |}v_{\mathrm{ths}}\tilde{v}_{s\parallel},v_{\mathrm{ths}}\tilde{v}_{s\perp} \right), \tag{130}\] \[\Lambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\equiv\tilde{v}_{s\perp }\frac{\partial\tilde{f}_{s0}}{\partial\tilde{v}_{s\parallel}}-\tilde{v}_{s \parallel}\frac{\partial\tilde{f}_{s0}}{\partial\tilde{v}_{s\perp}}, \tag{131}\] \[\Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})\equiv\frac{\partial\tilde {f}_{s0}}{\partial\tilde{v}_{s\perp}}+\frac{\Lambda_{s}(\tilde{v}_{s\parallel },\tilde{v}_{s\perp})}{\tilde{\omega}_{s\parallel}}, \tag{132}\] and \[(\boldsymbol{R}_{sn})_{xx} \equiv \frac{n^{2}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2 }}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}\tilde{v}_{s\perp}^{2}}, \tag{133a}\] \[(\boldsymbol{R}_{sn})_{xy} \equiv \frac{\mathrm{i}nJ_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp })J_{n}^{\prime}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})}{k_{\perp} \tilde{\rho}_{s}\tilde{v}_{s\perp}},\] (133b) \[(\boldsymbol{R}_{sn})_{xz} \equiv \frac{nJ_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}}{k _{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp}}\frac{k_{\parallel}\tilde{v}_{s \parallel}}{|k_{\parallel}|\tilde{v}_{s\perp}},\] (133c) \[(\boldsymbol{R}_{sn})_{yx} \equiv -(\boldsymbol{R}_{sn})_{xy}\] (133d) \[(\boldsymbol{R}_{sn})_{yy} \equiv J_{n}^{\prime}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp })^{2},\] (133e) \[(\boldsymbol{R}_{sn})_{yz} \equiv \mathrm{i}nJ_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})J_{n }^{\prime}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})\frac{k_{\parallel} \tilde{v}_{s\parallel}}{|k_{\parallel}|\tilde{v}_{s\perp}},\] (133f) \[(\boldsymbol{R}_{sn})_{zx} \equiv (\boldsymbol{R}_{sn})_{xz}\] (133g) \[(\boldsymbol{R}_{sn})_{zy} \equiv -(\boldsymbol{R}_{sn})_{yz}\] (133h) \[(\boldsymbol{R}_{sn})_{zz} \equiv \frac{\tilde{v}_{s\parallel}^{2}}{\tilde{v}_{s\perp}^{2}}J_{n}(k_ {\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}. 
\tag{133i}\] The components of the dielectric tensor \(\mathfrak{E}_{s}\) in coordinate basis \(\{\boldsymbol{e}_{1},\boldsymbol{e}_{2},\boldsymbol{e}_{3}\}\) are related to the components in coordinate basis \(\{\boldsymbol{\hat{x}},\boldsymbol{\hat{y}},\boldsymbol{\hat{z}}\}\) by \[(\mathfrak{E}_{s})_{11} =\frac{k_{\parallel}^{2}}{k^{2}}(\mathfrak{E}_{s})_{xx}-\frac{2k_ {\parallel}k_{\perp}}{k^{2}}(\mathfrak{E}_{s})_{xz}+\frac{k_{\perp}^{2}}{k^{2 }}(\mathfrak{E}_{s})_{zz}\,, \tag{111a}\] \[(\mathfrak{E}_{s})_{12} =\frac{k_{\parallel}}{k}(\mathfrak{E}_{s})_{xy}+\frac{k_{\perp}}{ k}(\mathfrak{E}_{s})_{yz}\,,\] (111b) \[(\mathfrak{E}_{s})_{13} =\frac{k_{\parallel}k_{\perp}}{k^{2}}\left[(\mathfrak{E}_{s})_{ xx}-(\mathfrak{E}_{s})_{zz}\right]+\left(\frac{k_{\parallel}^{2}}{k^{2}}-\frac{k_{ \perp}^{2}}{k^{2}}\right)(\mathfrak{E}_{s})_{xz}\,,\] (111c) \[(\mathfrak{E}_{s})_{21} =-(\mathfrak{E}_{s})_{12}\,,\] (111d) \[(\mathfrak{E}_{s})_{22} =(\mathfrak{E}_{s})_{yy}\,,\] (111e) \[(\mathfrak{E}_{s})_{23} =-\frac{k_{\perp}}{k}(\mathfrak{E}_{s})_{xy}+\frac{k_{\parallel }}{k}(\mathfrak{E}_{s})_{yz}\,,\] (111f) \[(\mathfrak{E}_{s})_{31} =(\mathfrak{E}_{s})_{13}\,,\] (111g) \[(\mathfrak{E}_{s})_{32} =-(\mathfrak{E}_{s})_{23}\,,\] (111h) \[(\mathfrak{E}_{s})_{33} =\frac{k_{\perp}^{2}}{k^{2}}(\mathfrak{E}_{s})_{xx}+\frac{2k_{ \parallel}k_{\perp}}{k^{2}}(\mathfrak{E}_{s})_{xz}+\frac{k_{\parallel}^{2}}{k ^{2}}(\mathfrak{E}_{s})_{zz}\,. \tag{111h}\] For clarity, we calculate separately the Maxwellian contribution \(\boldsymbol{M}_{s}\) of the total CE distribution function and the non-Maxwellian contribution \(\boldsymbol{P}_{s}\) associated with the CE electron friction, temperature-gradient, and shear terms to \(\mathfrak{E}_{s}\) - viz., we decompose \(\mathfrak{E}_{s}\) as follows [cf. (96)]: \[\mathfrak{E}_{s}=\frac{\omega_{ps}^{2}}{\omega^{2}}\left(\boldsymbol{M}_{s}+ \boldsymbol{P}_{s}\right)\,. \tag{111h}\] ### Maxwellian distribution #### g.1.1 General dielectric tensor Consider a non-dimensionalised Maxwellian distribution function: \[\tilde{f}_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})=\exp\left(-\tilde{v} _{s}^{2}\right). \tag{111h}\] The Maxwellian is isotropic, so (111) gives \[\Lambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})=0, \tag{111h}\] while (111) becomes \[\Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})=-2\tilde{v}_{s\perp}\exp \left(-\tilde{v}_{s}^{2}\right). 
\tag{111h}\] Substituting this into (111) gives \[(\boldsymbol{M}_{s})_{xx} =\frac{4}{\sqrt{\pi}}\tilde{\omega}_{s\parallel}\sum_{n=-\infty} ^{\infty}\left[\frac{n^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}\int_{C_{L}} \frac{\exp\left(-\tilde{v}_{s\parallel}^{2}\right)\mathrm{d}\tilde{v}_{s \parallel}}{\tilde{v}_{s\parallel}-\zeta_{sn}}\right.\] \[\left.\times\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\, \tilde{v}_{s\perp}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}\exp \left(-\tilde{v}_{s\perp}^{2}\right)\right], \tag{111a}\] \[(\boldsymbol{M}_{s})_{xy} =\frac{4\mathrm{i}}{\sqrt{\pi}}\tilde{\omega}_{s\parallel}\sum_{ n=-\infty}^{\infty}\left[\frac{n}{k_{\perp}\tilde{\rho}_{s}}\int_{C_{L}} \frac{\exp\left(-\tilde{v}_{s\parallel}^{2}\right)\mathrm{d}\tilde{v}_{s \parallel}}{\tilde{v}_{s\parallel}-\zeta_{sn}}\right.\] \[\left.\times\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\, \tilde{v}_{s\perp}^{2}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})J_{n} ^{\prime}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})\exp\left(-\tilde{v}_{s \perp}^{2}\right)\right]\,, \tag{111b}\] \[(\mbox{\it M}_{\!s})_{xz} = \frac{4}{\sqrt{\pi}}\tilde{\omega}_{s\parallel}\sum_{n=-\infty}^{ \infty}\left[\frac{n}{k_{\perp}\tilde{\rho}_{s}}\int_{C_{L}}\frac{\tilde{v}_{s \parallel}\exp\left(-\tilde{v}_{s\parallel}^{2}\right){\rm d}\tilde{v}_{s \parallel}}{\tilde{v}_{s\parallel}-\zeta_{sn}}\right.\] \[\left.\qquad\qquad\qquad\times\int_{0}^{\infty}{\rm d}\tilde{v}_ {s\perp}\,\tilde{v}_{s\perp}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp} )^{2}\exp\left(-\tilde{v}_{s\perp}^{2}\right)\right],\] \[(\mbox{\it M}_{\!s})_{yx} = (\mbox{\it M}_{\!s})_{xy},\] \[(\mbox{\it M}_{\!s})_{yy} = \frac{4}{\sqrt{\pi}}\tilde{\omega}_{s\parallel}\sum_{n=-\infty}^ {\infty}\left[\int_{C_{L}}\frac{\exp\left(-\tilde{v}_{s\parallel}^{2}\right){ \rm d}\tilde{v}_{s\parallel}}{\tilde{v}_{s\parallel}-\zeta_{sn}}\right.\] \[\left.\qquad\qquad\qquad\times\int_{0}^{\infty}{\rm d}\tilde{v}_ {s\perp}\,\tilde{v}_{s\perp}^{3}J_{n}^{\prime}(k_{\perp}\tilde{\rho}_{s}\tilde {v}_{s\perp})^{2}\exp\left(-\tilde{v}_{s\perp}^{2}\right)\right],\] \[(\mbox{\it M}_{\!s})_{yz} = -\frac{4{\rm i}}{\sqrt{\pi}}\tilde{\omega}_{s\parallel}\sum_{n=- \infty}^{\infty}\left[\int_{C_{L}}\frac{\tilde{v}_{s\parallel}\exp\left(- \tilde{v}_{s\parallel}^{2}\right){\rm d}\tilde{v}_{s\parallel}}{\tilde{v}_{s \parallel}-\zeta_{sn}}\right.\] \[\left.\qquad\qquad\times\int_{0}^{\infty}{\rm d}\tilde{v}_{s \perp}\,\tilde{v}_{s\perp}^{2}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp })J_{n}^{\prime}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})\exp\left(-\tilde {v}_{s\perp}^{2}\right)\right]\,,\] \[(\mbox{\it M}_{\!s})_{zx} = (\mbox{\it M}_{\!s})_{xz}\,,\] \[(\mbox{\it M}_{\!s})_{zy} = -(\mbox{\it M}_{\!s})_{yz}\,,\] \[(\mbox{\it M}_{\!s})_{zz} = \frac{4}{\sqrt{\pi}}\tilde{\omega}_{s\parallel}\sum_{n=-\infty}^ {\infty}\left[\int_{C_{L}}\frac{\tilde{v}_{s\parallel}^{2}\exp\left(-\tilde{v}_ {s\parallel}^{2}\right){\rm d}\tilde{v}_{s\parallel}}{\tilde{v}_{s\parallel}- \zeta_{sn}}\right. 
\tag{144}\] \[\left.\qquad\qquad\qquad\times\int_{0}^{\infty}{\rm d}\tilde{v}_ {s\perp}\,\tilde{v}_{s\perp}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp} )^{2}\exp\left(-\tilde{v}_{s\perp}^{2}\right)\right].\] Using the integral identities \[\frac{1}{\sqrt{\pi}}\int_{C_{L}}\frac{u\exp\left(-u^{2}\right){\rm d }u}{u-z} = 1+zZ(z)\,\] \[\frac{1}{\sqrt{\pi}}\int_{C_{L}}\frac{u^{2}\exp\left(-u^{2}\right) {\rm d}u}{u-z} = z\left[1+zZ(z)\right]\,, \tag{145}\] involving the plasma dispersion function, and the identities \[\int_{0}^{\infty}{\rm d}t\,t\,J_{n}(\alpha t)^{2}\exp\left(-t^{2}\right) = \frac{1}{2}\exp\left(-\frac{\alpha^{2}}{2}\right)I_{n}\!\left( \frac{\alpha^{2}}{2}\right)\,,\] \[\int_{0}^{\infty}{\rm d}t\,t^{2}J_{n}(\alpha t)J_{n}^{\prime}( \alpha t)\exp\left(-t^{2}\right) = \frac{\alpha}{4}\exp\left(-\frac{\alpha^{2}}{2}\right)\left[I_{n} ^{\prime}\!\left(\frac{\alpha^{2}}{2}\right)-I_{n}\!\left(\frac{\alpha^{2}}{2} \right)\right]\,,\] \[\int_{0}^{\infty}{\rm d}t\,t^{3}J_{n}^{\prime}(\alpha t)^{2}\exp \left(-t^{2}\right) = \frac{1}{4}\exp\left(-\frac{\alpha^{2}}{2}\right)\left\{\frac{2n^{2 }}{\alpha^{2}}I_{n}\!\left(\frac{\alpha^{2}}{2}\right)\right. \tag{146}\] \[\left.-\alpha^{2}\left[I_{n}^{\prime}\!\left(\frac{\alpha^{2}}{2} \right)-I_{n}\!\left(\frac{\alpha^{2}}{2}\right)\right]\right\}\,,\] involving Bessel functions (here \(\alpha\) a real number), we obtain expressions for the dielectric components (146) in terms of special functions: \[(\mbox{\it M}_{\!s})_{xx}=2\tilde{\omega}_{s\parallel}\sum_{n=-\infty}^{\infty} \frac{n^{2}}{k_{\perp}^{2}\,\tilde{\rho}_{s}^{2}}Z(\zeta_{sn})\exp\left(-\frac{k_ {\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2}\tilde {\rho}_{s}^{2}}{2}\right), \tag{147a}\] \[(\textbf{M}_{s})_{xy} = \mathrm{i}\tilde{\omega}_{s\parallel}\sum_{n=-\infty}^{\infty}nZ( \zeta_{sn})\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\left[I_ {n}^{\prime}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)-I_{n} \!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right]\,, \tag{166}\] \[(\textbf{M}_{s})_{xz} = 2\tilde{\omega}_{s\parallel}\sum_{n=-\infty}^{\infty}\frac{n}{k_{ \perp}\tilde{\rho}_{s}}\left[1+\zeta_{sn}Z(\zeta_{sn})\right]\exp\left(-\frac{ k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\right),\] (167) \[(\textbf{M}_{s})_{yx} = (\textbf{M}_{s})_{xy}\,,\] (168) \[(\textbf{M}_{s})_{yy} = \tilde{\omega}_{s\parallel}\sum_{n=-\infty}^{\infty}Z(\zeta_{sn})\] (169) \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)\left[\left(\frac{2n^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}+k_{\perp} ^{2}\tilde{\rho}_{s}^{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s }^{2}}{2}\right)-k_{\perp}^{2}\tilde{\rho}_{s}^{2}I_{n}^{\prime}\!\left(\frac{ k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right],\] \[(\textbf{M}_{s})_{yz} = -\mathrm{i}\tilde{\omega}_{s\parallel}\sum_{n=-\infty}^{\infty}k_ {\perp}\tilde{\rho}_{s}\left[1+\zeta_{sn}Z(\zeta_{sn})\right]\] (170) \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)\left[I_{n}^{\prime}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2 }\right)-I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right) \right]\,,\] \[(\textbf{M}_{s})_{zz} = (\textbf{M}_{s})_{xz}\,,\] (171) \[(\textbf{M}_{s})_{zy} = -(\textbf{M}_{s})_{yz}\,,\] (172) \[(\textbf{M}_{s})_{zz} = 2\tilde{\omega}_{s\parallel}\sum_{n=-\infty}^{\infty}\zeta_{sn} 
\left[1+\zeta_{sn}Z(\zeta_{sn})\right]\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\,. \tag{173}\]
The components of the dielectric tensor (166) in coordinate basis \(\{\boldsymbol{e}_{1},\boldsymbol{e}_{2},\boldsymbol{e}_{3}\}\) then follow from (164), though we do not write these out explicitly.

#### G.1.2 Dielectric tensor in low-frequency limit, \(\{\hat{\boldsymbol{x}},\hat{\boldsymbol{y}},\hat{\boldsymbol{z}}\}\) coordinate frame

Now, to consider the low-frequency limit \(\tilde{\omega}_{s\parallel}\ll 1\), we Taylor expand (166) in \(\tilde{\omega}_{s\parallel}\). Noting that \(\tilde{\omega}_{s\parallel}\) only appears via the argument \(\zeta_{sn}=\tilde{\omega}_{s\parallel}-n/|k_{\parallel}|\tilde{\rho}_{s}\), we use the differential identity \(Z^{\prime}(z)=-2\left[1+zZ(z)\right]\) to obtain the expansions
\[Z(\zeta_{sn})=Z\!\left(-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\right)-2\tilde{\omega}_{s\parallel}\left[1-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}Z\!\left(-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\right)\right]+\textit{O}(\tilde{\omega}_{s\parallel}^{2}), \tag{174a}\]
\[1+\zeta_{sn}Z(\zeta_{sn})=1-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}Z\!\left(-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\right)+\tilde{\omega}_{s\parallel}\left[\left(1-\frac{2n^{2}}{|k_{\parallel}|^{2}\tilde{\rho}_{s}^{2}}\right)Z\!\left(-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\right)+\frac{2n}{|k_{\parallel}|\tilde{\rho}_{s}}\right]+\textit{O}(\tilde{\omega}_{s\parallel}^{2})\,, \tag{174b}\]
\[\zeta_{sn}\left[1+\zeta_{sn}Z(\zeta_{sn})\right]=-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\left[1-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}Z\!\left(-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\right)\right]+\tilde{\omega}_{s\parallel}\left[1-\frac{2n^{2}}{|k_{\parallel}|^{2}\tilde{\rho}_{s}^{2}}-\frac{2n}{|k_{\parallel}|\tilde{\rho}_{s}}\left(1-\frac{n^{2}}{|k_{\parallel}|^{2}\tilde{\rho}_{s}^{2}}\right)Z\!\left(-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\right)\right]+\textit{O}(\tilde{\omega}_{s\parallel}^{2}). \tag{174c}\]
Then, expanding the dielectric tensor as
\[\textbf{M}_{s}=\tilde{\omega}_{s\parallel}\textbf{M}_{s}^{(0)}+\tilde{\omega}_{s\parallel}^{2}\textbf{M}_{s}^{(1)}+\textit{O}(\tilde{\omega}_{s\parallel}^{3})\,, \tag{175}\]
we have
\[(\textbf{M}_{s}^{(0)})_{xx}=2\sum_{n=-\infty}^{\infty}\frac{n^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}Z\!\left(-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\right)\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right), \tag{176a}\]
\[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)\biggl{[}\biggl{(}\frac{2n^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}+k_{ \perp}^{2}\tilde{\rho}_{s}^{2}\biggr{)}\,I_{n}\biggl{(}\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\biggr{)}-k_{\perp}^{2}\tilde{\rho}_{s}^{2}I^{\prime}_ {n}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)}\biggr{]}\,,\] (126d) \[(\mbox{\it{M}}^{(0)}_{s})_{yz}={\rm i}\sum_{n=-\infty}^{\infty}k_ {\perp}\tilde{\rho}_{s}\left[1-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}Z \biggl{(}-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\biggr{)}\right]\] \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)\biggl{[}I^{\prime}_{n}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2} }{2}\biggr{)}-I_{n}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)} \biggr{]}\,\] (126e) \[(\mbox{\it{M}}^{(0)}_{s})_{zz}=-2\sum_{n=-\infty}^{\infty}\frac{n}{| k_{\parallel}|\tilde{\rho}_{s}}\left[1-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}Z \biggl{(}-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\biggr{)}\right]\] \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)I_{n}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)}\, \tag{126f}\] and \[(\mbox{\it{M}}^{(1)}_{s})_{xx}=-4\sum_{n=-\infty}^{\infty}\frac{n^ {2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}\left[1-\frac{n}{|k_{\parallel}|\tilde{ \rho}_{s}}Z\biggl{(}-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\biggr{)}\right]\] \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)I_{n}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)}\,, \tag{126i}\] \[(\mbox{\it{M}}^{(1)}_{s})_{xy}=-2{\rm i}\sum_{n=-\infty}^{\infty}n \left[1-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}Z\biggl{(}-\frac{n}{|k_{ \parallel}|\tilde{\rho}_{s}}\biggr{)}\right]\] \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)\left[I^{\prime}_{n}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \biggr{)}-I_{n}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)} \right]\,,\] (126j) \[(\mbox{\it{M}}^{(1)}_{s})_{xz}=2\sum_{n=-\infty}^{\infty}\frac{n} {k_{\perp}\tilde{\rho}_{s}}\left[\left(1-\frac{2n^{2}}{|k_{\parallel}|^{2} \tilde{\rho}_{s}^{2}}\right)Z\biggl{(}-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}} \biggr{)}+\frac{2n}{|k_{\parallel}|\tilde{\rho}_{s}}\right]\] \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)I_{n}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)}\,,\] (126j) \[(\mbox{\it{M}}^{(1)}_{s})_{yy}=-2\sum_{n=-\infty}^{\infty}\left[1- \frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}Z\biggl{(}-\frac{n}{|k_{\parallel}| \tilde{\rho}_{s}}\biggr{)}\right]\] \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)\left[\biggl{(}\frac{2n^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}+k_{\perp}^ {2}\tilde{\rho}_{s}^{2}\biggr{)}\,I_{n}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{ s}^{2}}{2}\biggr{)}-k_{\perp}^{2}\tilde{\rho}_{s}^{2}I^{\prime}_{n}\biggl{(}\frac{k_{ \perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)}\right],\] \[(\mathbf{M}_{s}^{(1)})_{yz}=-{\rm i}\sum_{n=-\infty}^{ \infty}k_{\perp}\tilde{\rho}_{s}\left[\left(1-\frac{2n^{2}}{|k_{\parallel}|^{2} \tilde{\rho}_{s}^{2}}\right)Z\!\left(-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s} }\right)+\frac{2n}{|k_{\parallel}|\tilde{\rho}_{s}}\right]\] \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)\left[I_{n}^{\prime}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2 }\right)-I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right) \right]\,, \tag{111}\] 
\[(\mathbf{M}_{s}^{(1)})_{zz}=2\sum_{n=-\infty}^{\infty} \left[1-\frac{2n^{2}}{|k_{\parallel}|^{2}\tilde{\rho}_{s}^{2}}- \frac{2n}{|k_{\parallel}|\tilde{\rho}_{s}}\left(1-\frac{n^{2}}{|k_{\parallel} |^{2}\tilde{\rho}_{s}^{2}}\right)Z\!\left(-\frac{n}{|k_{\parallel}|\tilde{ \rho}_{s}}\right)\right]\] \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\,.\] These expressions can be simplified somewhat using two further types of algebraic manipulation. First, for \(z\) a real number, we can split the plasma dispersion into real and imaginary parts as \[Z(z) = \frac{1}{\sqrt{\pi}}{\cal P}\int_{-\infty}^{\infty}\frac{\exp \left(-u^{2}\right)\!{\rm d}u}{u-z}+{\rm i}\sqrt{\pi}\exp\left(-z^{2}\right) \tag{112}\] \[=\mbox{Re}\;Z(z)+{\rm i}\sqrt{\pi}\exp\left(-z^{2}\right).\] Thus, we see that the real part of \(Z(z)\) is an odd function for real \(z\), while the imaginary part is an even function. As a consequence, only one of the real or imaginary parts of the plasma dispersion function will enter into the summations in (111) and (111). Secondly, we utilise the generating function of the modified Bessel function, viz., \[\sum_{n=-\infty}^{\infty}I_{n}(\alpha)\,t^{n}=\exp\left[\frac{\alpha}{2}\left( t+\frac{1}{t}\right)\right], \tag{113}\] to deduce the following identities: \[\sum_{n=-\infty}^{\infty}I_{n}(\alpha) = \exp\left(\alpha\right), \tag{114a}\] \[\sum_{n=-\infty}^{\infty}n^{2}I_{n}(\alpha) = \alpha\exp\left(\alpha\right),\] (114b) \[\sum_{n=-\infty}^{\infty}\left[I_{n}^{\prime}(\alpha)-I_{n}( \alpha)\right] = 0\,,\] (114c) \[\sum_{n=-\infty}^{\infty}n^{2}\left[I_{n}^{\prime}(\alpha)-I_{n}( \alpha)\right] = \exp\left(\alpha\right). \tag{114d}\] Combining these results, we obtain from (111) and (111) the following expressions for the components of \(\mathbf{M}_{s}^{(0)}\) and \(\mathbf{M}_{s}^{(1)}\): \[(\mathbf{M}_{s}^{(0)})_{xx} = 4{\rm i}\sqrt{\pi}\sum_{m=1}^{\infty}\frac{m^{2}}{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}\exp\left(-\frac{m^{2}}{k_{\parallel}^{2}\tilde{\rho}_{s} ^{2}}\right)\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{m} \!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right) \tag{115a}\] \[= {\rm i}F\!\left(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho }_{s}\right)\,,\] \[(\mathbf{M}_{s}^{(0)})_{xy} = -{\rm i}\sum_{m=-\infty}^{\infty}m\,\mbox{Re}\!\left[Z\left( \frac{m}{|k_{\parallel}|\tilde{\rho}_{s}}\right)\right]\exp\left(-\frac{k_{ \perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\left[I_{m}^{\prime}\!\left(\frac{k_{ \perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)-I_{m}\!\left(\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\right)\right]\] (115b) \[= -{\rm i}G\!\left(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{ \rho}_{s}\right)\,,\] \[(\textbf{M}_{s}^{(0)})_{xz} = -4{\rm i}\sqrt{\pi}\sum_{m=-\infty}^{\infty}\frac{m^{2}}{k_{\perp}|k_ {\parallel}|\tilde{\rho}_{s}^{2}}\exp\left(-\frac{m^{2}}{k_{\parallel}^{2}\tilde {\rho}_{s}^{2}}\right)\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)I_{m}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)} \tag{125c}\] \[= -\frac{{\rm i}k_{\perp}}{|k_{\parallel}|}F\big{(}k_{\parallel} \tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big{)}\,\] \[(\textbf{M}_{s}^{(0)})_{yy} = {\rm i}\sqrt{\pi}\sum_{m=-\infty}^{\infty}\exp\left(-\frac{m^{2}} {k_{\parallel}^{2}\tilde{\rho}_{s}^{2}}\right)\] (125d) \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)\bigg{[}\bigg{(}\frac{2m^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}+k_{ 
\perp}^{2}\tilde{\rho}_{s}^{2}\bigg{)}\,I_{m}\bigg{(}\frac{k_{\perp}^{2}\tilde {\rho}_{s}^{2}}{2}\bigg{)}-k_{\perp}^{2}\tilde{\rho}_{s}^{2}I_{m}^{\prime}\bigg{(} \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\bigg{]}\] \[= {\rm i}H\big{(}k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{ \rho}_{s}\big{)}\,\] \[(\textbf{M}_{s}^{(0)})_{yz} = -{\rm i}\sum_{m=-\infty}^{\infty}\frac{mk_{\perp}}{|k_{\parallel }|}\,{\rm Re}\bigg{[}Z\left(\frac{m}{|k_{\parallel}|\tilde{\rho}_{s}}\right) \bigg{]}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\bigg{[}I _{m}^{\prime}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}-I_{m} \bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\bigg{]}\] (125e) \[= -\frac{{\rm i}k_{\perp}}{|k_{\parallel}|}G\big{(}k_{\parallel} \tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big{)}\,\] \[(\textbf{M}_{s}^{(0)})_{zz} = 4{\rm i}\sqrt{\pi}\sum_{m=1}^{\infty}\frac{m^{2}}{k_{\parallel}^{ 2}\tilde{\rho}_{s}^{2}}\exp\left(-\frac{m^{2}}{k_{\parallel}^{2}\tilde{\rho}_{ s}^{2}}\right)\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{m} \bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\] (125f) \[= \frac{{\rm i}k_{\perp}^{2}}{k_{\parallel}^{2}}F\big{(}k_{\parallel }\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big{)}\,\] and \[(\textbf{M}_{s}^{(1)})_{xx} = -2\left\{1+\sum_{m=-\infty}^{\infty}\frac{2m^{3}}{|k_{\parallel}|k _{\perp}^{2}\tilde{\rho}_{s}^{3}}{\rm Re}\bigg{[}Z\left(\frac{m}{|k_{\parallel} |\tilde{\rho}_{s}}\right)\bigg{]}\right. \tag{125g}\] \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)I_{m}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\bigg{\}}\] \[= -\frac{4}{3}W\big{(}k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{ \rho}_{s}\big{)}\,,\] \[(\textbf{M}_{s}^{(1)})_{xy} = 4\sqrt{\pi}\sum_{m=1}^{\infty}\frac{m^{2}}{|k_{\parallel}|\tilde{ \rho}_{s}}\exp\left(-\frac{m^{2}}{k_{\parallel}^{2}\tilde{\rho}_{s}^{2}}\right)\] (125h) \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)\bigg{[}I_{m}^{\prime}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2 }\bigg{)}-I_{m}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)} \bigg{]}\,\] \[(\textbf{M}_{s}^{(1)})_{xz} = 2\left\{\frac{k_{\perp}}{|k_{\parallel}|}+\sum_{m=-\infty}^{ \infty}\left(\frac{2m^{3}}{|k_{\parallel}|^{2}k_{\perp}\tilde{\rho}_{s}^{3}}- \frac{m}{k_{\perp}\tilde{\rho}_{s}}\right){\rm Re}\bigg{[}Z\left(\frac{m}{|k_{ \parallel}|\tilde{\rho}_{s}}\right)\bigg{]}\right.\] (125h) \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)\bigg{[}\bigg{(}\frac{2m^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}+k_{ \perp}^{2}\tilde{\rho}_{s}^{2}\bigg{)}\,I_{m}\bigg{(}\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\bigg{)}-k_{\perp}^{2}\tilde{\rho}_{s}^{2}I_{m}^{ \prime}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\bigg{]}\right\}\] \[(\textbf{M}_{s}^{(1)})_{yy} = -2\left\{1+\sum_{m=-\infty}^{\infty}\frac{m}{|k_{\parallel}|\tilde{ \rho}_{s}}{\rm Re}\bigg{[}Z\left(\frac{m}{|k_{\parallel}|\tilde{\rho}_{s}} \right)\bigg{]}\right.\] (125h) \[\times \exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right) \bigg{[}\bigg{(}\frac{2m^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}+k_{\perp}^{2} \tilde{\rho}_{s}^{2}\bigg{)}\,I_{m}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2 }\bigg{)}-k_{\perp}^{2}\tilde{\rho}_{s}^{2}I_{m}^{\prime}\bigg{(}\frac{k_{\perp}^ {2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\bigg{]}\right\}\] \[= -\frac{4}{3}Y\big{(}k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{ \rho}_{s}\big{)}\,,\] 
\[(\textbf{M}^{(1)}_{s})_{yz}=-\sqrt{\pi}\sum_{m=-\infty}^{\infty}k_{\perp}\tilde{\rho}_{s}\left(1-\frac{2m^{2}}{|k_{\parallel}|^{2}\tilde{\rho}_{s}^{2}}\right)\exp\left(-\frac{m^{2}}{k_{\parallel}^{2}\tilde{\rho}_{s}^{2}}\right)\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\left[I^{\prime}_{m}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)-I_{m}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right]\,, \tag{126}\]
\[(\textbf{M}^{(1)}_{s})_{zz}=2\sum_{m=-\infty}^{\infty}\left[1-\frac{2m^{2}}{|k_{\parallel}|^{2}\tilde{\rho}_{s}^{2}}-\frac{2m}{|k_{\parallel}|\tilde{\rho}_{s}}\left(1-\frac{m^{2}}{|k_{\parallel}|^{2}\tilde{\rho}_{s}^{2}}\right)Z\!\left(-\frac{m}{|k_{\parallel}|\tilde{\rho}_{s}}\right)\right]\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{m}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\,,\]
where we have reintroduced the special functions \(F(x,y)\), \(G(x,y)\) and \(H(x,y)\) defined by (121), as well as \(W(x,y)\) and \(Y(x,y)\) defined by (122). As anticipated from the arguments presented in appendix F, \(\textbf{M}^{(0)}_{s}\) obeys the symmetries
\[(\textbf{M}^{(0)}_{s})_{xz}=-\frac{k_{\perp}}{k_{\parallel}}(\textbf{M}^{(0)}_{s})_{xx}\,, \tag{127a}\]
\[(\textbf{M}^{(0)}_{s})_{yz}=\frac{k_{\perp}}{k_{\parallel}}(\textbf{M}^{(0)}_{s})_{xy}\,, \tag{127b}\]
\[(\textbf{M}^{(0)}_{s})_{zz}=\frac{k_{\perp}^{2}}{k_{\parallel}^{2}}(\textbf{M}^{(0)}_{s})_{xx}\,. \tag{127c}\]

#### G.1.3 Dielectric tensor in low-frequency limit, \(\{\boldsymbol{e}_{1},\boldsymbol{e}_{2},\boldsymbol{e}_{3}\}\) coordinate frame

Having evaluated the first- and second-order terms in the expansion for the components of the dielectric tensor in the coordinate basis \(\{\hat{\boldsymbol{x}},\hat{\boldsymbol{y}},\hat{\boldsymbol{z}}\}\), we can use (124) to find equivalent expressions in the coordinate basis \(\{\boldsymbol{e}_{1},\boldsymbol{e}_{2},\boldsymbol{e}_{3}\}\). Explicitly, we have the following transformations for \(\textbf{M}^{(0)}_{s}\):
\[(\textbf{M}^{(0)}_{s})_{11}=\frac{k_{\parallel}^{2}}{k^{2}}(\textbf{M}^{(0)}_{s})_{xx}-\frac{2k_{\parallel}k_{\perp}}{k^{2}}(\textbf{M}^{(0)}_{s})_{xz}+\frac{k_{\perp}^{2}}{k^{2}}(\textbf{M}^{(0)}_{s})_{zz}\,, \tag{128a}\]
\[(\textbf{M}^{(0)}_{s})_{12}=\frac{k_{\parallel}}{k}(\textbf{M}^{(0)}_{s})_{xy}+\frac{k_{\perp}}{k}(\textbf{M}^{(0)}_{s})_{yz}\,, \tag{128b}\]
\[(\textbf{M}^{(0)}_{s})_{13}=\frac{k_{\parallel}k_{\perp}}{k^{2}}\left[(\textbf{M}^{(0)}_{s})_{xx}-(\textbf{M}^{(0)}_{s})_{zz}\right]+\left(\frac{k_{\parallel}^{2}}{k^{2}}-\frac{k_{\perp}^{2}}{k^{2}}\right)(\textbf{M}^{(0)}_{s})_{xz}\,, \tag{128c}\]
\[(\textbf{M}^{(0)}_{s})_{22}=(\textbf{M}^{(0)}_{s})_{yy}\,, \tag{128d}\]
\[(\textbf{M}^{(0)}_{s})_{23}=-\frac{k_{\perp}}{k}(\textbf{M}^{(0)}_{s})_{xy}+\frac{k_{\parallel}}{k}(\textbf{M}^{(0)}_{s})_{yz}\,, \tag{128e}\]
\[(\textbf{M}^{(0)}_{s})_{33}=\frac{k_{\perp}^{2}}{k^{2}}(\textbf{M}^{(0)}_{s})_{xx}+\frac{2k_{\parallel}k_{\perp}}{k^{2}}(\textbf{M}^{(0)}_{s})_{xz}+\frac{k_{\parallel}^{2}}{k^{2}}(\textbf{M}^{(0)}_{s})_{zz}\,, \tag{128f}\]
and similarly for \(\textbf{M}^{(1)}_{s}\).
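In practice, these transformations are just the conjugation of \(\textbf{M}_{s}\) by the rotation taking \(\{\hat{\boldsymbol{x}},\hat{\boldsymbol{y}},\hat{\boldsymbol{z}}\}\) to \(\{\boldsymbol{e}_{1},\boldsymbol{e}_{2},\boldsymbol{e}_{3}\}=\{\hat{\boldsymbol{y}}\times\hat{\boldsymbol{k}},\hat{\boldsymbol{y}},\hat{\boldsymbol{k}}\}\). A minimal numerical sketch of this check (our own illustration; the function name and test values are arbitrary):

```python
import numpy as np

def to_wave_basis(M, k_par, k_perp):
    """Conjugate a tensor from {x, y, z} into {e1, e2, e3} = {y^ x k^, y^, k^}."""
    k = np.hypot(k_par, k_perp)
    khat = np.array([k_perp / k, 0.0, k_par / k])  # k^ in the {x, y, z} basis
    e2 = np.array([0.0, 1.0, 0.0])                 # e2 = y^
    e1 = np.cross(e2, khat)                        # e1 = y^ x k^ = (k_par, 0, -k_perp)/k
    R = np.array([e1, e2, khat])                   # rows are the new basis vectors
    return R @ M @ R.T

# spot-check of (128a) with an arbitrary symmetric test tensor:
k_par, k_perp = 0.3, 1.2
k2 = k_par**2 + k_perp**2
M = np.array([[2.0, 0.0, 0.5], [0.0, 1.0, 0.0], [0.5, 0.0, 3.0]])
lhs = to_wave_basis(M, k_par, k_perp)[0, 0]
rhs = (k_par**2 * M[0, 0] - 2 * k_par * k_perp * M[0, 2] + k_perp**2 * M[2, 2]) / k2
print(np.isclose(lhs, rhs))  # True
```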
On account of the symmetries derived in appendix G.1.2, we find for \(\textbf{M}^{(0)}_{s}\) that
\[(\textbf{M}^{(0)}_{s})_{11}=\frac{k^{2}}{k_{\parallel}^{2}}(\textbf{M}^{(0)}_{s})_{xx}\,, \tag{129a}\]
\[(\textbf{M}^{(0)}_{s})_{12}=\frac{k}{k_{\parallel}}(\textbf{M}^{(0)}_{s})_{xy}\,, \tag{129b}\]
\[(\textbf{M}^{(0)}_{s})_{21}=-(\textbf{M}^{(0)}_{s})_{12}\,, \tag{129c}\]
\[(\textbf{M}^{(0)}_{s})_{22}=(\textbf{M}^{(0)}_{s})_{yy}\,, \tag{129d}\]
with all remaining components of \(\textbf{M}^{(0)}_{s}\) vanishing. For \(\textbf{M}^{(1)}_{s}\), the same transformations give the components needed in what follows:
\[(\textbf{M}^{(1)}_{s})_{11}=-\frac{4k^{2}}{3k_{\parallel}^{2}}W\big{(}k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big{)}+2\left[\frac{k_{\perp}^{2}}{k^{2}}+\frac{k_{\perp}}{k_{\parallel}}L\big{(}k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big{)}\right]\,, \tag{G 33a}\]
\[(\textbf{M}^{(1)}_{s})_{13}=-\left[\frac{2k_{\perp}k_{\parallel}}{k^{2}}+L\big{(}k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big{)}\right]\,, \tag{G 33b}\]
\[(\textbf{M}^{(1)}_{s})_{22}=(\textbf{M}^{(1)}_{s})_{yy}\,, \tag{G 33c}\]
\[(\textbf{M}^{(1)}_{s})_{23}=-\frac{k_{\parallel}}{k}N\big{(}k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big{)}\,, \tag{G 33d}\]
\[(\textbf{M}^{(1)}_{s})_{33}=\frac{2k_{\parallel}^{2}}{k^{2}}\,. \tag{G 33e}\]
We note that \(\textbf{M}^{(1)}_{s}\) does not possess the same symmetry properties as \(\textbf{M}^{(0)}_{s}\).

#### G.1.4 Asymptotic forms of \(\textbf{M}^{(0)}_{s}\) and \(\textbf{M}^{(1)}_{s}\)

In this appendix, we write down asymptotic forms at small and large \(x\) and \(y\) for the special functions \(F(x,y)\), \(G(x,y)\), \(H(x,y)\), \(L(x,y)\) and \(N(x,y)\) defined by (2.121) and (G 32), respectively. Physically, this corresponds via (2.120) to considering the dielectric response associated with \(\textbf{M}^{(0)}_{s}\) and \(\textbf{M}^{(1)}_{s}\) for modes with parallel and perpendicular wavenumbers very small (or very large) with respect to the inverse Larmor radius of species \(s\). Detailed derivations are left as an exercise to keen readers (and can be verified numerically).
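As an example of such a numerical check, the defining Bessel sum for \(F\) can be compared against the large-argument form (G 35a) quoted below (a minimal sketch; the truncation \(m_{\max}\) is our own choice):

```python
import numpy as np
from scipy.special import ive  # ive(m, a) = exp(-a) * I_m(a), scaled to avoid overflow

def F(x, y, m_max=400):
    """F(x, y) evaluated from its defining sum over Bessel harmonics."""
    m = np.arange(1, m_max + 1)
    return 4 * np.sqrt(np.pi) / y**2 * np.sum(
        m**2 * np.exp(-(m / x)**2) * ive(m, y**2 / 2))

x, y = 30.0, 40.0
print(F(x, y))                                     # direct evaluation of the sum
print(np.sqrt(np.pi) * x**3 / (x**2 + y**2)**1.5)  # asymptote (G 35a), valid for x, y >> 1
```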
Proceeding systematically through various limits, we have the following results: * \(x\sim 1\), \(y\ll 1\): \[F(x,y) = \sqrt{\pi}\exp\left(-\frac{1}{x^{2}}\right)\left[1+\mbox{ \it O}\!\left(y^{2}\right)\right]\,,\] (G 34 \(a\) ) \[G(x,y) = \mbox{Re}\!\left[Z\left(\frac{1}{x}\right)\right]\left[1+\mbox{ \it O}\!\left(y^{2}\right)\right]\,,\] (G 34 \(b\) ) \[H(x,y) = \sqrt{\pi}\exp\left(-\frac{1}{x^{2}}\right)\left[1+\mbox{ \it O}\!\left(y^{2}\right)\right]\,,\] (G 34 \(c\) ) \[L(x,y) = y\mbox{Re}\!\left[Z\left(\frac{1}{x}\right)\right]\left[1+\mbox{ \it O}\!\left(y^{2}\right)\right]\,,\] (G 34 \(d\) ) \[N(x,y) = \sqrt{\pi}y\left[2\exp\left(-\frac{1}{x^{2}}\right)-1\right] \left[1+\mbox{ \it O}\!\left(y^{2}\right)\right]\,.\] (G 34 \(e\) ) * \(x,y\gg 1\) \[F(x,y) = \frac{\sqrt{\pi}x^{3}}{\left(x^{2}+y^{2}\right)^{3/2}}\left[1+ \mbox{ \it O}\!\left(\frac{1}{x^{2}+y^{2}}\right)\right]\,,\] (G 35 \(a\) ) \[G(x,y) = -\frac{2x^{3}}{\left(x^{2}+y^{2}\right)^{2}}\left[1+\mbox{ \it O}\!\left(\frac{1}{x^{2}+y^{2}}\right)\right]\,,\] (G 35 \(b\) ) \[H(x,y) = \frac{\sqrt{\pi}x}{\left(x^{2}+y^{2}\right)^{1/2}}\left[1+\mbox{ \it O}\!\left(\frac{1}{x^{2}+y^{2}}\right)\right]\,,\] (G 35 \(c\) ) \[L(x,y) = -\frac{2xy}{x^{2}+y^{2}}\left[1+\mbox{ \it O}\!\left(\frac{1}{x^{2}+y^{2}}\right)\right]\,,\] (G 35 \(d\) ) \[N(x,y) = \frac{\sqrt{\pi}x}{y\left(x^{2}+y^{2}\right)^{1/2}}\left[1+\mbox{ \it O}\!\left(\frac{1}{x^{2}+y^{2}}\right)\right]\,.\] (G 35 \(e\) ) We observe that the asymptotic forms (G 35) are in fact valid even for \(y\lesssim 1\). * \(x\ll 1\), \(y\sim 1\): \[F(x,y)=\frac{4\sqrt{\pi}}{y^{2}}\exp\left(-\frac{y^{2}}{2}\right)\!I_{1}\! \left(\frac{y^{2}}{2}\right)\exp\left(-\frac{1}{x^{2}}\right)\left\{1+\mbox{ \it O}\!\left[\exp\left(-\frac{3}{x^{2}}\right)\right]\right\}\,,\] (G 36 \(a\) ) \[G(x,y) = -x\exp\left(-\frac{y^{2}}{2}\right)\left[I_{0}\!\left(\frac{y^{2}}{2 }\right)-I_{1}\!\left(\frac{y^{2}}{2}\right)\right]\left[1+\textit{O}\!\left(x^ {2}\right)\right]\,, \tag{126b}\] \[H(x,y) = \sqrt{\pi}y^{2}\exp\left(-\frac{y^{2}}{2}\right)\left[I_{0}\! \left(\frac{y^{2}}{2}\right)-I_{1}\!\left(\frac{y^{2}}{2}\right)\right]\left[1 +\textit{O}\!\left(x^{2}\right)\right]\,,\] (126c) \[L(x,y) = -\frac{2x}{y}\left[1-\exp\left(-\frac{y^{2}}{2}\right)I_{0}\! \left(\frac{y^{2}}{2}\right)\right]\left[1+\textit{O}\!\left(x^{2}\right) \right]\,,\] (126d) \[N(x,y) = -\sqrt{\pi}y\exp\left(-\frac{y^{2}}{2}\right)\left[I_{0}\! \left(\frac{y^{2}}{2}\right)-I_{1}\!\left(\frac{y^{2}}{2}\right)\right]\left[ 1+\textit{O}\!\left(x^{2}\right)\right]\,. 
\tag{126e}\] * \(x,y\ll 1\): \[F(x,y) = \sqrt{\pi}\exp\left(-\frac{1}{x^{2}}\right)\left\{1+\textit{O} \!\left[\exp\left(-\frac{3}{x^{2}}\right),y^{2}\right]\right\}\,,\] (126a) \[G(x,y) = -x\left[1-\left(\frac{3}{4}y^{2}-\frac{1}{2}x^{2}\right)\right.\] (126b) \[\qquad\qquad+\left(\frac{3}{4}x^{4}-\frac{15}{32}x^{2}y^{2}+\frac{5}{16}y^{4} \right)\right]\left[1+\textit{O}\!\left(x^{6},x^{4}y^{2},x^{2}y^{4},y^{6} \right)\right]\,,\] \[H(x,y) = \sqrt{\pi}y^{2}\left[1-\left(\frac{3}{4}y^{2}-\frac{1}{2}x^{2}\right)\right.\] (126c) \[\qquad\qquad+\left(\frac{3}{4}x^{4}-\frac{15}{32}x^{2}y^{2}+\frac{5}{16}y^{4} \right)\right]\left[1+\textit{O}\!\left(x^{6},x^{4}y^{2},x^{2}y^{4},y^{6} \right)\right]\,,\] \[L(x,y) = -xy\left[1+\textit{O}\!\left(x^{2},y^{2}\right)\right]\,,\] (126d) \[N(x,y) = -\sqrt{\pi}y\left[1+\textit{O}\!\left(x^{2}\right)\right]\left[1+ \textit{O}\!\left(x^{2},y^{2}\right)\right]\,.\] (126e) * \(x\ll 1\), \(y\gg 1\): \[F(x,y) = \frac{4}{y^{3}}\exp\left(-\frac{1}{x^{2}}\right)\left\{1+\textit{O }\!\left[\exp\left(-\frac{3}{x^{2}}\right),\frac{1}{y^{2}}\right]\right\}\,,\] (126a) \[G(x,y) = -\frac{x}{\sqrt{\pi}y^{3}}\left[1+\textit{O}\!\left(\frac{1}{y^{2 }}\right)\right]\,,\] (126b) \[H(x,y) = \frac{1}{y}\left[1+\textit{O}\!\left(\frac{1}{y^{2}}\right)\right]\,,\] (126c) \[L(x,y) = -\frac{2x}{y}\left[1-\frac{1}{\sqrt{\pi}y}\right]\left[1+\textit{O }\!\left(x^{2},\frac{1}{y^{3}}\right)\right]\,,\] (126d) \[N(x,y) = -\frac{1}{y^{2}}\left[1+\textit{O}\!\left(\frac{1}{y^{2}}\right) \right]\,.\] (126e) #### i.1.5 Unmagnetised Maxwellian dielectric response In this paper, we consider microinstabilities over a wide range of scales, from \(k\rho_{i}\ll 1\) to sub-electron-scale microinstabilities with \(k\rho_{e}\gg 1\). Therefore, the ordering \(k\rho_{s}\sim 1\) assumed in section 2.5.3 for the derivation of the low-frequency dielectric tensor in a magnetised plasma cannot hold for both ions and electrons (as was noted in section 2.5.5 and discussed in section 2.5.6). While the derivation of the dielectric tensor in a strongly magnetised plasma (\(k\rho_{s}\ll 1\)) is straightforwardly performed by asymptotic analysis applied directly to the hot, magnetised plasma conductivity tensor (76), the equivalent calculation for \(k\rho_{s}\gg 1\) is most easily done by direct analysis of the Vlasov equation with \(\mathbf{B}_{0}=0\). In this appendix, we present such a calculation. We begin from (111), but with \(\tilde{\Omega}_{s}=0\) (and ignoring the displacement current): \[\frac{k^{2}c^{2}}{\omega^{2}}\left[\widehat{\delta\mathbf{E}}-\hat{\mathbf{ k}}\left(\hat{\mathbf{k}}\cdot\widehat{\delta\mathbf{E}}\right)\right] = \frac{4\pi\mathrm{i}}{\omega}\widehat{\delta\mathbf{j}}, \tag{112a}\] \[\widehat{\delta\mathbf{j}} = \sum_{s}Z_{s}e\int\mathrm{d}^{3}\mathbf{v}\,\mathbf{v}\,\widehat{\delta f _{s}},\] (112b) \[\left(-\mathrm{i}\omega+\mathrm{i}\mathbf{k}\mathbf{\cdot v}\right) \widehat{\delta f_{s}} = -\frac{Z_{s}e}{m_{s}}\left[\widehat{\delta\mathbf{E}}+\frac{k}{\omega }\mathbf{v}\times\left(\hat{\mathbf{k}}\times\widehat{\delta\mathbf{E}}\right)\right]\mathbf{ \cdot}\,\frac{\partial f_{s0}}{\partial\mathbf{v}}\,. 
\tag{112c}\]
As with the magnetised case, we substitute the perturbed distribution function (112c) into the current (112b):
\[\widehat{\delta\boldsymbol{j}}=-\mathrm{i}\sum_{s}\frac{Z_{s}^{2}e^{2}}{m_{s}}\int\mathrm{d}^{3}\boldsymbol{v}\,\frac{\boldsymbol{v}}{\omega-\boldsymbol{k}\boldsymbol{\cdot}\boldsymbol{v}}\left[\widehat{\delta\boldsymbol{E}}+\frac{k}{\omega}\boldsymbol{v}\times\left(\hat{\boldsymbol{k}}\times\widehat{\delta\boldsymbol{E}}\right)\right]\boldsymbol{\cdot}\,\frac{\partial f_{s0}}{\partial\boldsymbol{v}}\,. \tag{112d}\]
Non-dimensionalising the distribution function via
\[\tilde{f}_{s0}(\tilde{\boldsymbol{v}}_{s})\equiv\frac{\pi^{3/2}v_{\mathrm{ths}}^{3}}{n_{s0}}f_{s0}\left(v_{\mathrm{ths}}\tilde{\boldsymbol{v}}_{s}\right), \tag{112e}\]
we obtain
\[\widehat{\delta\boldsymbol{j}}=-\frac{\mathrm{i}}{4\pi\omega}\sum_{s}\omega_{\mathrm{ps}}^{2}\frac{\tilde{\omega}_{s}}{\pi^{3/2}}\int\mathrm{d}^{3}\tilde{\boldsymbol{v}}_{s}\,\frac{\tilde{\boldsymbol{v}}_{s}}{\tilde{\omega}_{s}-\hat{\boldsymbol{k}}\boldsymbol{\cdot}\tilde{\boldsymbol{v}}_{s}}\left[\widehat{\delta\boldsymbol{E}}+\frac{1}{\tilde{\omega}_{s}}\tilde{\boldsymbol{v}}_{s}\times\left(\hat{\boldsymbol{k}}\times\widehat{\delta\boldsymbol{E}}\right)\right]\boldsymbol{\cdot}\,\frac{\partial\tilde{f}_{s0}}{\partial\tilde{\boldsymbol{v}}_{s}}\,, \tag{112f}\]
where \(\tilde{\omega}_{s}=\omega/kv_{\mathrm{ths}}\). For a Maxwellian distribution, with
\[\tilde{f}_{s0}(\tilde{\boldsymbol{v}}_{s})=\exp\left(-\tilde{v}_{s}^{2}\right), \tag{112g}\]
the second term in the square brackets in (112f) vanishes, leaving
\[\widehat{\delta\boldsymbol{j}}=\boldsymbol{\sigma}\boldsymbol{\cdot}\widehat{\delta\boldsymbol{E}}\,, \tag{112h}\]
where the conductivity tensor is
\[\boldsymbol{\sigma}=\frac{\mathrm{i}}{4\pi\omega}\sum_{s}\omega_{\mathrm{ps}}^{2}\frac{2\tilde{\omega}_{s}}{\pi^{3/2}}\int\mathrm{d}^{3}\tilde{\boldsymbol{v}}_{s}\,\frac{\tilde{\boldsymbol{v}}_{s}\tilde{\boldsymbol{v}}_{s}}{\tilde{\omega}_{s}-\hat{\boldsymbol{k}}\boldsymbol{\cdot}\tilde{\boldsymbol{v}}_{s}}\exp\left(-\tilde{v}_{s}^{2}\right). \tag{112i}\]
The integral can be evaluated to give
\[\boldsymbol{\sigma}=-\frac{\mathrm{i}}{4\pi\omega}\sum_{s}\omega_{\mathrm{ps}}^{2}\tilde{\omega}_{s}\left\{Z(\tilde{\omega}_{s})\left(\boldsymbol{\mathsf{I}}-\hat{\boldsymbol{k}}\hat{\boldsymbol{k}}\right)+2\left[\tilde{\omega}_{s}+\tilde{\omega}_{s}^{2}Z(\tilde{\omega}_{s})\right]\hat{\boldsymbol{k}}\hat{\boldsymbol{k}}\right\}\,. \tag{112j}\]
The dielectric tensor in an unmagnetised Maxwellian plasma for general \(\tilde{\omega}_{s}\) is, therefore,
\[\boldsymbol{\mathfrak{E}}^{\mathrm{(UM)}}=\sum_{s}\frac{\omega_{\mathrm{ps}}^{2}}{\omega^{2}}\tilde{\omega}_{s}\left\{Z(\tilde{\omega}_{s})\left(\boldsymbol{\mathsf{I}}-\hat{\boldsymbol{k}}\hat{\boldsymbol{k}}\right)+2\left[\tilde{\omega}_{s}+\tilde{\omega}_{s}^{2}Z(\tilde{\omega}_{s})\right]\hat{\boldsymbol{k}}\hat{\boldsymbol{k}}\right\}\,. \tag{112k}\]
Note that the left-hand side of (112a) has no component along \(\hat{\boldsymbol{k}}\); projecting (112a) onto \(\hat{\boldsymbol{k}}\), we conclude that for non-zero fluctuations, either \(\hat{\boldsymbol{k}}\boldsymbol{\cdot}\widehat{\delta\boldsymbol{E}}=0\) or \(1+\tilde{\omega}_{s}Z(\tilde{\omega}_{s})=0\). We do not find the conventional longitudinal plasma waves because we have neglected the displacement current in Maxwell's equations. The only modes that satisfy \(1+\tilde{\omega}_{s}Z(\tilde{\omega}_{s})=0\) are strongly damped, with \(\tilde{\omega}_{s}\sim 1\). Thus, all modes satisfying \(\tilde{\omega}_{s}\ll 1\) must be purely transverse.
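For numerical work, \(Z\) is conveniently evaluated via the Faddeeva function \(w(z)\), using \(Z(z)=\mathrm{i}\sqrt{\pi}\,w(z)\). A minimal sketch of (112k) along these lines (our own illustration; the function names and argument conventions are assumptions):

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

def Z(z):
    """Plasma dispersion function: Z(z) = i * sqrt(pi) * w(z)."""
    return 1j * np.sqrt(np.pi) * wofz(z)

def eps_unmagnetised(omega, k, khat, omega_ps, v_ths):
    """Unmagnetised Maxwellian dielectric tensor (112k); khat must be a unit vector."""
    kk = np.outer(khat, khat)
    eps = np.zeros((3, 3), dtype=complex)
    for wp, vt in zip(omega_ps, v_ths):
        ws = omega / (k * vt)  # normalised frequency for species s
        eps += (wp / omega)**2 * ws * (Z(ws) * (np.eye(3) - kk)
                                       + 2 * (ws + ws**2 * Z(ws)) * kk)
    return eps
```

For \(\tilde{\omega}_{s}\ll 1\), \(Z(\tilde{\omega}_{s})\to\mathrm{i}\sqrt{\pi}\), and the transverse part of the output reproduces (112l) below.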
For \(\tilde{\omega}_{s}\ll 1\), the unmagnetised dielectric response therefore simplifies to
\[\boldsymbol{\mathfrak{E}}^{\mathrm{(UM)}}=\mathrm{i}\sqrt{\pi}\left(\boldsymbol{\mathsf{I}}-\hat{\boldsymbol{k}}\hat{\boldsymbol{k}}\right)\sum_{s}\frac{\omega_{\mathrm{ps}}^{2}}{\omega^{2}}\tilde{\omega}_{s}\left[1+\textit{O}(\tilde{\omega}_{s})\right]\,. \tag{112l}\]

#### G.1.6 Validity of approximation \(\textbf{M}_{s}\approx\textbf{M}_{s}^{(0)}\) for large or small \(k_{\parallel}\rho_{s}\) and \(k_{\perp}\rho_{s}\)

In carrying out the expansion of the Maxwellian dielectric tensor (17) in \(\tilde{\omega}_{s\parallel}\), we assumed that \(k\rho_{s}\sim 1\); however, in general, we will wish to consider microinstabilities that exist at typical wavenumbers \(k\rho_{s}\ll 1\) or \(k\rho_{s}\gg 1\). Indeed, since the mass ratio \(\mu_{e}=m_{e}/m_{i}\) is very small, if we wish to consider the combined response of both species, it is inevitable that for one of them, \(k\rho_{s}\ll 1\) or \(k\rho_{s}\gg 1\). Thus, it remains to assess when the approximation \(\textbf{M}_{s}\approx\textbf{M}_{s}^{(0)}\) is valid in these limits. We show in this appendix that this approximation is appropriate in the limit \(k_{\parallel}\rho_{s}\gg 1\), for arbitrary \(k_{\perp}\rho_{s}\); however, for \(k_{\parallel}\rho_{s}\ll 1\), the approximation breaks down for some dielectric components; indeed, in the limit \(k_{\parallel}\rho_{s},k_{\perp}\rho_{s}\ll 1\), it breaks down for all but two components. For these instances, an alternative expression for the dielectric tensor is derived below.

The validity of the \(k_{\parallel}\rho_{s}\gg 1\) limit is most simply demonstrated by comparing the components of \(\textbf{M}_{s}^{(0)}\) to the unmagnetised dielectric response (18). Recalling that
\[(\textbf{M}_{s}^{(0)})_{11}=\mathrm{i}\tilde{\omega}_{s\parallel}\frac{k^{2}}{k_{\parallel}^{2}}F\big{(}k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big{)}\,, \tag{19a}\]
\[(\textbf{M}_{s}^{(0)})_{12}=\mathrm{i}\tilde{\omega}_{s\parallel}\frac{k}{k_{\parallel}}G\big{(}k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big{)}\,, \tag{19b}\]
\[(\textbf{M}_{s}^{(0)})_{21}=-(\textbf{M}_{s}^{(0)})_{12}\,, \tag{19c}\]
\[(\textbf{M}_{s}^{(0)})_{22}=\mathrm{i}\tilde{\omega}_{s\parallel}H\big{(}k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big{)}\,, \tag{19d}\]
and applying the asymptotic results (195), we find
\[(\textbf{M}_{s}^{(0)})_{11}\approx\mathrm{i}\sqrt{\pi}\frac{\tilde{\omega}_{s\parallel}k_{\parallel}}{k}\,, \tag{19a}\]
\[(\textbf{M}_{s}^{(0)})_{12}\approx-2\mathrm{i}\frac{\tilde{\omega}_{s\parallel}k_{\parallel}^{2}}{k^{2}}\,\frac{1}{k\rho_{s}}\,, \tag{19b}\]
\[(\textbf{M}_{s}^{(0)})_{22}\approx\mathrm{i}\sqrt{\pi}\frac{\tilde{\omega}_{s\parallel}k_{\parallel}}{k}\,. \tag{19c}\]
We note these expressions are valid for arbitrary \(k_{\perp}\rho_{s}\). The equivalent components of the unmagnetised (normalised) dielectric tensor \(\textbf{M}_{s}\approx\omega^{2}\boldsymbol{\mathfrak{E}}_{s}^{\mathrm{(UM)}}/\omega_{\mathrm{ps}}^{2}\) are
\[(\textbf{M}_{s})_{11}=\mathrm{i}\sqrt{\pi}\tilde{\omega}_{s}\,, \tag{19a}\]
\[(\textbf{M}_{s})_{12}=(\textbf{M}_{s})_{21}=0\,, \tag{19b}\]
\[(\textbf{M}_{s})_{22}=\mathrm{i}\sqrt{\pi}\tilde{\omega}_{s}\,.
\tag{19c}\] Noting that \(\tilde{\omega}_{s}=\tilde{\omega}_{s\parallel}k_{\parallel}/k\), we see that the diagonal terms are identical, while the non-zero \(\boldsymbol{e}_{1}\boldsymbol{e}_{2}\) term present in the \(k\rho_{s}\gg 1\) limit of \(\boldsymbol{\mathsf{M}}_{s}^{(0)}\) becomes asymptotically small in \(1/k\rho_{s}\ll 1\). To demonstrate that the approximation \(\boldsymbol{\mathsf{M}}_{s}\approx\boldsymbol{\mathsf{M}}_{s}^{(0)}\) is not accurate in the limit \(k_{\parallel}\rho_{s}\ll 1\), we consider the full Maxwellian dielectric tensor assuming \(\tilde{\omega}_{s\parallel}\lesssim 1\) and \(k_{\parallel}\rho_{s}\ll 1\). If this long-wavenumber dielectric tensor subsequently evaluated at low frequencies \(\tilde{\omega}_{s\parallel}\ll 1\) gives the same result as \(\boldsymbol{\mathsf{M}}_{s}^{(0)}\) for any particular component of \(\boldsymbol{\mathsf{M}}_{s}\), then the approximation for that component is reasonable; otherwise, the approximation has to be modified at sufficiently small \(k_{\parallel}\rho_{s}\ll 1\). If \(k_{\parallel}\rho_{s}\ll 1\) and \(\tilde{\omega}_{s\parallel}\lesssim 1\), it follows that for \(n\neq 0\), \[|\zeta_{sn}|\equiv\left|\tilde{\omega}_{s\parallel}-\frac{n}{k_{\parallel} \tilde{\rho}_{s}}\right|\gg 1\,. \tag{192}\] In this case, we can simplify the plasma dispersion function via a large-argument expansion: \[Z(\zeta_{sn})\approx-\frac{1}{\zeta_{sn}}-\frac{1}{2\zeta_{sn}^{3}}+\ldots \tag{110}\] The long-wavelength dielectric tensor is then \[(\textbf{M}_{s})_{xx} \approx-2\tilde{\omega}_{s\parallel}\sum_{n=-\infty}^{\infty} \frac{n^{2}}{\zeta_{sn}k_{\perp}^{2}\tilde{\rho}_{s}^{2}}\exp\left(-\frac{k_{ \perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2}\tilde {\rho}_{s}^{2}}{2}\right), \tag{111a}\] \[(\textbf{M}_{s})_{xy} \approx-\mathrm{i}\tilde{\omega}_{s\parallel}\sum_{n=-\infty}^{ \infty}\frac{n}{\zeta_{sn}}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}} {2}\right)\left[I_{n}^{\prime}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}} {2}\right)-I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right) \right]\,,\] (111b) \[(\textbf{M}_{s})_{xz} \approx-\tilde{\omega}_{s\parallel}\sum_{n=-\infty}^{\infty} \frac{n}{\zeta_{sn}^{2}k_{\perp}\tilde{\rho}_{s}}\exp\left(-\frac{k_{\perp}^{2 }\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{ s}^{2}}{2}\right),\] (111c) \[(\textbf{M}_{s})_{yx} =-(\textbf{M}_{s})_{xy},\] (111d) \[(\textbf{M}_{s})_{yy} \approx-\tilde{\omega}_{s\parallel}\Bigg{[}\sum_{n\in\mathbb{Z}^ {\pm}}\bigg{\{}\frac{1}{\zeta_{sn}}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho }_{s}^{2}}{2}\right)\] \[\times\left[\left(\frac{2n^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2} }+k_{\perp}^{2}\tilde{\rho}_{s}^{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\right)-k_{\perp}^{2}\tilde{\rho}_{s}^{2}I_{n}\!\left( \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right]\bigg{\}}\] \[-Z\!\left(\tilde{\omega}_{s\parallel}\right)k_{\perp}^{2}\tilde{ \rho}_{s}^{2}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right) \left\{I_{0}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)-I_{1} \!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right\}\Bigg{]}\,,\] (111e) \[(\textbf{M}_{s})_{yz} \approx\mathrm{i}\tilde{\omega}_{s\parallel}\Bigg{[}\sum_{n\in \mathbb{Z}^{\pm}}\bigg{\{}\frac{1}{2\zeta_{sn}^{2}}k_{\perp}\tilde{\rho}_{s} \exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\left[I_{n}^{ 
\prime}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)-I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right]\bigg{\}}\]
\[\qquad+\left[1+\tilde{\omega}_{s\parallel}Z\!\left(\tilde{\omega}_{s\parallel}\right)\right]k_{\perp}\tilde{\rho}_{s}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\left\{I_{0}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)-I_{1}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right\}\Bigg{]}, \tag{111f}\]
\[(\textbf{M}_{s})_{zx}=(\textbf{M}_{s})_{xz}\,, \tag{111g}\]
\[(\textbf{M}_{s})_{zy}=-(\textbf{M}_{s})_{yz}\,, \tag{111h}\]
\[(\textbf{M}_{s})_{zz}\approx-\tilde{\omega}_{s\parallel}\Bigg{[}\sum_{n\in\mathbb{Z}^{\pm}}\bigg{\{}\frac{1}{\zeta_{sn}}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\bigg{\}}-2\tilde{\omega}_{s\parallel}\left[1+\tilde{\omega}_{s\parallel}Z\!\left(\tilde{\omega}_{s\parallel}\right)\right]\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{0}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\Bigg{]}\,, \tag{111i}\]
where \(\mathbb{Z}^{\pm}\) denotes the non-zero integers. We note that the error associated with neglecting higher-order terms in \(\zeta_{sn}\) is \(\textit{O}(k_{\parallel}^{2}\rho_{s}^{2})\). Next, using
\[-\frac{1}{\zeta_{sn}}=\frac{1}{n/k_{\parallel}\tilde{\rho}_{s}-\tilde{\omega}_{s\parallel}}\approx\frac{k_{\parallel}\tilde{\rho}_{s}}{n}\left[1+\frac{\tilde{\omega}_{s\parallel}k_{\parallel}\tilde{\rho}_{s}}{n}+\textit{O}\!\left(\frac{\omega^{2}}{\Omega_{e}^{2}}\right)\right], \tag{113}\]
we can isolate the dependence of each dielectric tensor component on \(\tilde{\omega}_{s\parallel}\). Since \(I_{-n}=I_{n}\), any sum whose summand involves an odd overall power of \(n\) vanishes; the leading-order contribution in \(k_{\parallel}\tilde{\rho}_{s}\) from the summation terms therefore arises from the lowest-order term of the expansion (113) for which the summand contains an even overall power of \(n\).
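This parity bookkeeping can be checked directly (a quick numerical sketch, with an arbitrary Bessel argument):

```python
import numpy as np
from scipy.special import ive  # ive(n, a) = exp(-a) * I_n(a); note I_{-n} = I_n

a = 2.0
n = np.arange(-60, 61)
print(np.sum(n * ive(n, a)))     # odd power of n: the (+n, -n) pairs cancel, ~ 0
print(np.sum(n**2 * ive(n, a)))  # even power survives: equals a, by (114b)
```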
The resulting approximate expressions are \[(\textbf{M}_{s})_{xx}\approx\frac{2k_{\parallel}^{2}}{k_{\perp}^{2}}\tilde{ \omega}_{s\parallel}^{2}\left[1-\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2 }\right)I_{0}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right], \tag{114a}\] \[(\mathbf{M}_{s})_{xy} \approx {\rm i}\tilde{\omega}_{s\parallel}k_{\parallel}\tilde{\rho}_{s}\exp \left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\left[I_{0}\biggl{(} \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)}-I_{1}\biggl{(}\frac{k_{ \perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)}\right]\,, \tag{116b}\] \[(\mathbf{M}_{s})_{xz} \approx -4k_{\parallel}^{2}\tilde{\rho}_{s}^{2}\frac{k_{\parallel}}{k_{ \perp}}\tilde{\omega}_{s\parallel}^{2}\sum_{n=1}^{\infty}\frac{1}{n^{2}}\exp \left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\biggl{(}\frac{k _{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)}\,,\] (116c) \[(\mathbf{M}_{s})_{yy} \approx \tilde{\omega}_{s\parallel}\exp\left(-\frac{k_{\perp}^{2}\tilde{ \rho}_{s}^{2}}{2}\right)\left\{Z\bigl{(}\tilde{\omega}_{s\parallel}\bigr{)}\, k_{\perp}^{2}\tilde{\rho}_{s}^{2}\left[I_{0}\biggl{(}\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\biggr{)}-I_{1}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho }_{s}^{2}}{2}\biggr{)}\right]\right.\] (116d) \[\left.+2\tilde{\omega}_{s\parallel}k_{\parallel}^{2}\tilde{\rho}_ {s}^{2}\sum_{n=1}^{\infty}\left[\left(\frac{2}{k_{\perp}^{2}\tilde{\rho}_{s}^ {2}}+\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{n^{2}}\right)I_{n}\biggl{(}\frac {k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)}-\frac{k_{\perp}^{2}\tilde{\rho }_{s}^{2}}{n^{2}}I_{n}^{\prime}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2} }{2}\biggr{)}\right]\right\}\!,\] \[(\mathbf{M}_{s})_{yz} \approx {\rm i}\tilde{\omega}_{s\parallel}\left[1+\tilde{\omega}_{s \parallel}Z\bigl{(}\tilde{\omega}_{s\parallel}\bigr{)}\right]\] (116e) \[\times k_{\perp}\tilde{\rho}_{s}\exp\left(-\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\right)\left[I_{0}\biggl{(}\frac{k_{\perp}^{2}\tilde{ \rho}_{s}^{2}}{2}\biggr{)}-I_{1}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2 }}{2}\biggr{)}\right]\,,\] \[(\mathbf{M}_{s})_{zz} \approx 2\tilde{\omega}_{s\parallel}^{2}\left[1+\tilde{\omega}_{s \parallel}Z\bigl{(}\tilde{\omega}_{s\parallel}\bigr{)}\right]\exp\left(-\frac{ k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{0}\biggl{(}\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\biggr{)}\, \tag{116f}\] where we have again used the sum identities (116). Note that we have retained a term in \((\mathbf{M}_{s})_{yy}\) which is quadratic in \(k_{\parallel}\tilde{\rho}_{s}\), even though there exists another term which is independent of \(k_{\parallel}\tilde{\rho}_{s}\). This is because the latter term becomes arbitrarily small in the limit \(k_{\perp}\rho_{s}\ll 1\), whereas the former is independent of \(k_{\perp}\rho_{s}\); hence, if \(k_{\perp}\rho_{s}\ll k_{\parallel}\rho_{s}\), the latter term can become dominant. 
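The large-argument expansion (110) underpinning these expressions can itself be checked against the exact \(Z\) (a minimal sketch, using the Faddeeva-function representation \(Z(z)=\mathrm{i}\sqrt{\pi}\,w(z)\)):

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

Z = lambda z: 1j * np.sqrt(np.pi) * wofz(z)

zeta = 20.0  # |zeta_sn| >> 1 whenever k_par * rho_s << 1 and n != 0
print(Z(zeta))                   # exact; imaginary part is exp(-zeta^2)-small
print(-1/zeta - 1/(2*zeta**3))   # two-term expansion (110)
```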
Now considering the limit \(\tilde{\omega}_{s\parallel}\ll 1\), while holding \(k_{\parallel}\rho_{s}\ll 1\) at some fixed value, the plasma dispersion function can now be approximated by its small-argument expansion \[Z\bigl{(}\tilde{\omega}_{s\parallel}\bigr{)}\approx i\sqrt{\pi}\,, \tag{116g}\] to give \[(\mathbf{M}_{s})_{xx} \approx \frac{2k_{\parallel}^{2}}{k_{\perp}^{2}}\tilde{\omega}_{s \parallel}^{2}\left[1-\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)I_{0}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)} \right], \tag{116a}\] \[(\mathbf{M}_{s})_{xy} \approx {\rm i}\tilde{\omega}_{s\parallel}k_{\parallel}\tilde{\rho}_{s} \exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\left[I_{0} \biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)}-I_{1}\biggl{(} \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)}\right]\,,\] (116b) \[(\mathbf{M}_{s})_{xz} \approx -4k_{\parallel}^{2}\tilde{\rho}_{s}^{2}\frac{k_{\parallel}}{k_{ \perp}}\tilde{\omega}_{s\parallel}^{2}\sum_{n=1}^{\infty}\frac{1}{n^{2}}\exp \left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\biggl{(}\frac{k_ {\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)}\,,\] (116c) \[(\mathbf{M}_{s})_{yy} \approx \tilde{\omega}_{s\parallel}\exp\left(-\frac{k_{\perp}^{2}\tilde{ \rho}_{s}^{2}}{2}\right)\left\{{\rm i}\sqrt{\pi}k_{\perp}^{2}\tilde{\rho}_{s}^{2 }\left[I_{0}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)}-I_{1} \biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)}\right]\right.\] (116d) \[+2\tilde{\omega}_{s\parallel}^{2}k_{\parallel}^{2}\tilde{\rho}_{s}^ {2}\sum_{n=1}^{\infty}\left[\left(\frac{2}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}+ \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{n^{2}}\right)I_{n}\biggl{(}\frac{k_{ \perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)}-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{ n^{2}}I_{n}^{\prime}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)} \right]\Bigg{\}},\] \[(\mathbf{M}_{s})_{yz} \approx {\rm i}\tilde{\omega}_{s\parallel}\left[1+{\rm i}\sqrt{\pi}\tilde{ \omega}_{s\parallel}\right]\] (116e) \[\times k_{\perp}\tilde{\rho}_{s}\exp\left(-\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\right)\left[I_{0}\biggl{(}\frac{k_{\perp}^{2}\tilde{ \rho}_{s}^{2}}{2}\biggr{)}-I_{1}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{ 2}\biggr{)}\right]\,,\] \[(\mathbf{M}_{s})_{zz} \approx 2\tilde{\omega}_{s\parallel}^{2}\left[1+{\rm i}\sqrt{\pi} \tilde{\omega}_{s\parallel}\right]\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{ 2}\right)I_{0}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)}. 
\tag{116f}\] For comparison, we state below the long-wavelength limit of \(\mathbf{M}_{s}^{(0)}\) using asymptotic expressions (G 36): \[(\textbf{{M}}_{s}^{(0)})_{xx} = 4{\rm i}\sqrt{\pi}\frac{\tilde{\omega}_{s}\|}{k_{\perp}^{2}\tilde{ \rho}_{s}^{2}}\exp\left(-\frac{1}{k_{\parallel}^{2}\tilde{\rho}_{s}^{2}}\right) \exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{1}\bigg{(} \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\,\] (G 59a) \[(\textbf{{M}}_{s}^{(0)})_{xy} = {\rm i}\tilde{\omega}_{s\parallel}\|k_{\parallel}|\tilde{\rho}_{s }\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\left[I_{0} \bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}-I_{1}\bigg{(} \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\right]\,\] (G 59b) \[(\textbf{{M}}_{s}^{(0)})_{xz} = -4{\rm i}\sqrt{\pi}\frac{\tilde{\omega}_{s\parallel}}{k_{\perp}k _{\parallel}\tilde{\rho}_{s}^{2}}\exp\left(-\frac{1}{k_{\parallel}^{2}\tilde{ \rho}_{s}^{2}}\right)\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)I_{1}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\,\] (G 59c) \[(\textbf{{M}}_{s}^{(0)})_{yy} = {\rm i}\sqrt{\pi}\tilde{\omega}_{s\parallel}k_{\perp}^{2}\tilde{ \rho}_{s}^{2}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right) \left[I_{0}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}-I_{1} \bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\right],\] (G 59d) \[(\textbf{{M}}_{s}^{(0)})_{yz} = {\rm i}\tilde{\omega}_{s\parallel}k_{\perp}\tilde{\rho}_{s}\exp \left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\left[I_{0}\bigg{(} \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}-I_{1}\bigg{(}\frac{k_{ \perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\right]\,\] (G 59e) \[(\textbf{{M}}_{s}^{(0)})_{zz} = 4{\rm i}\sqrt{\pi}\frac{\tilde{\omega}_{s\parallel}}{k_{\parallel }^{2}\tilde{\rho}_{s}^{2}}\exp\left(-\frac{1}{k_{\parallel}^{2}\tilde{\rho}_{ s}^{2}}\right)\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{1} \bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\.\] (G 59f) Assuming \(k_{\perp}\rho_{s}\sim 1\), we observe that while three of the six unique dielectric tensor components are identical for both \(\tilde{\omega}_{s\parallel}\to 0\), \(k_{\parallel}\rho_{s}\ll 1\) fixed, and \(k_{\parallel}\rho_{s}\to 0\), \(\tilde{\omega}_{s\parallel}\ll 1\) fixed [\((\textbf{{M}}_{s})_{xy}\), \((\textbf{{M}}_{s})_{yy}\), and \((\textbf{{M}}_{s})_{yz}\)], the other three [\((\textbf{{M}}_{s})_{xx}\), \((\textbf{{M}}_{s})_{xz}\), and \((\textbf{{M}}_{s})_{zz}\)] are not. Instead, the dominant terms are the quadratic terms \((\textbf{{M}}_{s}^{(1)})_{xx}\), \((\textbf{{M}}_{s}^{(1)})_{xz}\), and \((\textbf{{M}}_{s}^{(1)})_{zz}\) in the \(\tilde{\omega}_{s\parallel}\ll 1\) expansion. In the limit \(k_{\perp}\rho_{s}\ll 1\), \((\textbf{{M}}_{s})_{yy}\) also departs from the approximation \((\textbf{{M}}_{s}^{(0)})_{yy}\) for sufficiently small \(k_{\perp}\rho_{s}\) as compared to \(k_{\parallel}\rho_{s}\), instead being accurately described by \((\textbf{{M}}_{s}^{(1)})_{yy}\). As a consequence, we must assess the conditions under which one approximation or the other is valid. 
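This competition is easy to see numerically: comparing the magnitudes of the first-order result (G 59a) and the second-order approximation (G 58a) (a sketch with illustrative values; the function names are our own):

```python
import numpy as np
from scipy.special import ive  # ive(n, a) = exp(-a) * I_n(a)

def first_order_xx(w_par, x, y):
    """|omega-tilde * (M^(0)_s)_xx| from (G 59a): exponentially small in 1/x^2."""
    return 4 * np.sqrt(np.pi) * w_par / y**2 * np.exp(-1 / x**2) * ive(1, y**2 / 2)

def second_order_xx(w_par, x, y):
    """|(M_s)_xx| from (G 58a), quadratic in the normalised frequency."""
    return 2 * (x / y)**2 * w_par**2 * (1 - ive(0, y**2 / 2))

w_par, y = 1e-3, 1.0
for x in (0.8, 0.4, 0.2):  # decreasing k_par * rho_s at fixed k_perp * rho_s
    print(x, first_order_xx(w_par, x, y), second_order_xx(w_par, x, y))
```

The second-order term overtakes the first once \(\exp(-1/k_{\parallel}^{2}\rho_{s}^{2})\lesssim\tilde{\omega}_{s\parallel}\), consistent with the logarithmic criterion given below.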
The question is most simply answered by observing that the expressions for \((\mathbf{M}_{s}^{(0)})_{xx}\), \((\mathbf{M}_{s}^{(0)})_{xz}\) and \((\mathbf{M}_{s}^{(0)})_{zz}\) from (G 59a), (G 59c) and (G 59f) are exponentially small in \(1/k_{\parallel}^{2}\tilde{\rho}_{s}^{2}\); thus, for \(k_{\parallel}\rho_{s}\ll[\log\left(1/\tilde{\omega}_{s\parallel}\right)]^{-1/2}\), we must use the approximations (G 58a), (G 58c) and (G 58f) for \((\mathbf{M}_{s})_{xx}\), \((\mathbf{M}_{s})_{xz}\) and \((\mathbf{M}_{s})_{zz}\). In addition, if \(k_{\perp}^{2}\rho_{s}^{2}\ll\tilde{\omega}_{s\parallel}k_{\parallel}^{2}\rho_{s}^{2}\ll 1\), then
\[(\mathbf{M}_{s})_{yy}\approx\frac{2\omega_{\mathrm{p}s}^{2}}{\omega^{2}}\tilde{\omega}_{s\parallel}^{2}k_{\parallel}^{2}\tilde{\rho}_{s}^{2}\tag{G 60}\]
becomes the appropriate approximation for \((\mathbf{M}_{s})_{yy}\).

#### G.1.7 Calculation of second-order corrections to dispersion relation

In this appendix, we justify the relations (K 20) used in appendix K; that is, for \(k_{\parallel}\rho_{s}\ll 1\),
\[\frac{\left[(\mathbf{M}_{s})_{13}\right]^{2}}{\bigl(\mathbf{M}_{s}^{(1)}\bigr)_{33}}\lesssim(\mathbf{M}_{s})_{11}\,,\tag{G 61a}\]
\[\frac{(\mathbf{M}_{s})_{13}(\mathbf{M}_{s})_{23}}{\bigl(\mathbf{M}_{s}^{(1)}\bigr)_{33}}\lesssim\tilde{\omega}_{s\parallel}(\mathbf{M}_{s})_{12}\ll(\mathbf{M}_{s})_{12}\,,\tag{G 61b}\]
\[\frac{\left[(\mathbf{M}_{s})_{23}\right]^{2}}{\bigl(\mathbf{M}_{s}^{(1)}\bigr)_{33}}\lesssim\tilde{\omega}_{s\parallel}(\mathbf{M}_{s})_{22}\ll(\mathbf{M}_{s})_{22}\,.\tag{G 61c}\]
We also prove the identity
\[(\mathbf{M}_{e}^{(1)}+\mathbf{M}_{i}^{(1)})_{11}-\frac{\bigl[(\mathbf{M}_{e}^{(1)}+\mathbf{M}_{i}^{(1)})_{13}\bigr]^{2}}{2(\mathbf{M}_{e}^{(1)})_{33}}=-\frac{4}{3}W_{e}-\frac{4}{3}W_{i}-\frac{1}{4}\left(L_{e}+L_{i}\right)^{2}\]
used to derive the dispersion relation in appendix K. To complete the first task, we begin with the exact expressions for the dielectric components of a Maxwellian plasma, and substitute the low-frequency approximations derived above for the \(\mathbf{M}_{s}^{(0)}\) and \(\mathbf{M}_{s}^{(1)}\) contributions to the six unique components.
This gives the dielectric components directly in terms of the special functions \(G(x,y)\), \(H(x,y)\), \(L(x,y)\), \(N(x,y)\), \(W(x,y)\) and \(Y(x,y)\):
\[(\mathbf{M}_{s})_{11}\approx-\frac{4k^{2}}{3k_{\parallel}^{2}}\tilde{\omega}_{s\parallel}^{2}W\bigl(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\bigr)+2\tilde{\omega}_{s\parallel}^{2}\left[\frac{k_{\perp}^{2}}{k^{2}}+\frac{k_{\perp}}{k_{\parallel}}L\bigl(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\bigr)\right],\tag{G 62a}\]
\[(\mathbf{M}_{s})_{12}\approx-\mathrm{i}\frac{k}{k_{\parallel}}\tilde{\omega}_{s\parallel}G\bigl(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\bigr)\,,\tag{G 62b}\]
\[(\mathbf{M}_{s})_{13}\approx-\tilde{\omega}_{s\parallel}^{2}\left[\frac{2k_{\perp}k_{\parallel}}{k^{2}}+L\bigl(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\bigr)\right],\tag{G 62c}\]
\[(\mathbf{M}_{s})_{22}\approx\mathrm{i}\tilde{\omega}_{s\parallel}H\bigl(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\bigr)-\frac{4}{3}\tilde{\omega}_{s\parallel}^{2}Y\bigl(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\bigr)\,,\tag{G 62d}\]
\[(\mathbf{M}_{s})_{23}\approx-\frac{k_{\parallel}}{k}\tilde{\omega}_{s\parallel}^{2}N\bigl(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\bigr)\,,\tag{G 62e}\]
\[(\mathbf{M}_{s})_{33}\approx\frac{2k_{\parallel}^{2}}{k^{2}}\tilde{\omega}_{s\parallel}^{2}\,.\tag{G 62f}\]
We then apply the \(k_{\parallel}\rho_{s}\ll 1\) limits of the aforementioned special functions, using the asymptotic expressions collected earlier in this appendix:
\[(\mathbf{M}_{s})_{11}\approx2\tilde{\omega}_{s\parallel}^{2}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{0}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right),\tag{G 63a}\]
\[(\mathbf{M}_{s})_{12}\approx\mathrm{i}\tilde{\omega}_{s\parallel}k\tilde{\rho}_{s}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\left[I_{0}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)-I_{1}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right],\tag{G 63b}\]
\[(\mathbf{M}_{s})_{13}\approx-\tilde{\omega}_{s\parallel}^{2}\frac{2k_{\parallel}}{k}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{0}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right),\tag{G 63c}\]
\[(\mathbf{M}_{s})_{22}\approx\mathrm{i}\sqrt{\pi}\tilde{\omega}_{s\parallel}k_{\perp}^{2}\tilde{\rho}_{s}^{2}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\left[I_{0}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)-I_{1}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right]+\tilde{\omega}_{s\parallel}^{2}k_{\parallel}^{2}\tilde{\rho}_{s}^{2}\,,\tag{G 63d}\]
\[(\mathbf{M}_{s})_{23}\approx\sqrt{\pi}\tilde{\omega}_{s\parallel}^{2}k_{\parallel}\tilde{\rho}_{s}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\left[I_{0}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)-I_{1}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right],\tag{G 63e}\]
\[(\mathbf{M}_{s})_{33}\approx\frac{2k_{\parallel}^{2}}{k^{2}}\tilde{\omega}_{s\parallel}^{2}\,.\tag{G 63f}\]
We can now make the relevant comparisons presented in (G 61), and obtain the desired results:
\[\frac{\left[(\mathbf{M}_{s})_{13}\right]^{2}}{(\mathbf{M}_{s})_{11}(\mathbf{M}_{s})_{33}}\approx\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{0}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\lesssim1,\tag{G 64a}\]
\[\frac{(\mathbf{M}_{s})_{13}(\mathbf{M}_{s})_{23}}{(\mathbf{M}_{s})_{12}(\mathbf{M}_{s})_{33}}\approx\mathrm{i}\sqrt{\pi}\tilde{\omega}_{s\parallel}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{0}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\lesssim\tilde{\omega}_{s\parallel},\tag{G 64b}\]
\[\frac{\left[(\mathbf{M}_{s})_{23}\right]^{2}}{(\mathbf{M}_{s})_{22}(\mathbf{M}_{s})_{33}}\approx-\frac{\mathrm{i}\sqrt{\pi}}{2}\frac{k^{2}}{k_{\perp}^{2}}\tilde{\omega}_{s\parallel}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\left[I_{0}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)-I_{1}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right]\lesssim\tilde{\omega}_{s\parallel}.\tag{G 64c}\]

### CE temperature-gradient-driven terms

For the CE temperature-gradient-driven term arising from a Krook operator, which takes the form
\[\tilde{f}_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})=-\eta_{s}\tilde{v}_{s\parallel}\left(\tilde{v}_{s}^{2}-\frac{5}{2}\right)\exp\left(-\tilde{v}_{s}^{2}\right),\tag{110}\]
it follows (assuming \(\eta_{e}^{R}=0\)) that
\[\Lambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})=-\eta_{s}\tilde{v}_{s\perp}\left(\tilde{v}_{s}^{2}-\frac{5}{2}\right)\exp\left(-\tilde{v}_{s}^{2}\right),\tag{111}\]
and
\[\Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})=-\frac{\eta_{s}}{\tilde{\omega}_{s\parallel}}\tilde{v}_{s\perp}\left(\tilde{v}_{s}^{2}-\frac{5}{2}\right)\exp\left(-\tilde{v}_{s}^{2}\right)+O(\eta_{s}).
\tag{112}\] Then, to leading order in \(\eta_{s}\), \[(\boldsymbol{P}_{s})_{xx}=\frac{2}{\sqrt{\pi}}\eta_{s}\sum_{n=- \infty}^{\infty}\left[\frac{n^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}\int_{C_{ L}}\frac{\exp\left(-\tilde{v}_{s\parallel}^{2}\right)\mathrm{d}\tilde{v}_{s \parallel}}{\tilde{v}_{s\parallel}-\zeta_{sn}}\right.\] \[\left.\times\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\, \tilde{v}_{s\perp}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}\exp \left(-\tilde{v}_{s\perp}^{2}\right)\left(\tilde{v}_{s}^{2}-\frac{5}{2}\right) \right], \tag{113a}\] \[(\boldsymbol{P}_{s})_{xy}=\frac{2\mathrm{i}}{\sqrt{\pi}}\eta_{s} \sum_{n=-\infty}^{\infty}\left[\frac{n}{k_{\perp}\tilde{\rho}_{s}}\int_{C_{L}} \frac{\exp\left(-\tilde{v}_{s\parallel}^{2}\right)\mathrm{d}\tilde{v}_{s \parallel}}{\tilde{v}_{s\parallel}-\zeta_{sn}}\right.\] \[\left.\times\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\, \tilde{v}_{s\perp}^{2}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})J_{n} ^{\prime}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})\exp\left(-\tilde{v}_{s \perp}^{2}\right)\left(\tilde{v}_{s}^{2}-\frac{5}{2}\right)\right]\,,\] (113b) \[(\boldsymbol{P}_{s})_{xz}=\frac{2}{\sqrt{\pi}}\eta_{s}\sum_{n=- \infty}^{\infty}\left[\frac{n}{k_{\perp}\tilde{\rho}_{s}}\int_{C_{L}}\frac{ \tilde{v}_{s\parallel}\exp\left(-\tilde{v}_{s\parallel}^{2}\right)\mathrm{d} \tilde{v}_{s\parallel}}{\tilde{v}_{s\parallel}-\zeta_{sn}}\right.\] (113c) \[\left.\times\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\, \tilde{v}_{s\perp}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}\exp \left(-\tilde{v}_{s\perp}^{2}\right)\left(\tilde{v}_{s}^{2}-\frac{5}{2}\right) \right],\] (113d) \[(\boldsymbol{P}_{s})_{yx}=-(\boldsymbol{P}_{s})_{xy}\,,\] (113e) \[(\boldsymbol{P}_{s})_{yy}=\frac{2}{\sqrt{\pi}}\eta_{s}\sum_{n=- \infty}^{\infty}\left[\int_{C_{L}}\frac{\exp\left(-\tilde{v}_{s\parallel}^{2} \right)\mathrm{d}\tilde{v}_{s\parallel}}{\tilde{v}_{s\parallel}-\zeta_{sn}}\right.\] \[\left.\times\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\, \tilde{v}_{s\perp}^{3}J_{n}^{\prime}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s \perp})^{2}\exp\left(-\tilde{v}_{s\perp}^{2}\right)\left(\tilde{v}_{s}^{2}- \frac{5}{2}\right)\right],\] (113f) \[(\boldsymbol{P}_{s})_{yz}=-\frac{2\mathrm{i}}{\sqrt{\pi}}\eta_{s} \sum_{n=-\infty}^{\infty}\left[\int_{C_{L}}\frac{\tilde{v}_{s\parallel}\exp \left(-\tilde{v}_{s\parallel}^{2}\right)\mathrm{d}\tilde{v}_{s\parallel}}{ \tilde{v}_{s\parallel}-\zeta_{sn}}\right.\] \[\left.\times\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\, \tilde{v}_{s\perp}^{2}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})J_{n}^{ \prime}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})\exp\left(-\tilde{v}_{s \perp}^{2}\right)\left(\tilde{v}_{s}^{2}-\frac{5}{2}\right)\right]\,,\] (113g) \[(\boldsymbol{P}_{s})_{zx}=(\boldsymbol{P}_{s})_{xz}\,,\] (113h) \[(\boldsymbol{P}_{s})_{zy}=-(\boldsymbol{P}_{s})_{yz}\,,\] (113h) \[(\boldsymbol{P}_{s})_{zz}=\frac{2}{\sqrt{\pi}}\eta_{s}\sum_{n=- \infty}^{\infty}\left[\int_{C_{L}}\frac{\tilde{v}_{s\parallel}^{2}\exp\left(- \tilde{v}_{s\parallel}^{2}\right)\mathrm{d}\tilde{v}_{s\parallel}}{\tilde{v}_ {s\parallel}-\zeta_{sn}}\right.\] \[\times\int_{0}^{\infty}{\rm d}\tilde{v}_{s\perp}\,\tilde{v}_{s\perp}J_{n}(k_{ \perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}\exp\left(-\tilde{v}_{s\perp}^{2} \right)\left(\tilde{v}_{s}^{2}-\frac{5}{2}\right)\right].\] In addition to the plasma-dispersion-function identities (G 15) and Bessel-function identities (G 16), we use 
\[\frac{1}{\sqrt{\pi}}\int_{C_{L}}\frac{u^{3}\exp\left(-u^{2}\right)\mathrm{d}u}{u-z}=\frac{1}{2}+z^{2}\left[1+zZ(z)\right]\,,\tag{G 79a}\]
\[\frac{1}{\sqrt{\pi}}\int_{C_{L}}\frac{u^{4}\exp\left(-u^{2}\right)\mathrm{d}u}{u-z}=z\left\{\frac{1}{2}+z^{2}\left[1+zZ(z)\right]\right\}\,,\tag{G 79b}\]
and
\[\int_{0}^{\infty}\mathrm{d}t\,t^{3}J_{n}(\alpha t)^{2}\exp\left(-t^{2}\right)=\frac{1}{2}\exp\left(-\frac{\alpha^{2}}{2}\right)\left\{I_{n}\left(\frac{\alpha^{2}}{2}\right)+\frac{\alpha^{2}}{2}\left[I_{n}^{\prime}\left(\frac{\alpha^{2}}{2}\right)-I_{n}\left(\frac{\alpha^{2}}{2}\right)\right]\right\},\]
\[\int_{0}^{\infty}\mathrm{d}t\,t^{4}J_{n}(\alpha t)J_{n}^{\prime}(\alpha t)\exp\left(-t^{2}\right)=\frac{\alpha}{4}\exp\left(-\frac{\alpha^{2}}{2}\right)\left[\left(\alpha^{2}-2+\frac{2n^{2}}{\alpha^{2}}\right)I_{n}\left(\frac{\alpha^{2}}{2}\right)+\left(1-\alpha^{2}\right)I_{n}^{\prime}\left(\frac{\alpha^{2}}{2}\right)\right],\]
\[\int_{0}^{\infty}\mathrm{d}t\,t^{5}J_{n}^{\prime}(\alpha t)^{2}\exp\left(-t^{2}\right)=\frac{1}{2}\exp\left(-\frac{\alpha^{2}}{2}\right)\left\{\left[\frac{3\alpha^{2}}{2}-\frac{\alpha^{4}}{2}+n^{2}\left(\frac{1}{\alpha^{2}}-\frac{3}{2}\right)\right]I_{n}\left(\frac{\alpha^{2}}{2}\right)+\left(\frac{\alpha^{4}}{2}+\frac{n^{2}}{2}-\alpha^{2}\right)I_{n}^{\prime}\left(\frac{\alpha^{2}}{2}\right)\right\},\]
to obtain again the expressions for the dielectric components (G 78) in terms of special mathematical functions (a tedious, but elementary, calculation):
\[(\mathbf{P}_{s})_{xx}=\eta_{s}\sum_{n=-\infty}^{\infty}\frac{n^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\left\{\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}Z(\zeta_{sn})I_{n}^{\prime}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)+\left[\zeta_{sn}+Z(\zeta_{sn})\left(\zeta_{sn}^{2}-\frac{3}{2}-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right]I_{n}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right\},\tag{G 81a}\]
\[(\mathbf{P}_{s})_{xy}=\frac{\mathrm{i}\eta_{s}}{2}\sum_{n=-\infty}^{\infty}n\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\left\{\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\left[1+\zeta_{sn}Z(\zeta_{sn})\right]I_{n}^{\prime}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)+\left[Z(\zeta_{sn})\left(\frac{1}{2}+\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}+\frac{2n^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}-\zeta_{sn}^{2}\right)-\zeta_{sn}\right]I_{n}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right\},\tag{G 81b}\]
\[(\mathbf{P}_{s})_{xz}=\eta_{s}\sum_{n=-\infty}^{\infty}\frac{n}{k_{\perp}\tilde{\rho}_{s}}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\left\{\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\left[1+\zeta_{sn}Z(\zeta_{sn})\right]I_{n}^{\prime}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)+\left[\zeta_{sn}^{2}-1-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}+\zeta_{sn}Z(\zeta_{sn})\left(\zeta_{sn}^{2}-\frac{3}{2}-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right]I_{n}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right\},\tag{G 81c}\]
\[(\mathbf{P}_{s})_{yx}=(\mathbf{P}_{s})_{xy}\,,\tag{G 81d}\]
\[(\mathbf{P}_{s})_{yy}=\eta_{s}\sum_{n=-\infty}^{\infty}\exp\left(-\frac{k_{\perp}^{2}
\tilde{\rho}_{s}^{2}}{2}\right)\Bigg{\{}\bigg{[}\left(\frac{n^{2}}{k_{\perp}^{ 2}\tilde{\rho}_{s}^{2}}+\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right) \zeta_{sn} \tag{124}\] \[+Z(\zeta_{sn})\left(\frac{n^{2}\zeta_{sn}^{2}}{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}+\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}+\frac{k_{ \perp}^{2}\tilde{\rho}_{s}^{2}}{4}-\frac{k_{\perp}^{4}\tilde{\rho}_{s}^{4}}{2} -\frac{3n^{2}}{2}-\frac{3n^{2}}{2k_{\perp}^{2}\tilde{\rho}_{s}^{2}}\right) \bigg{]}I_{n}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\, \Bigg{\}}\] \[+\left[Z(\zeta_{sn})\left(\frac{1}{2}+k_{\perp}^{2}\tilde{\rho}_ {s}^{2}+\frac{n^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}-\zeta_{sn}^{2}\right) -\zeta_{sn}\right]\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}I_{n}^{\prime} \bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\,\] \[(\boldsymbol{\mathcal{P}}_{s})_{yz} = -\frac{\mathrm{i}\eta_{s}}{2}\sum_{n=-\infty}^{\infty}k_{\perp} \tilde{\rho}_{s}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\] (125) \[\times\Bigg{\{}\bigg{[}k_{\perp}^{2}\tilde{\rho}_{s}^{2}+\frac{2 n^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}-\zeta_{sn}^{2}+\zeta_{sn}Z(\zeta_{sn}) \left(k_{\perp}^{2}\tilde{\rho}_{s}^{2}+\frac{1}{2}+\frac{2n^{2}}{k_{\perp}^{ 2}\tilde{\rho}_{s}^{2}}-\zeta_{sn}^{2}\right)\bigg{]}\,I_{n}\bigg{(}\frac{k_{ \perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\] \[+\bigg{[}\zeta_{sn}^{2}-1-k_{\perp}^{2}\tilde{\rho}_{s}^{2}+ \zeta_{sn}Z(\zeta_{sn})\left(\zeta_{sn}^{2}-\frac{3}{2}-k_{\perp}^{2}\tilde{ \rho}_{s}^{2}\right)\bigg{]}I_{n}^{\prime}\bigg{(}\frac{k_{\perp}^{2}\tilde{ \rho}_{s}^{2}}{2}\bigg{)}\Bigg{\}}\,,\] \[(\boldsymbol{\mathcal{P}}_{s})_{zx} = (\boldsymbol{\mathcal{P}}_{s})_{xz}\,,\] \[(\boldsymbol{\mathcal{P}}_{s})_{zy} = -(\boldsymbol{\mathcal{P}}_{s})_{yz}\,,\] \[(\boldsymbol{\mathcal{P}}_{s})_{zz} = \eta_{s}\sum_{n=-\infty}^{\infty}\exp\left(-\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\right)\Bigg{\{}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2} }{2}\zeta_{sn}\left[1+\zeta_{sn}Z(\zeta_{sn})\right]I_{n}^{\prime}\bigg{(} \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\] (125) \[+ \left[\zeta_{sn}^{3}-\zeta_{sn}-\frac{k_{\perp}^{2}\tilde{\rho}_ {s}^{2}\zeta_{sn}}{2}+\zeta_{sn}^{2}Z(\zeta_{sn})\left(\zeta_{sn}^{2}-\frac{3 }{2}-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right]I_{n}\bigg{(} \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\Bigg{\}}.\] #### b.3.1 Dielectric tensor in low-frequency limit In the low-frequency limit \(\tilde{\omega}_{s\parallel}\ll 1\) under the ordering \(k_{\parallel}\rho_{s}\sim k_{\perp}\rho_{s}\sim 1\), the expressions (125) can be approximated by the leading-order term of the expansion of \(\boldsymbol{\mathcal{P}}_{s}\), that is \[\boldsymbol{\mathcal{P}}_{s}\approx\boldsymbol{\mathcal{P}}_{s}^{(0)}+O(\tilde{ \omega}_{s\parallel}^{2})\,, \tag{126}\] where \[(\boldsymbol{\mathcal{P}}_{s}^{(0)})_{xx} = \eta_{s}\sum_{n=-\infty}^{\infty}\frac{n^{2}}{k_{\perp}^{2}\tilde{ \rho}_{s}^{2}}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right) \Bigg{\{}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}Z\bigg{(}-\frac{n}{|k_{ \parallel}|\tilde{\rho}_{s}}\bigg{)}\,I_{n}^{\prime}\bigg{(}\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\bigg{)} \tag{127a}\] \[+\left[-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}+Z\bigg{(}- \frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\bigg{)}\left(\frac{n^{2}}{|k_{ \parallel}|^{2}\tilde{\rho}_{s}^{2}}-\frac{3}{2}-\frac{k_{\perp}^{2}\tilde{ \rho}_{s}^{2}}{2}\right)\right]I_{n}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^ 
{2}}{2}\bigg{)}\,\Bigg{\}}\,,\] \[(\boldsymbol{\mathcal{P}}_{s}^{(0)})_{xy} = \frac{\mathrm{i}\eta_{s}}{2}\sum_{n=-\infty}^{\infty}n\exp\left(- \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\] (127b) \[\times\Bigg{\{}\bigg{[}Z\bigg{(}-\frac{n}{|k_{\parallel}|\tilde{ \rho}_{s}}\bigg{)}\left(\frac{1}{2}+\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}+ \frac{2n^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}-\frac{n^{2}}{|k_{\parallel}|^{2} \tilde{\rho}_{s}^{2}}\right)+\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\bigg{]} \,I_{n}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\] \[+\left[-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}+Z\bigg{(}- \frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\bigg{)}\left(\frac{n^{2}}{|k_{ \parallel}|^{2}\tilde{\rho}_{s}^{2}}-\frac{3}{2}-\frac{k_{\perp}^{2}\tilde{\rho }_{s}^{2}}{2}\right)\right]I_{n}^{\prime}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho }_{s}^{2}}{2}\bigg{)}\Bigg{\}}\,,\] \[(\boldsymbol{\mathcal{P}}_{s}^{(0)})_{xz} = \eta_{s}\sum_{n=-\infty}^{\infty}\frac{n}{k_{\perp}\tilde{\rho}_ {s}}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\Bigg{\{} \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\left[1-\frac{n}{|k_{\parallel}| \tilde{\rho}_{s}}Z\bigg{(}-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\bigg{)} \right]I_{n}^{\prime}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \bigg{)}\] \[+I_{n}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)} \left[\frac{n^{2}}{|k_{\parallel}|^{2}\tilde{\rho}_{s}^{2}}-1-\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\right.\] \[\left.-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}Z\biggl{(}-\frac{n }{|k_{\parallel}|\tilde{\rho}_{s}}\biggr{)}\left(\frac{n^{2}}{k_{\parallel}^ {2}\tilde{\rho}_{s}^{2}}-\frac{3}{2}-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}} {2}\right)\,\right]\right\},\] (G 83 _c_) \[(\mathbf{P}_{s}^{(0)})_{yx} = (\mathbf{P}_{s}^{(0)})_{xy}\,,\] (G 83 _d_) \[(\mathbf{P}_{s}^{(0)})_{yy} = \eta_{s}\sum_{n=-\infty}^{\infty}\exp\left(-\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\right)\left\{\Bigg{[}-\left(\frac{n^{2}}{k_{\perp}^{2 }\tilde{\rho}_{s}^{2}}+\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\frac {n}{|k_{\parallel}|\tilde{\rho}_{s}}\right.\] (G 83 _e_) \[\left.-\frac{3n^{2}}{2}-\frac{3n^{2}}{2k_{\perp}^{2}\tilde{\rho} _{s}^{2}}\right)\Bigg{]}I_{n}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}} {2}\biggr{)}+\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}I_{n}^{\prime}\biggl{(} \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)}\] \[\times\left[Z\biggl{(}-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s} }\biggr{)}\left(\frac{1}{2}+k_{\perp}^{2}\tilde{\rho}_{s}^{2}+\frac{n^{2}}{k_{ \perp}^{2}\tilde{\rho}_{s}^{2}}-\frac{n^{2}}{k_{\parallel}^{2}\tilde{\rho}_{s }^{2}}\right)+\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\right]\Bigg{\}}\,,\] \[(\mathbf{P}_{s}^{(0)})_{yz} = -\frac{\mathrm{i}\eta_{s}}{2}\sum_{n=-\infty}^{\infty}k_{\perp} \tilde{\rho}_{s}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\] (G 83 _e_) \[\times\Biggl{\{}I_{n}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s} ^{2}}{2}\biggr{)}\left[k_{\perp}^{2}\tilde{\rho}_{s}^{2}+\frac{2n^{2}}{k_{ \perp}^{2}\tilde{\rho}_{s}^{2}}-\frac{n^{2}}{k_{\parallel}^{2}\tilde{\rho}_{s }^{2}}\right.\] \[\left.-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}Z\biggl{(}-\frac{n }{|k_{\parallel}|\tilde{\rho}_{s}}\biggr{)}\left(k_{\perp}^{2}\tilde{\rho}_{s }^{2}+\frac{1}{2}+\frac{2n^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}-\frac{n^{2} }{k_{\parallel}^{2}\tilde{\rho}_{s}^{2}}\right)\,\right]\] \[+I_{n}^{\prime}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}} 
{2}\biggr{)}\left[\frac{n^{2}}{k_{\parallel}^{2}\tilde{\rho}_{s}^{2}}-1-k_{ \perp}^{2}\tilde{\rho}_{s}^{2}\right.\] \[\left.-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}Z\biggl{(}-\frac{n }{|k_{\parallel}|\tilde{\rho}_{s}}\biggr{)}\left(\frac{n^{2}}{k_{\parallel}^ {2}\tilde{\rho}_{s}^{2}}-\frac{3}{2}-k_{\perp}^{2}\tilde{\rho}_{s}^{2}\right) \,\right]\right\},\] \[(\mathbf{P}_{s}^{(0)})_{zx} = (\mathbf{P}_{s}^{(0)})_{xz}\,,\] (G 83 _g_) \[(\mathbf{P}_{s}^{(0)})_{zy} = -(\mathbf{P}_{s}^{(0)})_{yz}\,,\] (G 83 _h_) \[(\mathbf{P}_{s}^{(0)})_{zz} = \eta_{s}\sum_{n=-\infty}^{\infty}\exp\left(-\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\right)\Bigg{\{}-\frac{nk_{\perp}^{2}\tilde{\rho}_{s} }{2|k_{\parallel}|}\left[1-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}Z\biggl{(} -\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\biggr{)}\right]I_{n}^{\prime} \biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)}\] (G 83 _i_) \[+I_{n}\biggl{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\biggr{)} \left[\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}-\frac{n^{3}}{|k_{\parallel}|^ {3}\tilde{\rho}_{s}^{3}}+\frac{nk_{\perp}^{2}\tilde{\rho}_{s}}{2|k_{\parallel}|}\right.\] \[\left.+\frac{n^{2}}{k_{\parallel}^{2}\tilde{\rho}_{s}^{2}}Z\biggl{(} -\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\biggr{)}\left(\frac{n^{2}}{k_{ \parallel}^{2}\tilde{\rho}_{s}^{2}}-\frac{3}{2}-\frac{k_{\perp}^{2}\tilde{ \rho}_{s}^{2}}{2}\right)\,\right]\Bigg{\}}\,.\] In this limit, we have utilised the approximation \(\zeta_{sn}\approx-n/|k_{\parallel}|\tilde{\rho}_{s}\). Similarly to the Maxwellian case, we can use the Bessel-function-summation identities (G 24) and the symmetry properties of the plasma dispersion function with a real argument to show that \[(\mathbf{P}_{s}^{(0)})_{xx}=2\mathrm{i}\sqrt{\pi}\eta_{s}\exp\left(- \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\sum_{n=1}^{\infty}\frac{n^{2}} {k_{\perp}^{2}\tilde{\rho}_{s}^{2}}\exp\left(-\frac{n^{2}}{k_{\parallel}^{2} \tilde{\rho}_{s}^{2}}\right)\] \[\times\Bigg{[}\left(\frac{n^{2}}{k_{\parallel}^{2}\tilde{\rho}_{s}^{2}}-\frac{3}{2 }-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\bigg{(}\frac{k_{\perp }^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}+\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2 }I_{n}^{\prime}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\Bigg{]}\] \[={\rm i}\eta_{s}I\big{(}k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s }\big{)}\,\] \[(\mathbf{P}_{s}^{(0)})_{xy}=-{\rm i}\eta_{s}\Bigg{\{}\frac{1}{2|k_{ \parallel}|\tilde{\rho}_{s}}+\frac{1}{2}\sum_{n=-\infty}^{\infty}n\,{\rm Re} \bigg{[}Z\left(\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\right)\bigg{]}\exp \left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\] \[\times\Bigg{\{}\left(\frac{n^{2}}{k_{\parallel}^{2}\tilde{\rho}_{s}^{2}}- \frac{3}{2}-k_{\perp}^{2}\tilde{\rho}_{s}^{2}\right)I_{n}^{\prime}\bigg{(} \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\] \[+\left(\frac{1}{2}+k_{\perp}^{2}\tilde{\rho}_{s}^{2}+\frac{2n^{2}}{k_{\perp}^ {2}\tilde{\rho}_{s}^{2}}-\frac{n^{2}}{k_{\parallel}^{2}\tilde{\rho}_{s}^{2}} \right)I_{n}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\Bigg{\}}\] \[=-{\rm i}\eta_{s}J\big{(}k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{ s}\big{)}\,\] \[(\mathbf{P}_{s}^{(0)})_{xz}=-2{\rm i}\sqrt{\pi}\eta_{s}\exp\left(- \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\sum_{n=1}^{\infty}\frac{n^{ 2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}\exp\left(-\frac{n^{2}}{|k_{\parallel}|k _{\perp}\tilde{\rho}_{s}^{2}}\right)\] 
\[\times\Bigg{[}\left(\frac{n^{2}}{k_{\parallel}^{2}\tilde{\rho}_{s}^{2}}- \frac{3}{2}-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\bigg{(} \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}+\frac{k_{\perp}^{2}\tilde{ \rho}_{s}^{2}}{2}I_{n}^{\prime}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2} }{2}\bigg{)}\Bigg{]}\] \[(\mathbf{P}_{s}^{(0)})_{yy}=\frac{{\rm i}\sqrt{\pi}}{2}\eta_{s}\exp \left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\sum_{n=-\infty}^{ \infty}\exp\left(-\frac{n^{2}}{k_{\parallel}^{2}\tilde{\rho}_{s}^{2}}\right)\] \[\times\Bigg{\{}\left(n^{2}+\frac{1}{2}k_{\perp}^{2}\tilde{\rho}_{s}^{2}+k_{ \perp}^{4}\tilde{\rho}_{s}^{4}-\frac{n^{2}k_{\perp}^{2}}{k_{\parallel}^{2}} \right)I_{n}^{\prime}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \bigg{)}\] \[+\left(\frac{2n^{4}}{k_{\parallel}^{2}k_{\perp}^{2}\tilde{\rho}_{s}^{4}}- \frac{3n^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}-3n^{2}+\frac{1}{2}k_{\perp}^ {2}\tilde{\rho}_{s}^{2}-k_{\perp}^{4}\tilde{\rho}_{s}^{4}+\frac{n^{2}k_{\perp }^{2}}{k_{\parallel}^{2}}\right)I_{n}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s} ^{2}}{2}\bigg{)}\Bigg{\}}\] \[={\rm i}\eta_{s}K\big{(}k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s} \big{)}\,\] \[(\mathbf{P}_{s}^{(0)})_{yz}=-{\rm i}\eta_{s}\Bigg{\{}\frac{k_{\perp}}{2 k_{\parallel}^{2}\tilde{\rho}_{s}}+\frac{1}{2}\sum_{n=-\infty}^{\infty}\frac{nk_{ \perp}}{|k_{\parallel}|}\,{\rm Re}\bigg{[}Z\left(\frac{n}{|k_{\parallel}|\tilde{ \rho}_{s}}\right)\bigg{]}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\] \[\times\Bigg{\{}\left(\frac{n^{2}}{k_{\parallel}^{2}\tilde{\rho}_{s}^{2}}- \frac{3}{2}-k_{\perp}^{2}\tilde{\rho}_{s}^{2}\right)I_{n}^{\prime}\bigg{(} \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\] \[+\left(\frac{1}{2}+k_{\perp}^{2}\tilde{\rho}_{s}^{2}+\frac{2n^{2}}{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}-\frac{n^{2}}{k_{\parallel}^{2}\tilde{\rho}_{s}^{2}} \right)I_{n}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\Bigg{\}}\] \[=-\frac{{\rm i}k_{\perp}}{|k_{\parallel}|}\eta_{s}J\big{(}k_{\parallel}\tilde{ \rho}_{s},k_{\perp}\tilde{\rho}_{s}\big{)}\,\] \[(\mathbf{P}_{s}^{(0)})_{zz}=2{\rm i}\sqrt{\pi}\eta_{s}\exp\left(-\frac{k _{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\sum_{n=1}^{\infty}\frac{n^{2}}{k_{ \parallel}^{2}\tilde{\rho}_{s}^{2}}\exp\left(-\frac{n^{2}}{k_{\parallel}^{2} \tilde{\rho}_{s}^{2}}\right)\] \[\times\Bigg{[}\left(\frac{n^{2}}{k_{\parallel}^{2}\tilde{\rho}_{s}^{2}}-\frac{3}{2 }-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\bigg{(}\frac{k_{\perp}^ {2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}+\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}I_{n }^{\prime}\bigg{(}\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\bigg{)}\Bigg{]}\] \[I(x,y) =\frac{\sqrt{\pi}}{2}\left(\frac{1}{x^{2}}-\frac{1}{2}\right)\exp \left(-\frac{1}{x^{2}}\right)\left[1+\textit{O}\!\left(y^{2}\right)\right]\,,\] (G 86a) \[J(x,y) =\frac{\sqrt{\pi}}{2}\left(\frac{1}{x^{2}}-\frac{1}{2}\right)\exp \left(-\frac{1}{x^{2}}\right)\left[1+\textit{O}\!\left(y^{2}\right)\right]\,.\] (G 86c) * \(x\ll 1,y\sim 1\): \[I(x,y) =\frac{2\sqrt{\pi}}{x^{2}y^{2}}\exp\left(-\frac{y^{2}}{2}-\frac{1}{x^{2}} \right)I_{1}\!\left(\frac{y^{2}}{2}\right)\left[1+\textit{O}\!\left(x^{2} \right)\right]\,,\] (G 86a) \[J(x,y) = -\frac{x}{2}\exp\left(-\frac{y^{2}}{2}\right) \tag{110}\] \[\times\left[y^{2}\left(I_{0}\!\left(\frac{y^{2}}{2}\right)-I_{1} \!\left(\frac{y^{2}}{2}\right)\right)-I_{1}\!\left(\frac{y^{2}}{2}\right) \right]\left[1+\textit{O}\!\left(x^{2}\right)\right]\,,\] \[K(x,y) = 
\frac{\sqrt{\pi}}{2}\exp\left(-\frac{y^{2}}{2}\right)\left[\left(\frac{1}{2}y^{2}-y^{4}\right)I_{0}\left(\frac{y^{2}}{2}\right)+\left(\frac{1}{2}y^{2}+y^{4}\right)I_{1}\left(\frac{y^{2}}{2}\right)\right]\left[1+O\left(x^{2}\right)\right]\,.\]
* \(x,y\ll 1\):
\[I(x,y)=\frac{\sqrt{\pi}}{2x^{2}}\exp\left(-\frac{1}{x^{2}}\right)\left[1+O\left(\exp\left(-\frac{3}{x^{2}}\right),y^{2}\right)\right]\,,\]
\[J(x,y)=-x\left(\frac{3}{8}y^{2}-\frac{1}{4}x^{2}\right)\left[1+O\left(x^{4},x^{2}y^{2},y^{4}\right)\right]\,,\]
\[K(x,y)=\frac{\sqrt{\pi}}{4}y^{2}\left[1+O\left(x^{2},y^{2}\right)\right]\,.\]

### CE shear terms

For a CE shear term of the form
\[\tilde{f}_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})=-\epsilon_{s}\left(\tilde{v}_{s\parallel}^{2}-\frac{\tilde{v}_{s\perp}^{2}}{2}\right)\exp\left(-\tilde{v}_{s}^{2}\right),\]
we have
\[\Lambda_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})=-3\epsilon_{s}\tilde{v}_{s\parallel}\tilde{v}_{s\perp}\exp\left(-\tilde{v}_{s}^{2}\right),\]
\[\Xi_{s}(\tilde{v}_{s\parallel},\tilde{v}_{s\perp})=-\frac{3\epsilon_{s}}{\tilde{\omega}_{s\parallel}}\tilde{v}_{s\parallel}\tilde{v}_{s\perp}\exp\left(-\tilde{v}_{s}^{2}\right)+O\left(\epsilon_{s}\right).\]
This gives
\[(\mathbf{P}_{s})_{xx}=\frac{6}{\sqrt{\pi}}\epsilon_{s}\sum_{n=-\infty}^{\infty}\left[\frac{n^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}\int_{C_{L}}\frac{\tilde{v}_{s\parallel}\exp\left(-\tilde{v}_{s\parallel}^{2}\right)\mathrm{d}\tilde{v}_{s\parallel}}{\tilde{v}_{s\parallel}-\zeta_{sn}}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\,\tilde{v}_{s\perp}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}\exp\left(-\tilde{v}_{s\perp}^{2}\right)\right],\]
\[(\mathbf{P}_{s})_{xy}=\frac{6\mathrm{i}}{\sqrt{\pi}}\epsilon_{s}\sum_{n=-\infty}^{\infty}\left[\frac{n}{k_{\perp}\tilde{\rho}_{s}}\int_{C_{L}}\frac{\tilde{v}_{s\parallel}\exp\left(-\tilde{v}_{s\parallel}^{2}\right)\mathrm{d}\tilde{v}_{s\parallel}}{\tilde{v}_{s\parallel}-\zeta_{sn}}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\,\tilde{v}_{s\perp}^{2}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})J_{n}^{\prime}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})\exp\left(-\tilde{v}_{s\perp}^{2}\right)\right],\]
\[(\mathbf{P}_{s})_{xz}=\frac{6}{\sqrt{\pi}}\epsilon_{s}\sum_{n=-\infty}^{\infty}\left[\frac{n}{k_{\perp}\tilde{\rho}_{s}}\int_{C_{L}}\frac{\tilde{v}_{s\parallel}^{2}\exp\left(-\tilde{v}_{s\parallel}^{2}\right)\mathrm{d}\tilde{v}_{s\parallel}}{\tilde{v}_{s\parallel}-\zeta_{sn}}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s\perp}\,\tilde{v}_{s\perp}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^{2}\exp\left(-\tilde{v}_{s\perp}^{2}\right)\right],\]
\[(\mathbf{P}_{s})_{yx}=(\mathbf{P}_{s})_{xy}\,,\]
\[(\mathbf{P}_{s})_{yy}=\frac{6}{\sqrt{\pi}}\epsilon_{s}\sum_{n=-\infty}^{\infty}\left[\int_{C_{L}}\frac{\tilde{v}_{s\parallel}\exp\left(-\tilde{v}_{s\parallel}^{2}\right)\mathrm{d}\tilde{v}_{s\parallel}}{\tilde{v}_{s\parallel}-\zeta_{sn}}\right.
\tag{126}\] \[\qquad\qquad\qquad\qquad\times\int_{0}^{\infty}\mathrm{d}\tilde{ v}_{s\perp}\,\tilde{v}_{s\perp}^{3}J_{n}^{\prime}(k_{\perp}\tilde{\rho}_{s} \tilde{v}_{s\perp})^{2}\exp\left(-\tilde{v}_{s\perp}^{2}\right)\right],\] \[(\boldsymbol{P}_{s})_{yz} = -\frac{6\mathrm{i}}{\sqrt{\pi}}\epsilon_{s}\sum_{n=-\infty}^{ \infty}\left[\int_{C_{L}}\frac{\tilde{v}_{s\parallel}^{2}\exp\left(-\tilde{v} _{s\parallel}^{2}\right)\mathrm{d}\tilde{v}_{s\parallel}}{\tilde{v}_{s \parallel}-\zeta_{sn}}\right.\] (127) \[\qquad\qquad\qquad\times\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s \perp}\,\tilde{v}_{s\perp}^{2}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp })J_{n}^{\prime}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})\exp\left(-\tilde {v}_{s\perp}^{2}\right)\right]\,,\] \[(\boldsymbol{P}_{s})_{zx} = (\boldsymbol{P}_{s})_{xz}\,,\] (128) \[(\boldsymbol{P}_{s})_{zy} = -(\boldsymbol{P}_{s})_{yz}\,,\] (129) \[(\boldsymbol{P}_{s})_{zz} = \frac{6}{\sqrt{\pi}}\epsilon_{s}\Bigg{\{}\sum_{n=-\infty}^{\infty }\left[\int_{C_{L}}\frac{\tilde{v}_{s\parallel}^{3}\exp\left(-\tilde{v}_{s \parallel}^{2}\right)\mathrm{d}\tilde{v}_{s\parallel}}{\tilde{v}_{s\parallel}- \zeta_{sn}}\right.\] (129) \[\qquad\qquad\qquad\times\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s \perp}\,\tilde{v}_{s\perp}J_{n}(k_{\perp}\tilde{\rho}_{s}\tilde{v}_{s\perp})^ {2}\exp\left(-\tilde{v}_{s\perp}^{2}\right)\right]\] \[\qquad\qquad-\int_{-\infty}^{\infty}\mathrm{d}\tilde{v}_{s \parallel}\,\tilde{v}_{s\parallel}^{2}\int_{0}^{\infty}\mathrm{d}\tilde{v}_{s \perp}\tilde{v}_{s\perp}\exp\left(-\tilde{v}_{s}^{2}\right)\Bigg{\}}\,.\] Again using the Bessel-function identities (125), and the identities (125) and (126) applicable to the plasma dispersion function, the dielectric tensor's elements become \[(\boldsymbol{P}_{s})_{xx} = 3\epsilon_{s}\sum_{n=-\infty}^{\infty}\frac{n^{2}}{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}\left[1+\zeta_{sn}Z(\zeta_{sn})\right]\exp\left(-\frac{k_ {\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\right), \tag{127a}\] \[(\boldsymbol{P}_{s})_{xy} = \frac{3\mathrm{i}\epsilon_{s}}{2}\sum_{n=-\infty}^{\infty}n\left[ 1+\zeta_{sn}Z(\zeta_{sn})\right]\] (127b) \[\qquad\qquad\qquad\times\exp\left(-\frac{k_{\perp}^{2}\tilde{ \rho}_{s}^{2}}{2}\right)\left[I_{n}^{\prime}\!\left(\frac{k_{\perp}^{2}\tilde{ \rho}_{s}^{2}}{2}\right)-I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}} {2}\right)\right]\,,\] \[(\boldsymbol{P}_{s})_{xz} = 3\epsilon_{s}\sum_{n=-\infty}^{\infty}\frac{n}{k_{\perp}\tilde{ \rho}_{s}}\zeta_{sn}\left[1+\zeta_{sn}Z(\zeta_{sn})\right]\exp\left(-\frac{k_{ \perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\right),\] (127c) \[(\boldsymbol{P}_{s})_{yx} = (\boldsymbol{P}_{s})_{xy},\] (127d) \[(\boldsymbol{P}_{s})_{yy} = \frac{3}{2}\epsilon_{s}\sum_{n=-\infty}^{\infty}\left[1+\zeta_{sn }Z(\zeta_{sn})\right]\] \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)\left[\left(\frac{2n^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}+k_{\perp}^ {2}\tilde{\rho}_{s}^{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^ {2}}{2}\right)-k_{\perp}^{2}\tilde{\rho}_{s}^{2}I_{n}^{\prime}\!\left(\frac{k _{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right],\] \[(\boldsymbol{P}_{s})_{yz} = -\frac{3\mathrm{i}\epsilon_{s}}{2}\sum_{n=-\infty}^{\infty}k_{ \perp}\tilde{\rho}_{s}\zeta_{sn}\left[1+\zeta_{sn}Z(\zeta_{sn})\right]\] \[\qquad\qquad\qquad\times\exp\left(-\frac{k_{\perp}^{2}\tilde{ 
\rho}_{s}^{2}}{2}\right)\left[I_{n}^{\prime}\!\left(\frac{k_{\perp}^{2}\tilde{ \rho}_{s}^{2}}{2}\right)-I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2 }\right)\right]\,,\] \[(\boldsymbol{P}_{s})_{zx} = (\boldsymbol{P}_{s})_{xz}\,,\] (127e) \[(\boldsymbol{P}_{s})_{zy} = -(\boldsymbol{P}_{s})_{yz}\,, \tag{127f}\] \[(\boldsymbol{P}_{s})_{zz}=3\epsilon_{s}\sum_{n=-\infty}^{\infty}\zeta_{sn}^{2} \left[1+\zeta_{sn}Z(\zeta_{sn})\right]\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho} _{s}^{2}}{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)\,. \tag{115a}\] #### b.4.1 Dielectric tensor in low-frequency limit As with the CE temperature-gradient term, under the ordering \(k_{\parallel}\rho_{s}\sim k_{\perp}\rho_{s}\sim 1\), the expressions (115a) can be approximated by the leading-order term of the expansion of \(\boldsymbol{P}_{s}\) in the low-frequency limit \(\tilde{\omega}_{s\parallel}\ll 1\). Namely, we have \[\boldsymbol{P}_{s}\approx\boldsymbol{P}_{s}^{(0)}+\mbox{$O$}(\tilde{\omega}_{ s\parallel}^{2})\,, \tag{115b}\] where \[(\boldsymbol{P}_{s}^{(0)})_{xx}=3\epsilon_{s}\sum_{n=-\infty}^{ \infty}\frac{n^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}\left[1-\frac{n}{|k_{ \parallel}|\tilde{\rho}_{s}}Z\!\left(-\frac{n}{|k_{\parallel}|\tilde{\rho}_{ s}}\right)\right]\] \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\,, \tag{115c}\] \[(\boldsymbol{P}_{s}^{(0)})_{xy}=-3\epsilon_{s}\sum_{n=-\infty}^{ \infty}\frac{n^{2}}{k_{\perp}|k_{\parallel}|\tilde{\rho}_{s}^{2}}\left[1-\frac{ n}{|k_{\parallel}|\tilde{\rho}_{s}}Z\!\left(-\frac{n}{|k_{\parallel}|\tilde{\rho}_{ s}}\right)\right]\] \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right),\] (115d) \[(\boldsymbol{P}_{s}^{(0)})_{yx}=(\boldsymbol{P}_{s}^{(0)})_{xy},\] (115e) \[(\boldsymbol{P}_{s}^{(0)})_{yy}=\frac{3}{2}\epsilon_{s}\sum_{n=- \infty}^{\infty}\left[1-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}Z\!\left(- \frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\right)\right]\] \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)\left[\left(\frac{2n^{2}}{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}+k_{\perp} ^{2}\tilde{\rho}_{s}^{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{ s}^{2}}{2}\right)-k_{\perp}^{2}\tilde{\rho}_{s}^{2}I_{n}^{\prime}\!\left(\frac{k_{ \perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right],\] (115f) \[(\boldsymbol{P}_{s}^{(0)})_{yz}=\frac{3i\epsilon_{s}}{2}\sum_{n=- \infty}^{\infty}\frac{nk_{\perp}}{|k_{\parallel}|}\left[1-\frac{n}{|k_{ \parallel}|\tilde{\rho}_{s}}Z\!\left(-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s} }\right)\right]\] \[\times\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)\left[I_{n}^{\prime}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)-I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right]\,,\] (115f) \[(\boldsymbol{P}_{s}^{(0)})_{zx}=(\boldsymbol{P}_{s}^{(0)})_{xz}\,,\] (115g) \[(\boldsymbol{P}_{s}^{(0)})_{zy}=-(\boldsymbol{P}_{s}^{(0)})_{yz}\,,\] (115h) \[(\boldsymbol{P}_{s}^{(0)})_{zz}=3\sum_{n=-\infty}^{\infty}\frac{n^ {2}}{k_{\parallel}^{2}\tilde{\rho}_{s}^{2}}\left[1-\frac{n}{|k_{\parallel}| \tilde{\rho}_{s}}Z\!\left(-\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\right) \right]\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\! \left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\,. 
\tag{115i}\] In this calculation, we have utilised the approximation \(\zeta_{sn}\approx-n/|k_{\parallel}|\tilde{\rho}_{s}\). Similarly to the Maxwellian case, we can use the Bessel-function-summation identities (115) and the symmetry properties of the plasma dispersion function with a real argument to show that \[(\boldsymbol{P}_{s}^{(0)})_{xx}=3\epsilon_{s}\Bigg{\{}\frac{1}{2}+\exp\left(- \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\sum_{n=-\infty}^{\infty} \frac{n^{3}}{|k_{\parallel}|k_{\perp}^{2}\tilde{\rho}_{s}^{3}}\mbox{Re}\! \left[Z\!\left(\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\right)\right]\!I_{n} \!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\Bigg{\}}\] \[=\epsilon_{s}W(|k_{\parallel}|\tilde{\rho}_{s},k_{\perp}\tilde{ \rho}_{s})\,, \tag{100a}\] \[(\mathbf{\not\!\!P}_{s}^{(0)})_{xy} = 3\sqrt{\pi}\epsilon_{s}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_ {s}^{2}}{2}\right)\] \[\qquad\times\sum_{n=1}^{\infty}\frac{n^{2}}{|k_{\parallel}|\tilde {\rho}_{s}}\exp\left(-\frac{n^{2}}{k_{\parallel}^{2}\tilde{\rho}_{s}^{2}} \right)\left[I_{n}^{\prime}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)-I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right]\] \[= -\epsilon_{s}X(|k_{\parallel}|\tilde{\rho}_{s},k_{\perp}\tilde{ \rho}_{s})\,,\] \[(\mathbf{\not\!\!P}_{s}^{(0)})_{xz} = -3\epsilon_{s}\Bigg{\{}\frac{k_{\perp}}{2|k_{\parallel}|}+\exp \left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\] (100b) \[\qquad\times\sum_{n=-\infty}^{\infty}\frac{n^{3}}{k_{\perp}k_{ \parallel}^{2}\tilde{\rho}_{s}^{3}}\mbox{Re}\bigg{[}Z\!\left(\frac{n}{|k_{ \parallel}|\tilde{\rho}_{s}}\right)\bigg{]}I_{n}\!\left(\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\right)\Bigg{\}}\] \[= -\frac{k_{\perp}}{|k_{\parallel}|}\epsilon_{s}W(|k_{\parallel}| \tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s})\,,\] \[(\mathbf{\not\!\!P}_{s}^{(0)})_{yx} = (\mathbf{\not\!\!P}_{s}^{(0)})_{xy},\] (100b) \[(\mathbf{\not\!\!P}_{s}^{(0)})_{yy} = \frac{3}{2}\epsilon_{s}\Bigg{\{}1+\exp\left(-\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\right)\sum_{n=-\infty}^{\infty}\frac{2n^{3}}{|k_{ \parallel}|k_{\perp}^{2}\tilde{\rho}_{s}^{3}}\mbox{Re}\bigg{[}Z\!\left(\frac{ n}{|k_{\parallel}|\tilde{\rho}_{s}}\right)\!\bigg{]}I_{n}\!\left(\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\right)\] \[\qquad+k_{\perp}^{2}\tilde{\rho}_{s}^{2}\exp\left(-\frac{k_{\perp }^{2}\tilde{\rho}_{s}^{2}}{2}\right)\sum_{n=-\infty}^{\infty}\frac{n}{|k_{ \parallel}|\tilde{\rho}_{s}}\] \[\qquad\qquad\times\mbox{Re}\bigg{[}Z\!\left(\frac{n}{|k_{ \parallel}|\tilde{\rho}_{s}}\right)\bigg{]}\left[I_{n}\!\left(\frac{k_{\perp}^ {2}\tilde{\rho}_{s}^{2}}{2}\right)-I_{n}^{\prime}\!\left(\frac{k_{\perp}^{2} \tilde{\rho}_{s}^{2}}{2}\right)\right]\Bigg{\}},\] \[= \epsilon_{s}Y(|k_{\parallel}|\tilde{\rho}_{s},k_{\perp}\tilde{ \rho}_{s})\,,\] \[(\mathbf{\not\!\!P}_{s}^{(0)})_{yz} = 3\sqrt{\pi}\epsilon_{s}\exp\left(-\frac{k_{\perp}^{2}\tilde{ \rho}_{s}^{2}}{2}\right)\] (100b) \[\qquad\times\sum_{n=1}^{\infty}\frac{k_{\perp}n^{2}}{k_{\parallel }^{2}\tilde{\rho}_{s}}\exp\left(-\frac{n^{2}}{k_{\parallel}^{2}\tilde{\rho}_{s }^{2}}\right)\bigg{[}I_{n}^{\prime}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s }^{2}}{2}\right)-I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2} \right)\bigg{]}\] \[= -\frac{k_{\perp}}{|k_{\parallel}|}\epsilon_{s}X(|k_{\parallel}| \tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s})\,,\] \[(\mathbf{\not\!\!P}_{s}^{(0)})_{zx} = (\mathbf{\not\!\!P}_{s}^{(0)})_{xz}\,,\] \[(\mathbf{\not\!\!P}_{s}^{(0)})_{zy} = 
-(\mathbf{\not\!\!P}_{s}^{(0)})_{yz}\,,\] \[(\mathbf{\not\!\!P}_{s}^{(0)})_{zz} = 3\epsilon_{s}\Bigg{\{}\frac{k_{\perp}^{2}}{2k_{\parallel}^{2}}+ \exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\sum_{n=-\infty}^{ \infty}\frac{n^{3}}{|k_{\parallel}|^{3}\tilde{\rho}_{s}^{3}}\mbox{Re}\bigg{[}Z \!\left(\frac{n}{|k_{\parallel}|\tilde{\rho}_{s}}\right)\!\bigg{]}I_{n}\!\left( \frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\Bigg{\}}\] (100b) \[= \frac{k_{\perp}^{2}}{k_{\parallel}^{2}}\epsilon_{s}W(|k_{\parallel }|\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s})\,,\] where the functions \(W(x,y)\), \(Y(x,y)\) and \(X(x,y)\) are defined by \[W(x,y) \equiv \frac{3}{2}+\frac{3}{xy^{2}}\exp\left(-\frac{y^{2}}{2}\right) \sum_{m=-\infty}^{\infty}m^{3}\,\mbox{Re}\;Z\!\left(\frac{m}{x}\right)\!I_{m} \!\left(\frac{y^{2}}{2}\right)\,, \tag{100b}\] \[X(x,y) \equiv \frac{3\sqrt{\pi}}{x}\exp\left(-\frac{y^{2}}{2}\right)\sum_{m=1}^ {\infty}m^{2}\left[I_{m}\!\left(\frac{y^{2}}{2}\right)-I_{m}^{\prime}\!\left( \frac{y^{2}}{2}\right)\right]\exp\left(-\frac{m^{2}}{x^{2}}\right), \tag{100b}\] \[Y(x,y)\equiv W(x,y)-\frac{3}{2}\frac{y^{2}G(x,y)}{x}\,.\] (G 97 \(c\) ) #### a.4.2 Asymptotic limits of \(\boldsymbol{\mathsf{P}}_{s}^{(0)}\) As we have done for the other special functions defined in this paper, in this appendix we provide asymptotic expressions in the limits where \(x\) and \(y\) are very small or large for the special functions \(W(x,y)\), \(X(x,y)\) and \(Y(x,y)\) defined in (G 97). These limits again correspond to parallel and perpendicular wavenumbers that are very small or very large with respect to the inverse Larmor radius of species \(s\). Considering various asymptotic limits in a systematic fashion, we find * \(x\sim 1\), \(y\ll 1\): \[W(x,y) =\left[\frac{3}{2}+\frac{3}{2x}\text{Re}\;Z\left(\frac{1}{x} \right)\right]\left[1+\textit{O}\!\left(y^{2}\right)\right]\,,\] (G 98 \(a\) ) \[X(x,y) =-\frac{3\sqrt{\pi}}{2x}\exp\left(-\frac{1}{x^{2}}\right)\left[1+ \textit{O}\!\left(y^{2}\right)\right]\,,\] (G 98 \(b\) ) \[Y(x,y) =\left[\frac{3}{2}+\frac{3}{2x}\text{Re}\;Z\left(\frac{1}{x} \right)\right]\left[1+\textit{O}\!\left(y^{2}\right)\right]\,.\] (G 98 \(c\) ) * \(x,y\gg 1\): \[W(x,y) =\frac{3x^{2}\left(x^{2}-y^{2}\right)}{2\left(x^{2}+y^{2}\right)^ {2}}\left[1+\textit{O}\!\left(\frac{1}{x^{2}+y^{2}}\right)\right]\,,\] (G 99 \(a\) ) \[X(x,y) =\frac{3\sqrt{\pi}x^{2}\left(y^{2}-2x^{2}\right)}{4\left(x^{2}+y^{2} \right)^{5/2}}\left[1+\textit{O}\!\left(\frac{1}{x^{2}+y^{2}}\right)\right]\,,\] (G 99 \(b\) ) \[Y(x,y) =\frac{3x^{2}}{2\left(x^{2}+y^{2}\right)}\left[1+\textit{O}\! \left(\frac{1}{x^{2}+y^{2}}\right)\right]\,.\] (G 99 \(c\) ) * \(x\ll 1\), \(y\sim 1\): \[W(x,y) =-\frac{3x^{2}}{2y^{2}}\left[1-\exp\left(-\frac{y^{2}}{2}\right) I_{0}\!\left(\frac{y^{2}}{2}\right)\right]\left[1+\textit{O}\!\left(x^{2}\right) \right]\,,\] (G 100 \(a\) ) \[X(x,y) =\frac{3\sqrt{\pi}}{x}\exp\left(-\frac{y^{2}}{2}\right)\left[I_{0} \!\left(\frac{y^{2}}{2}\right)-I_{1}\!\left(\frac{y^{2}}{2}\right)\right]\] \[\times\exp\left(-\frac{1}{x^{2}}\right)\left\{1+\textit{O}\! \left[\exp\left(-\frac{3}{x^{2}}\right)\right]\right\}\,,\] (G 100 \(b\) ) \[Y(x,y) =\frac{3}{2}y^{2}\exp\left(-\frac{y^{2}}{2}\right)\left[I_{0}\! 
\left(\frac{y^{2}}{2}\right)-I_{1}\!\left(\frac{y^{2}}{2}\right)\right]\left[1 +\textit{O}\!\left(x^{2}\right)\right]\,.\] (G 100 \(c\) ) * \(x,y\ll 1\): \[W(x,y) =-\frac{3}{4}x^{2}\left[1+\textit{O}\!\left(x^{2},y^{2}\right) \right]\,,\] (G 101 \(a\) ) \[X(x,y) =\frac{3\sqrt{\pi}}{x}\exp\left(-\frac{1}{x^{2}}\right)\left\{1+ \textit{O}\!\left[\exp\left(-\frac{3}{x^{2}}\right),y^{2}\right]\right\}\,,\] (G 101 \(b\) ) \[Y(x,y) =\left[\frac{3}{2}y^{2}-\frac{3}{4}x^{2}-\frac{9}{8}\left(x^{4}- \frac{2}{3}x^{2}y^{2}+y^{4}\right)\right]\] \[\times\left[1+\textit{O}\!\left(x^{6},x^{4}y^{2},x^{2}y^{4},y^{6} \right)\right]\,.\] (G 101 \(c\) ) * \(x\ll 1\), \(y\gg 1\): \[W(x,y) =-\frac{3x^{2}}{2y^{2}}\left[1+\textit{O}\!\left(x^{2},\frac{1}{y^ {2}}\right)\right]\,,\] (G 102 _a_) \[X(x,y) =\frac{3}{xy^{3}}\exp\left(-\frac{1}{x^{2}}\right)\left\{1+ \textit{O}\!\left[\exp\left(-\frac{3}{x^{2}}\right),\frac{1}{y^{2}}\right] \right\}\,,\] (G 102 _b_) \[Y(x,y) =\frac{3}{2\sqrt{\pi}y}\left[1+\textit{O}\!\left(x^{2},\frac{1}{y ^{2}}\right)\right]\,.\] (G 102 _c_) ## Appendix H Density perturbations for low-frequency modes In this appendix, we derive an expression for the (Fourier-transformed) perturbation of number density \(\widehat{\delta n}_{s}\) of species \(s\) associated with a low-frequency mode, in terms of the expanded terms of the dielectric tensor \(\mathfrak{E}_{s}=\tilde{\omega}_{s\parallel}\mathfrak{E}_{s}^{(0)}+\tilde{ \omega}_{s\parallel}^{2}\mathfrak{E}_{s}^{(1)}+\ldots\) of species \(s\) and the perturbed electric field, \(\widehat{\delta\mathbf{E}}\); we will show that \(\widehat{\delta n}_{s}\) is, in fact, independent of \(\mathfrak{E}_{s}^{(0)}\). We then derive an expression for the perturbed density of all sub-ion-Larmor scale (\(k\rho_{i}\gg 1\)), low-frequency modes. ### Derivation of general expressions We begin with the continuity equation (4.4_a_), which describes the time evolution of the density of species \(s\) in terms of itself and the bulk velocity of the same species. 
For any small-amplitude perturbation (with perturbed density \(\delta n_{s}\) and bulk velocity \(\delta\mathbf{V}_{s}\)) of some (much more slowly evolving) quasi-equilibrium state (with mean density \(n_{s0}\gg\delta n_{s}\) and bulk velocity \(\mathbf{V}_{s0}\gg\delta\mathbf{V}_{s}\)), viz., \[n_{s}=n_{s0}+\delta n_{s},\quad\mathbf{V}_{s}=\mathbf{V}_{s0}+\delta\mathbf{V}_{s}\,,\] (H 1) the continuity equation governing that perturbation then becomes \[\frac{\partial\delta n_{s}}{\partial t}+n_{s0}\mathbf{\nabla}\mathbf{\cdot}\delta\mathbf{ V}_{s}=0\,.\] (H 2) Assuming the perturbation has the form \[\delta n_{s} =\widehat{\delta n}_{s}\exp\left\{\mathrm{i}\left(\mathbf{k}\mathbf{ \cdot}\mathbf{r}-\omega t\right)\right\},\] (H 3 _a_) \[\delta\mathbf{V}_{s} =\widehat{\delta\mathbf{V}}_{s}\exp\left\{\mathrm{i}\left(\mathbf{k}\mathbf{ \cdot}\mathbf{r}-\omega t\right)\right\},\] (H 3 _b_) we deduce from (H 2) that \[\widehat{\delta n}_{s}=\frac{n_{s0}\mathbf{k}\mathbf{\cdot}\delta\mathbf{V}_{s}}{\omega}\,.\] (H 4) The perturbed velocity \(\widehat{\delta\mathbf{V}}_{s}\) can be written in terms of the dielectric tensor of species \(s\) using Ohm's law (C 13) and (2.95): \[\delta\mathbf{V}_{s}=-\frac{\mathrm{i}\omega}{4\uppi Z_{s}en_{s0}}\mathfrak{E}_{ s}\cdot\widehat{\delta\mathbf{E}}\,,\] (H 5) whence, by way of (H 4), \[\widehat{\delta n}_{s}=-\frac{\mathrm{i}}{4\uppi Z_{s}e}\mathbf{k}\mathbf{\cdot} \mathfrak{E}_{s}\cdot\widehat{\delta\mathbf{E}}\,.\] (H 6) Finally, we note that the symmetries (2.101) of \(\mathfrak{E}_{s}^{(0)}\) imply that it does not contribute to the right-hand side of (H.6), which implies in turn that \[\widehat{\delta n}_{s}\approx-\frac{\mathrm{i}\tilde{\omega}_{s\parallel}^{2}}{4 \pi Z_{s}e}\boldsymbol{k\cdot\mathfrak{E}_{s}^{(1)}\boldsymbol{\cdot}\, \widehat{\delta\boldsymbol{E}}}\,.\] (H.7) Thus, for low-frequency modes, \(\widehat{\delta n}_{s}\) is a function of the electric field and \(\mathfrak{E}_{s}^{(1)}\), but not of \(\mathfrak{E}_{s}^{(0)}\). We note that the condition (2.108) implies that, for low-frequency modes, quasi-neutrality is maintained: \[\sum_{s}Z_{s}\widehat{\delta n}_{s}=-\frac{\mathrm{i}}{4\pi e}\boldsymbol{k \cdot\mathfrak{E}_{s}\boldsymbol{\cdot}\,\widehat{\delta\boldsymbol{E}}}=0\,.\] (H.8) Thus, in a two-species plasma, the ion number density associated with a perturbation can be calculated if the electron number density is known, and visa versa. ### Special case: sub-ion-Larmor scale modes in a two-species plasma In the special case of a two-species plasma whose characteristic parallel wavenumber satisfies \(k_{\parallel}\rho_{i}\gg 1\), a particularly simple expression for the perturbed number densities of ions (and electrons) can be derived: the Boltzmann response. 
This arises because the ion dielectric tensor \(\mathfrak{E}_{i}\) is unmagnetised, and so takes the simple form (valid for arbitrary \(\tilde{\omega}_{i}=\omega/kv_{\mathrm{th}i}\)) that was derived in appendix G.1.5: \[\mathfrak{E}_{i}\approx\mathfrak{E}_{i}^{(\mathrm{UM})}=\frac{\omega_{pi}^{2} }{\omega^{2}}\tilde{\omega}_{i}\left\{\left(\boldsymbol{l}-\hat{\boldsymbol{ k}}\hat{\boldsymbol{k}}\right)Z(\tilde{\omega}_{i})+2\left[\tilde{\omega}_{i}+ \tilde{\omega}_{i}^{2}Z(\tilde{\omega}_{i})\right]\hat{\boldsymbol{k}}\hat{ \boldsymbol{k}}\right\}\,.\] (H.9) It follows that \[\boldsymbol{k\cdot\mathfrak{E}_{i}\boldsymbol{\cdot}\,\widehat{\delta \boldsymbol{E}}}\approx\frac{\omega_{pi}^{2}}{\omega^{2}}2\tilde{\omega}_{i} ^{2}\left[1+\tilde{\omega}_{i}Z(\tilde{\omega}_{i})\right]\boldsymbol{k\cdot \widehat{\delta\boldsymbol{E}}}\,.\] (H.10) Now assuming that \(\tilde{\omega}_{i}\ll 1\), it follows that \[\boldsymbol{k\cdot\mathfrak{E}_{i}^{(1)}\boldsymbol{\cdot}\,\widehat{\delta \boldsymbol{E}}}\approx\frac{2\omega_{pi}^{2}}{\omega^{2}}\frac{k_{\parallel }^{2}}{k^{2}}\boldsymbol{k\cdot\widehat{\delta\boldsymbol{E}}}\,.\] (H.11) Expression (H.7) with \(s=i\) then gives \[\widehat{\delta n}_{i}\approx-\frac{Ze\mathrm{i}n_{i0}}{T_{i}}\frac{\hat{ \boldsymbol{k}\boldsymbol{\cdot}\,\widehat{\delta\boldsymbol{E}}}}{k}\,.\] (H.12) Finally, introducing the electrostatic potential \(\varphi\), whose Fourier transform is related to the electrostatic component of the electric field via \[\hat{\varphi}=\frac{\mathrm{i}\hat{\boldsymbol{k}\boldsymbol{\cdot}\,\widehat {\delta\boldsymbol{E}}}}{k}\,,\] (H.13) we deduce that \[\widehat{\delta n}_{i}\approx-\frac{Ze\mathrm{i}n_{i0}}{T_{i}}\hat{\varphi}\,,\] (H.14) and \[\widehat{\delta n}_{e}\approx-\frac{Ze\mathrm{i}n_{e0}}{T_{i}}\hat{\varphi}\,,\] (H.15) where we have used the quasi-neutrality relation \(n_{e0}=Zn_{i0}\) for the equilibrium state. ## Appendix I Calculating the electrostatic field from the transverse electric field In appendix G.1.3, it was shown that for any function with a small anisotropy, \[\mathfrak{C}_{s}^{(0)}\cdot\hat{\mathbf{k}}=0\,, \tag{11}\] which implies that the leading-order terms (in \(\tilde{\omega}_{s\parallel}\ll 1\)) of the dielectric tensor are insufficient to determine the electrostatic field. To do this, we must go to the next order in \(\tilde{\omega}_{s\parallel}\ll 1\). To illustrate how such a calculation is done, in this appendix, we derive an expression for the electrostatic field component \(\hat{\mathbf{k}}\cdot\widehat{\delta\mathbf{E}}\) in terms of the transverse electric field \(\widehat{\delta\mathbf{E}}_{T}\) and special functions when the underlying particle distribution function is Maxwellian. To achieve this aim, we first derive a relation between the components of the electric field in the coordinate basis \(\{\hat{\mathbf{x}},\hat{\mathbf{y}},\hat{\mathbf{z}}\}\). We begin with the consistency condition (109) appropriate for non-relativistic electromagnetic fluctuations: \[\mathbf{k}\cdot\mathfrak{C}\cdot\widehat{\delta\mathbf{E}}=0\,. 
\[\boldsymbol{k}\cdot\mathfrak{E}\cdot\widehat{\delta\boldsymbol{E}}=0\,.\tag{I 2}\]
Writing \(\hat{\boldsymbol{k}}\), \(\mathfrak{E}\) and \(\widehat{\delta\boldsymbol{E}}\) in the basis \(\{\hat{\boldsymbol{x}},\hat{\boldsymbol{y}},\hat{\boldsymbol{z}}\}\), this becomes
\[\bigl(k_{\perp}\mathfrak{E}_{xx}+k_{\parallel}\mathfrak{E}_{xz}\bigr)\widehat{\delta E}_{x}+\bigl(k_{\perp}\mathfrak{E}_{xy}-k_{\parallel}\mathfrak{E}_{yz}\bigr)\widehat{\delta E}_{y}+\bigl(k_{\perp}\mathfrak{E}_{xz}+k_{\parallel}\mathfrak{E}_{zz}\bigr)\widehat{\delta E}_{z}=0\,.\tag{I 3}\]
Now considering the case of fluctuations that satisfy \(\tilde{\omega}_{s\parallel}\ll 1\) for all particle species \(s\), and expanding the components of the dielectric tensor in \(\tilde{\omega}_{s\parallel}\ll 1\), we find
\[\Bigl(k_{\perp}\mathfrak{E}_{xx}^{(1)}+k_{\parallel}\mathfrak{E}_{xz}^{(1)}\Bigr)\widehat{\delta E}_{x}+\Bigl(k_{\perp}\mathfrak{E}_{xy}^{(1)}-k_{\parallel}\mathfrak{E}_{yz}^{(1)}\Bigr)\widehat{\delta E}_{y}+\Bigl(k_{\perp}\mathfrak{E}_{xz}^{(1)}+k_{\parallel}\mathfrak{E}_{zz}^{(1)}\Bigr)\widehat{\delta E}_{z}=O(\tilde{\omega}_{s\parallel}^{3})\,,\tag{I 4}\]
where
\[\mathfrak{E}^{(1)}=\sum_{s}\tilde{\omega}_{s\parallel}^{2}\mathfrak{E}_{s}^{(1)}\,.\tag{I 5}\]
From the results of appendix G.1, we have
\[k_{\perp}\mathfrak{E}_{xx}^{(1)}+k_{\parallel}\mathfrak{E}_{xz}^{(1)}=-\sum_{s}\frac{2k_{\parallel}\omega_{\mathrm{p}s}^{2}\tilde{\omega}_{s\parallel}^{2}}{\omega^{2}}\sum_{m=-\infty}^{\infty}\frac{m}{k_{\perp}\tilde{\rho}_{s}}\mathrm{Re}\;Z\left(\frac{m}{|k_{\parallel}|\tilde{\rho}_{s}}\right)\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{m}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\,,\tag{I 6a}\]
\[k_{\perp}\mathfrak{E}_{xy}^{(1)}-k_{\parallel}\mathfrak{E}_{yz}^{(1)}=\sum_{s}\frac{\sqrt{\pi}k_{\parallel}\omega_{\mathrm{p}s}^{2}\tilde{\omega}_{s\parallel}^{2}}{\omega^{2}}\sum_{m=-\infty}^{\infty}k_{\perp}\tilde{\rho}_{s}\exp\left(-\frac{m^{2}}{k_{\parallel}^{2}\tilde{\rho}_{s}^{2}}\right)\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\left[I_{m}^{\prime}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)-I_{m}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right]\,,\tag{I 6b}\]
\[k_{\perp}\mathfrak{E}_{xz}^{(1)}+k_{\parallel}\mathfrak{E}_{zz}^{(1)}=\sum_{s}\frac{2k_{\parallel}\omega_{\mathrm{p}s}^{2}\tilde{\omega}_{s\parallel}^{2}}{\omega^{2}}\left[1+\sum_{m=-\infty}^{\infty}\frac{m}{|k_{\parallel}|\tilde{\rho}_{s}}\mathrm{Re}\;Z\left(\frac{m}{|k_{\parallel}|\tilde{\rho}_{s}}\right)\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{m}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right]\,.\tag{I 6c}\]
Thus, we have the following relationship between \(\widehat{\delta E}_{x}\), \(\widehat{\delta E}_{y}\) and \(\widehat{\delta E}_{z}\):
\[\sum_{s}\frac{k_{\mathrm{D}s}^{2}}{2k_{\parallel}^{2}}\left\{-L\bigl(|k_{\parallel}|\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\bigr)\widehat{\delta E}_{x}+N\bigl(|k_{\parallel}|\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\bigr)\widehat{\delta E}_{y}+\left[2+\frac{k_{\perp}}{k_{\parallel}}L\bigl(|k_{\parallel}|\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\bigr)\right]\widehat{\delta E}_{z}\right\}=0\,,\tag{I 7}\]
where \(k_{\mathrm{D}s}\) is the Debye wavenumber (D 12), and \(L(x,y)\) and \(N(x,y)\) were defined previously by (G 32).
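The \(m\)-sums in (I 6) converge rapidly, since \(\exp(-k_{\perp}^{2}\tilde{\rho}_{s}^{2}/2)\,I_{m}(k_{\perp}^{2}\tilde{\rho}_{s}^{2}/2)\) decays super-exponentially with \(m\), so they are easy to evaluate by direct truncation. A minimal sketch (ours, not part of the original analysis; Python with scipy assumed, and the truncation order `mmax` an arbitrary choice) of the inner sum in (I 6a) for a single species, with the prefactor \(-2k_{\parallel}\omega_{\mathrm{p}s}^{2}\tilde{\omega}_{s\parallel}^{2}/\omega^{2}\) omitted:

```python
import numpy as np
from scipy.special import wofz, iv

def Z(zeta):
    """Plasma dispersion function via the Faddeeva function."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def inner_sum_I6a(x, y, mmax=40):
    """Inner m-sum of (I 6a), with x = |kpar|*rho and y = kperp*rho.
    The m = 0 term vanishes, and the +m and -m terms are equal,
    because Re Z is odd while I_{-m} = I_m."""
    lam = y**2 / 2
    return sum(2 * (m / y) * np.real(Z(m / x)) * np.exp(-lam) * iv(m, lam)
               for m in range(1, mmax + 1))

print(inner_sum_I6a(0.8, 1.2))
```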
Using the identities \[\widehat{\delta E}_{x} = \frac{k_{\parallel}}{k}\widehat{\delta E}_{1}+\frac{k_{\perp}}{k }\widehat{\delta E}_{3}\,, \tag{18a}\] \[\widehat{\delta E}_{y} = \widehat{\delta E}_{2}\,,\] (18b) \[\widehat{\delta E}_{z} = -\frac{k_{\perp}}{k}\widehat{\delta E}_{1}+\frac{k_{\parallel}}{k }\widehat{\delta E}_{3}\,, \tag{18c}\] we can rearrange (17) to give \[\frac{1}{k_{\parallel}k}\left(\sum_{s}k_{\mathrm{D}s}^{2}\right) \widehat{\delta E}_{3} = \sum_{s}\frac{k_{\mathrm{D}s}^{2}}{2k_{\parallel}^{2}}\Bigg{\{} \left[\frac{k}{k_{\parallel}}L\big{(}|k_{\parallel}|\tilde{\rho}_{s},k_{\perp }\tilde{\rho}_{s}\big{)}+2\frac{k_{\perp}}{k}\right]\widehat{\delta E}_{1} \tag{19}\] \[-N\big{(}|k_{\parallel}|\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{ s}\big{)}\,\widehat{\delta E}_{2}\Bigg{\}}\,.\] Thus, the electrostatic field is related to the transverse field by \[\hat{\mathbf{k}}\cdot\widehat{\delta\mathbf{E}}=\left(\sum_{s}\frac{Z_{s} T_{e}}{T_{s}}\right)^{-1}\sum_{s}\frac{Z_{s}T_{e}}{T_{s}}\Bigg{\{} \left[\frac{k^{2}}{2k_{\parallel}^{2}}L\big{(}|k_{\parallel}|\tilde{\rho}_{s },k_{\perp}\tilde{\rho}_{s}\big{)}+\frac{k_{\perp}}{k_{\parallel}}\right] \widehat{\delta E}_{1} \tag{10}\] \[-\frac{k}{2k_{\parallel}}N\big{(}|k_{\parallel}|\tilde{\rho}_{s}, k_{\perp}\tilde{\rho}_{s}\big{)}\,\widehat{\delta E}_{2}\Bigg{\}}\,.\] ## Appendix J Methodology for characterising CET microinstabilities In this appendix, we describe our method for calculating the real frequencies and growth rates of microinstabilities driven by the CE electron- and ion-temperature-gradient, and electron-friction terms when the Krook collision operator is assumed. The method follows that outlined in section 2.5: that is, motivated by the considerations of section 2.3.4, we assume that all significant CET microinstabilities are low frequency (\(\omega\ll k_{\parallel}v_{\mathrm{th}s}\) for at least one particle species), and derive algebraic dispersion relations of such microinstabilities [a particular example of which is given by (117)]. The growth rate of CET microinstabilities [and, therefore, the stability of the electron and ion CE distribution functions (1\(a\)) and (1\(b\))] as a function of their parallel and perpendicular wavenumbers \(k_{\parallel}\) and \(k_{\perp}\) is assessed by solving this dispersion relation for the complex frequency \(\omega\), and then evaluating its imaginary part. As we explained in section 2.5, to construct the algebraic, low-frequency dispersion relation for particular forms of CE distribution function for each particle species \(s\), we must evaluate its (leading-order) non-Maxwellian contribution to the dielectric tensor, \(\mathbf{P}_{s}\approx\mathbf{P}_{s}^{(0)}\) [see (96) and (G 10) for the precise relation of this quantity to the dielectric tensor \(\mathbf{\mathfrak{E}}_{s}\)]. This is done for the CE electron-friction term in appendix J.1, and for the CE temperature-gradient terms in appendix J.2. We then deduce the algebraic dispersion relations of CE electron-temperature-gradient-driven microinstabilities in appendix J.3, and of CE ion-temperature-gradient-driven microinstabilities in appendix J.4. 
Within these two appendices, respectively, we also present derivations of the (further) simplified dispersion relations for the parallel CET whistler instability (appendix J.3.1), the parallel CET slow-hydromagnetic-wave instability (appendix J.4.1), and the CET long-wavelength KAW instability (appendix J.4.2), from which the frequencies and growth rates of these instabilities stated in section 3.3 are calculated.

### Dielectric response of CE electron-friction term

We first consider the CE electron-friction term when evaluating \(\boldsymbol{\mathsf{P}}_{e}^{(0)}\), defined in (96). We showed in appendix G.2 that, when a Krook collision operator is assumed, if \(\eta_{e}^{T}=\eta_{i}=0\), then [see (100)]
\[(\boldsymbol{\mathsf{P}}_{e}^{(0)})_{11}=\frac{\eta_{e}^{R}}{2}(\boldsymbol{\mathsf{M}}_{e}^{(0)})_{11}\,, \tag{101a}\]
\[(\boldsymbol{\mathsf{P}}_{e}^{(0)})_{12}=\frac{\eta_{e}^{R}}{2}(\boldsymbol{\mathsf{M}}_{e}^{(0)})_{12}\,, \tag{101b}\]
\[(\boldsymbol{\mathsf{P}}_{e}^{(0)})_{21}=\frac{\eta_{e}^{R}}{2}(\boldsymbol{\mathsf{M}}_{e}^{(0)})_{21}\,, \tag{101c}\]
\[(\boldsymbol{\mathsf{P}}_{e}^{(0)})_{22}=\frac{\eta_{e}^{R}}{2}(\boldsymbol{\mathsf{M}}_{e}^{(0)})_{22}\,. \tag{101d}\]
It follows that the dispersion relation of all plasma modes is identical to that in a Maxwellian plasma, only with shifted complex frequencies \(\tilde{\omega}_{e\parallel}^{*}\equiv\tilde{\omega}_{e\parallel}+\eta_{e}^{R}/2\). Since \(\mathrm{Im}(\tilde{\omega}_{e\parallel}^{*})<0\) for all modes in a Maxwellian plasma, and the shift \(\eta_{e}^{R}/2\) is real, we conclude that \(\mathrm{Im}(\tilde{\omega}_{e\parallel})<0\) also, and hence the CE electron-friction term cannot drive any microinstabilities when a Krook collision operator is employed: instead, it merely modifies the real frequency of the waves. Thus, when characterising CET microinstabilities, we henceforth ignore the CE electron-friction term, as well as the electron-ion-drift term (viz., \(\eta_{e}^{R}=\eta_{e}^{u}=0\)).

### Dielectric response of CE temperature-gradient terms

Now consider the CE temperature-gradient terms. It is shown in appendix G.3 that, for the electrons, \(\boldsymbol{\mathsf{P}}_{e}^{(0)}\) is given by
\[(\boldsymbol{\mathsf{P}}_{e}^{(0)})_{11}=\mathrm{i}\eta_{e}^{T}\frac{k^{2}}{k_{\parallel}^{2}}I\big(k_{\parallel}\tilde{\rho}_{e},k_{\perp}\tilde{\rho}_{e}\big)\,, \tag{102a}\]
\[(\boldsymbol{\mathsf{P}}_{e}^{(0)})_{12}=-\mathrm{i}\eta_{e}^{T}\frac{k}{k_{\parallel}}J\big(k_{\parallel}\tilde{\rho}_{e},k_{\perp}\tilde{\rho}_{e}\big)\,, \tag{102b}\]
\[(\boldsymbol{\mathsf{P}}_{e}^{(0)})_{21}=\mathrm{i}\eta_{e}^{T}\frac{k}{k_{\parallel}}J\big(k_{\parallel}\tilde{\rho}_{e},k_{\perp}\tilde{\rho}_{e}\big)\,, \tag{102c}\]
\[(\boldsymbol{\mathsf{P}}_{e}^{(0)})_{22}=\mathrm{i}\eta_{e}^{T}K\big(k_{\parallel}\tilde{\rho}_{e},k_{\perp}\tilde{\rho}_{e}\big)\,, \tag{102d}\]
where the special functions \(I(x,y)\), \(J(x,y)\) and \(K(x,y)\) are defined by (100). Note that \(\tilde{\rho}_{e}<0\), by definition.
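The ion contribution given below has exactly the same \(2\times 2\) structure, so in a numerical implementation a single routine can assemble either block. A minimal Python sketch, assuming the special functions \(I\), \(J\), \(K\) are available as callables (the routine name and signature are hypothetical):

```python
import numpy as np

def P0_temperature_gradient(eta, kpar, kperp, rho_t, I, J, K):
    """Assemble the 2x2 transverse block of P_s^(0) from (102)/(103).
    rho_t is the signed gyroradius rho_tilde_s (negative for electrons);
    I, J, K are assumed callables for the special functions in (100);
    kpar, kperp > 0."""
    k = np.hypot(kpar, kperp)
    Iv, Jv, Kv = (f(kpar * rho_t, kperp * rho_t) for f in (I, J, K))
    return 1j * eta * np.array([[(k / kpar)**2 * Iv, -(k / kpar) * Jv],
                                [ (k / kpar) * Jv,    Kv]])
```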
The contribution \(\boldsymbol{\mathsf{P}}_{i}^{(0)}\) associated with the CE ion-temperature-gradient terms is given by \[(\boldsymbol{\mathsf{P}}_{i}^{(0)})_{11} =\mathrm{i}\eta_{i}\frac{k^{2}}{k_{\parallel}^{2}}I\big{(}k_{ \parallel}\rho_{i},k_{\perp}\rho_{i}\big{)}\, \tag{103a}\] \[(\boldsymbol{\mathsf{P}}_{i}^{(0)})_{12} =-\mathrm{i}\eta_{i}\frac{k}{k_{\parallel}}J\big{(}k_{ \parallel}\rho_{i},k_{\perp}\rho_{i}\big{)}\,\] (103b) \[(\boldsymbol{\mathsf{P}}_{i}^{(0)})_{21} =\mathrm{i}\eta_{i}\frac{k}{k_{\parallel}}J\big{(}k_{ \parallel}\rho_{i},k_{\perp}\rho_{i}\big{)}\,\] (103c) \[(\boldsymbol{\mathsf{P}}_{i}^{(0)})_{22} =\mathrm{i}\eta_{i}K\big{(}k_{\parallel}\rho_{i},k_{\perp}\rho_{ i}\big{)}. \tag{103d}\] ### Approximate dispersion relation of CE electron-temperature-gradient-driven microinstabilities We first consider microinstabilities for which \(\tilde{\omega}_{e\parallel}=\omega/k_{\parallel}v_{\rm th\/e}\sim\eta_{e}^{T}\). It follows that \(\tilde{\omega}_{i\parallel}=\omega/k_{\parallel}v_{\rm th\/i}\sim\eta_{e}^{T} \mu_{e}^{-1/2}\gg\eta_{i}\). Therefore, the CE ion-temperature-gradient term is irrelevant for such instabilities, and we need consider only the electron-temperature-gradient term. We also assume that the Maxwellian contribution to the dielectric tensor, \(\mathbf{M}_{i}\), can be ignored for such microinstabilities - the validity of this assumption is discussed at the end of this section. The dispersion relation for microinstabilities under the ordering \(\tilde{\omega}_{e\parallel}\sim\eta_{e}^{T}\sim 1/\beta_{e}\) is then given by (117), with \(\mathbf{M}_{e}^{(0)}\) and \(\mathbf{P}_{e}^{(0)}\) substituted for by (120) and (121), respectively: \[\big{[}\tilde{\omega}_{e\parallel}F\big{(}k_{\parallel}\tilde{ \rho}_{e},k_{\perp}\tilde{\rho}_{e}\big{)} + \eta_{e}^{T}I\big{(}k_{\parallel}\tilde{\rho}_{e},k_{\perp} \tilde{\rho}_{e}\big{)}+{\rm i}k_{\parallel}^{2}d_{e}^{2}\big{]} \tag{122}\] \[\times \big{[}\tilde{\omega}_{e\parallel}H\big{(}k_{\parallel}\tilde{ \rho}_{e},k_{\perp}\tilde{\rho}_{e}\big{)}+\eta_{e}^{T}K\big{(}k_{\parallel} \tilde{\rho}_{e},k_{\perp}\tilde{\rho}_{e}\big{)}+{\rm i}k^{2}d_{e}^{2}\big{]}\] \[+ \big{[}\tilde{\omega}_{e\parallel}G\big{(}k_{\parallel}\tilde{ \rho}_{e},k_{\perp}\tilde{\rho}_{e}\big{)}+\eta_{e}^{T}J\big{(}k_{\parallel} \tilde{\rho}_{e},k_{\perp}\tilde{\rho}_{e}\big{)}\big{]}^{2}=0\,.\] We remind the reader that we have ordered \(k^{2}d_{e}^{2}\sim\eta_{e}^{T}\) and \(k\rho_{e}\sim 1\). Noting that \(\beta_{e}=\rho_{e}^{2}/d_{e}^{2}\), we can rewrite the skin-depth terms as follows: \[k_{\parallel}^{2}d_{e}^{2}=\frac{k_{\parallel}^{2}\rho_{e}^{2}}{\beta_{e}}\,, \quad k^{2}d_{e}^{2}=\frac{k^{2}\rho_{e}^{2}}{\beta_{e}}\,. 
\tag{123}\] This allows for the dispersion relation (122) to be arranged as a quadratic in the complex variable \(\tilde{\omega}_{e\parallel}\beta_{e}\): \[A_{\rm T}\big{(}k_{\parallel}\rho_{e},k_{\perp}\rho_{e}\big{)}\,\tilde{\omega }_{e\parallel}^{2}\beta_{e}^{2}+B_{\rm T}\big{(}k_{\parallel}\rho_{e},k_{\perp }\rho_{e}\big{)}\,\tilde{\omega}_{e\parallel}\beta_{e}+C_{\rm T}\big{(}k_{ \parallel}\rho_{e},k_{\perp}\rho_{e}\big{)}=0\,, \tag{124}\] where \[A_{\rm T}\big{(}k_{\parallel}\rho_{e},k_{\perp}\rho_{e}\big{)} = F_{e}H_{e}+G_{e}^{2}\,, \tag{125}\] \[B_{\rm T}\big{(}k_{\parallel}\rho_{e},k_{\perp}\rho_{e}\big{)} = \eta_{e}^{T}\beta_{e}\left(F_{e}K_{e}+H_{e}I_{e}+2G_{e}J_{e} \right)+{\rm i}\left(F_{e}k^{2}\rho_{e}^{2}+H_{e}k_{\parallel}^{2}\rho_{e}^{2 }\right)\,,\] (126) \[C_{\rm T}\big{(}k_{\parallel}\rho_{e},k_{\perp}\rho_{e}\big{)} = \big{(}\eta_{e}^{T}\beta_{e}\big{)}^{2}\,\big{(}I_{e}K_{e}+J_{e}^{ 2}\big{)}-k^{2}k_{\parallel}^{2}\rho_{e}^{4}+{\rm i}\eta_{e}^{T}\beta_{e} \left(I_{e}k^{2}\rho_{e}^{2}+K_{e}k_{\parallel}^{2}\rho_{e}^{2}\right)\,, \tag{127}\] and \(F_{e}\equiv F\big{(}k_{\parallel}\tilde{\rho}_{e},k_{\perp}\tilde{\rho}_{e} \big{)}\), \(G_{e}\equiv G\big{(}k_{\parallel}\tilde{\rho}_{e},k_{\perp}\tilde{\rho}_{e} \big{)}\), etc. Solving (124) gives two roots; restoring dimensions to the complex frequency, they are \[\omega=\frac{\Omega_{e}}{\beta_{e}}k_{\parallel}\rho_{e}\frac{-B_{\rm T}\pm \sqrt{B_{\rm T}^{2}+4A_{\rm T}C_{\rm T}}}{2A_{\rm T}}\,, \tag{128}\] recovering (112). For a given wavenumber, we use (128) to calculate the growth rates of the perturbations - and, in particular, to see if positive growth rates are present. If they are, it is anticipated that they will have typical size \(\gamma\sim\Omega_{e}/\beta_{e}\sim\eta_{e}^{T}\Omega_{e}\) (or \(\tilde{\omega}_{e\parallel}\sim 1/\beta_{e}\sim\eta_{e}^{T}\)). When deriving (128), we assumed that neglecting the Maxwellian ion response was legitimate. It is clear that if \(\tilde{\omega}_{i\parallel}\gg 1\), then thermal ions are effectively static to electromagnetic perturbations, and so their contribution \(\mathbf{M}_{i}\) to the dielectric tensor can be ignored. In terms of a condition on \(\eta_{e}^{T}\), the scaling \(\eta_{e}^{T}\sim\tilde{\omega}_{e\parallel}\) gives \(\eta_{e}^{T}\gg\mu_{e}^{1/2}\), so this regime is valid for sufficiently large \(\eta_{e}^{T}\). For \(\tilde{\omega}_{i\parallel}\lesssim 1\), it is not immediately clear in the same way that the ion contribution to the dielectric tensor is small. However, having deduced the typical magnitude of the complex frequency of perturbations whilst ignoring ion contributions, we are now able to confirm that our neglect of \(\mathbf{M}_{i}\) was justified. Since \(k\rho_{e}\sim 1\) under the ordering assumed when deriving (111), we conclude that the Maxwellian ion response is unmagnetised: \(k\rho_{i}\gg 1\). As a consequence, it can be shown (see appendix G.1.5) that the transverse components of \(\boldsymbol{M}_{i}\) are given by \[\left(\boldsymbol{M}_{i}\right)_{11}=\left(\boldsymbol{M}_{i}\right)_{22}= \tilde{\omega}_{i}Z(\tilde{\omega}_{i})\,\quad\left(\boldsymbol{M}_{i}\right)_{12}=\left( \boldsymbol{M}_{i}\right)_{21}=0\,, \tag{123}\] where \(\tilde{\omega}_{i}\equiv\omega/kv_{\rm thi}=k_{\parallel}\tilde{\omega}_{i \parallel}/k\). 
Then, estimating the size of the neglected Maxwellian ion contribution to the dielectric tensor (assuming \(k_{\parallel}\sim k\)) as compared with the equivalent electron contribution, we find \[\frac{(\boldsymbol{\mathfrak{E}}_{i})_{11}}{(\boldsymbol{\mathfrak{E}}_{e}^{( 0)})_{11}}\sim\frac{(\boldsymbol{\mathfrak{E}}_{i})_{22}}{(\boldsymbol{ \mathfrak{E}}_{e}^{(0)})_{22}}\sim\frac{\mu_{e}\tilde{\omega}_{i}}{\tilde{ \omega}_{e\parallel}}|Z(\tilde{\omega}_{i})\,|\sim\mu_{e}^{1/2}|Z(\tilde{ \omega}_{i})\,|, \tag{124}\] where we have used \(\boldsymbol{\mathfrak{E}}_{i}=\mu_{e}\boldsymbol{M}_{i}\) and \(\boldsymbol{\mathfrak{E}}_{e}^{(0)}=\tilde{\omega}_{e\parallel}\boldsymbol{M} _{e}^{(0)}+\boldsymbol{P}_{e}^{(0)}\) (see section 2.5.3). Since \(|Z(z)\,|\lesssim 1\) for all \(z\) with positive imaginary part (Fried & Conte, 1961), we conclude that the ion contribution to the dielectric tensor is indeed small for unstable perturbations, irrespective of the value of \(\tilde{\omega}_{i\parallel}\), and so its neglect was valid. #### j.3.1 Derivation of frequency and growth rate of the parallel CET whistler instability The dispersion relation of unstable whistler waves with their wavevector parallel to \(\boldsymbol{B}_{0}\) is obtained by taking the subsidiary limit \(k_{\perp}\rho_{e}\to 0\) in (111), and substituting \(\tilde{\rho}_{e}=-\rho_{e}\): \[\left[\tilde{\omega}_{e\parallel}\beta_{e}\sqrt{\pi}\exp\left(- \frac{1}{k_{\parallel}^{2}\rho_{e}^{2}}\right)+\eta_{e}^{T}\beta_{e}\frac{ \sqrt{\pi}}{2}\left(\frac{1}{k_{\parallel}^{2}\rho_{e}^{2}}-\frac{1}{2}\right) \exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{e}^{2}}\right)+{\rm i}k_{\parallel }^{2}\rho_{e}^{2}\right]^{2}\] \[+\left\{\tilde{\omega}_{e\parallel}\beta_{e}\operatorname{Re}\,Z \!\left(\frac{1}{k_{\parallel}\rho_{e}}\right)+\eta_{e}^{T}\beta_{e}\left[ \frac{1}{2k_{\parallel}\rho_{e}}+\left(\frac{1}{2k_{\parallel}^{2}\rho_{e}^{ 2}}-\frac{1}{4}\right)\operatorname{Re}\,Z\!\left(\frac{1}{k_{\parallel}\rho_ {e}}\right)\right]\right\}^{2}=0\,. \tag{125}\] This can be factorised to give two roots; separating the complex frequency into real and imaginary parts via \(\omega=\varpi+{\rm i}\gamma\), and defining \[\tilde{\varpi}_{e\parallel}\equiv\frac{\varpi}{k_{\parallel}v_{\rm the}}\,, \quad\tilde{\gamma}_{e\parallel}\equiv\frac{\gamma}{k_{\parallel}v_{\rm the}}\,, \tag{126}\] we have \[\tilde{\varpi}_{e\parallel}\beta_{e}=\eta_{e}^{T}\beta_{e}\left( \frac{1}{2k_{\parallel}^{2}\rho_{e}^{2}}-\frac{1}{4}\right)+\frac{\left(\eta_{ e}^{T}\beta_{e}/2k_{\parallel}\rho_{e}-k_{\parallel}^{2}\rho_{e}^{2}\right) \operatorname{Re}\,Z\!\left(1/k_{\parallel}\rho_{e}\right)}{\left[\operatorname {Re}\,Z\!\left(1/k_{\parallel}\rho_{e}\right)\right]^{2}+\pi\exp\left(-2/k_{ \parallel}^{2}\rho_{e}^{2}\right)}\,, \tag{127a}\] \[\tilde{\gamma}_{e\parallel}\beta_{e}=\frac{\sqrt{\pi}\!\left(\eta_{ e}^{T}\beta_{e}/2k_{\parallel}\rho_{e}-k_{\parallel}^{2}\rho_{e}^{2}\right)}{ \left[\operatorname{Re}\,Z\!\left(1/k_{\parallel}\rho_{e}\right)\right]^{2} \exp\left(1/k_{\parallel}^{2}\rho_{e}^{2}\right)+\pi\exp\left(-1/k_{ \parallel}^{2}\rho_{e}^{2}\right)}\,, \tag{127b}\] whence (111) follows immediately. ### Approximate dispersion relation of CE ion-temperature-gradient-driven microinstabilities We now explain the method used to characterise microinstabilities driven by the ion-temperature-gradient term. 
For these, we set the electron-temperature-gradient terms to zero, \(\eta_{e}^{T}=0\), assume the ordering \(\tilde{\omega}_{i\parallel}\sim\eta_{i}\), and anticipate that such microinstabilities will occur on ion rather than electron scales, i.e., \(k\rho_{i}\sim 1\). Under the ordering \(\tilde{\omega}_{i\parallel}\sim\eta_{i}\ll 1\), it follows that \(\tilde{\omega}_{e\parallel}\sim\mu_{e}^{1/2}\tilde{\omega}_{i\parallel}\ll 1\); therefore, we can use (122) to quantify the contribution of Maxwellian electrons to the total dielectric tensor. However, since \(k\rho_{i}\sim 1\), we must consider the matrix \(\boldsymbol{\mathsf{M}}_{e}^{(0)}\) in the limit \(k_{\parallel}\rho_{e}\sim k_{\perp}\rho_{e}\sim\mu_{e}^{1/2}\ll 1\). Asymptotic forms of (120) appropriate for this limit are given by (109), and lead to
\[(\boldsymbol{\mathsf{M}}_{e}^{(0)})_{11}=O\left[\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{e}^{2}}\right)\right]\,, \tag{109a}\]
\[(\boldsymbol{\mathsf{M}}_{e}^{(0)})_{12}\approx-\mathrm{i}\frac{k}{k_{\parallel}}\left[k_{\parallel}\rho_{e}+O(k^{3}\rho_{e}^{3})\right]\,, \tag{109b}\]
\[(\boldsymbol{\mathsf{M}}_{e}^{(0)})_{21}=\mathrm{i}\frac{k}{k_{\parallel}}\left[k_{\parallel}\rho_{e}+O(k^{3}\rho_{e}^{3})\right]\,, \tag{109c}\]
\[(\boldsymbol{\mathsf{M}}_{e}^{(0)})_{22}=\mathrm{i}\left[\sqrt{\pi}k_{\perp}^{2}\rho_{e}^{2}+O(k_{\perp}^{4}\rho_{e}^{4})\right]\,. \tag{109d}\]
Footnote: As noted in section 2.5.6, for \(k_{\parallel}\rho_{e}\ll 1\), the approximation \((\boldsymbol{\mathsf{M}}_{e})_{11}\approx\tilde{\omega}_{e\parallel}(\boldsymbol{\mathsf{M}}_{e}^{(0)})_{11}\) in fact breaks down, on account of \((\boldsymbol{\mathsf{M}}_{e}^{(0)})_{11}\) becoming exponentially small in \(k_{\parallel}\rho_{e}\ll 1\). However, it turns out that when \(k_{\parallel}\rho_{i}\sim k_{\perp}\rho_{i}\sim 1\), \((\boldsymbol{\mathsf{M}}_{e})_{11}\ll(\boldsymbol{\mathsf{M}}_{i})_{11}\), and so this subtlety can be ignored for the CE ion-temperature-gradient-driven instabilities.

We now combine (109) with (120) for \(\boldsymbol{\mathsf{M}}_{i}^{(0)}\) and (103) for \(\boldsymbol{\mathsf{P}}_{i}^{(0)}\), and find the dispersion relation for CE ion-temperature-gradient-driven microinstabilities by substituting the resulting dielectric tensor into (117):
\[\left[\tilde{\omega}_{i\parallel}F\big(k_{\parallel}\rho_{i},k_{\perp}\rho_{i}\big)+\eta_{i}I\big(k_{\parallel}\rho_{i},k_{\perp}\rho_{i}\big)+\mathrm{i}k_{\parallel}^{2}d_{i}^{2}\right]\times\left[\tilde{\omega}_{i\parallel}H\big(k_{\parallel}\rho_{i},k_{\perp}\rho_{i}\big)+\eta_{i}K\big(k_{\parallel}\rho_{i},k_{\perp}\rho_{i}\big)+\mathrm{i}k^{2}d_{i}^{2}\right]+\left[\tilde{\omega}_{i\parallel}\left[G\big(k_{\parallel}\rho_{i},k_{\perp}\rho_{i}\big)+k_{\parallel}\rho_{i}\right]+\eta_{i}J\big(k_{\parallel}\rho_{i},k_{\perp}\rho_{i}\big)\right]^{2}=0\,, \tag{110}\]
where \(d_{i}=c/\omega_{pi}\) is the ion inertial scale, and we have ordered \(\eta_{i}\sim 1/\beta_{i}\sim k^{2}d_{i}^{2}\). This dispersion relation is very similar to (122), save for the addition of one term [the middle term in the third line of (110)] providing a linear coupling between the \(\widehat{\delta E}_{1}\) and \(\widehat{\delta E}_{2}\) components of the electric field perturbation.
Similarly to (124), the dispersion relation (110) can be written as a quadratic in \(\tilde{\omega}_{i\parallel}\beta_{i}\), which is then solved to give the following expression for the complex frequency:
\[\omega=\frac{\Omega_{i}}{\beta_{i}}k_{\parallel}\rho_{i}\,\frac{-\tilde{B}_{\rm T}\pm\sqrt{\tilde{B}_{\rm T}^{2}+4\tilde{A}_{\rm T}\tilde{C}_{\rm T}}}{2\tilde{A}_{\rm T}}\,, \tag{111}\]
where
\[\tilde{A}_{\rm T}=F_{i}H_{i}+\left[G_{i}+k_{\parallel}\rho_{i}\right]^{2}\,, \tag{112}\]
\[\tilde{B}_{\rm T}=\eta_{i}\beta_{i}\left[F_{i}K_{i}+H_{i}I_{i}+2J_{i}\left(G_{i}+k_{\parallel}\rho_{i}\right)\right]+\mathrm{i}\left(F_{i}k^{2}\rho_{i}^{2}+H_{i}k_{\parallel}^{2}\rho_{i}^{2}\right)\,, \tag{113}\]
\[\tilde{C}_{\rm T}=\left(\eta_{i}\beta_{i}\right)^{2}\left(I_{i}K_{i}+J_{i}^{2}\right)-k^{2}k_{\parallel}^{2}\rho_{i}^{4}+\mathrm{i}\eta_{i}\beta_{i}\left(I_{i}k^{2}\rho_{i}^{2}+K_{i}k_{\parallel}^{2}\rho_{i}^{2}\right)\,. \tag{114}\]
This expression is the one that is used to evaluate the real frequencies and growth rates of ion-scale CET microinstabilities in section 3.3.3.

#### j.4.1 Derivation of frequency and growth rate of the parallel CET slow-hydromagnetic-wave instability

We obtain the dispersion relation of the parallel slow-wave instability by considering the general dispersion relation (110) of CE ion-temperature-gradient-driven instabilities in the limit \(k_{\perp}\to 0\):
\[\left[\tilde{\omega}_{i\parallel}\beta_{i}\sqrt{\pi}\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{i}^{2}}\right)+\eta_{i}\beta_{i}\frac{\sqrt{\pi}}{2}\left(\frac{1}{k_{\parallel}^{2}\rho_{i}^{2}}-\frac{1}{2}\right)\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{i}^{2}}\right)+\mathrm{i}k_{\parallel}^{2}\rho_{i}^{2}\right]^{2}+\left\{\tilde{\omega}_{i\parallel}\beta_{i}\left[\mathrm{Re}\;Z\!\left(\frac{1}{k_{\parallel}\rho_{i}}\right)+k_{\parallel}\rho_{i}\right]+\eta_{i}\beta_{i}\left[\frac{1}{2k_{\parallel}\rho_{i}}+\left(\frac{1}{2k_{\parallel}^{2}\rho_{i}^{2}}-\frac{1}{4}\right)\mathrm{Re}\;Z\!\left(\frac{1}{k_{\parallel}\rho_{i}}\right)\right]\right\}^{2}=0\,. \tag{192}\]
As before, this can be factorised to give two roots; for \(\tilde{\omega}_{i\parallel}=\tilde{\varpi}_{i\parallel}+\mathrm{i}\tilde{\gamma}_{i\parallel}\) [cf. (164)], it follows that
\[\tilde{\varpi}_{i\parallel}\beta_{i}=\eta_{i}\beta_{i}\left(\frac{1}{2k_{\parallel}^{2}\rho_{i}^{2}}-\frac{1}{4}\right)+\frac{k_{\parallel}\rho_{i}\left[\mathrm{Re}\;Z\!\left(\frac{1}{k_{\parallel}\rho_{i}}\right)+k_{\parallel}\rho_{i}\right]\left(\eta_{i}\beta_{i}/4-k_{\parallel}\rho_{i}\right)}{\left[\mathrm{Re}\;Z\!\left(\frac{1}{k_{\parallel}\rho_{i}}\right)+k_{\parallel}\rho_{i}\right]^{2}+\pi\exp\left(-\frac{2}{k_{\parallel}^{2}\rho_{i}^{2}}\right)}\,, \tag{193a}\]
\[\tilde{\gamma}_{i\parallel}\beta_{i}=\frac{\sqrt{\pi}k_{\parallel}\rho_{i}\left(\eta_{i}\beta_{i}/4-k_{\parallel}\rho_{i}\right)}{\left[\mathrm{Re}\;Z\!\left(\frac{1}{k_{\parallel}\rho_{i}}\right)+k_{\parallel}\rho_{i}\right]^{2}\exp\left(\frac{1}{k_{\parallel}^{2}\rho_{i}^{2}}\right)+\pi\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{i}^{2}}\right)}\,. \tag{193b}\]
These can be rearranged to give (183).
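Both of the explicit root pairs above, (127) and (193), are immediate to evaluate numerically. A self-contained Python sketch (function names are our own; \(\mathrm{Re}\,Z\) is computed from the Faddeeva function, and the expressions are practical for \(k_{\parallel}\rho_{s}\) of order unity, since \(\exp(1/x^{2})\) overflows for \(x\ll 1\)):

```python
import numpy as np
from scipy.special import wofz

def ReZ(x):
    # Re Z(x) for real x, using Z(z) = i*sqrt(pi)*wofz(z)
    return (1j * np.sqrt(np.pi) * wofz(x)).real

def cet_parallel_whistler(kpar_rho_e, eta_beta):
    """Normalised roots (127a,b): (varpi, gamma)*beta_e/(kpar*v_the),
    with eta_beta = eta_e^T*beta_e and kpar_rho_e = k_par*rho_e."""
    x = kpar_rho_e
    rz = ReZ(1.0 / x)
    drive = eta_beta / (2.0 * x) - x**2
    varpi = eta_beta * (0.5 / x**2 - 0.25) \
            + drive * rz / (rz**2 + np.pi * np.exp(-2.0 / x**2))
    gamma = np.sqrt(np.pi) * drive / (rz**2 * np.exp(1.0 / x**2)
            + np.pi * np.exp(-1.0 / x**2))
    return varpi, gamma

def cet_parallel_slow_wave(kpar_rho_i, eta_beta):
    """Normalised roots (193a,b); identical structure, with
    ReZ -> ReZ + kpar_rho_i and drive term eta_i*beta_i/4 - kpar_rho_i."""
    x = kpar_rho_i
    rz = ReZ(1.0 / x) + x
    drive = eta_beta / 4.0 - x
    varpi = eta_beta * (0.5 / x**2 - 0.25) \
            + x * rz * drive / (rz**2 + np.pi * np.exp(-2.0 / x**2))
    gamma = np.sqrt(np.pi) * x * drive / (rz**2 * np.exp(1.0 / x**2)
            + np.pi * np.exp(-1.0 / x**2))
    return varpi, gamma
```

As the shared drive factors make explicit, growth (\(\tilde{\gamma}>0\)) requires \(\eta_{e}^{T}\beta_{e}>2k_{\parallel}^{3}\rho_{e}^{3}\) in the first case and \(\eta_{i}\beta_{i}>4k_{\parallel}\rho_{i}\) in the second.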
#### j.4.2 Derivation of frequency and growth rate of the CET long-wavelength KAW instability

In the limit \(k_{\parallel}\rho_{i}\ll 1\), \(k_{\perp}\rho_{i}\sim 1\), the general dispersion relation (167) of CE ion-temperature-gradient-driven instabilities becomes
\[\left[\tilde{\omega}_{i\parallel}(1-\mathcal{F}_{i})-\frac{\eta_{i}}{2}\mathcal{G}_{i}\right]^{2}+\frac{k_{\perp}^{2}\rho_{i}^{2}}{\beta_{i}}\Bigg[\mathrm{i}\sqrt{\pi}\left(\mathcal{F}_{i}+\sqrt{\frac{\mu_{e}Z^{2}}{\tau}}\right)\tilde{\omega}_{i\parallel}-\frac{1}{\beta_{i}}+\frac{\mathrm{i}\sqrt{\pi}\eta_{i}}{2}\left(\mathcal{G}_{i}-\frac{1}{2}\mathcal{F}_{i}\right)\Bigg]=0\,, \tag{194}\]
where we remind the reader that \(\mathcal{F}_{i}=\mathcal{F}(k_{\perp}\rho_{i})\), \(\mathcal{G}_{i}=\mathcal{G}(k_{\perp}\rho_{i})\), with the functions \(\mathcal{F}(\alpha)\) and \(\mathcal{G}(\alpha)\) being defined by (194). Equation (192) for the complex frequency of the CET KAW modes in the main text is then derived by solving (194) for \(\tilde{\omega}_{i\parallel}=\omega/k_{\parallel}v_{\mathrm{thi}}\).

## Appendix K Methodology for characterising CES microinstabilities

This appendix outlines the method used to determine the growth rates of microinstabilities driven by the CE electron- and ion-shear terms. Once again (cf. appendix J), section 2.5 presents the general framework of our approach: determine a simplified algebraic dispersion relation satisfied by the (complex) frequencies \(\omega\) of CES microinstabilities with parallel and perpendicular wavenumbers \(k_{\parallel}\) and \(k_{\perp}\) under the assumption that they are low frequency [viz., \(\omega\ll k_{\parallel}v_{\mathrm{th}s}\); cf. (93)], solve for \(\omega\), then calculate the growth rate \(\gamma\) from its imaginary part (and the real frequency \(\varpi\) from its real part). To construct the dispersion relation, we first need to know the tensor \(\boldsymbol{\mathsf{P}}_{s}^{(0)}\) for a CE distribution function of the form (181); this result is given in appendix K.1. Then, in appendix K.2.1, we determine an approximate quadratic dispersion relation for CES microinstabilities, show in appendix K.2.2 how that dispersion relation can be used in certain cases to evaluate the CES instability thresholds semi-analytically, then demonstrate the significant shortcomings of the quadratic approximation in appendix K.2.3. In appendix K.3.1, we address these shortcomings by constructing a revised quartic dispersion relation for CES microinstabilities. This quartic dispersion relation is then used to derive simplified dispersion relations for the various different CES microinstabilities discussed in the main text: the mirror instability in appendix K.3.2, the parallel (CES) whistler instability in appendix K.3.3, the transverse instability in appendix K.3.4, the electron mirror instability in appendix K.3.5, the parallel, oblique and critical-line firehose instabilities in Appendices K.3.6, K.3.7 and K.3.8, the parallel and oblique electron firehose instabilities in Appendices K.3.9 and K.3.10, the EST instability in appendix K.3.11, and the whisper instability in appendix K.3.12. Finally, in appendix K.3.13, we derive the dispersion relation of the CES ordinary-mode instability - the one CES (or CET) microinstability that does not satisfy \(\omega\ll k_{\parallel}v_{\mathrm{th}s}\) for either electrons or ions (see section 2.5.8) - directly from the hot-plasma dispersion relation.
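The common computational step in all of these derivations (solve a reduced dispersion relation at each wavevector and inspect the imaginary parts of its roots) can be expressed once. The Python sketch below is our own scaffolding, with `root_fn` standing in for any of the solvers derived in these appendices:

```python
import numpy as np

def growth_rate_map(root_fn, kpar_grid, kperp_grid):
    """Scan a low-frequency dispersion-relation solver over a wavenumber
    grid and return the largest growth rate at each (k_par, k_perp);
    root_fn(kpar, kperp) is assumed to return one or more complex omegas."""
    gamma = np.empty((len(kpar_grid), len(kperp_grid)))
    for i, kpar in enumerate(kpar_grid):
        for j, kperp in enumerate(kperp_grid):
            gamma[i, j] = max(w.imag for w in np.atleast_1d(root_fn(kpar, kperp)))
    return gamma  # instability wherever gamma > 0
```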
### Dielectric response of CE shear terms

First, we evaluate the elements of \(\boldsymbol{\mathsf{P}}_{s}^{(0)}\):
\[(\boldsymbol{\mathsf{P}}_{s}^{(0)})_{11}=\epsilon_{s}\frac{k^{2}}{k_{\parallel}^{2}}W\big(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big)\,, \tag{10a}\]
\[(\boldsymbol{\mathsf{P}}_{s}^{(0)})_{12}=-\epsilon_{s}\frac{k}{k_{\parallel}}X\big(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big)\,, \tag{10b}\]
\[(\boldsymbol{\mathsf{P}}_{s}^{(0)})_{21}=\epsilon_{s}\frac{k}{k_{\parallel}}X\big(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big)\,, \tag{10c}\]
\[(\boldsymbol{\mathsf{P}}_{s}^{(0)})_{22}=\epsilon_{s}Y\big(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big)\,, \tag{10d}\]
where the special functions \(W(x,y)\), \(X(x,y)\) and \(Y(x,y)\) are defined by (107). These results are derived in appendix G.4.

### Quadratic approximation to dispersion relation of CE shear-driven microinstabilities

#### k.2.1 Derivation

Considering the relative magnitude of \(\tilde{\omega}_{i\parallel}=\omega/k_{\parallel}v_{\mathrm{th}i}\) and \(\tilde{\omega}_{e\parallel}=\omega/k_{\parallel}v_{\mathrm{th}e}\ll\tilde{\omega}_{i\parallel}\), we observe that, unlike CET microinstabilities, CES microinstabilities satisfy the low-frequency condition (93) for both electrons and ions. This claim holds because any microinstability involving the CE electron-shear term must satisfy \(\tilde{\omega}_{e\parallel}\sim\epsilon_{e}\ll(m_{e}/m_{i})^{1/2}\), where the last inequality arises from the scaling relation \(\epsilon_{e}\sim(m_{e}/m_{i})^{1/2}\epsilon_{i}\) given by (42d); thus, from the scaling relation (105) with \(T_{e}=T_{i}\), it follows that \(\tilde{\omega}_{i\parallel}\sim\epsilon_{e}(m_{i}/m_{e})^{1/2}\sim\epsilon_{i}\ll 1\). It is therefore consistent to expand both the Maxwellian electron and ion terms in \(\tilde{\omega}_{s\parallel}\ll 1\). We thus initially approximate \(\mathfrak{E}\) as follows:
\[\mathfrak{E}\approx\tilde{\omega}_{e\parallel}\mathfrak{E}^{(0)}=\frac{\omega_{pe}^{2}}{\omega^{2}}\left(\sum_{s}\tilde{\omega}_{s\parallel}\mu_{s}\boldsymbol{\mathsf{M}}_{s}^{(0)}+\sum_{s}\mu_{s}\boldsymbol{\mathsf{P}}_{s}^{(0)}\right)\,, \tag{106}\]
where the expansion of \(\boldsymbol{\mathsf{M}}_{s}\) and \(\boldsymbol{\mathsf{P}}_{s}\) in \(\tilde{\omega}_{s\parallel}\), i.e.,
\[\boldsymbol{\mathsf{M}}_{s}\big(\tilde{\omega}_{s\parallel},\boldsymbol{k}\big)\approx\tilde{\omega}_{s\parallel}\boldsymbol{\mathsf{M}}_{s}^{(0)}(\boldsymbol{k})\,,\quad\boldsymbol{\mathsf{P}}_{s}\big(\tilde{\omega}_{s\parallel},\boldsymbol{k}\big)\approx\boldsymbol{\mathsf{P}}_{s}^{(0)}(\boldsymbol{k})\,, \tag{107}\]
applies to both ion and electron species. By analogy with the derivation presented in section 2.5.5, this approximation gives rise to a simplified dispersion relation [cf. (2.116)]
\[\left(\tilde{\omega}_{e\parallel}\,\mathfrak{E}^{(0)}_{11}-\frac{k^{2}c^{2}}{\omega^{2}}\right)\left(\tilde{\omega}_{e\parallel}\,\mathfrak{E}^{(0)}_{22}-\frac{k^{2}c^{2}}{\omega^{2}}\right)+\left(\tilde{\omega}_{e\parallel}\,\mathfrak{E}^{(0)}_{12}\right)^{2}=0\,. \tag{111}\]
We emphasise that here each component of \(\mathfrak{E}^{(0)}\) has both electron and ion contributions.
Expressing \(\tilde{\omega}_{i\parallel}=\tilde{\omega}_{e\parallel}\mu_{e}^{-1/2}\), the dispersion relation (111) can be written as
\[\left[\tilde{\omega}_{e\parallel}(\boldsymbol{\mathsf{M}}^{(0)}_{e}+\mu_{e}^{1/2}\boldsymbol{\mathsf{M}}^{(0)}_{i})_{11}+(\boldsymbol{\mathsf{P}}^{(0)}_{e}+\mu_{e}^{1/2}\boldsymbol{\mathsf{P}}^{(0)}_{i})_{11}-k_{\parallel}^{2}d_{e}^{2}\right]\]
\[\times\left[\tilde{\omega}_{e\parallel}(\boldsymbol{\mathsf{M}}^{(0)}_{e}+\mu_{e}^{1/2}\boldsymbol{\mathsf{M}}^{(0)}_{i})_{22}+(\boldsymbol{\mathsf{P}}^{(0)}_{e}+\mu_{e}^{1/2}\boldsymbol{\mathsf{P}}^{(0)}_{i})_{22}-k^{2}d_{e}^{2}\right]\]
\[+\left[\tilde{\omega}_{e\parallel}(\boldsymbol{\mathsf{M}}^{(0)}_{e}+\mu_{e}^{1/2}\boldsymbol{\mathsf{M}}^{(0)}_{i})_{12}+(\boldsymbol{\mathsf{P}}^{(0)}_{e}+\mu_{e}^{1/2}\boldsymbol{\mathsf{P}}^{(0)}_{i})_{12}\right]^{2}=0\,. \tag{112}\]
Combining the expressions (10) for \(\boldsymbol{\mathsf{P}}^{(0)}_{s}\) with (2.120) for \(\boldsymbol{\mathsf{M}}^{(0)}_{s}\) and substituting \(\boldsymbol{\mathsf{M}}^{(0)}_{s}\) and \(\boldsymbol{\mathsf{P}}^{(0)}_{s}\) into (112) gives
\[\left[\mathrm{i}\tilde{\omega}_{e\parallel}\left(F_{e}+\mu_{e}^{1/2}F_{i}\right)+\epsilon_{e}\left(W_{e}+\mu_{e}^{1/2}W_{i}\right)-k_{\parallel}^{2}d_{e}^{2}\right]\]
\[\times\left[\mathrm{i}\tilde{\omega}_{e\parallel}\left(H_{e}+\mu_{e}^{1/2}H_{i}\right)+\epsilon_{e}\left(Y_{e}+\mu_{e}^{1/2}Y_{i}\right)-k^{2}d_{e}^{2}\right]\]
\[+\left[\mathrm{i}\tilde{\omega}_{e\parallel}\left(G_{e}+\mu_{e}^{1/2}G_{i}\right)+\epsilon_{e}\left(X_{e}+\mu_{e}^{1/2}X_{i}\right)\right]^{2}=0\,, \tag{113}\]
where we have used \(\epsilon_{i}=\epsilon_{e}\mu_{e}^{-1/2}\). For brevity of notation, we have also defined \(F_{s}\equiv F\big(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big)\), \(G_{s}\equiv G\big(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big)\), and so on. Using (2.56b) for the terms \(\propto d_{e}^{2}\) explicitly introduces a \(\beta_{e}\) dependence into (113). After some elementary manipulations, we obtain the quadratic
\[A_{\rm S}\tilde{\omega}_{e\parallel}^{2}\beta_{e}^{2}+\mathrm{i}B_{\rm S}\tilde{\omega}_{e\parallel}\beta_{e}-C_{\rm S}=0\,, \tag{114}\]
where
\[A_{\rm S}=\left(F_{e}+\mu_{e}^{1/2}F_{i}\right)\left(H_{e}+\mu_{e}^{1/2}H_{i}\right)+\left(G_{e}+\mu_{e}^{1/2}G_{i}\right)^{2}\,, \tag{115a}\]
\[B_{\rm S}=\left(H_{e}+\mu_{e}^{1/2}H_{i}\right)\left[k_{\parallel}^{2}\rho_{e}^{2}-\epsilon_{e}\beta_{e}\left(W_{e}+\mu_{e}^{1/2}W_{i}\right)\right]-2\epsilon_{e}\beta_{e}\left(G_{e}+\mu_{e}^{1/2}G_{i}\right)\left(X_{e}+\mu_{e}^{1/2}X_{i}\right)+\left(F_{e}+\mu_{e}^{1/2}F_{i}\right)\left[k^{2}\rho_{e}^{2}-\epsilon_{e}\beta_{e}\left(Y_{e}+\mu_{e}^{1/2}Y_{i}\right)\right]\,, \tag{115b}\]
\[C_{\rm S}=\left[k_{\parallel}^{2}\rho_{e}^{2}-\epsilon_{e}\beta_{e}\left(W_{e}+\mu_{e}^{1/2}W_{i}\right)\right]\left[k^{2}\rho_{e}^{2}-\epsilon_{e}\beta_{e}\left(Y_{e}+\mu_{e}^{1/2}Y_{i}\right)\right]+\epsilon_{e}^{2}\beta_{e}^{2}\left(X_{e}+\mu_{e}^{1/2}X_{i}\right)^{2}\,. \tag{115c}\]
As before, this can be solved explicitly for the complex frequency:
\[\omega=\frac{\Omega_{e}}{\beta_{e}}k_{\parallel}\rho_{e}\frac{-\mathrm{i}B_{\rm S}\pm\sqrt{-B_{\rm S}^{2}+4A_{\rm S}C_{\rm S}}}{2A_{\rm S}}\,. \tag{116}\]
From this expression, we can extract the real frequency \(\varpi\) and the growth rate \(\gamma\) explicitly.
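The quadratic (114), like its CET analogue (124), is solved in exactly this way in practice. A Python sketch, under our own conventions (the helper `sf` stands in for an implementation of the special functions and is not defined in this paper):

```python
import numpy as np

def ces_quadratic_roots(kpar_rho_e, kperp_rho_e, eps_beta, mu_e,
                        Omega_e, beta_e, sf):
    """Roots (116) of the CES quadratic (114). sf(name, species) is assumed
    to return the special-function value F, G, H, W, X or Y for species
    'e' or 'i' at (kpar*rho_tilde_s, kperp*rho_tilde_s);
    eps_beta = eps_e*beta_e."""
    rt = np.sqrt(mu_e)
    F, G, H, W, X, Y = (sf(n, 'e') + rt * sf(n, 'i') for n in 'FGHWXY')
    k2 = kpar_rho_e**2 + kperp_rho_e**2            # (k*rho_e)^2
    A = F * H + G**2                                           # (115a)
    B = H * (kpar_rho_e**2 - eps_beta * W) - 2 * eps_beta * G * X \
        + F * (k2 - eps_beta * Y)                              # (115b)
    C = (kpar_rho_e**2 - eps_beta * W) * (k2 - eps_beta * Y) \
        + eps_beta**2 * X**2                                   # (115c)
    disc = np.sqrt(-B**2 + 4 * A * C + 0j)
    return [Omega_e / beta_e * kpar_rho_e * (-1j * B + s * disc) / (2 * A)
            for s in (+1, -1)]                                 # (116)
```

The two returned roots reproduce the propagating and non-propagating cases distinguished below according to the sign of \(4A_{\rm S}C_{\rm S}-B_{\rm S}^{2}\).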
In the case when \(4A_{\rm S}C_{\rm S}>B_{\rm S}^{2}\), we have two oppositely propagating modes with the same growth rate:
\[\varpi=\pm\frac{\Omega_{e}}{\beta_{e}}k_{\parallel}\rho_{e}\frac{\sqrt{-B_{\rm S}^{2}+4A_{\rm S}C_{\rm S}}}{2A_{\rm S}}\,, \tag{117a}\]
\[\gamma=\frac{\Omega_{e}}{\beta_{e}}k_{\parallel}\rho_{e}\frac{B_{\rm S}}{2A_{\rm S}}\,. \tag{117b}\]
For \(4A_{\rm S}C_{\rm S}<B_{\rm S}^{2}\), both modes are non-propagating, with distinct growth rates:
\[\gamma=\frac{\Omega_{e}}{\beta_{e}}k_{\parallel}\rho_{e}\frac{B_{\rm S}\pm\sqrt{B_{\rm S}^{2}-4A_{\rm S}C_{\rm S}}}{2A_{\rm S}}\,. \tag{111}\]

#### k.2.2 Semi-analytic estimates of CES instability thresholds using quadratic approximation

In the case of non-propagating modes whose growth rate is given by (111), we can determine semi-analytic formulae for the thresholds of any instabilities. This is done by noting that, at marginal stability, \(\tilde{\omega}_{e\parallel}=0\). Therefore, it follows from (110) that \(C_{\rm S}=0\), or, equivalently,
\[\left[k_{\parallel}^{2}\rho_{e}^{2}-\epsilon_{e}\beta_{e}\left(W_{e}+\mu_{e}^{1/2}W_{i}\right)\right]\left[k^{2}\rho_{e}^{2}-\epsilon_{e}\beta_{e}\left(Y_{e}+\mu_{e}^{1/2}Y_{i}\right)\right]+\epsilon_{e}^{2}\beta_{e}^{2}\left(X_{e}+\mu_{e}^{1/2}X_{i}\right)^{2}=0\,. \tag{112}\]
This is a quadratic in \(\epsilon_{e}\beta_{e}\) which can be solved exactly to give the threshold value of \(\epsilon_{e}\beta_{e}\) as a function of perpendicular and parallel wavenumber:
\[\epsilon_{e}\beta_{e}=\frac{1}{2}\left[\left(W_{e}+\mu_{e}^{1/2}W_{i}\right)\left(Y_{e}+\mu_{e}^{1/2}Y_{i}\right)+\left(X_{e}+\mu_{e}^{1/2}X_{i}\right)^{2}\right]^{-1}\Bigg(k^{2}\rho_{e}^{2}\left(W_{e}+\mu_{e}^{1/2}W_{i}\right)+k_{\parallel}^{2}\rho_{e}^{2}\left(Y_{e}+\mu_{e}^{1/2}Y_{i}\right)\pm\Bigg\{\left[k^{2}\rho_{e}^{2}\left(W_{e}+\mu_{e}^{1/2}W_{i}\right)+k_{\parallel}^{2}\rho_{e}^{2}\left(Y_{e}+\mu_{e}^{1/2}Y_{i}\right)\right]^{2}-4k_{\parallel}^{2}k^{2}\rho_{e}^{4}\left[\left(W_{e}+\mu_{e}^{1/2}W_{i}\right)\left(Y_{e}+\mu_{e}^{1/2}Y_{i}\right)+\left(X_{e}+\mu_{e}^{1/2}X_{i}\right)^{2}\right]\Bigg\}^{1/2}\Bigg)\,. \tag{113}\]
Expression (113) is used in sections 4.4.1 and 4.4.7 to evaluate the wavevector-dependent thresholds of the CES ion and electron firehose instabilities, respectively.

#### k.2.3 Shortcomings of quadratic approximation

In contrast to quadratic approximations to the dispersion relations of CET microinstabilities, which are sufficient to characterise all instabilities of note (see, e.g., appendix J.3), not all CES microinstabilities are captured by the quadratic dispersion relation (110), because there are important microinstabilities whose correct description requires keeping higher-order terms in the \(\tilde{\omega}_{s\parallel}\ll 1\) expansion. The mathematical reason for this is that some microinstabilities occur in wavenumber regimes where \(k_{\parallel}\rho_{i}\ll 1\) and/or \(k_{\parallel}\rho_{e}\ll 1\). As a result, the issues raised in section 2.5.6 regarding the commutability of the \(\tilde{\omega}_{s\parallel}\ll 1\) and \(k_{\parallel}\rho_{s}\ll 1\) limits must be carefully resolved.
In appendix G.1.6, it is shown that, if \(k_{\parallel}\rho_{s}\ll 1/\log\left(1/\tilde{\omega}_{s\parallel}\right)\), then the dominant contributions to \((\boldsymbol{\mathsf{M}}_{s})_{xx}\), \((\boldsymbol{\mathsf{M}}_{s})_{xz}\) and \((\boldsymbol{\mathsf{M}}_{s})_{zz}\) are, in fact, second order in \(\tilde{\omega}_{s\parallel}\ll 1\). In the \(\{\boldsymbol{e}_{1},\boldsymbol{e}_{2},\boldsymbol{e}_{3}\}\) coordinate frame, this means that the dominant contributions to each component of \(\boldsymbol{\mathsf{M}}_{s}\) are (see appendix G.1.3)
\[(\boldsymbol{\mathsf{M}}_{s})_{11}\approx\tilde{\omega}_{s\parallel}^{2}(\boldsymbol{\mathsf{M}}_{s}^{(1)})_{11}=\frac{k^{2}}{k_{\parallel}^{2}}\tilde{\omega}_{s\parallel}^{2}(\boldsymbol{\mathsf{M}}_{s}^{(1)})_{xx}+2\tilde{\omega}_{s\parallel}^{2}\left[\frac{k_{\perp}^{2}}{k^{2}}+\frac{k_{\perp}}{k_{\parallel}}L\big(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big)\right]\,, \tag{116a}\]
\[(\boldsymbol{\mathsf{M}}_{s})_{12}\approx\tilde{\omega}_{s\parallel}(\boldsymbol{\mathsf{M}}_{s}^{(0)})_{12}=\frac{k}{k_{\parallel}}\tilde{\omega}_{s\parallel}(\boldsymbol{\mathsf{M}}_{s}^{(0)})_{xy}\,, \tag{116b}\]
\[(\boldsymbol{\mathsf{M}}_{s})_{13}\approx\tilde{\omega}_{s\parallel}^{2}(\boldsymbol{\mathsf{M}}_{s}^{(1)})_{13}=-\tilde{\omega}_{s\parallel}^{2}\left[\frac{2k_{\perp}k_{\parallel}}{k^{2}}+L\big(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big)\right]\,, \tag{116c}\]
\[(\boldsymbol{\mathsf{M}}_{s})_{22}\approx\tilde{\omega}_{s\parallel}(\boldsymbol{\mathsf{M}}_{s}^{(0)})_{22}+\tilde{\omega}_{s\parallel}^{2}(\boldsymbol{\mathsf{M}}_{s}^{(1)})_{22}=\tilde{\omega}_{s\parallel}(\boldsymbol{\mathsf{M}}_{s}^{(0)})_{yy}+\tilde{\omega}_{s\parallel}^{2}(\boldsymbol{\mathsf{M}}_{s}^{(1)})_{yy}\,, \tag{116d}\]
\[(\boldsymbol{\mathsf{M}}_{s})_{23}\approx\tilde{\omega}_{s\parallel}^{2}(\boldsymbol{\mathsf{M}}_{s}^{(1)})_{23}=-\frac{k_{\parallel}}{k}\tilde{\omega}_{s\parallel}^{2}N\big(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big)\,, \tag{116e}\]
\[(\boldsymbol{\mathsf{M}}_{s})_{33}\approx\tilde{\omega}_{s\parallel}^{2}(\boldsymbol{\mathsf{M}}_{s}^{(1)})_{33}=\frac{2k_{\parallel}^{2}}{k^{2}}\tilde{\omega}_{s\parallel}^{2}\,, \tag{116f}\]
where the special functions \(L(x,y)\) and \(N(x,y)\) are given by (102). The quadratic dispersion relation (116) must, therefore, be revised to capture correctly all relevant microinstabilities.

### Quartic approximation to dispersion relation of CE shear-driven microinstabilities

#### k.3.1 Derivation of general quartic CES dispersion relation

To assess how the new terms identified in appendix K.2.3 change the dispersion relation (116), we now return to the full hot-plasma dispersion relation (74), which we write in the form
\[\left(\mathfrak{E}_{11}-\frac{k^{2}c^{2}}{\omega^{2}}-\frac{\mathfrak{E}_{13}^{2}}{\mathfrak{E}_{33}}\right)\left(\mathfrak{E}_{22}-\frac{k^{2}c^{2}}{\omega^{2}}+\frac{\mathfrak{E}_{23}^{2}}{\mathfrak{E}_{33}}\right)+\left(\mathfrak{E}_{12}-\frac{\mathfrak{E}_{13}\mathfrak{E}_{23}}{\mathfrak{E}_{33}}\right)^{2}=0\,.
\tag{117}\]
Reminding the reader that, for a two-species plasma,
\[\mathfrak{E}=\sum_{s}\mathfrak{E}_{s}=\frac{\omega_{pe}^{2}}{\omega^{2}}\sum_{s}\mu_{s}\,(\boldsymbol{\mathsf{M}}_{s}+\boldsymbol{\mathsf{P}}_{s})\,, \tag{118}\]
and also that the electrostatic component of the dielectric tensor is determined by the Maxwellian components only (which in turn are equal for electrons and ions when \(T_{i}=T_{e}\) - see appendix D.2), viz.,
\[\mathfrak{E}_{33}\approx\tilde{\omega}_{e\parallel}^{2}\mathfrak{E}_{33}^{(1)}=\frac{\omega_{pe}^{2}}{\omega^{2}}\sum_{s}\mu_{s}\tilde{\omega}_{s\parallel}^{2}(\boldsymbol{\mathsf{M}}_{s}^{(1)})_{33}=\frac{4\omega_{pe}^{2}}{\omega^{2}}\tilde{\omega}_{e\parallel}^{2}\frac{k_{\parallel}^{2}}{k^{2}}\,, \tag{119}\]
we show in appendix G.1.7 that, in the limit \(k_{\parallel}\rho_{s}\ll 1\),
\[\frac{\left[(\boldsymbol{\mathsf{M}}_{s})_{13}\right]^{2}}{(\boldsymbol{\mathsf{M}}_{s}^{(1)})_{33}}\lesssim(\boldsymbol{\mathsf{M}}_{s})_{11}\,, \tag{120a}\]
\[\frac{(\boldsymbol{\mathsf{M}}_{s})_{13}(\boldsymbol{\mathsf{M}}_{s})_{23}}{(\boldsymbol{\mathsf{M}}_{s}^{(1)})_{33}}\lesssim\tilde{\omega}_{e\parallel}(\boldsymbol{\mathsf{M}}_{s})_{12}\ll(\boldsymbol{\mathsf{M}}_{s})_{12}\,, \tag{120b}\]
\[\frac{\left[(\boldsymbol{\mathsf{M}}_{s})_{23}\right]^{2}}{(\boldsymbol{\mathsf{M}}_{s}^{(1)})_{33}}\lesssim\tilde{\omega}_{e\parallel}(\boldsymbol{\mathsf{M}}_{s})_{22}\ll(\boldsymbol{\mathsf{M}}_{s})_{22}\,. \tag{120c}\]
On the other hand, the shear-perturbation components \(\boldsymbol{\mathsf{P}}_{s}\) satisfy
\[(\boldsymbol{\mathsf{P}}_{s})_{11}\sim(\boldsymbol{\mathsf{P}}_{s})_{22}\gg(\boldsymbol{\mathsf{P}}_{s})_{12}\,. \tag{100}\]
Substituting for \(\boldsymbol{\mathsf{M}}_{s}\) and \(\boldsymbol{\mathsf{P}}_{s}\) in (118) using (116) and (10), respectively, and then substituting (118) into (117), we obtain the following quartic dispersion relation:
\[\left\{\tilde{\omega}_{e\parallel}^{2}\left[(\boldsymbol{\mathsf{M}}_{e}^{(1)}+\boldsymbol{\mathsf{M}}_{i}^{(1)})_{11}-\frac{(\boldsymbol{\mathsf{M}}_{e}^{(1)}+\boldsymbol{\mathsf{M}}_{i}^{(1)})_{13}^{2}}{2(\boldsymbol{\mathsf{M}}_{e}^{(1)})_{33}}\right]+(\boldsymbol{\mathsf{P}}_{e}^{(0)}+\mu_{e}^{1/2}\boldsymbol{\mathsf{P}}_{i}^{(0)})_{11}-k_{\parallel}^{2}d_{e}^{2}\right\}\]
\[\times\left\{\tilde{\omega}_{e\parallel}^{2}\left[(\boldsymbol{\mathsf{M}}_{e}^{(1)}+\boldsymbol{\mathsf{M}}_{i}^{(1)})_{22}\right]+\tilde{\omega}_{e\parallel}\left[(\boldsymbol{\mathsf{M}}_{e}^{(0)}+\mu_{e}^{1/2}\boldsymbol{\mathsf{M}}_{i}^{(0)})_{22}\right]+(\boldsymbol{\mathsf{P}}_{e}^{(0)}+\mu_{e}^{1/2}\boldsymbol{\mathsf{P}}_{i}^{(0)})_{22}-k^{2}d_{e}^{2}\right\}\]
\[+\tilde{\omega}_{e\parallel}^{2}\left[(\boldsymbol{\mathsf{M}}_{e}^{(0)}+\mu_{e}^{1/2}\boldsymbol{\mathsf{M}}_{i}^{(0)})_{12}\right]^{2}=0\,. \tag{101}\]
We have assumed \(k\rho_{e}\ll k\rho_{i}\ll 1\), and so we now have additional quadratic terms for both electrons and ions, as explained in appendix K.2.3. We note that the dispersion relation (101) is similar to (112) except for the addition of two quadratic terms in \(\tilde{\omega}_{e\parallel}\), and the absence of the linear terms \(\tilde{\omega}_{e\parallel}(\boldsymbol{\mathsf{M}}_{s}^{(0)})_{11}\) and \((\boldsymbol{\mathsf{P}}_{s}^{(0)})_{12}\). This motivates our approach to finding modes at arbitrary wavevectors: we solve a quartic dispersion relation that includes all the terms in (101) and also those linear terms which were present in (112), but absent in (101).
Explicitly, this dispersion relation is
\[\left\{-\tilde{\omega}_{e\parallel}^{2}\left[\frac{4}{3}W_{e}+\frac{4}{3}W_{i}+\frac{1}{4}\left(L_{e}+L_{i}\right)^{2}\right]+\mathrm{i}\tilde{\omega}_{e\parallel}\left(F_{e}+\mu_{e}^{1/2}F_{i}\right)+\epsilon_{e}\left(W_{e}+\mu_{e}^{1/2}W_{i}\right)-k_{\parallel}^{2}d_{e}^{2}\right\}\]
\[\times\left[-\tilde{\omega}_{e\parallel}^{2}\left(\frac{4}{3}Y_{e}+\frac{4}{3}Y_{i}\right)+\mathrm{i}\tilde{\omega}_{e\parallel}\left(H_{e}+\mu_{e}^{1/2}H_{i}\right)+\epsilon_{e}\left(Y_{e}+\mu_{e}^{1/2}Y_{i}\right)-k^{2}d_{e}^{2}\right]\]
\[+\left[\mathrm{i}\tilde{\omega}_{e\parallel}\left(G_{e}+\mu_{e}^{1/2}G_{i}\right)+\epsilon_{e}\left(X_{e}+\mu_{e}^{1/2}X_{i}\right)\right]^{2}=0\,, \tag{102}\]
where \(L_{s}\equiv L\big(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big)\). The special functions \(W(x,y)\) and \(Y(x,y)\), defined in (107), appear due to their relationship to the matrix \(\boldsymbol{\mathsf{M}}_{s}^{(1)}\) (derived in appendix 10.2):
\[W\big(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big)=-\frac{3}{4}(\boldsymbol{\mathsf{M}}_{s}^{(1)})_{xx}\,, \tag{103a}\]
\[Y\big(k_{\parallel}\tilde{\rho}_{s},k_{\perp}\tilde{\rho}_{s}\big)=-\frac{3}{4}(\boldsymbol{\mathsf{M}}_{s}^{(1)})_{yy}\,, \tag{103b}\]
combined with the identity
\[(\boldsymbol{\mathsf{M}}_{e}^{(1)}+\boldsymbol{\mathsf{M}}_{i}^{(1)})_{11}-\frac{(\boldsymbol{\mathsf{M}}_{e}^{(1)}+\boldsymbol{\mathsf{M}}_{i}^{(1)})_{13}^{2}}{2(\boldsymbol{\mathsf{M}}_{e}^{(1)})_{33}}=-\frac{k^{2}}{k_{\parallel}^{2}}\left[\frac{4}{3}W_{e}+\frac{4}{3}W_{i}+\frac{1}{4}\left(L_{e}+L_{i}\right)^{2}\right]\,, \tag{104}\]
proven in appendix G.1.7. The dispersion relation (102) recovers all the roots of interest because it captures approximate values for all of the roots of the dispersion relations (112) and (101) in their respective wavenumber regions of validity. We note that, in situations when there are fewer than four physical modes (e.g., in the \(k_{\parallel}\rho_{e}\gtrsim 1\) regime), solving (102) will also return non-physical modes that are the result of the addition of higher-order terms in a regime where such terms are illegitimate. However, by construction, such modes can be distinguished by their large magnitude (\(\tilde{\omega}_{e\parallel}\sim 1\)) as compared to the others. We acknowledge that our approach does not maintain consistent orderings: indeed, depending on the scale of a particular instability, there may be terms retained that are, in fact, smaller than other terms we have neglected when carrying out the \(\tilde{\omega}_{i\parallel}\ll 1\) expansion. However, unlike the quadratic dispersion relation (K 7), the quartic dispersion relation (K 23) always captures the leading-order terms for arbitrary wavevectors, and so provides reasonable approximations to the complex frequency of all possible CES microinstabilities.

#### k.3.2 Derivation of frequency and growth rate of the CES mirror instability

To derive the CES mirror instability's growth rate when it is close to marginality, we consider the dispersion relation (K 23) under the orderings (4.6), viz.,
\[k_{\parallel}\rho_{i}\sim k_{\perp}^{2}\rho_{i}^{2}\sim\varGamma_{i}\ll 1\,,\quad\tilde{\omega}_{i\parallel}=\mu_{e}^{-1/2}\tilde{\omega}_{e\parallel}\sim\frac{\varGamma_{i}}{\beta_{i}}\,,\] (K 26)
where \(\varGamma_{i}=\varDelta\beta_{i}-1\), and \(\varDelta=\varDelta_{i}+\varDelta_{e}=3(\epsilon_{i}+\epsilon_{e})/2\).
Using the asymptotic identities (G 37) for the special functions \(F_{s}\), \(G_{s}\), \(H_{s}\), \(L_{s}\), and \(N_{s}\), and (G 101) for \(W_{s}\), \(X_{s}\), and \(Y_{s}\), (K 23) becomes, after dropping terms that are asymptotically small under the ordering (K 26),
\[\mathrm{i}\sqrt{\pi}k_{\perp}^{2}\rho_{i}^{2}\tilde{\omega}_{i\parallel}+\varDelta\left(k_{\perp}^{2}\rho_{i}^{2}-\frac{1}{2}k_{\parallel}^{2}\rho_{i}^{2}-\frac{3}{4}k_{\perp}^{4}\rho_{i}^{4}\right)-\frac{k^{2}\rho_{i}^{2}}{\beta_{i}}=0\,,\] (K 27)
which in turn can be rearranged to give (4.7) in section 4.3.1 and the subsequent results. We note that, save for the term \(\propto G_{e}\), which cancels to leading order with its ion equivalent, and the term \(\propto Y_{e}\), which we retain in order to capture correctly the mirror instability's exact stability threshold, the electron terms in (K 23) are negligibly small under the ordering (K 26). We also observe that, by assuming the frequency ordering (K 26), we have removed the shear Alfven wave from the dispersion relation. As we demonstrate when characterising the growth rate of firehose-unstable shear Alfven waves (see section 4.4.3 and appendix K.3.7), a different ordering is required to extract this mode (which is, in any case, stable for \(\varDelta_{i}>0\)).

To derive the growth rate of long-wavelength (\(k_{\parallel}\rho_{i}\sim k_{\perp}\rho_{i}\ll 1\)) mirror modes away from marginality, when \(\varGamma_{i}\gtrsim 1\), we adopt the alternative ordering (4.10), which is equivalent to
\[\tilde{\omega}_{i\parallel}\sim\frac{1}{\beta_{i}}\sim\varDelta\ll 1\,.\] (K 28)
Again using the identities (G 37) and (G 101) to evaluate the special functions, the dispersion relation (K 23) is then
\[\mathrm{i}\sqrt{\pi}k_{\perp}^{2}\rho_{i}^{2}\tilde{\omega}_{i\parallel}+\varDelta\left(k_{\perp}^{2}\rho_{i}^{2}-\frac{1}{2}k_{\parallel}^{2}\rho_{i}^{2}\right)-\frac{k^{2}\rho_{i}^{2}}{\beta_{i}}=0\,,\] (K 29)
which, after some algebraic manipulation, gives (4.11) in section 4.3.1 and the subsequent results.

Finally, the expression (4.16) for the growth rate of sub-ion-Larmor-scale mirror modes is derived by adopting the orderings (4.15):
\[k_{\parallel}\rho_{i}\sim k_{\perp}\rho_{i}\sim(\varDelta_{i}\beta_{i})^{1/2}\gg 1\,,\quad\tilde{\omega}_{i\parallel}\sim\frac{\varDelta_{i}^{1/2}}{\beta_{i}^{1/2}}\,,\] (K 30)
and then using the asymptotic identities (G 35) for evaluating \(F_{i}\), \(G_{i}\), \(H_{i}\), \(L_{i}\), and \(N_{i}\), (G 37) for \(F_{e}\), \(G_{e}\), \(H_{e}\), \(L_{e}\), and \(N_{e}\), (G 99) for \(W_{i}\), \(X_{i}\), and \(Y_{i}\), and (G 101) for \(W_{e}\), \(X_{e}\) and \(Y_{e}\). Once again neglecting small terms under the assumed ordering, the dispersion relation (K 23) simplifies to a quadratic of the form (K 6):
\[\left[-\frac{\varDelta_{i}}{2}\frac{2k_{\parallel}^{2}\left(k_{\parallel}^{2}-k_{\perp}^{2}\right)}{k^{4}}+\frac{k_{\parallel}^{2}\rho_{i}^{2}}{\beta_{i}}\right]\left(-\varDelta_{i}\frac{k_{\parallel}^{2}}{k^{2}}+\frac{k^{2}\rho_{i}^{2}}{\beta_{i}}\right)-\tilde{\omega}_{i\parallel}^{2}k_{\parallel}^{2}\rho_{i}^{2}=0\,,\] (K 31)
from which follow (4.16) and the subsequent results in section 4.3.1.
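The near-marginal relation (K 27) is linear in \(\tilde{\omega}_{i\parallel}\), so it gives the growth rate in closed form; as a sketch (the wrapper below is our own, not notation from the text):

```python
import numpy as np

def ces_mirror_growth(kpar_rho_i, kperp_rho_i, Delta, beta_i):
    """Normalised growth rate gamma/(k_par*v_thi) of near-marginal mirror
    modes, from solving (K 27) with omega = i*gamma; Delta = Delta_i +
    Delta_e. The mode is purely growing or damped (zero real frequency)."""
    kp2, kx2 = kperp_rho_i**2, kpar_rho_i**2
    drive = Delta * (kp2 - 0.5 * kx2 - 0.75 * kp2**2) - (kp2 + kx2) / beta_i
    return drive / (np.sqrt(np.pi) * kp2)  # unstable wherever this is > 0
```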
#### k.3.3 Derivation of frequency and growth rate of the parallel CES whistler instability

We derive the expressions (4.21) for the real frequency and growth rate of the parallel CES whistler instability by adopting the ordering (4.20),
\[\tilde{\omega}_{e\parallel}\sim\Delta_{e}\sim\frac{1}{\beta_{e}}\,,\quad k_{\parallel}\rho_{e}\sim 1\,,\] (K 32)
and evaluating \(F_{s}\), \(G_{s}\), \(H_{s}\), \(L_{s}\), and \(N_{s}\) via (G 34), and \(W_{s}\), \(X_{s}\), and \(Y_{s}\) via (G 98). The special functions with \(s=i\) are simplified further by assuming additionally that \(k_{\parallel}\rho_{i}\gg 1\). Under these assumptions and simplifications, the dispersion relation (K 23) becomes
\[\left\{\mathrm{i}\tilde{\omega}_{e\parallel}\sqrt{\pi}\left[\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{e}^{2}}\right)+\mu_{e}^{1/2}\right]+\Delta_{e}\left[1+\frac{1}{k_{\parallel}\rho_{e}}\mathrm{Re}\;Z\!\left(\frac{1}{k_{\parallel}\rho_{e}}\right)+\mu_{e}^{1/2}\right]-\frac{k_{\parallel}^{2}\rho_{e}^{2}}{\beta_{e}}\right\}^{2}\]
\[+\left\{\mathrm{i}\tilde{\omega}_{e\parallel}\mathrm{Re}\;Z\!\left(\frac{1}{k_{\parallel}\rho_{e}}\right)-\frac{\Delta_{e}}{k_{\parallel}\rho_{e}}\left[\sqrt{\pi}\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{e}^{2}}\right)+\mu_{e}\right]\right\}^{2}=0\,,\] (K 33)
where we have substituted \(\tilde{\rho}_{e}=-\rho_{e}\), and the only ion terms that we retain - the terms proportional to \(\mu_{e}^{1/2}\) or \(\mu_{e}\) - are those that we find to affect the dispersion relation qualitatively (as explained in the main text, these terms are formally small under the assumed ordering, but cannot be neglected in certain subsidiary limits, e.g. \(k_{\parallel}\rho_{e}\ll 1\), which we will subsequently wish to explore). Equation (K 33) can then be factorised to give two complex roots, the real and imaginary parts of which become (4.21a) and (4.21b), respectively.

#### k.3.4 Derivation of frequency and growth rate of the CES transverse instability

To obtain the growth rate (4.29) of the two CES transverse modes, we take directly the unmagnetised limit of the full CES dispersion relation (K 23) under the orderings
\[k_{\perp}\rho_{e}\sim k_{\parallel}\rho_{e}\sim\left(\Delta_{e}\beta_{e}\right)^{1/2}\gg 1\,,\quad\tilde{\omega}_{e\parallel}\sim\Delta_{e}\ll 1\,,\] (K 34)
and then employ the asymptotic identities (G 35) for \(F_{s}\), \(G_{s}\), \(H_{s}\), \(L_{s}\), and \(N_{s}\), and (G 99) for \(W_{s}\), \(X_{s}\), and \(Y_{s}\). We then obtain a dispersion relation similar to (K 6), but with two separable roots:
\[\left[\mathrm{i}\tilde{\omega}_{e\parallel}\sqrt{\pi}\frac{k_{\parallel}^{3}}{k^{3}}+\Delta_{e}\frac{k_{\parallel}^{2}(k_{\parallel}^{2}-k_{\perp}^{2})}{k^{4}}-\frac{k_{\parallel}^{2}\rho_{e}^{2}}{\beta_{e}}\right]\left(\mathrm{i}\tilde{\omega}_{e\parallel}\sqrt{\pi}\frac{k_{\parallel}}{k}+\Delta_{e}\frac{k_{\parallel}^{2}}{k^{2}}-\frac{k^{2}\rho_{e}^{2}}{\beta_{e}}\right)=0\,.\] (K 35)
When rearranged, the first bracket gives expression (4.29a), and the second bracket gives (4.29b).

#### k.3.5 Derivation of frequency and growth rate of the CES electron mirror instability

When its marginality parameter \(\Gamma_{e}=\Delta_{e}\beta_{e}-1\) is small, the growth rate (4.35) (and zero real frequency) of the CES electron mirror instability can be derived from the dispersion relation (K 23) by adopting the ordering (4.34), viz.,
\[k_{\perp}^{2}\rho_{e}^{2}\sim k_{\parallel}\rho_{e}\sim\tilde{\omega}_{e\parallel}\beta_{e}\sim\Gamma_{e}\ll 1\,,\] (K 36)
and assuming that \(\Gamma_{e}\gg\mu_{e}^{1/2}\).
This latter inequality implies that \(1\ll k_{\parallel}\rho_{i}\ll k_{\perp}\rho_{i}\), so we use the asymptotic identities (109) to simplify \(F_{i}\), \(G_{i}\), \(H_{i}\), \(L_{i}\), and \(N_{i}\), (111) to simplify \(W_{i}\), \(X_{i}\), and \(Y_{i}\), and (111) for \(W_{e}\), \(X_{e}\) and \(Y_{e}\). Collecting terms, using the identity \(\Delta_{e}=(1+\Gamma_{e})/\beta_{e}\), and keeping only the leading-order terms, the dispersion relation simplifies to
\[\frac{3}{2\beta_{e}}k_{\parallel}^{2}\rho_{e}^{2}\left(-\frac{\Gamma_{e}}{\beta_{e}}k_{\perp}^{2}\rho_{e}^{2}+\frac{3}{2\beta_{e}}k_{\parallel}^{2}\rho_{e}^{2}+\frac{3}{4\beta_{e}}k_{\perp}^{4}\rho_{e}^{4}+\mathrm{i}\sqrt{\pi}k_{\perp}^{2}\rho_{e}^{2}\tilde{\omega}_{e\parallel}\right)-\tilde{\omega}_{e\parallel}^{2}k_{\parallel}^{2}\rho_{e}^{2}=0\,. \tag{112}\]
Because the discriminant of the quadratic (112) is negative, it follows that its solution satisfies \(\omega=\mathrm{i}\gamma\), with \(\gamma\) being given by (109).

To derive the expression (107) for the complex frequency of long-wavelength electron mirror modes, we adopt the ordering (108),
\[\tilde{\omega}_{e\parallel}\sim\frac{k\rho_{e}}{\beta_{e}}\sim\Delta_{e}k\rho_{e}\,, \tag{113}\]
and then consider the subsidiary limit \(k_{\parallel}\rho_{e}\sim k_{\perp}\rho_{e}\sim\mu_{e}^{1/4}\ll 1\) of the dispersion relation (108). Using the asymptotic identities (109) for \(F_{i}\), \(G_{i}\), \(H_{i}\), \(L_{i}\), and \(N_{i}\), (111) for \(F_{e}\), \(G_{e}\), \(H_{e}\), \(L_{e}\), and \(N_{e}\), (111) for \(W_{i}\), \(X_{i}\), and \(Y_{i}\), and (111) for \(W_{e}\), \(X_{e}\) and \(Y_{e}\), we find that
\[\left\{\frac{\Delta_{e}}{2}\left[k_{\parallel}^{2}\rho_{e}^{2}-\mu_{e}^{1/2}\frac{2k_{\parallel}^{2}\left(k_{\parallel}^{2}-k_{\perp}^{2}\right)}{k^{4}}\right]+\frac{k_{\parallel}^{2}\rho_{e}^{2}}{\beta_{e}}\right\}\left[\frac{\Delta_{e}}{2}\left(k_{\parallel}^{2}\rho_{e}^{2}-2k_{\perp}^{2}\rho_{e}^{2}-\mu_{e}^{1/2}\frac{2k_{\parallel}^{2}}{k^{2}}\right)+\frac{k^{2}\rho_{e}^{2}}{\beta_{e}}\right]-\tilde{\omega}_{e\parallel}^{2}k_{\parallel}^{2}\rho_{e}^{2}=0\,, \tag{114}\]
where both the CE ion- and electron-shear terms are kept on account of their equal size under the assumed ordering. Solving for \(\omega\) gives (107).

#### k.3.6 Derivation of frequency and growth rate of the parallel CES firehose instability

The relevant ordering of parameters to adopt in order to derive the complex frequency (106) of the parallel CES firehose instability is (100), viz.,
\[\tilde{\omega}_{i\parallel}\sim\frac{1}{\beta_{i}^{1/2}}\sim|\Delta_{i}|^{1/2}\sim k_{\parallel}\rho_{i}\ll 1\,, \tag{115}\]
with an additional small wavenumber-angle condition \(k_{\perp}\rho_{i}\ll\beta_{i}^{-3/4}\) (which we shall justify _a posteriori_). Under this ordering, the special functions \(F_{s}\), \(G_{s}\), \(H_{s}\), \(L_{s}\), and \(N_{s}\) can be simplified using (111), and \(W_{s}\), \(X_{s}\), and \(Y_{s}\) using (111), and so the dispersion relation (108) reduces to
\[\left(\tilde{\omega}_{i\parallel}^{2}-\frac{\Delta_{i}}{2}-\frac{1}{\beta_{i}}\right)^{2}-\frac{\tilde{\omega}_{i\parallel}^{2}}{4}k_{\parallel}^{2}\rho_{i}^{2}=0\,, \tag{116}\]
where the only non-negligible electron term is the one \(\propto\tilde{\omega}_{e\parallel}G_{e}\). Similarly to the CES mirror instability (see appendix K.3.2), this term cancels to leading order with its ion equivalent, and the next-order electron term is much smaller than the equivalent ion term.
This dispersion relation can be rearranged to give (106). We also note that, in deriving (111) from (108), we have assumed that the linear term \(\propto\tilde{\omega}_{e\parallel}\mu_{e}^{1/2}H_{i}\) is much smaller than the quadratic term \(\propto\tilde{\omega}_{e\parallel}^{2}Y_{i}\); their relative magnitude is given by
\[\frac{\tilde{\omega}_{e\parallel}\mu_{e}^{1/2}H_{i}}{\tilde{\omega}_{e\parallel}^{2}Y_{i}}\sim\frac{k_{\perp}^{2}\rho_{i}^{2}}{\tilde{\omega}_{i\parallel}k_{\parallel}^{2}\rho_{i}^{2}}\sim\beta_{i}^{3/2}k_{\perp}^{2}\rho_{i}^{2}\,. \tag{104}\]
Thus, this assumption (which it is necessary to make in order for there to be both left-handed and right-handed Alfven modes in high-\(\beta\) plasma) is only justified if the small-angle condition \(k_{\perp}\rho_{i}\ll\beta_{i}^{-3/4}\ll 1\) holds true.

#### k.3.7 Derivation of frequency and growth rate of the oblique CES firehose instability

To derive the oblique firehose's growth rate (103), we use the ordering (103), viz.,
\[\tilde{\omega}_{i\parallel}\sim\frac{1}{\beta_{i}^{1/2}}\sim\left|\Delta_{i}\right|^{1/2}\sim k_{\parallel}^{2}\rho_{i}^{2}\sim k_{\perp}^{2}\rho_{i}^{2}\ll 1\,. \tag{105}\]
Simplifying the special functions \(F_{s}\), \(G_{s}\), \(H_{s}\), \(L_{s}\), and \(N_{s}\) via (102), and \(W_{s}\), \(X_{s}\), and \(Y_{s}\) via (102), the dispersion relation (103) becomes
\[\mathrm{i}\sqrt{\pi}\left(\tilde{\omega}_{i\parallel}^{2}-\frac{\Delta_{i}}{2}-\frac{1}{\beta_{i}}\right)k_{\perp}^{2}\rho_{i}^{2}\tilde{\omega}_{i\parallel}-\frac{\tilde{\omega}_{i\parallel}^{2}}{4}\left(k_{\parallel}^{2}\rho_{i}^{2}-\frac{3}{2}k_{\perp}^{2}\rho_{i}^{2}\right)^{2}=0\,, \tag{106}\]
where, in contrast to the quasi-parallel firehose, the linear term \(\propto\tilde{\omega}_{e\parallel}\mu_{e}^{1/2}H_{i}\) in (103) is larger than the quadratic term \(\propto\tilde{\omega}_{e\parallel}^{2}Y_{i}\). Equation (106) can be solved to give two roots: \(\omega\approx 0\), corresponding to the stable slow mode (whose damping rate is asymptotically small under the assumed ordering), and the expression (103) for the complex frequency of the (sometimes firehose-unstable) shear Alfven mode.

#### k.3.8 Derivation of frequency and growth rate of the critical-line CES firehose instability

To characterise the growth of the critical-line firehose when \(\beta_{i}\gg 10^{6}\), we set \(k_{\perp}=2k_{\parallel}/3\), and order
\[\tilde{\omega}_{i\parallel}\sim\beta_{i}^{-3/5}\sim k_{\parallel}^{6}\rho_{i}^{6}\sim\left|\Delta_{i}+\frac{2}{\beta_{i}}\right|^{1/2}\,. \tag{107}\]
The dispersion relation (103) transforms similarly to (104) in this case, with two important exceptions: first, the term in (102) \(\propto\tilde{\omega}_{e\parallel}G_{e}+\mu_{e}^{1/2}\tilde{\omega}_{e\parallel}G_{i}\) is \(O(k_{\parallel}^{5}\rho_{i}^{5})\) on the critical line, rather than \(O(k_{\parallel}^{3}\rho_{i}^{3})\); secondly, our choice of ordering requires that we retain terms of \(O(k_{\parallel}^{4}\rho_{i}^{4})\). This gives
\[\mathrm{i}\sqrt{\pi}\left(\tilde{\omega}_{i\parallel}^{2}-\frac{1}{2}\Delta_{i}-\frac{1}{\beta_{i}}-\frac{5}{8}\Delta_{i}k_{\parallel}^{2}\rho_{i}^{2}\right)\tilde{\omega}_{i\parallel}-\frac{6889}{13824}\tilde{\omega}_{i\parallel}^{2}k_{\parallel}^{6}\rho_{i}^{6}=0\,.
\tag{108}\]

To obtain the expression (105) for the critical-line firehose's growth rate in the limit \(\beta_{i}\gg 10^{6}\) that is valid under the ordering (104), we consider the subsidiary limit

\[\left|\Delta_{i}+\frac{2}{\beta_{i}}\right|\gg\beta_{i}^{-6/5}, \tag{109}\]

in which case (108) becomes

\[\mathrm{i}\sqrt{\pi}\left(\tilde{\omega}_{i\parallel}^{2}-\frac{\Delta_{i}}{2}-\frac{1}{\beta_{i}}\right)\tilde{\omega}_{i\parallel}-\frac{6889}{13824}\tilde{\omega}_{i\parallel}^{2}k_{\parallel}^{6}\rho_{i}^{6}=0\,. \tag{110}\]

The expression (105) follows from solving (110) for \(\omega\) (and once again neglecting the \(\omega\approx 0\) solution). The expression (4.61) for the growth of critical-line firehose modes when \(\beta_{i}\simeq-2/\Delta_{i}\gg 10^{6}\) can be deduced by considering the opposite subsidiary limit to (109), viz.,

\[\left|\Delta_{i}+\frac{2}{\beta_{i}}\right|\ll\beta_{i}^{-6/5}.\] (K.49)

In this limit, (108) simplifies to

\[\mathrm{i}\sqrt{\pi}\left(\tilde{\omega}_{i\parallel}^{2}+\frac{5}{4\beta_{i}}k_{\parallel}^{2}\rho_{i}^{2}\right)\tilde{\omega}_{i\parallel}-\frac{6889}{13824}\tilde{\omega}_{i\parallel}^{2}k_{\parallel}^{6}\rho_{i}^{6}=0\,.\] (K.50)

Noting that the quadratic (K.50) has a negative discriminant, we deduce that \(\omega=\mathrm{i}\gamma\); then solving (K.50) for \(\gamma\) gives (4.61). When \(\beta_{i}\ll 10^{6}\), the appropriate ordering to adopt in order to simplify the dispersion relation of critical-line modes is no longer (107), but instead

\[\tilde{\omega}_{i\parallel}\sim\frac{1}{\sqrt{\beta_{i}\log\beta_{i}}}\sim\left|\Delta_{i}+\frac{2}{\beta_{i}}\right|^{1/2},\quad k_{\parallel}\rho_{i}\sim\frac{1}{\sqrt{\log\beta_{i}}}.\] (K.51)

Under this ordering, the term \(\propto\mu_{e}^{1/2}\tilde{\omega}_{e\parallel}F_{i}\) in (K.23) is retained, while the term \(\propto\tilde{\omega}_{e\parallel}G_{e}+\mu_{e}^{1/2}\tilde{\omega}_{e\parallel}G_{i}\) is neglected. This gives

\[\left[\tilde{\omega}_{i\parallel}^{2}+\mathrm{i}\frac{\sqrt{\pi}}{k_{\parallel}^{2}\rho_{i}^{2}}\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{i}^{2}}\right)\!\tilde{\omega}_{i\parallel}-\frac{1}{2}\Delta_{i}-\frac{1}{\beta_{i}}-\frac{5}{8}\Delta_{i}k_{\parallel}^{2}\rho_{i}^{2}\right]\tilde{\omega}_{i\parallel}=0\,.\] (K.52)

To obtain the expression (4.65) for the critical-line firehose instability's growth rate in the case when ordering (4.64) holds, that is, when \(|\Delta_{i}\beta_{i}+2|\sim 1\), we consider the appropriate subsidiary limit of (K.52):

\[\left|\Delta_{i}+\frac{2}{\beta_{i}}\right|\gg\frac{1}{\beta_{i}\log\beta_{i}}.\] (K.53)

In this case, the last term in the square brackets on the LHS of (K.52) can be neglected, leaving the only non-trivial roots to satisfy

\[\tilde{\omega}_{i\parallel}^{2}+\mathrm{i}\frac{\sqrt{\pi}}{k_{\parallel}^{2}\rho_{i}^{2}}\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{i}^{2}}\right)\!\tilde{\omega}_{i\parallel}-\frac{\Delta_{i}}{2}-\frac{1}{\beta_{i}}=0\,,\] (K.54)

whence (4.65) follows immediately. The case of growth when \(\Delta_{i}\simeq-2/\beta_{i}\) can be recovered from the opposite subsidiary limit,

\[\left|\Delta_{i}+\frac{2}{\beta_{i}}\right|\ll\frac{1}{\beta_{i}\log\beta_{i}}.\] (K.55)

In this case, the dispersion relation of the critical-line firehose modes is

\[\tilde{\omega}_{i\parallel}^{2}+\mathrm{i}\frac{\sqrt{\pi}}{k_{\parallel}^{2}\rho_{i}^{2}}\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{i}^{2}}\right)\!\tilde{\omega}_{i\parallel}+\frac{5}{4\beta_{i}}k_{\parallel}^{2}\rho_{i}^{2}=0\,,\] (K.56)

which, when solved for the growth rate \(\gamma=-\mathrm{i}\omega\), gives (4.68).
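As a sketch of this last step (pure algebra, using only the quadratic structure of (K.56)): writing \(\tilde{\omega}_{i\parallel}=\mathrm{i}\tilde{\gamma}_{i\parallel}\) with \(\tilde{\gamma}_{i\parallel}\) real turns (K.56) into the real quadratic

\[\tilde{\gamma}_{i\parallel}^{2}+\frac{\sqrt{\pi}}{k_{\parallel}^{2}\rho_{i}^{2}}\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{i}^{2}}\right)\tilde{\gamma}_{i\parallel}-\frac{5}{4\beta_{i}}k_{\parallel}^{2}\rho_{i}^{2}=0\,,\]

whose positive (growing) root is

\[\tilde{\gamma}_{i\parallel}=\frac{1}{2}\left[\left(\frac{\pi}{k_{\parallel}^{4}\rho_{i}^{4}}\exp\left(-\frac{2}{k_{\parallel}^{2}\rho_{i}^{2}}\right)+\frac{5k_{\parallel}^{2}\rho_{i}^{2}}{\beta_{i}}\right)^{1/2}-\frac{\sqrt{\pi}}{k_{\parallel}^{2}\rho_{i}^{2}}\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{i}^{2}}\right)\right]\,.\]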
#### K.3.9 Derivation of frequency and growth rate of the CES parallel electron firehose instability

This derivation is identical to that given in appendix K.3.3 for the frequency and growth rate of the parallel CES whistler instability, and the same expressions (4.21) are used in section 4.4.7.

#### K.3.10 Derivation of frequency and growth rate of the CES oblique electron firehose instability

The complex frequency (4.86) of the electron-firehose modes with \(\mu_{e}^{1/2}\ll k_{\parallel}\rho_{e}\ll k_{\perp}\rho_{e}\sim 1\) is derived by applying the ordering

\[\tilde{\omega}_{e\parallel}\sim|\Delta_{e}|\sim\frac{1}{\beta_{e}} \tag{100}\]

to (101) and using the asymptotic identities (101) for \(F_{i}\), \(G_{i}\), \(H_{i}\), \(L_{i}\), and \(N_{i}\), (102) for \(F_{e}\), \(G_{e}\), \(H_{e}\), \(L_{e}\), and \(N_{e}\), (103) for \(W_{i}\), \(X_{i}\), and \(Y_{i}\), and (104) for \(W_{e}\), \(X_{e}\) and \(Y_{e}\). We obtain the simplified dispersion relation

\[\left\{-\Delta_{e}\frac{k_{\parallel}^{2}}{k_{\perp}^{2}}\left[1-\exp\left(-\frac{k_{\perp}^{2}\rho_{e}^{2}}{2}\right)I_{0}\!\left(\frac{k_{\perp}^{2}\rho_{e}^{2}}{2}\right)\right]-\frac{k_{\parallel}^{2}\rho_{e}^{2}}{\beta_{e}}\right\}\times\left\{\left(\mathrm{i}\sqrt{\pi}\tilde{\omega}_{e\parallel}+\Delta_{e}\right)k_{\perp}^{2}\rho_{e}^{2}\exp\left(-\frac{k_{\perp}^{2}\rho_{e}^{2}}{2}\right)\left[I_{0}\!\left(\frac{k_{\perp}^{2}\rho_{e}^{2}}{2}\right)-I_{1}\!\left(\frac{k_{\perp}^{2}\rho_{e}^{2}}{2}\right)\right]-\frac{k^{2}\rho_{e}^{2}}{\beta_{e}}\right\}-k_{\parallel}^{2}\rho_{e}^{2}\tilde{\omega}_{e\parallel}^{2}\exp\left(-k_{\perp}^{2}\rho_{e}^{2}\right)\left[I_{0}\!\left(\frac{k_{\perp}^{2}\rho_{e}^{2}}{2}\right)-I_{1}\!\left(\frac{k_{\perp}^{2}\rho_{e}^{2}}{2}\right)\right]^{2}=0\,. \tag{104}\]

Introducing the special functions \(\mathcal{F}(k_{\perp}\rho_{e})\) and \(\mathcal{H}(k_{\perp}\rho_{e})\) given by (4.89), and then rearranging (104), leads to (4.86).

#### K.3.11 Derivation of frequency and growth rate of the CES EST instability

To derive the expression (4.97) for the growth rate of the EST instability in the limits \(\mu_{e}^{1/2}\ll k_{\parallel}\rho_{e}\ll 1\ll k_{\perp}\rho_{e}\ll\beta_{e}^{1/7}\), and \(\Delta_{e}\beta_{e}\gg 1\), we apply the orderings (4.96), viz.,

\[k_{\perp}\rho_{e}\sim(\Delta_{e}\beta_{e})^{1/2},\quad\tilde{\omega}_{e\parallel}\sim\Delta_{e}^{5/2}\beta_{e}^{3/2},\quad k_{\parallel}\rho_{e}\sim\frac{1}{\sqrt{\log(|\Delta_{e}|\beta_{e})}}\ll 1\,, \tag{105}\]

to (104).
We then use the asymptotic identities (101) for \(F_{i}\), \(G_{i}\), \(H_{i}\), \(L_{i}\), and \(N_{i}\), (102) for \(F_{e}\), \(G_{e}\), \(H_{e}\), \(L_{e}\), and \(N_{e}\), (102) for \(W_{i}\), \(X_{i}\), and \(Y_{i}\), and (104) for \(W_{e}\), \(X_{e}\) and \(Y_{e}\) to give

\[\mathrm{i}\frac{\tilde{\omega}_{e\parallel}}{k_{\perp}\rho_{e}}\left\{\mathrm{i}\frac{\tilde{\omega}_{e\parallel}}{k_{\perp}^{3}\rho_{e}^{3}}\left[4\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{e}^{2}}\right)+\sqrt{\pi}\mu_{e}^{1/2}k_{\parallel}^{3}\rho_{e}^{3}\right]-\Delta_{e}\frac{k_{\parallel}^{2}\rho_{e}^{2}}{k_{\perp}^{2}\rho_{e}^{2}}-\frac{k_{\parallel}^{2}\rho_{e}^{2}}{\beta_{e}}\right\}-\frac{k_{\parallel}^{2}\rho_{e}^{2}\tilde{\omega}_{e\parallel}^{2}}{\pi k_{\perp}^{6}\rho_{e}^{6}}=0, \tag{106}\]

where the only ion contribution that is not always small, and thus cannot be neglected, is the term proportional to \(\mu_{e}^{1/2}\). Solving for the frequency gives \(\omega\approx 0\), corresponding to a damped mode whose frequency is asymptotically small under the assumed ordering (104), and the EST mode, whose growth rate is given by (4.97).

#### K.3.12 Derivation of frequency and growth rate of the CES whisper instability

In the limits \(\mu_{e}^{1/2}\ll k_{\parallel}\rho_{e}\ll 1\ll k_{\perp}\rho_{e}\) and \(\Delta_{e}\beta_{e}\gg 1\), under the orderings

\[\tilde{\omega}_{e\parallel}\sim\frac{1}{\beta_{e}^{2/7}}\sim\frac{1}{k_{\perp}^{2}\rho_{e}^{2}}\sim\frac{1}{\Delta_{e}\beta_{e}},\quad k_{\parallel}\rho_{e}\sim\frac{1}{\sqrt{\log(|\Delta_{e}|\beta_{e})}}\ll 1\,, \tag{107}\]

the dispersion relation (101) becomes

\[\mathrm{i}\frac{\tilde{\omega}_{e\parallel}}{k_{\perp}\rho_{e}}\Bigg\{\frac{k_{\parallel}^{2}\rho_{e}^{2}}{k_{\perp}^{2}\rho_{e}^{2}}\frac{4\tilde{\omega}_{e\parallel}^{2}}{\sqrt{\pi}k_{\perp}\rho_{e}}+\mathrm{i}\frac{4\tilde{\omega}_{e\parallel}}{k_{\perp}^{3}\rho_{e}^{3}}\exp\left(-\frac{1}{k_{\parallel}^{2}\rho_{e}^{2}}\right)-\Delta_{e}\frac{k_{\parallel}^{2}\rho_{e}^{2}}{k_{\perp}^{2}\rho_{e}^{2}}-\frac{k_{\parallel}^{2}\rho_{e}^{2}}{\beta_{e}}\Bigg\}-\frac{k_{\parallel}^{2}\rho_{e}^{2}\tilde{\omega}_{e\parallel}^{2}}{\pi k_{\perp}^{6}\rho_{e}^{6}}=0\,, \tag{108}\]

where we have once again evaluated \(F_{i}\), \(G_{i}\), \(H_{i}\), \(L_{i}\), and \(N_{i}\) using (G 35), \(F_{e}\), \(G_{e}\), \(H_{e}\), \(L_{e}\), and \(N_{e}\) using (G 38), \(W_{i}\), \(X_{i}\), and \(Y_{i}\) using (G 99), and \(W_{e}\), \(X_{e}\) and \(Y_{e}\) using (G 102), and neglected all terms that are small under the ordering (K 61). Solving for the non-trivial root of (K 62) gives the expression (4.105) for the complex frequency of whisper waves.

#### K.3.13 Derivation of frequency and growth rate of the CES ordinary-mode instability

Because the low-frequency assumption \(\tilde{\omega}_{e\parallel}\ll 1\) is broken in the regime of relevance to the CES ordinary-mode instability, the dispersion relation (K 23) is not valid; to characterise these modes, we must instead return to considering the full hot-plasma dispersion relation. We choose to consider the ordinary-mode instability for modes with \(k_{\parallel}=0\).
In this special case, the plasma dielectric tensor simplifies considerably, and has the convenient property that

\[\hat{\mathbf{z}}\,\mathbf{\cdot}\,\mathbf{\mathfrak{C}}=\left(\hat{\mathbf{z}}\,\mathbf{\cdot}\,\mathbf{\mathfrak{C}}\,\mathbf{\cdot}\,\hat{\mathbf{z}}\right)\hat{\mathbf{z}}\,,\] (K 63)

if the particle distribution functions have even parity with respect to the parallel velocity \(v_{\parallel}\) (Davidson, 1983), a condition satisfied by the CE distribution functions (4.1). Thus, perturbations whose associated eigenmode satisfies \(\widehat{\delta\mathbf{E}}=\widehat{\delta E}_{z}\hat{\mathbf{z}}\) decouple from other modes in the plasma. The dispersion relation for such modes follows from (2.4.1):

\[\mathbf{\mathfrak{C}}_{zz}-\frac{c^{2}k_{\perp}^{2}}{\omega^{2}}=0\,.\] (K 64)

In terms of the matrices \(\mathbf{M}_{s}\) and \(\mathbf{P}_{s}\) defined by (2.96), this can be written

\[\sum_{s}(\mathbf{M}_{s})_{zz}+\sum_{s}(\mathbf{P}_{s})_{zz}-k_{\perp}^{2}d_{e}^{2}=0\,.\] (K 65)

For \(k_{\parallel}=0\), the matrix components \((\mathbf{M}_{s})_{zz}\) and \((\mathbf{P}_{s})_{zz}\) are given by [see (G 17i) and (G 93i)]

\[(\mathbf{M}_{s})_{zz}=-\sum_{n=-\infty}^{\infty}\frac{\omega}{\omega-n\tilde{\Omega}_{s}}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\,,\] (K 66a)

\[(\mathbf{P}_{s})_{zz}=-\frac{3\epsilon_{s}}{2}\sum_{n=-\infty}^{\infty}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)=-\Delta_{s}\,.\] (K 66b)

Therefore, the dispersion relation (K 65) becomes

\[k_{\perp}^{2}d_{e}^{2}=-\sum_{s}\frac{m_{e}}{m_{s}}\left[\Delta_{s}+\sum_{n=-\infty}^{\infty}\frac{\omega}{\omega-n\tilde{\Omega}_{s}}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right]=-\sum_{s}\frac{m_{e}}{m_{s}}\left[\Delta_{s}+\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{0}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)+\sum_{n=1}^{\infty}\frac{2\omega^{2}}{\omega^{2}-n^{2}\tilde{\Omega}_{s}^{2}}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right]\,.\] (K 67)

Since the left-hand side of (K 67) is real, and the imaginary part of the right-hand side is non-zero if and only if the complex frequency \(\omega\) has non-zero real and imaginary parts, we conclude that all solutions must be either purely propagating or purely growing modes. Looking for purely growing roots, we substitute \(\omega=\mathrm{i}\gamma\) into (K 67), and deduce that

\[\sum_{s}\frac{m_{e}}{m_{s}}\left[\sum_{n=1}^{\infty}\frac{2\gamma^{2}}{\gamma^{2}+n^{2}\tilde{\Omega}_{s}^{2}}\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{n}\!\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right]=-k_{\perp}^{2}d_{e}^{2}-\sum_{s}\frac{m_{e}}{m_{s}}\left[\Delta_{s}+\exp\left(-\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)I_{0}\left(\frac{k_{\perp}^{2}\tilde{\rho}_{s}^{2}}{2}\right)\right]\,.\] (K 68)

Neglecting the ion contributions (which are smaller than the electron ones by a \((m_{e}/m_{i})^{1/2}\) factor) and considering \(\Delta_{e}<0\), we arrive at (101).
2308.15309
Understanding the Privacy Risks of Popular Search Engine Advertising Systems
We present the first extensive measurement of the privacy properties of the advertising systems used by privacy-focused search engines. We propose an automated methodology to study the impact of clicking on search ads on three popular private search engines which have advertising-based business models: StartPage, Qwant, and DuckDuckGo, and we compare them to two dominant data-harvesting ones: Google and Bing. We investigate the possibility of third parties tracking users when clicking on ads by analyzing first-party storage, redirection domain paths, and requests sent before, when, and after the clicks. Our results show that privacy-focused search engines fail to protect users' privacy when clicking ads. Users' requests are sent through redirectors on 4% of ad clicks on Bing, 86% of ad clicks on Qwant, and 100% of ad clicks on Google, DuckDuckGo, and StartPage. Even worse, advertising systems collude with advertisers across all search engines by passing unique IDs to advertisers in most ad clicks. These IDs allow redirectors to aggregate users' activity on ads' destination websites in addition to the activity they record when users are redirected through them. Overall, we observe that both privacy-focused and traditional search engines engage in privacy-harming behaviors allowing cross-site tracking, even in privacy-enhanced browsers.
Salim Chouaki, Oana Goga, Hamed Haddadi, Peter Snyder
2023-08-29T13:53:42Z
http://arxiv.org/abs/2308.15309v3
# Understanding the Privacy Risks of Popular Search Engine Advertising Systems

###### Abstract.

We present the first extensive measurement of the privacy properties of the advertising systems used by privacy-focused search engines. We propose an automated methodology to study the impact of clicking on search ads on three popular _private_ search engines which have advertising-based business models: StartPage, Qwant, and DuckDuckGo, and we compare them to two dominant data-harvesting ones: Google and Bing. We investigate the possibility of third parties tracking users when clicking on ads by analyzing first-party storage, redirection domain paths, and requests sent before, when, and after the clicks. Our results show that privacy-focused search engines fail to protect users' privacy when clicking ads. Users' requests are sent through redirectors on 4% of ad clicks on Bing, 86% of ad clicks on Qwant, and 100% of ad clicks on Google, DuckDuckGo, and StartPage. Even worse, advertising systems collude with advertisers across all search engines by passing unique IDs to advertisers in most ad clicks. These IDs allow redirectors to aggregate users' activity on ads' destination websites in addition to the activity they record when users are redirected through them. Overall, we observe that both privacy-focused and traditional search engines engage in privacy-harming behaviors allowing cross-site tracking, even in privacy-enhanced browsers.

Search engines, advertising systems, cross-site tracking, privacy, measurement.

We implement an automated measurement methodology to measure if and how users can be re-identified (hence, their privacy is compromised) when clicking on search ads on each search engine (see Section 3). We build an open-source implementation of this methodology in the form of a Puppeteer-based pipeline that simulates search queries and ad clicks. We apply this crawling methodology to the five search engines, providing a full dataset with visited websites, cookies created, locally stored values, and web requests to search engines' servers and/or other third parties when clicking ads. We use filter rules from several major open-source lists to detect web requests to online trackers, and we propose a methodology to differentiate user identifiers from non-tracking values in query parameters and cookie values. We then present in Section 4 a systematic analysis of our dataset to investigate privacy harms before clicking an ad, during clicking an ad, and after clicking an ad and reaching the advertiser's website. We find that users' privacy is not harmed _until_ users click on an ad. Privacy-focused search engines do not appear to attempt to re-identify users across visits or queries and do not include resources from, or make network requests to, known trackers. However, we find that users' privacy is compromised by **all** studied search engines in various ways once users click on an ad. Disappointingly, we find that all search engines record additional information about the user and/or the users' clicks after the user has clicked on an ad. Private search engines capture data related to the clicked ad, including the ad provider, destination URL, and the ad's position within the search results page, along with the user's browsing data, such as the search query, device type, and browser language. Private search engines do not store user identifiers upon ad clicks, in contrast to traditional search engines that record user-identifying values.
Furthermore, we find that all search engines engage in navigation-based tracking. Navigation-based tracking refers to tracking techniques that redirect users through one or more redirectors when navigating from one website to another in order to share user information across sites (Steintein et al., 2017). Navigation-based tracking does not require third-party cookies and can be used to circumvent browsers' privacy protections from cross-site tracking using partitioned cookie storage. Alarmingly, we observe that privacy-focused search engines engage in more navigation-based tracking than non-privacy-focused ones: we observe navigational tracking on 4% of ad clicks on Bing, 100% of ad clicks on Google, 100% of ad clicks on DuckDuckGo, 86% of ad clicks on Qwant, and 100% of ad clicks on StartPage. On the destination page, we check whether the search engine requires advertisers to abide by privacy-respecting practices by measuring whether advertisers include trackers or other known privacy-harming resources. We found that 93% of ads' destination pages (across all five search engines) included trackers and other privacy-harming resources. Finally, we check whether search engines or redirectors aid advertisers in profiling visitors by measuring the data they receive in the form of user-describing query params. We find that advertisers receive user identifiers in 68%, 92%, and 53% of cases for DuckDuckGo, StartPage, and Qwant, respectively. This practice, known as UID smuggling, enables redirectors to aggregate more user behavior data if they have scripts on the ads' destination websites and they store the user-identifying parameters they receive. Notably, in the case of private search engines, the user-identifying parameters are not set by the search engine but by the redirectors encountered between the search engine's and the advertiser's sites. Our results indicate that private search engines' privacy protections do not sufficiently cover their advertising systems. Although these search engines refrain from identifying and tracking users and their ad clicks, the presence of ads from Google or Microsoft subjects users to the privacy-invasive practices performed by these two advertising platforms. When users click on ads on private search engines, they are often identified and tracked either by Google, Microsoft, or other third parties, through bounce tracking and UID smuggling techniques. Particularly, advertisers receive unique user identifiers through query parameters in most ad clicks, which can enable cross-site tracking even in privacy-enhanced browsers that block third-party cookie tracking.

## 2. Background

This section briefly discusses the policies of the main search engines alongside popular tracking approaches.

### Private search engines

We study the two dominant search engines that rely on user tracking for personalized search results and advertisements, namely Google and Bing, and three of the most popular privacy-branded search engines that provide users with non-personalized results and ads: DuckDuckGo, StartPage, and Qwant (Duck et al., 2017; Goyal et al., 2017). Private search engines can either build their own independent search indexes or use big tech search engines like Bing, Google, or Yahoo to provide search results. Both types of private search engines claim not to store users' search histories and not to collect nor share tracking and personal data.
We now describe the advertising systems employed by the different private search engines and present a summary of their data-sharing policies outlined in their respective _About_ pages.

**DuckDuckGo** is a standalone search engine that maintains and uses its own search index alongside other indexes, such as Bing's, to provide search results (Bing et al., 2017). DuckDuckGo relies on Microsoft's advertising system but only serves ads based on the search results and not the behavioral profiles of users (Steintein et al., 2017): _"search ads on DuckDuckGo are based on the search results page you're viewing instead of being based on you as a person."_ When clicking an ad on DuckDuckGo, the user is redirected to the ad's landing page through Microsoft Advertising's platform. DuckDuckGo claims Microsoft does not store ad-click behaviors from DuckDuckGo for purposes other than accounting and does not associate ad-clicks with users' profiles (Duck et al., 2019): _"When you click on a Microsoft-provided ad that appears on DuckDuckGo, Microsoft Advertising does not associate your ad-click behavior with a user profile. It also does not store or share that information other than for accounting purposes."_ This implies that Microsoft can, though currently chooses not to, link the ad-click to an existing Microsoft user profile. The privacy policy is signed by both DuckDuckGo and Microsoft.

**Qwant** is a standalone EU-based search engine that allows users to access online resources without being tracked or profiled (Steintein et al., 2017). Qwant relies on Microsoft's advertising system to deliver ads on its search results pages. Although Qwant reports transmitting _some information_ concerning search queries to Microsoft to enable the latter to present pertinent advertisements, it remains unclear which specific information is shared. In addition, to detect fraud, Qwant uses a specialized service offered by Microsoft, which has access to the user's IP address and the browser "User-Agent". Qwant assures that this service does not have access to the search query, which is sent to another service that does not know the IP address of the user (Steintein et al., 2017). Unlike for DuckDuckGo, which also uses Microsoft advertising, we did not find any mention of ad-click information in Qwant's privacy policy. They do not mention whether Microsoft stores this data and for what purposes they use it.

**StartPage** is a meta-search engine that allows users to obtain non-personalized search results from Google's search index while protecting their privacy. StartPage relies on Google AdSense to show ads to users. According to StartPage's privacy policy, the search engine serves strictly non-personalized ads since it does not share any identifiable information with Google. Therefore, ads displayed on the search results page are solely based on the user's search query (Steintein et al., 2017). Regarding ad-click behavior data, the privacy policy does not make any reference to whether Google tracks or profiles users based on this information. Nevertheless, StartPage emphasizes that by clicking on an ad, users leave the protection of StartPage's privacy policies and become subject to the practices of the website they are redirected to (Steintein et al., 2017).
_"By clicking on an ad, like any other external website you click on after performing a StartPage search, you leave the privacy protection of StartPage and are subject to those websites' data collection policies."_ ### Cross-site tracking Cross-site tracking refers to the practice of following a user across multiple first-party websites and associate their browsing activities to a unique identifier. Web tracking practices require first-party websites (e.g. the content providers) to share data about a user's activity with third parties (the trackers). Online tracking has been traditionally implemented through browser cookies. However, due to increasing adoption of cookie-blocking browsers and extensions, and the push on adopting partitioned cookies storage on web browsers, more and more trackers started to rely on navigational tracking techniques. We next discuss how these techniques work. #### 2.2.1. Cookie tracking To enable cross-site cookie tracking, whenever a user visits a first-party website, the website makes a request to the third-party website (the tracker). This allows the tracker to set a cookie, which will identify the user and will be associated with the browsing activity of the user. For example, when the user visits a website \(A\) that makes a request to the tracker \(T\), the tracker associates the cookie identifier of the user with the fact that the user visited website \(A\) (see Figure 1). Later, when the user visits website \(B\), which also makes a request to the tracker \(T\), the tracker will be able to associate the cookie identifier of the user with the fact that the user visited website \(B\). Hence, the tracker will be able to know that the user visited websites \(A\) and \(B\). This was initially possible because browsers had a common cookie storage containing all cookies, and trackers could read their corresponding cookies regardless of which first-party website allowed the tracker cookie to be set (see Figure 1). However, several browsers, such as Safari, Firefox, and Brave, have implemented partitioned storage to prevent using cookies for cross-site tracking (Stein et al., 2017). These browsers use a partitioned cookies storage with a hierarchical namespace where a tracker accesses a different storage area on each website that loads it, preventing trackers from matching or assigning the same identifiers to users across multiple websites. Hence, cross-site tracking based on cookies can no longer be performed on these browsers. Chrome -the most used web browser- is in the process of testing partitioned cookies storage but does not use it by default (Stein et al., 2017; Stein et al., 2017). #### 2.2.2. Navigational tracking Navigational tracking refers to tracking techniques that use one or more URL navigations to share user information across sites. Navigational tracking does not require third-party cookies and can be used to circumvent browsers' privacy protections from cross-site tracking using partitioned cookies storage. **Bounce tracking** is a navigational tracking technique that refers to redirecting users through one or more redirects when navigating from one website to another. To allow this, a website \(A\) containing links to another website \(B\) does not directly link to the target \(B\) but instead links to an intermediary _redirector_ (R)-the tracker (see Figure 2). 
When users click on a link on website \(A\), they are taken to the redirector first, which then redirects them to the intended destination (website \(B\)) or other intermediary redirectors. The website \(A\) can directly change the actual link of the destination (b.com) to a redirection link (r.com), or a redirector's third-party script can do it. In turn, the redirector can change the destination link again and send it further to other redirectors. Hence, from the link in the ad on the website \(A\), one cannot know all the different redirectors the users will pass through when they click on an ad. We call the _redirection path_ all the websites a user navigates through to arrive from \(A\) to \(B\). Since, from a browser perspective, the redirector is the first-party domain, it can read or set cookies in its own partition (Zhou et al., 2018). In the following, we describe what data redirectors can infer according to the redirector's behavior. (1) If the redirector does not set a first-party cookie, it will only know that a user went from website \(A\) to website \(B\) and will not be able to link this to other user browsing activities. (2) If the redirector sets a first-party cookie, it will be able to aggregate all the activity of the user that is redirected through it (either from website A or other websites that use it as a redirector), hence, it will allow cross-site tracking. (3) If the redirector also sets third-party cookies on websites \(A\) and \(B\), it will not be able to link the activity of the user on website \(A\) with the activity of the user on website \(B\), and with the activity of the user that goes through its own site (through redirects) since they do not share the same user ID (Zhou et al., 2018). Hence, while bounce tracking allows cross-site tracking to a certain degree, it does not have the same coverage as traditional third-party cookie tracking.

**UID smuggling** is a navigational tracking technique that modifies users' navigation requests by adding information to the navigation URLs in the form of query parameters. In addition, similar to bounce tracking, UID smuggling may redirect the user to one or more third-party trackers before redirecting the user to the intended destination. Figure 3 describes this process. When a user clicks on a link on a website \(A\), the originator page itself or a tracker on the page (through a script) decorates the URL by adding the originator's user identifier (UID) as a query parameter. The user then passes through zero or more redirectors which are invisible to them. Each of these redirectors can get the UID from the query parameter and has permission to store it in a first-party cookie under the redirector's domain. Finally, the user is sent to the destination website B, and the redirector can forward or not to website \(B\) the UID it received from \(A\). All the trackers on website B will be able to read the UID from the query parameter and know that it was the UID sent by the originator (through request headers). UID smuggling is more powerful than bounce tracking. Trackers using UID smuggling regain the ability to share UIDs across websites with different domains and can circumvent restrictions from partitioned cookie storage spaces (Zhou et al., 2018). For example, they can link the user's visits to the website \(A\) with the user's visits to website \(B\) and the user's activity that goes through its site (through redirects) since they can all be linked to the same user ID. In addition, UID smuggling can help other trackers on website \(B\) (and website \(A\)) to link users' browsing activity across all the websites that received the UID as a query parameter.
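To make these mechanics concrete, the following is a minimal sketch of such a tracking redirector, written in TypeScript with Node.js and Express; the domain, route, cookie, and parameter names are illustrative only and not taken from any system measured in this paper.

```typescript
import express from "express";
import { randomUUID } from "crypto";

// Sketch of a bounce-tracking redirector (r.example). A link on site A
// points at https://r.example/r?url=<destination>&uid=<A's user id>
// instead of pointing at the destination directly.
const app = express();

app.get("/r", (req, res) => {
  const destination = String(req.query.url ?? "");
  const smuggledUid = String(req.query.uid ?? "");
  if (!destination) return res.sendStatus(400);

  // Bounce tracking: during the redirect, r.example is the first party,
  // so it may read and set its own cookie, linking this click to every
  // other click that ever bounced through it.
  const ownUid =
    req.headers.cookie?.match(/(?:^|;\s*)ruid=([^;]+)/)?.[1] ?? randomUUID();
  res.cookie("ruid", ownUid, { maxAge: 365 * 24 * 3600 * 1000 });
  // A real redirector would log <ownUid, smuggledUid, destination> here.

  // UID smuggling: forward the originator's identifier to the destination
  // as a query parameter that scripts on site B can read and persist.
  const target = new URL(destination);
  if (smuggledUid) target.searchParams.set("uid", smuggledUid);
  res.redirect(target.toString());
});

app.listen(8080);
```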
## 3. Measurement Methodology

We develop a measurement methodology to capture network flows when clicking on an ad from a search engine results page. Using multiple crawlers, we simulate a large number of search engine queries in order to collect a sample of information flows per search engine. For each request, we collect the cookies created, the locally stored values, and the web requests sent by the browser. In addition, we rely on several open-source datasets to detect web requests to online trackers.

Figure 1. Cookie tracking in flat vs. partitioned cookie storage.

We consider five main search engines: Google1, Bing2, DuckDuckGo3, StartPage4, and Qwant5. We use Google and Bing as baselines to compare with the other three, which claim to have higher privacy standards and protective measures in place.

Footnote 1: [https://www.google.com/](https://www.google.com/) Footnote 2: [https://www.bing.com/](https://www.bing.com/) Footnote 3: [https://duckduckgo.com/](https://duckduckgo.com/) Footnote 4: [https://www.startpage.com](https://www.startpage.com) Footnote 5: [https://www.qwant.com/](https://www.qwant.com/)

### Crawling system

Each crawling iteration begins at a search engine's main page, where our system will type a query and access the search engine results page. Next, it chooses one of the displayed ads to click on to access its destination website. Then, the navigation path passes through zero or more redirectors before landing on the ad's destination website. The redirectors are invisible to the user but can be identified through an analysis of network requests initiated by the browser. Each of these redirectors can read the query parameters added by the search engine or other intermediaries and store them locally or send them to other third parties. The system records all first-party and third-party cookies, local storage values, and web requests at each step. We run each iteration in a new browser instance to ensure no stale data is cached from previous iterations. Depending on the search engine, ads are either part of the main page or are loaded through an iframe. We use scraping techniques to detect them and rely on several HTML elements' attributes. For instance, all ads on StartPage are inside an HTML element titled "Sponsored Links". Moreover, we use hyperlink values to detect Google ads since they all link to 'www.googleadservices.com/'. Our system prioritizes ads with landing domains it has not visited yet, aiming to maximize the number of different destination websites. Each time a crawler clicks on an ad, our system adds the domain of its landing URL to the list of visited websites. In the subsequent iterations, the crawler first extracts the landing domains of all the displayed ads. The landing domains are included within the HTML objects of the advertisements on all search engines. The crawler then gives preference to click on ads leading to domains that have not been encountered in the list of visited websites.

Figure 2. Bounce tracking. Figure 3. UID smuggling.

We reproduced these steps for 500 search queries on the five search engines. We randomly chose them from Google Trends (Krishnan et al., 2017) and movie titles from MovieLens (Krishnan et al., 2017). All iterations were performed in "accept" cookies mode.
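As a compact illustration, a sketch of one crawling iteration along the lines of the pipeline just described (the query-box and ad selectors are placeholders; the real system uses per-engine ad-detection logic and a companion extension for request logging):

```typescript
import puppeteer from "puppeteer-extra";
import StealthPlugin from "puppeteer-extra-plugin-stealth";

puppeteer.use(StealthPlugin()); // reduce the chance of bot detection

async function crawlOnce(engineUrl: string, query: string, adSelector: string) {
  // Fresh browser instance per iteration, so no stale state is carried over.
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  const requests: string[] = [];
  page.on("request", (req) => requests.push(req.url())); // network log

  await page.goto(engineUrl, { waitUntil: "networkidle2" });
  await page.type('input[name="q"]', query); // placeholder query-box selector
  await page.keyboard.press("Enter");
  await page.waitForNavigation({ waitUntil: "networkidle2" });

  // Click one displayed ad (per-engine selector) and follow the redirects.
  await Promise.all([
    page.waitForNavigation({ waitUntil: "networkidle2" }),
    page.click(adSelector),
  ]);

  await new Promise((r) => setTimeout(r, 15_000)); // dwell on the landing page

  const landingUrl = page.url();
  const cookies = await page.cookies(); // landing page's first-party cookies
  await browser.close();
  return { landingUrl, requests, cookies };
}
```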
Table 1 represents the number of different search queries we typed, the number of different destination pages we landed on, and the number of different domain paths we collected for each search engine. We implemented our system using Puppeteer (Puppeteer, 2017) to automate visiting search engines' websites, typing search queries, detecting and clicking on one of the displayed ads, and waiting for 15 seconds on the ad's destination website. We reproduce these steps multiple times from the same IP address for each search engine. To reduce the chance of being identified as bots, we use puppeteer-extra-plugin-stealth (Puppeteer, 2017). This plugin applies various techniques to make the detection of headless Puppeteer crawlers by websites harder. Puppeteer allows us to record cookies and local storage for each request. However, it does not guarantee that it can attach request handlers to a web page before it sends any requests (Puppeteer, 2017). Hence, detecting and collecting web requests using only Puppeteer might cause some of them to be lost. We use a Chrome extension alongside Puppeteer crawlers to record web requests during all the crawling time. We do not observe a significant difference between web requests recorded by crawlers and web requests recorded by the extension. At the median, the crawlers recorded 97% of the requests recorded by the extension. The code of the crawling system and the dataset are available at [https://github.com/CHOUAKIsalim/Search_Engines_Privacy](https://github.com/CHOUAKIsalim/Search_Engines_Privacy).

### Detection techniques

**Detection of trackers:** We use URL filtering to detect web requests to online trackers. We use filter rules from two open-source lists: EasyList (Krishnan et al., 2017) and EasyPrivacy (Krishnan et al., 2017). EasyList is the most popular list to detect and remove adverts from webpages and forms the basis of many combination and supplementary filter lists (Krishnan et al., 2017). EasyPrivacy is a supplementary filter list that detects and removes all forms of tracking from the internet, including tracking scripts and information collectors (Krishnan et al., 2017). These filter lists are used by extensions that aim to remove unwanted content from the internet, like AdBlock and uBlock. We combined and parsed these lists using adblock-rs.

**Detection of user identifiers:** We discard tokens with different values in the two iterations as they are more likely session identifiers. (iv) Similar to (Steintein et al., 2017), we use programmatic heuristics to discard particular values. We discard tokens that appear to be timestamps (values between June and December 2022 in seconds and milliseconds), tokens that appear to be URLs, tokens that constitute one or more English words (Kumar et al., 2020), and tokens that are seven characters long or less. After using these filters, we are left with 1 942 tokens. We manually investigated them and observed a non-negligible number of false positives. Hence, we manually filtered the remaining tokens and removed those composed of any combination of natural language words, coordinates, or acronyms. In the end, we are left with 1 258 user-identifying tokens, which we consider to be user identifiers.
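A sketch of the programmatic filters in step (iv), in TypeScript; the word list and the exact timestamp window below stand in for the ones actually used:

```typescript
// Heuristic filter for candidate user-identifying tokens (step iv).
// `englishWords` stands in for a dictionary loaded from a word-list file.
const englishWords: Set<string> = new Set();

function isTimestamp(token: string): boolean {
  if (!/^\d+$/.test(token)) return false;
  const n = Number(token);
  const from = Date.UTC(2022, 5, 1); // June 2022, in milliseconds
  const to = Date.UTC(2022, 11, 31); // December 2022, in milliseconds
  // Accept both second- and millisecond-resolution values.
  return (n >= from / 1000 && n <= to / 1000) || (n >= from && n <= to);
}

function isUrl(token: string): boolean {
  try {
    new URL(decodeURIComponent(token));
    return true;
  } catch {
    return false;
  }
}

function isEnglishWords(token: string): boolean {
  // True when the token is made up entirely of dictionary words,
  // separated at most by '-' or '_'.
  if (!/^[a-z]+([-_][a-z]+)*$/i.test(token)) return false;
  return token.toLowerCase().split(/[-_]/).every((w) => englishWords.has(w));
}

export function keepAsCandidateUid(token: string): boolean {
  return (
    token.length > 7 && !isTimestamp(token) && !isUrl(token) && !isEnglishWords(token)
  );
}
```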
## 4. Results

This section presents the results of applying the presented methodology to the five selected search engines. We measure how users' privacy is affected before, during, and after clicking on a search ad. We find that the advertising systems on all evaluated search engines result in privacy harm, even for search engines that market themselves as privacy-respecting. We find that how, and to what degree, user privacy is harmed varies across each evaluated system. The rest of this section proceeds as follows. Section 4.1 begins by presenting measurements of how user privacy is impacted _before_ users click on an ad (i.e., after the user has received answers to their search query, but before the user clicks on an advertisement contained among or alongside the search results). Section 4.2 presents measurements of how user privacy is affected _during_ clicking on an advertisement (i.e., after the user has clicked on an advertisement, but before the user arrives at the advertisement's destination). Finally, Section 4.3 gives measurements of how user privacy is affected _after_ clicking on an advertisement (i.e., after the user has arrived at the final destination of the advertisement link, and scripts are executed on the advertiser's website).

### Before clicking on an ad

We first present measurements of how the advertising systems used by popular search engines affect user privacy before a person has clicked on any advertisement. At this point in the process, the user has submitted a query to the search engine and received a results page. The returned results include at least two types of links: "organic results" (i.e., websites that contain content the search engine thinks relates to the query) and "paid results" (i.e., advertisements that the search engine has been paid to show to users). This subsection presents measurements of how user privacy is impacted before the user has clicked on a search advertisement. Since a user will only click on a fraction of the advertisements they are presented with, users will be affected by these "before" privacy harms more frequently than the privacy harms presented in later subsections.

#### 4.1.1. First-party reidentification

We first measure whether search engines track or reidentify users across queries and visits. We find that the non-privacy-focused search engines (i.e., Bing and Google) track users across visits and are able to link different search queries to the same user who made those queries. The privacy-focused search engines, on the other hand, do not appear to attempt to reidentify users across visits or queries, aligning with the claims made in their privacy policies (see Section 2.1). We measured whether search engines are able to reidentify users across queries and visits by looking for whether search engines stored unique user identifiers in the browser's first-party storage (e.g., cookies, localStorage). Specifically, we inspected the DOM storage area for each site and looked for stored values that appeared to be unique identifiers, using the heuristics described in Section 3.2. We observed that Google and Bing did store such user identifiers; the other search engines did not. We note that some privacy-focused search engines _did_ store other values in first-party storage, but that they were used for purposes other than user identification (e.g., client-side storage of user preferences).
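A sketch of this storage inspection, reusing the keepAsCandidateUid filter from Section 3.2 (the module path is hypothetical):

```typescript
import type { Page } from "puppeteer";
import { keepAsCandidateUid } from "./uidHeuristics"; // hypothetical module

// Collect first-party storage values that look like user identifiers.
async function uidLikeFirstPartyValues(page: Page): Promise<string[]> {
  // Cookies visible in the search engine's first-party context.
  const cookieValues = (await page.cookies()).map((c) => c.value);

  // localStorage of the page's origin, read in-page.
  const localValues = await page.evaluate(() =>
    Object.keys(window.localStorage).map(
      (k) => window.localStorage.getItem(k) ?? ""
    )
  );

  return [...cookieValues, ...localValues].filter(keepAsCandidateUid);
}
```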
#### 4.1.2. Requests to trackers

We also measured whether search engines harmed user privacy by communicating with trackers when presenting advertisements. We did not observe any search engine including resources from, or making network requests to, known trackers. We checked for communication with known trackers by i. recording the URLs of all the network requests made by the browser when rendering the search results, and ii. checking those URLs against popular filter lists (as described in Section 3.2). These URLs comprise both the sub-resources (e.g., scripts, images, videos) loaded by the results page and the third-party requests made using the Web networking APIs (e.g., XMLHttpRequest, fetch(), web sockets). We note that we were only able to measure the client-side network behavior of each search engine, and could only observe whether the search engine pages themselves were sharing information with known trackers. We were not able to measure how or if each search engine communicates with trackers on the server-side.

### When clicking on an ad

Next, we measure how user privacy is affected after the user clicks on an ad, but before the user has arrived at the ad's destination (usually, a page controlled by the party placing the advertisement). This step of the process involves systems run by both the search engine itself and the advertising platform paying for the ad. During this stage, the advertising system may try to accomplish several goals, including fraud detection (i.e., attempting to detect if the "click" was the result of an automated system, intending to increase how much the advertiser pays the search engine) and user profiling (i.e., recording information about the user clicking the ad to combine with existing user profiles). Simultaneously, the search engine may use this step to try to achieve other goals, including quality of service measurements (i.e., ensuring that advertisements render correctly) or additional user profiling (i.e., recording which ad the user clicked to "enrich" whatever information the search engine may have about the user). We find that the measured search engines vary widely in how they treat user privacy when the user clicks on an ad. However, we also find that the advertising systems engage in privacy-harming behaviors and share user-identifying information with third parties across all measured search engines, despite the privacy-focused branding adopted by some search engines.

#### 4.2.1. Search engine page behaviors

First, we measured what behaviors the search engine's page engages in _after_ the user clicks on an ad but _before_ the browser begins navigating away from the search engine's page (and towards the advertisement's destination page). These behaviors might be things like recording which advertisement the user clicked on or how long the user waited before clicking, and are implemented with browser APIs like "onclick" handlers and "ping" attributes (Han et al., 2017). We measured each search engine's post-click behaviors by recording what network requests happened on the page after each advertisement was clicked on. We find that all search engines record additional information about the user and/or the user's click after the user has clicked on an ad.

_Bing._ Clicking on an advertisement on Bing results in additional first-party (i.e., within Bing) network requests. In all iterations, clicking caused a request to be sent to [https://bing.com/fd/ls/GLinkPingPost.aspx](https://bing.com/fd/ls/GLinkPingPost.aspx). These requests included several query parameters, including the clicked ads' destination websites. Furthermore, these requests include user identifiers, for instance, communicated in the MUD cookie (a cookie identifying unique web browsers visiting Microsoft sites)6.
Footnote 6: [https://learn.microsoft.com/en-us/clarity/cookie-list](https://learn.microsoft.com/en-us/clarity/cookie-list)

_Google._ Clicking on ads on Google results in additional first-party web requests. In all cases, the browser sends POST web requests to [https://google.com/gen_204?](https://google.com/gen_204?). These requests include user identifier values communicated in cookies such as NID and AEC7.

Footnote 7: [https://policies.google.com/technologies/cookies](https://policies.google.com/technologies/cookies)

_DuckDuckGo._ Clicking on an advertisement on DuckDuckGo results in additional first-party network connections to [https://improving.duckduckgo.com](https://improving.duckduckgo.com). These requests include several query parameters, such as the search query, the ad provider (Bing in all cases), and the destination URL of the clicked ad. Next, the browser sends an additional network request that fetches a JavaScript file served from [https://duckduckgo.com/y.js](https://duckduckgo.com/y.js). This request includes several query parameters containing information about the ad and the link to which the user should be redirected (link to Bing servers). We note that none of the query parameters nor the cookies sent with these web requests matched our heuristics for user identifiers.

_Qwant._ When clicking on an advertisement on Qwant, a first request is sent to [https://qwant.com/action/click_serp](https://qwant.com/action/click_serp), including information about the user's browser, such as the type of the device and the browser language, along with the search query. Furthermore, this request contains information on the clicked ad (e.g., its position on the results page and the destination website). Then, another request is sent to [https://api.qwant.com/v3/redirect/](https://api.qwant.com/v3/redirect/), including the URL to direct the user to. These two connections do not include user identifiers as query parameters or as cookie values.

_StartPage._ Clicking on an advertisement on StartPage results in an additional first-party request to [https://startpage.com/sp/cl](https://startpage.com/sp/cl). This request includes information about the position of the clicked ad on the results page, but does not include the ad's destination URL. Similar to DuckDuckGo and Qwant, requests to StartPage servers do not include user identifiers.

In summary, we find that all search engines, traditional and privacy-focused alike, record information about users' ad clicks. They all collect data about the clicked ad, such as its position on the results page or destination URL. However, only traditional search engines (Google and Bing) include user identifiers with web requests to their servers.

#### 4.2.2. Navigation Tracking

Next, we measure whether the advertising systems in search engines engage in navigation-based tracking, a technique for tracking users that circumvents browser privacy protections by directing a user through otherwise unrelated sites. Section 2.2.2 provides a high-level summary of how navigation tracking works and why it is an effective method of circumventing tracking protections in many browsers. We find that most of the search engines in our data set engage in navigation-based tracking at least some of the time. Further, we find that the _privacy-focused search engines engage in navigation-based tracking for the majority of placed ads._
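The redirection paths analysed below can be reconstructed from the crawler's network log; a sketch using Puppeteer's redirect-chain API and the psl package for eTLD+1 grouping (client-side redirects, e.g. JavaScript or meta refresh, appear as separate navigations and must be stitched in from the log):

```typescript
import type { HTTPResponse } from "puppeteer";
import psl from "psl";

// eTLD+1 of a URL, e.g. "ad.doubleclick.net" -> "doubleclick.net".
function site(url: string): string {
  const host = new URL(url).hostname;
  return psl.get(host) ?? host;
}

// Reconstruct the server-side redirect hops (HTTP 3xx) that led to the
// final response of the ad-click navigation.
function redirectionPath(finalResponse: HTTPResponse): string[] {
  const hops = finalResponse
    .request()
    .redirectChain()
    .map((r) => site(r.url()));
  const path = [...hops, site(finalResponse.url())];
  // Collapse consecutive duplicates so the path lists distinct sites.
  return path.filter((s, i) => i === 0 || s !== path[i - 1]);
}
```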
We measure the navigation tracking we observed on the selected search engines in three dimensions: i. the distribution of how many sites the user is "bounced" through when they click on an ad on each search engine, ii. how many different organizations a user is exposed to during navigation tracking episodes (distinct from the number of pages or domains), and iii. the distribution of the number of sites in the redirection path that store user-identifying cookies.

_Number of sites visited._ Figure 4 presents the distribution of the number of different sites (i.e., \(eTLD+1\)) each search engine directs the user through when clicking on an ad. We observe that clicking on an ad on Bing generally results in being redirected through the fewest number of sites (96% of ad clicks on Bing result in no other site being visited except for Bing and the final destination site). Clicking on ads on DuckDuckGo, Google, and Qwant typically results in visiting one other site (respectively, 82%, 69%, and 72% of clicks result in an intermediate navigation to a site different than the search engine and the ad's destination). Clicking on ads on StartPage resulted in (on average) visiting the largest number of different sites (93% of clicks resulted in visiting at least two sites other than StartPage and the ad's destination).

_Number of organizations visited._ However, we note that not all redirections are equal in their privacy impact; the marginal privacy harm is generally much lower if a site redirects the user between two sites the company owns, versus the user being redirected between two sites owned by unrelated companies. More concretely, there is little-to-no additional privacy harm if Google bounces a user (and passes information about the user) from google.com to googleadservices.com, while there _is_ privacy harm if Google bounces a user (and the user's information) from google.com to facebook.com (i.e., Facebook learns new information they otherwise would not learn). Understanding the privacy harm of navigation tracking requires considering _which_ sites the user is being "bounced" between. Table 2 presents the five most common redirection paths for each search engine, and Table 7 in the appendix presents the most common sites in the redirection paths. Moreover, we group redirectors' domains by the organization to which they belong using the Disconnect Entity List [2]. Table 3 presents the fraction of navigation paths that include a website from each organization across all search engines. We observe that the impact of navigation tracking differs widely between search engines. On one hand, the navigation tracking that occurs from clicking on ads on Google results in little additional privacy harm; the most commonly immediately visited sites are also operated by Google (i.e., googleadservices.com and ad.doubleclick.net). On the other hand, we find that navigation tracking significantly harms user privacy on privacy-branded search engines. In all three cases, users are either usually directed to Bing sites (100% and 76% of the time for DuckDuckGo and Qwant, respectively) or Google sites (100% of the time for StartPage). While these results are alarming (since these are search engines advertising that they are privacy-preserving), they are not inexplicable. DuckDuckGo and Qwant rely on Bing to provide search ads, and StartPage relies on Google.
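The organization grouping used for Table 3 can be reproduced from Disconnect's entity list; a sketch, assuming an entities.json layout in which each organization maps to its "properties" and "resources" domains:

```typescript
import { readFileSync } from "fs";

// Assumed layout of Disconnect's entities.json:
// { "entities": { "Google": { "properties": [...], "resources": [...] }, ... } }
type EntityList = {
  entities: Record<string, { properties: string[]; resources: string[] }>;
};

const list: EntityList = JSON.parse(readFileSync("entities.json", "utf-8"));
const domainToOrg = new Map<string, string>();
for (const [org, entry] of Object.entries(list.entities)) {
  for (const domain of [...entry.properties, ...entry.resources]) {
    domainToOrg.set(domain, org);
  }
}

// Organizations appearing in one redirection path (of eTLD+1 site names).
function organizationsInPath(path: string[]): Set<string> {
  return new Set(path.map((site) => domainToOrg.get(site) ?? "Unknown"));
}
```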
_Number of sites that identify users._ The extent of privacy harm resulting from bounce tracking depends on two key factors: the behavior of the redirector (i.e., whether the redirector stores user-identifying cookies) and the type of cookie storage used by the browser (flat or partitioned). The lowest level of privacy harm occurs when the redirector does not store any user-identifying cookies. In this case, the redirector can infer the source and destination of the navigation event (i.e., the search engine and the ad's website). However, if the user navigates through the same redirector multiple times, the redirector cannot aggregate the tracking data from different visits to the same user. In contrast, if the redirector sets UID cookies on users' browsers, it can combine tracking data each time the user bounces through it. Specifically, if a user clicks on multiple ads on the same search engine and is redirected through the same redirector each time, the redirector can aggregate all the websites the user has visited.

Figure 4. CDF of the number of different redirectors for Bing, DuckDuckGo, Google, and StartPage.

Figure 5. CDF of the number of different redirectors that store UID cookies for Bing, DuckDuckGo, Google, and StartPage.

\begin{table} \begin{tabular}{|l|l|l|} \hline **Search engine** & **Domain paths** & **Frequency** \\ \hline \hline \multirow{3}{*}{Bing} & bing.com - destination & 96\% \\ & bing.com - clickserve.dartsearch.net - ad.doubleclick.net - destination & 3\% \\ & bing.com - t23.intelliad.de - 1045.netrk.net - destination & 1\% \\ \hline \multirow{5}{*}{Google} & google.com - googleadservices.com - destination & 69\% \\ & google.com - googleadservices.com - clickserve.dartsearch.net - ad.doubleclick.net - destination & 17\% \\ & google.com - googleadservices.com - pixel.everesttech.net - ad.doubleclick.net - destination & 4\% \\ & google.com - googleadservices.com - monitor.clickcease.com - destination & 4\% \\ & google.com - googleadservices.com - monitor.ppcprotect.com - destination & 2\% \\ \hline \multirow{5}{*}{DuckDuckGo} & duckduckgo.com - bing.com - destination & 82\% \\ & duckduckgo.com - bing.com - clickserve.dartsearch.net - ad.doubleclick.net - destination & 14\% \\ & duckduckgo.com - bing.com - 6102.xg4ken.com - destination & 2\% \\ & duckduckgo.com - bing.com - clickserve.dartsearch.net - ad.doubleclick.net - tpt.mediaplex.com - destination & 1\% \\ & duckduckgo.com - bing.com - pixel.everesttech.net - destination & 1\% \\ \hline \multirow{5}{*}{StartPage} & startpage.com - google.com - googleadservices.com - destination & 73\% \\ & startpage.com - google.com - googleadservices.com - clickserve.dartsearch.net - ad.doubleclick.net - destination & 17\% \\ & startpage.com - google.com - destination & 2\% \\ & startpage.com - google.com - googleadservices.com - 6008.xg4ken.com - destination & 1\% \\ & startpage.com - google.com - googleadservices.com - clickserve.dartsearch.net - ad.doubleclick.net - monitor.ppcprotect.com - destination & 1\% \\ \hline \multirow{5}{*}{Qwant} & qwant.com - bing.com - destination & 66\% \\ & qwant.com - destination & 14\% \\ & qwant.com - bing.com - clickserve.dartsearch.net - ad.doubleclick.net - destination & 10\% \\ & qwant.com - track.effiliation.com - destination & 3\% \\ & qwant.com - click.linksynergy.com - destination & 3\% \\ \hline \end{tabular} \end{table} Table 2.
Top five most common navigation domain paths when clicking an ad for each search engine.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline & **Bing** & **Google** & **DuckDuckGo** & **StartPage** & **Qwant** \\ \hline \hline **Adobe** & 0\% & **4\%** & **1\%** & 0\% & **1\%** \\ \hline **Conversant Media** & 0\% & 0\% & **1\%** & 0\% & 0\% \\ \hline **DuckDuckGo** & 0\% & 0\% & 100\% & 0\% & 0\% \\ \hline **Facebook** & 0\% & 0\% & 0\% & **1\%** & 0\% \\ \hline **Google** & **3\%** & 100\% & **15\%** & 100\% & **11\%** \\ \hline **Kenshoo** & 0\% & **2\%** & **2\%** & **1\%** & 0\% \\ \hline **Microsoft** & 100\% & 0\% & 100\% & 0\% & 79\% \\ \hline **Nielsen** & 0\% & 0\% & 0\% & **15\%** & 0\% \\ \hline **PPCProtect** & 0\% & **2\%** & 0\% & **1\%** & **1\%** \\ \hline **Qwant** & 0\% & 0\% & 0\% & 0\% & 100\% \\ \hline **Rakuten** & 0\% & 0\% & 0\% & 0\% & **3\%** \\ \hline **StartPage** & 0\% & 0\% & 0\% & 100\% & 0\% \\ \hline **Unknown** & **4\%** & **23\%** & **15\%** & **19\%** & **16\%** \\ \hline \end{tabular} \end{table} Table 3. Fraction of navigation paths that include a website from each organization across all search engines.

Google and Bing might associate the destination website visited by the user through the advertisement with the user profile, especially if the user's browser has flat cookie storage.

### After clicking on an ad

Finally, we measure how user privacy is impacted once the user has "finished" clicking on a search ad and has arrived at the advertiser's page. We measure how the search engine/advertiser relationship affects user privacy in two ways: first, by measuring whether advertisers include trackers or other known-privacy-harming resources, and second, by measuring if and what kinds of information the search engine's advertising system provides to the advertiser (in the form of user-describing query params). The first measure relates to whether the search engine requires advertisers to abide by privacy-respecting practices; the second measure relates to whether search engines' advertising systems collude with advertisers to aid advertisers in profiling visitors. Redirectors in navigation paths can aggregate more data about the user's behavior if they have scripts on the ads' destination websites. For this, they need to match users using either third-party cookies if they are enabled by the browser or UID smuggling. We investigate whether redirectors can aggregate users' activity on ads' destination websites by analyzing online trackers, whether they receive UIDs as query parameters, and whether they store them. We recorded these requests by keeping the crawlers on the ads' destination pages for 15 seconds for all iterations.

#### 4.3.1. Requests to online trackers

We first measure whether search engines protect their users by requiring advertisers to be privacy-protecting. We measure this by loading the website each clicked search advertisement leads to, recording the URLs of all sub-resources and network requests made when loading and executing the page, and comparing those URLs against EasyList and EasyPrivacy. We find that 93% of the web pages users are taken to when they click on ads on both "standard" and "privacy-focused" search engines contain many privacy-harming resources. Broken down by search engine, we observed 277, 218, 326, 437, and 260 different third-party trackers over all iterations, and a median of 9, 11, 6, 8, and 6 different online trackers per iteration for Bing, Google, DuckDuckGo, StartPage, and Qwant, respectively.
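A sketch of the filter-list matching (assuming the FilterSet/Engine API of the adblock-rs Node bindings; the list files are local copies of EasyList and EasyPrivacy):

```typescript
import { readFileSync } from "fs";
import * as adblock from "adblock-rs";

// Build one blocking engine from the EasyList and EasyPrivacy rules.
const filterSet = new adblock.FilterSet(true);
for (const file of ["easylist.txt", "easyprivacy.txt"]) {
  filterSet.addFilters(readFileSync(file, "utf-8").split("\n"));
}
const engine = new adblock.Engine(filterSet, true);

// A request counts as tracker-bound when either list matches it.
function isTrackerRequest(url: string, pageUrl: string, type: string): boolean {
  return engine.check(url, pageUrl, type); // type: "script", "image", "xhr", ...
}
```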
In order to understand which companies track users on ad destination pages, we group the domains that observed tracking resources are served from by "entity" using the Disconnect Entity List (Beng et al., 2017). For example, using the entity list, we group tracker resources served from the domains google.com and doubleclick.net under the same entity (i.e., Google). Table 5 presents the top entities of trackers we observed on ad destination pages. For instance, we see that Google is the top entity for online trackers on destination pages for StartPage (36%), and we saw that all StartPage redirection paths go through Google servers. Hence, if the browser implements flat cookie storage, Google can match the StartPage user on the ad's destination website and aggregate data about their activity on it in 36% of the cases. We make the same observation for Microsoft trackers on Qwant (4.3%).

#### 4.3.2. User identifiers

Finally, we measure if the advertising systems of the search engines aid advertisers in tracking users across sites by transmitting unique identifiers (or other personal or otherwise individual values) across site boundaries through query parameters. As discussed in Section 3.2, this technique is sometimes called UID smuggling and is a common technique trackers and sites use to circumvent browser privacy protections (such as blocking third-party cookies or partitioning browser storage). For example, if an advertiser places an ad for https://site.example, the advertising system might collude with the advertiser to allow the advertiser to profile the user by appending unique identifiers to the destination URL.

\begin{table}
\begin{tabular}{|l|l|l|l|l|}
\hline
**Bing** & **Google** & **DuckDuckGo** & **StartPage** & **Qwant** \\ \hline \hline
ad.doubleclick.net (3\%) & googleadservices.com (98\%) & bing.com (95\%) & google.com (100\%) & bing.com (78\%) \\ \hline
t23.intelliad.de (1\%) & ad.doubleclick.net (21\%) & ad.doubleclick.net (14\%) & googleadservices.com (94\%) & ad.doubleclick.net (11\%) \\ \hline
1045.netrk.net (1\%) & pixel.everesttech.net (4\%) & 6102.xg4ken.com (2\%) & ad.doubleclick.net (18\%) & click.linksynergy.com (3\%) \\ \hline
 & monitor.ppcprotect.com (2\%) & pixel.everesttech.net (1\%) & 6008.xg4ken.com (1\%) & pixel.everesttech.net (1\%) \\ \hline
 & 3825.xg4ken.com (2\%) & & & monitor.ppcprotect.com (1\%) \\ \hline
 & & & & tracking.deepsearch.adlucent.com (1\%) \\ \hline
\end{tabular}
\end{table}
Table 4. Redirectors that store UID cookies.

The search engine's advertising system might, for example, append information it knows about the user to the advertiser's destination URL (creating a URL like https://site.example?user_id=<id>), so that the advertiser can learn more about the user, harming the user's privacy. We measure whether search engines' advertising systems collude with advertisers to track users across sites by examining the query parameters the search engine (or another intermediate party in a navigation chain) includes in the URL of the advertiser's destination page. We collect all of the query parameters in the destination ad URLs and extract values that appear to be unique identifiers using the heuristics described in Section 3.2. We find that advertising systems collude with advertisers most of the time across all search engines, _even private ones_.
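A minimal sketch of such a heuristic filter over destination-URL query parameters follows; the length and entropy thresholds are illustrative, not the paper's exact values, and in practice candidate values are additionally cross-checked across crawl iterations and manually inspected.

```python
import math
from collections import Counter
from urllib.parse import urlparse, parse_qsl

def shannon_entropy(value):
    counts = Counter(value)
    return -sum(c / len(value) * math.log2(c / len(value))
                for c in counts.values())

def looks_like_uid(value):
    # Long, high-entropy strings are candidate unique identifiers.
    return len(value) >= 8 and shannon_entropy(value) >= 3.0

def candidate_uids(destination_url):
    """Return the query parameters of an ad destination URL whose
    values look like unique identifiers."""
    return {key: value
            for key, value in parse_qsl(urlparse(destination_url).query)
            if looks_like_uid(value)}
```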
Clicking ads on all five search engines resulted in user identifiers being passed to advertisers. We found user identifiers in query parameters in 80%, 94%, 68%, 92%, and 53% of iterations for Bing, Google, DuckDuckGo, StartPage, and Qwant, respectively. Most of these parameters are MSCLKID (Microsoft Click Identifier) or GCLID (Google Click Identifier), two _unique identifiers_ used for ad-click tracking. MSCLKID is added by Microsoft Advertising and GCLID by Google Ads when users click on their respective ads. Advertisers use these IDs to identify and track ad clicks; advertisers might store click-tracking first-party cookies to track actions taken after the ad click (Han et al., 2017; Wang et al., 2018; Wang et al., 2018). Table 6 reports the fraction of iterations in which the web request to the ad's destination page included MSCLKID, GCLID, or other parameters. We can see that in search engines that use Microsoft advertising (DuckDuckGo and Bing), we find both MSCLKID and GCLID, whereas in those that use Google advertising (Google and StartPage), we do not find MSCLKID. Moreover, we investigate whether advertisers persist the UID query parameters they receive. We cross-reference values obtained from destination pages' first-party storage (e.g., cookies and localStorage) with the query parameters these pages receive. We find that MSCLKID values are persisted in 15%, 17%, and 1% of cases for Bing, DuckDuckGo, and Qwant, respectively. As for GCLID, we find that a cookie is created in 5%, 10%, and 13% of cases for Bing, Google, and StartPage.

## 5. Limitations

Our measurement methodology has some limitations. First, we only look for user identifiers transferred in query parameters and do not detect them when they are transferred by other methods. For instance, previous work (Wang et al., 2018; Wang et al., 2018) found that trackers sometimes decorate their own URL, exposed through document.referrer, with user identifiers and read them on the destination page. Second, we run all our crawling iterations from the same IP address. Consequently, if some query parameters are IP-address based, they will have the same value across all iterations, and thus we would not consider them user identifiers. Finally, our results are subject to variation based on the ads we selected and the search queries we used. Different search queries could potentially trigger distinct ads and lead to diverse advertisers, potentially exhibiting different behaviors. Nonetheless, our primary objective is to demonstrate the potential for third-party tracking when interacting with ads on private search engines.

## 6. Related Work

Search engines and online tracking have received a lot of research attention. We review the studies closest to our work.

**Search engines.** A first line of work has measured the extent to which personalization can be observed in search engine results (Wang et al., 2018; Wang et al., 2018) and ads (Wang et al., 2018). For instance, Hannak et al. (2017) developed a methodology for measuring personalization in search results, applied it to Bing, Google, and DuckDuckGo, and found that Bing results are more personalized than Google's, while they did not notice any personalization for DuckDuckGo.
\begin{table}
\begin{tabular}{|l|l|l|l|}
\hline
 & MSCLKID & GCLID & other UID parameters \\ \hline
Bing & 79\% & 12\% & 3\% \\ \hline
Google & 0\% & 92\% & 8\% \\ \hline
DuckDuckGo & 66\% & 12\% & 6\% \\ \hline
StartPage & 0\% & 92\% & 12\% \\ \hline
Qwant & 51\% & 8\% & 7\% \\ \hline
\end{tabular}
\end{table}
Table 6. Fraction of iterations in which the ad's destination page received MSCLKID, GCLID, and other UID attributes as query parameters.

\begin{table}
\begin{tabular}{|l|l|l|l|l|}
\hline
**Bing** & **Google** & **DuckDuckGo** & **StartPage** & **Qwant** \\ \hline \hline
unknown (32.0\%) & unknown (34.8\%) & unknown (29.5\%) & Google (36.0\%) & Google (26.3\%) \\ \hline
Google (24.4\%) & Google (28.7\%) & Google (21.8\%) & unknown (28.1\%) & Amazon (23.4\%) \\ \hline
Microsoft (13.8\%) & Microsoft (10.5\%) & Amazon (16.3\%) & Microsoft (4.3\%) & unknown (22.4\%) \\ \hline
Facebook (3.8\%) & Amazon (3.1\%) & Facebook (3.4\%) & Facebook (3.2\%) & Microsoft (4.2\%) \\ \hline
Criteo (2.4\%) & Criteo (2.5\%) & Criteo (2.2\%) & Criteo (3.0\%) & Criteo (3.8\%) \\ \hline
\end{tabular}
\end{table}
Table 5. Top entities of online trackers reached by crawlers on each search engine.

A second line of work has focused on solutions that protect users' privacy from search engines and prevent web profiling. Castella-Roca et al. (2015) presented a computationally efficient protocol that provides a distorted user profile to the search engine to preserve users' privacy. Finally, several studies have proposed privacy-preserving search-personalization solutions for search engines. For instance, Shen et al. (2015) analyze various software architectures for personalized search and envision possible strategies with client-side personalization. Xu et al. (2017) suggest helping users choose the content and degree of detail of the profile information built by search engines. To the best of our knowledge, there is no study investigating the privacy properties of the advertising systems used on private search engines.

**Online tracking.** Several works have analyzed the usage of cross-site tracking techniques in the wild (Koshelev et al., 2017). Chen et al. (2017) propose a data-flow tracking system to measure user tracking performed through first-party cookies. They found that more than 97% of the websites they crawled have first-party cookies set by third-party JavaScript, and that on 57% of them there is at least one cookie containing a unique user identifier diffused to multiple third parties. Roesner et al. (2017) measured how user tracking occurs in the wild. They found that multiple parties track most commercial pages and estimate that several trackers can each capture more than 20% of a user's browsing behavior. Koop et al. (2017) analyzed a dataset of redirection chains in the wild and found that 11% of websites redirect to the same 100 top redirectors. Moreover, they demonstrate that these top redirectors could identify users on the most visited websites. Randall et al. (2017) measure the frequency of UID smuggling in the wild and find that it is performed on more than 8% of all navigations in their dataset. We use a similar method to identify user identifiers among all cookie values and query parameters by implementing automatic filtering followed by a manual inspection.
All these studies were conducted in the wild, and to the best of our knowledge, no study focuses on navigational tracking techniques performed on search engines.

## 7. Conclusion

In this paper, we presented the first systematic study of the privacy properties of the advertising systems of five popular search engines: two traditional ones, Google and Bing, and three private ones, DuckDuckGo, StartPage, and Qwant. We investigated whether, and to what extent, search engines, through their advertising systems, engage in privacy-harming behaviors that allow cross-site tracking. Despite the privacy intentions and promises of private search engines, our findings reveal the failure of privacy-focused search engines to fully protect users' privacy during ad interactions. Users on all measured search engines, including the privacy-focused ones, are subject to navigation-based tracking by third parties. We find that all search engines engage in bounce tracking when clicking on ads, where users are sent through several redirectors before reaching the ads' destination websites. While private search engines themselves do not engage in user tracking, their reliance on traditional advertising systems (Microsoft or Google) renders users susceptible to tracking by those systems. _Although we cannot directly attribute this tracking to the search engines themselves, it is evident that they are enabling it through their reliance on Microsoft's and Google's advertising systems._

Inspecting the privacy policies of the search engines in light of our findings reveals interesting disparities. While our results demonstrate that Microsoft is capable of tracking DuckDuckGo users when they click on ads, DuckDuckGo asserts that Microsoft does not associate ad-click data with user profiles. On the other hand, Qwant, which also relies on Microsoft advertising for a significant fraction of its ads, does not document the utilization of ad-click data by Microsoft or whether it is used to enhance user profiles. Similarly, StartPage explicitly states that clicking on ads subjects users to the data collection policies of other websites.

Our study highlights the need for increased attention to privacy protection within the advertising systems of search engines. One potential way for private search engines to protect users' privacy would be to reduce their reliance on third-party advertising systems. Developing their own advertising platform could provide greater control over privacy practices, although the feasibility and complexity of such an approach remain uncertain. Alternatively, private search engines could collaborate with advertising systems such as Microsoft and Google, forging partnerships that proactively tackle privacy concerns. For instance, private search engines could negotiate agreements with the ad provider that prevent redirecting users who click on ads placed within private search engines to additional third parties. This approach would minimize the extent of third-party tracking, limiting it to the ad provider only. Moreover, search engines like StartPage and Qwant could follow the lead of DuckDuckGo by seeking agreements with advertising systems to prevent the use of ad-click identifiers for user-profile enrichment. These proactive steps would enhance user privacy while maintaining advertising partnerships with larger platforms.

###### Acknowledgements.
This research was supported in part by the French National Research Agency (ANR) through the ANR-17-CE23-0014, ANR-21-CE23-0031-02, and MIAI@Grenoble Alpes ANR-19-P3IA-0003 grants and by the EU through the 101041223, 101021377, and 952215 grants.
2308.02675
TIPICAL -- Type Inference for Python In Critical Accuracy Level
Type inference methods based on deep learning are becoming increasingly popular as they aim to compensate for the drawbacks of static and dynamic analysis approaches, such as high uncertainty. However, their practical application is still debatable due to several intrinsic issues, such as the fact that code from different software domains involves data types that are unknown to the type inference system. In order to overcome these problems and gain high-confidence predictions, we thus present TIPICAL, a method that combines deep similarity learning with novelty detection. We show that our method can better predict data types with high confidence by successfully filtering out unknown and inaccurately predicted data types, achieving higher F1 scores than the state-of-the-art type inference method Type4Py. Additionally, we investigate how different software domains and data type frequencies may affect the results of our method.
Jonathan Elkobi, Bernd Gruner, Tim Sonnekalb, Clemens-Alexander Brust
2023-08-04T19:16:23Z
http://arxiv.org/abs/2308.02675v1
# TIPICAL - Type Inference for Python In Critical Accuracy Level

###### Abstract

Type inference methods based on deep learning are becoming increasingly popular as they aim to compensate for the drawbacks of static and dynamic analysis approaches, such as high uncertainty. However, their practical application is still debatable due to several intrinsic issues, such as the fact that code from different software domains involves data types that are unknown to the type inference system. In order to overcome these problems and gain high-confidence predictions, we thus present TIPICAL, a method that combines deep similarity learning with novelty detection. We show that our method can better predict data types with high confidence by successfully filtering out unknown and inaccurately predicted data types, achieving higher F1 scores than the state-of-the-art type inference method Type4Py. Additionally, we investigate how different software domains and data type frequencies may affect the results of our method.

type inference, novelty detection, machine learning, cross-domain

## I Introduction

Dynamic programming languages can be enriched by optional type annotations to enable more precise program analysis and early detection of type-related run-time errors [1, 2]. Our objective is to develop a workable technique for Python programmers to use on a daily basis that will enhance their routine workflow through the annotation of optional data types, a process known as type inference. This can be accomplished by giving type recommendations to the user in real time, or by annotating the code automatically after it is written. Due to its automation, the automatic case necessitates a method that only annotates with high certainty of correctness, as the harm that inaccurate type hints can do may be greater than the benefit of correct ones.

Static and dynamic type inference techniques suffer from low precision due to applied abstraction or missing coverage [3]. Recent deep learning-based methods aim to overcome these issues and provide promising results [4, 5, 6, 7]. However, these systems struggle with problems occurring in practical applications, for example data types unknown to the system or source code from other software domains [8].

First, the problem of unknown classes, i.e., the inability to accurately predict unseen data types, is a prevalent issue in the field of machine learning. This is due to the lack of representation of such data types in the training set, rendering prediction of these data types ineffective. To mitigate this issue, we propose a method to filter out unknown data types based on their characteristic features. This approach not only enables the identification of unknown data types but also improves the overall reliability and quality of data type annotations by eliminating inaccurate predictions.

In practical usage, type inference systems are often utilized in a variety of software domains. However, this can exacerbate the problem of unknown data types and lead to decreased prediction accuracy due to dataset shifts [9, 10]. Therefore, in our research, we investigate the effect of different software domains on the performance of our type inference method.

We thus present **TIPICAL** - Type Inference for Python In Critical Accuracy Level, an extension of the Type4Py method [6] for obtaining accurate results that addresses both the unknown data types problem and the domain shift, in order to maximize the practical application of deep learning-based type inference.
In our experiments, we show that our method can successfully filter out unknown and inaccurately predicted data types and improve the results compared to the state-of-the-art type inference method Type4Py. Furthermore, we investigate the impact of the data type frequencies and of different software domains. For the evaluation, we use the recent datasets CrossDomainTypes4Py [8] and ManyTypes4Py [11]. In order to ensure the **reproducibility** of our experiments, we make our experimental pipeline publicly available1.

Footnote 1: https://gitlab.com/dlr-dw/type-inference

## II Related Work

Several studies use deep learning techniques for type inference. In this work, we focus on methods that are designed for Python projects. DLType [12], TypeWriter [13], and PyInfer [14] are the first deep learning-based methods in this field. They suffer from the problem that they can only predict a limited number of data types due to their architecture: they are limited to the 500 or 1000 most frequent data types that occur in the training dataset. Typilus [4] and Type4Py [6] address this problem and can predict all data types which are present in the training dataset. There are potential approaches where new types can be recognized through additional static analysis, as demonstrated in HiTyper [15]. However, there is still the issue that data types that occur rarely or not at all in the training dataset cannot be predicted [8]. Nevertheless, novelty detection has not been actively pursued as a solution to this problem. Therefore we develop TIPICAL to mitigate these problems. As a basis for our method, we use Type4Py because, according to the evaluation of Mir et al. [6], it is state-of-the-art and its source code is publicly available.

## III Methodology

In this section, we briefly explain the type inference method Type4Py, which is the basis for our method. Afterward, the structure and the functioning of TIPICAL are presented.

### _Type4Py Inference System_

The Type4Py framework serves as the foundation for our research, as detailed in the original paper [6]. The Type4Py system utilizes code tokens, identifier names, and available data types (visible type hints) as input. The code tokens and identifier names are embedded by Word2Vec [16] and processed separately through recurrent neural networks. The resulting representations are then concatenated with the visible type hints and further processed through a fully connected layer to generate a feature vector. This feature vector is then used for a k-nearest-neighbor search in the type cluster built from the training dataset, which makes it possible to predict all data types present in that dataset. However, this method may not accurately predict unknown data types that are not represented in the training dataset, as well as those that are inaccurately predicted due to limitations of using the nearest neighbor as a classification method.

### _TIPICAL_

As a result, we sought to develop TIPICAL, a comprehensive system that enhances the usage of type inference by utilizing novelty detection to filter out inaccurate predictions and unpredictable data types. Using the workflow presented in Figure 1, we find a threshold for filtering. We then use the same cluster centers and threshold on the test set to filter out predictions.
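Both Type4Py and TIPICAL operate on the feature vectors produced by the encoder described above. The following is a minimal sketch of such a Type4Py-style feature extractor; the layer sizes are our own illustrative assumptions, and the actual architecture is the one specified in [6].

```python
import torch
import torch.nn as nn

class Type4PyStyleEncoder(nn.Module):
    """Maps (identifier sequence, code-token sequence, visible type hints)
    to a single feature vector used for the k-nearest-neighbor search."""

    def __init__(self, emb_dim=100, hidden=256, vth_dim=1024, out_dim=4096):
        super().__init__()
        self.id_rnn = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.tok_rnn = nn.LSTM(emb_dim, hidden, batch_first=True)
        # identifiers + code tokens + visible type hints -> feature vector
        self.project = nn.Linear(2 * hidden + vth_dim, out_dim)

    def forward(self, id_emb, tok_emb, vth):
        # id_emb, tok_emb: pre-computed Word2Vec embeddings of shape
        # (batch, seq_len, emb_dim); vth: (batch, vth_dim)
        _, (h_id, _) = self.id_rnn(id_emb)
        _, (h_tok, _) = self.tok_rnn(tok_emb)
        features = torch.cat([h_id[-1], h_tok[-1], vth], dim=1)
        return self.project(features)   # point in the type cluster space
```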
#### III-B1 Determining the Threshold

First, we determine the cluster centers of each known data type using the training dataset, defined as follows:

\[\bar{\vec{x}}=\frac{\sum_{i=1}^{n}\vec{x_{i}}}{n}, \tag{1}\]

where \(\vec{x}\) is the feature vector, \(n\) is the total number of vectors and \(\bar{\vec{x}}\) is the cluster center vector. Next, we determine the top two nearest cluster centers for each vector in the validation set, and their distances \(d_{1}\) and \(d_{2}\). Then \(\Delta d\) is calculated, as seen in Equation 2:

\[\Delta d=d_{2}-d_{1} \tag{2}\]

It can be interpreted, in the spirit of cost-effective active learning, as a proxy for the entropy of the distribution of the distances to all of the cluster centers. Using these distances, we develop a threshold-based method to determine whether or not the closest cluster center can accurately predict the data type. The reasoning behind this is that if two cluster centers are roughly the same distance from the vector, it may be difficult to distinguish which data type is the correct one, or the vector could even correspond to an unknown data type that is simply not represented in the training set; whereas if the closest cluster center is significantly closer to the vector than the second one, it will almost certainly yield the correct prediction. Filtered-out vectors are labeled as non-predicted; possible uses of these are discussed in the conclusion.

Fig. 1: Deciding the threshold using the validation set.

#### III-B2 Making the Predictions

Afterward, for the prediction itself, we use the nearest cluster center as the predicted data type. In order to make accurate predictions, we maximize the F1 score on the validation set by determining up to which value of \(\Delta d\) we filter out the vectors. Finally, as can be seen in the second part of Figure 1, we apply the same approach to the test set using our previous findings. We calculate \(d_{1}\), \(d_{2}\), and \(\Delta d\) for all the test set vectors. Further, we apply the threshold determined on the validation set and filter out lower \(\Delta d\) values from our final predictions, producing only high-certainty predictions that will provide our end user with the most accurate predictions, as will be demonstrated in the following section.

## IV Experiments and Evaluation

Following the pipeline of TIPICAL described in Section III-B, we present our experiments on the ManyTypes4Py and the CrossDomainTypes4Py datasets. Moreover, we expand the scope of the experiments of the original papers [6, 8] to further study the effects of different software domains and the unknown data type issue. In addition to creating a comprehensive method for real-life machine learning-based type inference with high certainty, we conducted the experiments to answer the following research questions:

1. How do different software domains affect the predictable and unknown data type distribution according to the entropy proxy \(\Delta d\)?
2. How does the accuracy of the nearest-cluster-center predictions correspond to \(\Delta d\)?
3. Can TIPICAL create higher-certainty predictions than the predictions of the Type4Py system?
4. How does the frequency of data types affect their predictability in TIPICAL?

### _Datasets and Domains_

We use the CrossDomainTypes4Py [8] and ManyTypes4Py [11] datasets for our experiments. As described by Gruner et al. [8], these contain a total of at least three different software domains.
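A minimal sketch of this thresholding procedure follows (NumPy/scikit-learn); data-type labels are assumed to be encoded as cluster-center indices, and candidate thresholds would typically be scanned over the observed \(\Delta d\) values.

```python
import numpy as np
from sklearn.metrics import f1_score

def cluster_centers(train_vecs, train_labels):
    labels = np.unique(train_labels)
    centers = np.stack([train_vecs[train_labels == lab].mean(axis=0)
                        for lab in labels])
    return centers, labels

def delta_d(vecs, centers):
    # Distance from every vector to every cluster center.
    dists = np.linalg.norm(vecs[:, None, :] - centers[None, :, :], axis=2)
    order = np.argsort(dists, axis=1)
    d1 = np.take_along_axis(dists, order[:, :1], axis=1)[:, 0]
    d2 = np.take_along_axis(dists, order[:, 1:2], axis=1)[:, 0]
    return d2 - d1, order[:, 0]    # Delta-d and index of predicted center

def best_threshold(dd, predicted, true_labels, candidates):
    # Keep only predictions with Delta-d above t; pick the t that
    # maximizes the weighted F1 score on the validation set.
    scores = [f1_score(true_labels[dd > t], predicted[dd > t],
                       average="weighted") for t in candidates]
    return candidates[int(np.argmax(scores))]
```

With these pieces, the test-set predictions are simply the nearest centers of those vectors whose \(\Delta d\) exceeds the chosen threshold; all other vectors are labeled as non-predicted.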
CrossDomainTypes4Py consists of the scientific calculation (cal) domain with 4,783 repositories and the web development (web) domain with 3,129 repositories. ManyTypes4Py, on the other hand, contains 5,382 repositories, which are from various domains and are therefore considered general (mtp).

### _Experiment Setup_

We adapt the existing cross-domain Type4Py implementation from Gruner et al. [8] in order to conduct the research. We employ PyTorch, a deep learning framework, using Python 3.6. We use the same hyperparameters as Mir et al. [6]. For our experiments, we created the following four cross-domain setups:

1. Setup Cal2Mtp - Scientific Calculation to General
2. Setup Mtp2Cal - General to Scientific Calculation
3. Setup Cal2Web - Scientific Calculation to Web Development
4. Setup Web2Cal - Web Development to Scientific Calculation

### _Research Questions and Results_

**RQ1: How do different software domains affect the predictable and unknown data type distribution according to the entropy proxy \(\Delta d\)?**

Each setup consists of two experiments, which are conducted three times to calculate the average of the results. In the first experiment, the system is trained on the first mentioned domain and evaluated on the second domain (example reference: Cal2Mtp.1). The second experiment is for comparison and performs the training and the evaluation on the second mentioned domain (example reference: Cal2Mtp.2). In total we get eight results from our four setups.

Fig. 2: Applying the threshold to make predictions on the test set.

Figures 3 & 4 show that using examples from various software domains has no discernible impact on the target domains. The amount of unpredictable types, which sharply increases due to the size of the source dataset as a whole rather than a change in the use case itself, is another intriguing development that follows from this. Nevertheless, those findings are in favor of our approach: by removing vectors with low \(\Delta d\) scores, we also remove most of the unknown data types, since these lie at comparable distances from at least two cluster centers. Hence we provide a practical novelty detection method.

**RQ2: How does the accuracy of the nearest-cluster-center predictions correspond to \(\Delta d\)?**

Figures 5 & 6 illustrate that different domains do not affect the accuracy of the closest cluster center's prediction on the target domains. However, for correctly predicted examples, the distribution of \(\Delta d\) is shifted towards larger values. As a result, we can conclude that even under a domain change, our method can make predictions with a higher degree of certainty. In addition, those results support our method: by filtering out vectors with low \(\Delta d\) scores, we also eliminate the majority of incorrect predictions, since these lie closer to at least two cluster centers than the accurate ones.

**RQ3: Can TIPICAL create higher-certainty predictions than the predictions of the Type4Py system?**

Even in this scenario where the predictable data types are known, TIPICAL outperforms Type4Py in 7 out of 8 experiments; on average, our method improves the F1 score by 12.27%. This serves as evidence that TIPICAL, which does not make any assumptions about the target dataset, is more effective than methods that rely on prior knowledge of the predictable data types.

**RQ4: How does the frequency of data types affect their predictability in TIPICAL?**

The frequency of a data type influences how accurately predictable it is.
Figures 7 and 8 show that common types (types that appear in the training set more than 100 times [6]) are retained after thresholding at a higher rate than rare or unknown types. Additionally, although the rare types have a longer tail, their \(\Delta d\) distribution is similar to that of the unknown types. Because the calculated cluster centers of the common data types more closely resemble the features of new samples, the common data types can also be predicted more accurately than the rare types. Furthermore, the distribution of the common types is impacted by different software domains, whereas the distribution of the rare types is not. This may help to explain why some cases have decreased accuracy.

## V Conclusion

In this paper, we presented TIPICAL, a method that combines deep similarity learning with novelty detection to filter out unknown and unpredictable data types. To mitigate this issue, TIPICAL employs a threshold-based method to determine the accuracy of predictions by comparing the distance to the closest cluster center with the distance to the second closest. This approach results in the filtering out of lower-certainty predictions, thus maximizing the overall accuracy of predictions on both the validation and test sets. Consequently, we obtain a method well suited for practical use cases that require high certainty of data type annotation.

With our 8 experiments across 3 domains, we focused on three subjects. Real-world software domain changes mainly affect the accuracy on the predictable data types but not the prediction percentage of the unpredictable data types, leading to the conclusion that our technique can compete well even then. Additionally, the fact that the main types are common and easier to predict further increases the validity of our method, mainly due to the robustness of predicting types that are well represented in the training set. Our findings suggest that applying the type inference automatically using TIPICAL would be preferable because it will cause less harm; however, it may not be as effective for real-time suggestions, where the developer can review all of the type annotations.

Last but not least, we suggest further developing this method by utilizing lifelong learning techniques to improve the general predictability of labels and enrich the set of predictable types: due to the countless possible types, one could, for example, hand samples that fall below the threshold to a domain expert for labeling and repeat the entire process indefinitely [17]. Moreover, another easy way to achieve better results is to use only the common types from the training set to create the cluster centers; the confidence in each cluster center would then rise, and the approach would likely reach higher F1 scores, but on fewer samples. Researchers can also use a broader method to determine the genuine entropy, then follow our method to increase its effectiveness and study it further.
2310.18467
Structure preserving numerical methods for the ideal compressible MHD system
We introduce a novel structure-preserving method in order to approximate the compressible ideal Magnetohydrodynamics (MHD) equations. This technique addresses the MHD equations using a non-divergence formulation, where the contributions of the magnetic field to the momentum and total mechanical energy are treated as source terms. Our approach uses the Marchuk-Strang splitting technique and involves three distinct components: a compressible Euler solver, a source-system solver, and an update procedure for the total mechanical energy. The scheme allows for significant freedom in the choice of Euler's equation solver, while the magnetic field is discretized using a curl-conforming finite element space, yielding exact preservation of the involution constraints. We prove that the method preserves invariant domain properties, including positivity of density, positivity of internal energy, and the minimum principle of the specific entropy. If the scheme used to solve Euler's equation conserves total energy, then the resulting MHD scheme can be proven to preserve total energy. Similarly, if the scheme used to solve Euler's equation is entropy-stable, then the resulting MHD scheme is entropy stable as well. In our approach, the CFL condition does not depend on magnetosonic wave-speeds, but only on the usual maximum wave speed from Euler's system. To validate the effectiveness of our method, we solve a variety of ideal MHD problems, showing that the method is capable of delivering high-order accuracy in space for smooth problems, while also offering unconditional robustness in the shock hydrodynamics regime.
Tuan Anh Dao, Murtazo Nazarov, Ignacio Tomas
2023-10-27T20:25:19Z
http://arxiv.org/abs/2310.18467v1
# Structure preserving numerical methods for the ideal compressible MHD system

###### Abstract

We introduce a novel structure-preserving method in order to approximate the compressible ideal Magnetohydrodynamics (MHD) equations. This technique addresses the MHD equations using a non-divergence formulation, where the contributions of the magnetic field to the momentum and total mechanical energy are treated as source terms. Our approach uses the Marchuk-Strang splitting technique and involves three distinct components: a compressible Euler solver, a source-system solver, and an update procedure for the total mechanical energy. The scheme allows for significant freedom in the choice of Euler's equation solver, while the magnetic field is discretized using a curl-conforming finite element space, yielding exact preservation of the involution constraints. We prove that the method preserves invariant domain properties, including positivity of density, positivity of internal energy, and the minimum principle of the specific entropy. If the scheme used to solve Euler's equation conserves total energy, then the resulting MHD scheme can be proven to preserve total energy. Similarly, if the scheme used to solve Euler's equation is entropy-stable, then the resulting MHD scheme is entropy stable as well. In our approach, the CFL condition does not depend on magnetosonic wave-speeds, but only on the usual maximum wave speed from Euler's system. To validate the effectiveness of our method, we solve a variety of ideal MHD problems, showing that the method is capable of delivering high-order accuracy in space for smooth problems, while also offering unconditional robustness in the shock hydrodynamics regime.

keywords: MHD, structure preserving, invariant domain, involution constraints, energy-stability

## 1 Introduction

Magnetohydrodynamic (MHD) equations model the dynamics of plasma, which is an ionized gas at high temperatures. The ideal MHD equations combine the fluid-dynamics description of the Euler equations with the zero-permittivity limit of Maxwell's equations and an Ohm's law closure. The model considers both the movement of the conductive fluid and its interaction with magnetic fields. The MHD equations are widely used in astrophysics applications as well as in nuclear fusion research, where they are used to study and control instabilities in the plasma confinement. Solutions to the MHD system contain contact, shock, and rarefaction waves. In addition, the interaction of fluid and magnetic fields at very high temperatures poses additional challenges for MHD simulations. Despite these difficulties, numerical solutions of the MHD system are vital to predict phenomena in various scientific fields such as plasma physics and astrophysics. Furthermore, when performing numerical simulations of the MHD system, it is crucial to ensure the preservation of essential structural properties of the solution, such as positivity properties, conservation of total energy, and involution constraints. Various schemes that retain several of these properties, in particular for the compressible Euler equations, have been published in the existing literature. For instance, the works of [32; 42; 14], along with the references provided therein, represent just a subset of the comprehensive research dedicated to achieving positivity-preserving approximations for the compressible Euler equations, using finite volume, discontinuous Galerkin, and finite element methods.
Unfortunately, direct extension of these methods to the MHD system is not straightforward due to the additional induction equation for the magnetic field and the corresponding magnetic stress/force. In particular, the standard MHD model in divergence form is only valid if \(\operatorname{div}\boldsymbol{\mathcal{B}}\equiv 0\) at all times. A slight violation of the divergence-free condition can lead to negative internal energy, which will cause the numerical simulation to fail catastrophically, see e.g., [40; 41]. It should be emphasized that the divergence formulation of the MHD system is valid only for sufficiently smooth solutions. However, in the case of weakly differentiable and discontinuous solutions, \(\operatorname{div}\boldsymbol{\mathcal{B}}\) cannot be pointwise zero. To the best of our knowledge, none of the divergence cleaning techniques, such as [10], can completely eliminate the discrepancy error of the divergence of \(\boldsymbol{\mathcal{B}}\). In this paper, instead of using the MHD equations in divergence form, see (2), which is widely used in scientific works, we propose to use the induction equation and keep the magnetic forces acting on the momentum and total mechanical energy as source terms. More precisely, we propose solving

\[\partial_{t}\rho+\operatorname{div}\boldsymbol{m} =0\,, \tag{1a}\]
\[\partial_{t}\boldsymbol{m}+\operatorname{div}\left(\rho^{-1}\boldsymbol{m}\boldsymbol{m}^{\top}+\mathbb{I}p\right) =-\mu\boldsymbol{\mathcal{H}}\times\operatorname{curl}\boldsymbol{\mathcal{H}}\,, \tag{1b}\]
\[\partial_{t}E+\operatorname{div}\left(\tfrac{\boldsymbol{m}}{\rho}(E+p)\right) =-\mu(\boldsymbol{\mathcal{H}}\times\operatorname{curl}\boldsymbol{\mathcal{H}})\cdot\tfrac{\boldsymbol{m}}{\rho}\,, \tag{1c}\]
\[\partial_{t}\boldsymbol{\mathcal{H}}-\operatorname{curl}\left(\tfrac{\boldsymbol{m}}{\rho}\times\boldsymbol{\mathcal{H}}\right) =0\,, \tag{1d}\]

where \(\rho\) is the density, \(\boldsymbol{m}\) is the momentum, \(E=\frac{1}{2\rho}|\boldsymbol{m}|^{2}+\rho e\) is the total mechanical energy, \(e\) is the specific internal energy, \(\boldsymbol{\mathcal{H}}\) is the magnetic field, \(p=p(\rho,e)\) is the pressure, \(\mathbb{I}\in\mathbb{R}^{d\times d}\) denotes the identity matrix with \(d\) being the space dimension, and \(\mu>0\) is the magnetic permeability constant. Taking the divergence of both sides of (1d) we obtain the condition \(\partial_{t}\operatorname{div}\boldsymbol{\mathcal{H}}=0\), implying that the evolution of the magnetic field \(\boldsymbol{\mathcal{H}}\) is such that \(\operatorname{div}\boldsymbol{\mathcal{H}}(t)\equiv\operatorname{div}\boldsymbol{\mathcal{H}}_{0}\) for all \(t\geq 0\), where \(\boldsymbol{\mathcal{H}}_{0}\) is the initial data. Note that for the case of smooth (e.g., \(\mathcal{C}^{1}\)-continuous or better) divergence-free solutions, the formulation (1) is equivalent to the MHD system in divergence form (2). However, for the case of weakly differentiable and discontinuous solutions, we should regard (1) and (2) as entirely different models. In particular, there is no reason to believe that (1) and (2) should produce the same families of discontinuous solutions, see for instance [34, p. 253] for a related discussion. We emphasize that formulation (2) is not valid without the assumption \(\operatorname{div}\boldsymbol{\mathcal{B}}=\operatorname{div}\boldsymbol{\mathcal{B}}_{0}=0\), since it is an intrinsic part of its derivation. On the other hand, formulation (1) does not need or use the condition \(\operatorname{div}\boldsymbol{\mathcal{H}}\equiv 0\): there is no mathematical reason to incorporate such an assumption.
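The involution property just described rests on the elementary identity \(\operatorname{div}(\operatorname{curl}\boldsymbol{F})=0\): taking the divergence of (1d) annihilates the curl on the right-hand side. This identity can be checked symbolically; the following is a minimal sketch using SymPy (our illustrative tooling, not part of the method itself).

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')

# A generic vector field with arbitrary smooth components.
H1 = sp.Function('H1')(N.x, N.y, N.z)
H2 = sp.Function('H2')(N.x, N.y, N.z)
H3 = sp.Function('H3')(N.x, N.y, N.z)
H = H1 * N.i + H2 * N.j + H3 * N.k

# div(curl(F)) vanishes identically; applied to (1d) this gives
# d/dt div(H) = 0, i.e., div(H)(t) = div(H_0) for all t >= 0.
assert sp.simplify(divergence(curl(H))) == 0
```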
From a practical point of view, regardless of whether we prefer the source formulation (1) or the divergence formulation (2), any numerical method satisfying the following properties:

(i) Preservation of pointwise stability properties, such as pointwise positivity of the density and the minimum principle of the specific entropy;
(ii) Preservation of involution constraints, in this case, preservation of the weak divergence;
(iii) Preservation of total energy;
(iv) Preservation of second-order accuracy (or higher) for smooth solutions;
(v) Preservation of discrete entropy-dissipation properties;

is a desirable method for engineering and scientific applications. The list (i)-(v) is quite ambitious and we are not aware of any numerical scheme capable of preserving properties (i)-(v) simultaneously. We highlight that designing a scheme that preserves just one of these properties (e.g., formal high-order accuracy, see for instance [8, 9]) does not pose a major challenge. The mathematical challenge of structure preservation lies in the satisfaction of two or more of these properties simultaneously. In this manuscript, we advocate for the use of formulation (1), instead of the usual divergence form (2), as a better fit in order to preserve properties (i)-(v) outlined above.

In [8] it was proved that by adding a viscous term to each equation of the ideal MHD system (i.e., conservation of mass, conservation of momentum, conservation of total energy, and the induction equation) one can achieve positivity of density and internal energy, the minimum principle of the specific entropy, and satisfaction of all generalized entropies. In this article we improve the result of [8]. We prove that the viscous regularization of mass, momentum, and total mechanical energy is sufficient to achieve the above mentioned properties (i.e., positivity of density and internal energy, the minimum principle of the specific entropy, and compatibility with all generalized entropies). This shows that there is no need to regularize the induction equation. This is a rather puzzling result, hinting at the idea that the inclusion of the \(\boldsymbol{\mathcal{B}}\) field in the MHD Riemann problem is an artificial construct.

We propose to separate the evolution equation of \(\boldsymbol{\mathcal{B}}\) from the other components of the system (density, momentum, and total mechanical energy), as originally described by the non-divergence formulation (1). This is by no means a new idea: treating \(\boldsymbol{\mathcal{B}}\) independently using its own spatial discretization has been proposed, for example, in [29, 22, 12] and references therein. However, our approach still represents a major departure from previously existing ideas and methods for the MHD system:

* The induction equation is not treated as an isolated object, but rather as a constituent of a Hamiltonian system consisting of the balance of momentum subject to the Lorentz force \(\mu(\operatorname{curl}\boldsymbol{\mathcal{H}}\times\boldsymbol{\mathcal{H}})\), coupled to the induction equation (1d), see for instance expressions (19) and (24). Treating the Hamiltonian system as such lends itself to natural definitions of stability that we can preserve in the fully discrete setting.
* We use no advective stabilization in any form or fashion for the induction equation. This is tacitly suggested by the viscous regularization argument indicating that no artificial viscosity is required for the magnetic field \(\boldsymbol{\mathcal{H}}\). This is also consistent with Hamiltonian systems, such as (24), where the natural notion of stability is preservation of quadratic invariants.
We avoid construing the induction equation as an advective system [19, 21], a Friedrichs system [5], or a vanishing-viscosity limit (e.g., a conservation law). We use a primal (no vector potential) curl-conforming framework in order to discretize the magnetic field \(\boldsymbol{\mathcal{H}}\). This is consistent with the preservation of the weak divergence. We do not attempt to preserve a zero strong divergence, or use a div-conforming framework as suggested for instance in [4; 22]. However, we show that the method can preserve zero weak divergence to machine accuracy in smooth as well as non-smooth regimes. We use no divergence cleaning.
* The energy of the non-divergence system (1) is defined by a functional that consists of the sum of a linear and a quadratic functional, see Section 2.2. This is quite different from the case of the divergence system (2), where energy stability consists in preserving the property \(\int_{\Omega}\mathcal{E}(t)\,\mathrm{d}\boldsymbol{x}=\int_{\Omega}\mathcal{E}_{0}\,\mathrm{d}\boldsymbol{x}\), which is the preservation of a linear functional.
* The resulting scheme preserves properties (i)-(iv) outlined above. This scheme can be used in smooth as well as extreme shock-hydrodynamics regimes. Property (v), entropy stability, can be preserved as well, provided the hyperbolic solver used to discretize Euler's subsystem is entropy stable. We make no emphasis on property (v) since there is a very large literature on the matter. The scheme runs at the CFL of Euler's system, with no time-step size restriction due to magnetosonic waves. There is, in principle, no limit on the formal spatial accuracy of the scheme.

The outline of the paper is as follows: in Section 2 we provide all the necessary background in relationship to the mathematical properties of the MHD and Euler systems. In Sections 3.1-3.3 we summarize the main properties of the spatial and temporal discretizations that will be used. In Section 3.4 we present the scheme and make precise its mathematical properties. Finally, in Section 4 we present numerical results illustrating the efficiency of the solver in the context of smooth as well as non-smooth test problems. We highlight that the main ideas advanced in this paper can be implemented using quite general hyperbolic solvers for Euler's equation. In Section 3.4 we outline the structure and mathematical properties expected from such a hyperbolic solver. For the sake of completeness we also describe the hyperbolic solvers used for all computations in Appendix A.

## 2 Main properties of the MHD system

### Vanishing-viscosity limits and invariant sets

In this section we improve the result of [8]. Let us consider the case where the initial magnetic field is divergence-free, i.e., \(\operatorname{div}\boldsymbol{\mathcal{B}}_{0}=0\). As already mentioned in the introduction, this implies that \(\operatorname{div}\boldsymbol{\mathcal{B}}=0\) also for all times \(t\).
Therefore, the system (1) can be re-written in the following divergence form:

\[\partial_{t}\rho+\operatorname{div}\boldsymbol{m}=0\,, \tag{2a}\]
\[\partial_{t}\boldsymbol{m}+\operatorname{div}\left(\rho^{-1}\boldsymbol{m}\boldsymbol{m}^{\top}-\mu^{-1}\boldsymbol{\mathcal{B}}\boldsymbol{\mathcal{B}}^{\top}+\mathbb{I}\left(p+\tfrac{1}{2\mu}|\boldsymbol{\mathcal{B}}|^{2}\right)\right)=\boldsymbol{0}\,, \tag{2b}\]
\[\partial_{t}\mathcal{E}+\operatorname{div}\left(\tfrac{\boldsymbol{m}}{\rho}(\mathcal{E}+p)-\boldsymbol{\mathcal{B}}(\boldsymbol{\mathcal{B}}^{\top}\tfrac{\boldsymbol{m}}{\rho})\right)=0\,, \tag{2c}\]
\[\partial_{t}\boldsymbol{\mathcal{B}}+\operatorname{div}\left(\rho^{-1}\boldsymbol{\mathcal{B}}\boldsymbol{m}^{\top}-\rho^{-1}\boldsymbol{m}\boldsymbol{\mathcal{B}}^{\top}\right)=\boldsymbol{0}\,, \tag{2d}\]

where the total energy \(\mathcal{E}=\frac{1}{2\rho}|\boldsymbol{m}|^{2}+\rho e+\frac{1}{2\mu}|\boldsymbol{\mathcal{B}}|^{2}\) includes the contribution from the magnetic field. The regularized system reads:

\[\partial_{t}\rho+\operatorname{div}\boldsymbol{m}=\epsilon\Delta\rho\,, \tag{3a}\]
\[\partial_{t}\boldsymbol{m}+\operatorname{div}\left(\rho^{-1}\boldsymbol{m}\boldsymbol{m}^{\top}-\mu^{-1}\boldsymbol{\mathcal{B}}\boldsymbol{\mathcal{B}}^{\top}+\mathbb{I}\left(p+\tfrac{1}{2\mu}|\boldsymbol{\mathcal{B}}|^{2}\right)\right)=\epsilon\Delta\boldsymbol{m}\,, \tag{3b}\]
\[\partial_{t}\mathcal{E}+\operatorname{div}\left(\tfrac{\boldsymbol{m}}{\rho}(\mathcal{E}+p)-\boldsymbol{\mathcal{B}}(\boldsymbol{\mathcal{B}}^{\top}\tfrac{\boldsymbol{m}}{\rho})\right)=\epsilon\Delta\left(\mathcal{E}-\tfrac{1}{2\mu}|\boldsymbol{\mathcal{B}}|^{2}\right)\,, \tag{3c}\]
\[\partial_{t}\boldsymbol{\mathcal{B}}+\operatorname{div}\left(\rho^{-1}\boldsymbol{\mathcal{B}}\boldsymbol{m}^{\top}-\rho^{-1}\boldsymbol{m}\boldsymbol{\mathcal{B}}^{\top}\right)=\boldsymbol{0}\,. \tag{3d}\]

Note that there is no viscous regularization in (3d). In addition, the magnetic pressure is subtracted from the total energy in the viscous regularization term on the right-hand side of (3c). The difference between the viscous regularization in reference [8] and the one in expression (3) is that the magnetic regularization has been removed. In this section, we prove that even without the magnetic regularization terms, the state \(\boldsymbol{u}=[\rho,\boldsymbol{m},\mathcal{E},\boldsymbol{\mathcal{B}}]^{\top}\) of (3) satisfies positivity of density, positivity of internal energy, and the minimum entropy principle, for all time. Moreover, (3) is compatible with all the generalized entropy inequalities. These results can be obtained with slight modifications of the proofs in [8].

**Definition 2.1** (Specific entropy, Gibbs identity and physical restrictions).: Let \(\varepsilon(\boldsymbol{u})=\mathcal{E}-\frac{1}{2\rho}|\boldsymbol{m}|^{2}-\frac{1}{2\mu}|\boldsymbol{\mathcal{B}}|^{2}\) denote the internal energy, \(e(\boldsymbol{u})=\rho^{-1}\varepsilon(\boldsymbol{u})\) denote the specific internal energy, and \(v=\rho^{-1}\) be the specific volume. Let \(s=s(\rho,e):\mathbb{R}^{+}\times\mathbb{R}^{+}\to\mathbb{R}\) denote the specific entropy. Assuming that the exact differential of \(s=s(\rho,e)\), meaning \(\mathrm{d}s=\frac{\partial s}{\partial e}\mathrm{d}e+\frac{\partial s}{\partial\rho}\mathrm{d}\rho\), is consistent with Gibbs' differential relationship \(\mathrm{d}s=\frac{1}{\theta}\mathrm{d}e+\frac{p}{\theta}\mathrm{d}v\), where \(\theta\) is the temperature, implies that

\[\tfrac{\partial s}{\partial e}=\tfrac{1}{\theta}\,,\quad\tfrac{\partial s}{\partial\rho}=-\tfrac{p}{\theta\rho^{2}}\,;\]

combining both we obtain the formula for the pressure \(p=-\rho^{2}\tfrac{\partial s}{\partial\rho}[\tfrac{\partial s}{\partial e}]^{-1}\). In order for \(s(\rho,e)\) to be physically meaningful it has to satisfy some mathematical restrictions. An exhaustive list of restrictions can be found in [27, 15].
In this manuscript we will only assume that \(\tfrac{\partial s}{\partial e}>0\), implying positivity of the temperature, and that \(-s\) is strictly convex for any \(\rho,e>0\), see [8, p. 3]. We will use the shorthand notation \(s_{e}:=\tfrac{\partial s}{\partial e}\) and \(s_{\rho}:=\tfrac{\partial s}{\partial\rho}\).

**Lemma 2.1** (Positivity of density, see [15, 8]).: _Assuming sufficient smoothness and boundedness of the solution, the density satisfies the following property:_

\[\operatorname{essinf}_{\boldsymbol{x}\in\mathbb{R}^{d}}\rho(\boldsymbol{x},t)>0,\quad\forall t>0.\]

The proof of Lemma 2.1 merely depends on the mass equation (3a). This is a well-known result for which a detailed proof can be found in [15].

**Lemma 2.2** (Minimum principle of the specific entropy).: _Assume sufficient smoothness and that the density and the internal energy uniformly converge to constant states outside a compact domain of interest. The minimum entropy principle holds:_

\[\inf_{\boldsymbol{x}\in\mathbb{R}^{d}}\,s(\rho(\boldsymbol{x},t),e(\boldsymbol{x},t))\geq\inf_{\boldsymbol{x}\in\mathbb{R}^{d}}\,s_{0}(\boldsymbol{x}),\]

_where \(s(\rho,e)\) is the specific entropy, see Definition 2.1, and \(s_{0}\) is the initial specific entropy._

Proof.: Multiplying (3b) with \(\boldsymbol{v}\) gives

\[\begin{split}\rho\left(\partial_{t}\left(\tfrac{1}{2}|\boldsymbol{v}|^{2}\right)+\boldsymbol{v}\cdot\nabla\left(\tfrac{1}{2}|\boldsymbol{v}|^{2}\right)\right)+|\boldsymbol{v}|^{2}\epsilon\Delta\rho+\boldsymbol{v}\cdot\nabla p\\ -\boldsymbol{v}\cdot\operatorname{div}\left(\mu^{-1}\boldsymbol{\mathcal{B}}\boldsymbol{\mathcal{B}}^{\top}-\mathbb{I}\tfrac{1}{2\mu}|\boldsymbol{\mathcal{B}}|^{2}\right)-\epsilon\boldsymbol{v}\cdot\Delta\boldsymbol{m}=0.\end{split} \tag{4}\]

Multiplying (3d) with \(\boldsymbol{\mathcal{B}}\) gives

\[\rho\left(\partial_{t}\left(\tfrac{\rho^{-1}|\boldsymbol{\mathcal{B}}|^{2}}{2}\right)+\boldsymbol{v}\cdot\nabla\left(\tfrac{\rho^{-1}|\boldsymbol{\mathcal{B}}|^{2}}{2}\right)\right)+\tfrac{\rho^{-1}|\boldsymbol{\mathcal{B}}|^{2}}{2}\epsilon\Delta\rho+\tfrac{\rho^{-1}|\boldsymbol{\mathcal{B}}|^{2}}{2}\rho\operatorname{div}\boldsymbol{v}-\boldsymbol{\mathcal{B}}\cdot(\boldsymbol{\mathcal{B}}\cdot\nabla)\boldsymbol{v}=0. \tag{5}\]

Subtracting (4) and (5) from (3c) gives

\[\rho(\partial_{t}e+\boldsymbol{v}\cdot\nabla e)+\left(e-\tfrac{1}{2}|\boldsymbol{v}|^{2}\right)\epsilon\Delta\rho+p\operatorname{div}\boldsymbol{v}-\epsilon\Delta\left(\rho e+\tfrac{1}{2\rho}|\boldsymbol{m}|^{2}\right)+\epsilon\boldsymbol{v}\cdot\Delta\boldsymbol{m}=0, \tag{6}\]

which describes the evolution of the internal energy. Multiplying the mass equation with \(\rho s_{\rho}\), multiplying (6) with \(s_{e}\), adding them together, and then applying the chain rules \(\nabla s=s_{e}\nabla e+s_{\rho}\nabla\rho\) and \(\partial_{t}s=s_{e}\partial_{t}e+s_{\rho}\partial_{t}\rho\), we obtain the entropy conservation equation

\[\rho(\partial_{t}s+\boldsymbol{v}\cdot\nabla s)+(es_{e}-\rho s_{\rho})\epsilon\Delta\rho+s_{e}\left(-\tfrac{1}{2}|\boldsymbol{v}|^{2}\epsilon\Delta\rho-\epsilon\Delta\left(\rho e+\tfrac{1}{2\rho}|\boldsymbol{m}|^{2}\right)+\epsilon\boldsymbol{v}\cdot\Delta\boldsymbol{m}\right)=0. \tag{7}\]

Let \(\ell\coloneqq s_{e}^{-1}(es_{e}-\rho s_{\rho})\epsilon\nabla\rho+\epsilon\rho s_{e}^{-1}\nabla s\). We can rewrite (7) as

\[\rho(\partial_{t}s+\boldsymbol{v}\cdot\nabla s)+(es_{e}-\rho s_{\rho})\epsilon\Delta\rho-s_{e}\operatorname{div}\ell-s_{e}\epsilon\rho\nabla\boldsymbol{v}:\nabla\boldsymbol{v}=0. \tag{8}\]

Let \(J\coloneqq-(\epsilon\nabla\rho)\cdot\nabla(es_{e}-\rho s_{\rho})+\ell\cdot\nabla s_{e}+\epsilon\nabla\rho\cdot\nabla s\).
It can be proved that \(J\leq 0\), which follows from the strict convexity of \(-s\), see [8, Lemma 3]. Therefore, we rewrite (8) as

\[\rho(\partial_{t}s+\boldsymbol{v}\cdot\nabla s)-\operatorname{div}\left(\epsilon\rho\nabla s\right)-\epsilon\nabla\rho\cdot\nabla s=-J+s_{e}(\epsilon\rho\nabla\boldsymbol{v}):\nabla\boldsymbol{v}, \tag{9}\]

where the right-hand side is non-negative. In the regular case when \(\inf_{\boldsymbol{x}\in\mathbb{R}^{d}}s(\boldsymbol{x},t)\) is attained at some point \(\bar{\boldsymbol{x}}(t)\) inside a compact domain \(\Omega\subset\mathbb{R}^{d}\), we have \(\nabla s(\bar{\boldsymbol{x}},t)=0\) and \(\Delta s(\bar{\boldsymbol{x}},t)\geq 0\) due to smoothness. From (9), we then have that \(\partial_{t}s(\bar{\boldsymbol{x}},t)\geq 0\) since \(\rho>0\). This says that \(\inf_{\boldsymbol{x}\in\mathbb{R}^{d}}s(\boldsymbol{x},t)\) is non-decreasing in time, and we have the conclusion. However, if \(\inf_{\boldsymbol{x}\in\mathbb{R}^{d}}s(\boldsymbol{x},t)\) is attained as \(|\boldsymbol{x}|\to\infty\), then \(\inf_{\boldsymbol{x}\in\mathbb{R}^{d}}s(\boldsymbol{x},t)=s^{*}\geq\inf_{\boldsymbol{x}\in\mathbb{R}^{d}}s_{0}(\boldsymbol{x})\), where \(s(\boldsymbol{x},t)\to s^{*}\) as \(|\boldsymbol{x}|\to\infty\) due to the uniform convergence assumption. The proof is complete.

Let \(f(s)\) be a twice differentiable function with \(s(\boldsymbol{u})=s(\rho,e(\boldsymbol{u}))\) being the specific entropy. Consider the class of strictly convex functions \(\eta(\boldsymbol{u})=-\rho f(s(\boldsymbol{u}))\), which are known as generalized Harten entropies. The following lemma is a direct consequence of Lemma 2.2.

**Lemma 2.3** (Generalized entropy inequalities).: _Any smooth solution to (3) satisfies the entropy inequality_

\[\partial_{t}(\rho f(s))+\operatorname{div}(\boldsymbol{v}\rho f(s)-\epsilon\rho\nabla f(s)-\epsilon f(s)\nabla\rho)\geq 0.\]

Proof.: Multiplying both sides of (9) with \(f^{\prime}(s)\) gives

\[\begin{split}\rho(\partial_{t}f(s)+\boldsymbol{v}\cdot\nabla f(s))&-\operatorname{div}\left(\epsilon\rho\nabla f(s)\right)+\epsilon\rho f^{\prime\prime}(s)|\nabla s|^{2}-\epsilon f^{\prime}(s)\nabla\rho\cdot\nabla s=\\ &-Jf^{\prime}(s)+f^{\prime}(s)s_{e}(\epsilon\rho\nabla\boldsymbol{v}):\nabla\boldsymbol{v}.\end{split} \tag{10}\]

Multiplying the mass equation with \(f(s)\) and adding it to (10), we have

\[\partial_{t}(\rho f(s))+\operatorname{div}\left(\rho\boldsymbol{v}f(s)\right)-\operatorname{div}\left(\epsilon\rho\nabla f(s)+\epsilon f(s)\nabla\rho\right)=-\epsilon\rho f^{\prime\prime}(s)|\nabla s|^{2}-Jf^{\prime}(s)+f^{\prime}(s)s_{e}(\epsilon\rho\nabla\boldsymbol{v}):\nabla\boldsymbol{v}.\]

By the strict convexity of \(-\rho f(s)\), we can show that \(-\epsilon\rho f^{\prime\prime}(s)|\nabla s|^{2}-Jf^{\prime}(s)\geq 0\) and \(f^{\prime}(s)>0\), see [8, Theorems 3.4, 3.5]. By the assumption that the temperature is positive, we have \(s_{e}>0\). Therefore, the inequality of the lemma always holds true.
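As a quick sanity check of the thermodynamic relations in Definition 2.1, the pressure formula \(p=-\rho^{2}s_{\rho}/s_{e}\) can be verified symbolically; the sketch below uses the ideal-gas entropy as an illustrative closure (our choice, not imposed by the text above).

```python
import sympy as sp

rho, e, gamma = sp.symbols('rho e gamma', positive=True)

# Ideal-gas specific entropy (up to an additive constant):
# s = (1/(gamma-1)) log(e) - log(rho)
s = sp.log(e) / (gamma - 1) - sp.log(rho)

s_e = sp.diff(s, e)      # = 1/((gamma-1) e), the inverse temperature
s_rho = sp.diff(s, rho)  # = -1/rho

# Pressure recovered from Gibbs' relation: p = -rho^2 s_rho / s_e
p = sp.simplify(-rho**2 * s_rho / s_e)
assert sp.simplify(p - (gamma - 1) * rho * e) == 0   # ideal-gas law
```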
### Energy balance of the non-divergence formulation

**Proposition 2.1** (Total energy balance).: _The MHD system (1) satisfies the following formal energy-flux balance:_

\[\partial_{t}\int_{\Omega}E+\tfrac{\mu}{2}|\boldsymbol{\mathcal{H}}|^{2}\,\mathrm{d}\boldsymbol{x}+\int_{\partial\Omega}(E+p)\tfrac{\boldsymbol{m}}{\rho}\cdot\boldsymbol{n}-\mu(\tfrac{\boldsymbol{m}}{\rho}\times\boldsymbol{\mathcal{H}})\cdot(\boldsymbol{\mathcal{H}}\times\boldsymbol{n})\,\mathrm{d}\boldsymbol{s}=0. \tag{11}\]

Proof.: Integrating (1c) in space and using the divergence theorem we get

\[\int_{\Omega}\partial_{t}E\,\mathrm{d}\boldsymbol{x}+\int_{\partial\Omega}(E+p)\tfrac{\boldsymbol{m}}{\rho}\cdot\boldsymbol{n}\,\mathrm{d}\boldsymbol{s}+\mu\int_{\Omega}(\boldsymbol{\mathcal{H}}\times\operatorname{curl}\boldsymbol{\mathcal{H}})\cdot\tfrac{\boldsymbol{m}}{\rho}\,\mathrm{d}\boldsymbol{x}=0\,. \tag{12}\]

Multiplying (1d) by \(\mu\boldsymbol{\mathcal{H}}\), using the integration by parts formula

\[\int_{\Omega}\operatorname{curl}\boldsymbol{u}\cdot\boldsymbol{v}\,\mathrm{d}\boldsymbol{x}=\int_{\partial\Omega}(\boldsymbol{u}\times\boldsymbol{v})\cdot\boldsymbol{n}\,\mathrm{d}\boldsymbol{s}+\int_{\Omega}\boldsymbol{u}\cdot\operatorname{curl}\boldsymbol{v}\,\mathrm{d}\boldsymbol{x}\,,\]

and reorganizing the terms, we get:

\[\partial_{t}\int_{\Omega}\tfrac{\mu}{2}|\boldsymbol{\mathcal{H}}|^{2}\,\mathrm{d}\boldsymbol{x}-\mu\int_{\partial\Omega}[(\tfrac{\boldsymbol{m}}{\rho}\times\boldsymbol{\mathcal{H}})\times\boldsymbol{\mathcal{H}}]\cdot\boldsymbol{n}\,\mathrm{d}\boldsymbol{s}-\mu\int_{\Omega}(\tfrac{\boldsymbol{m}}{\rho}\times\boldsymbol{\mathcal{H}})\cdot\operatorname{curl}\boldsymbol{\mathcal{H}}\,\mathrm{d}\boldsymbol{x}=0\,. \tag{13}\]

Using the property \(\operatorname{curl}\boldsymbol{\mathcal{H}}\cdot(\tfrac{\boldsymbol{m}}{\rho}\times\boldsymbol{\mathcal{H}})=\tfrac{\boldsymbol{m}}{\rho}\cdot(\boldsymbol{\mathcal{H}}\times\operatorname{curl}\boldsymbol{\mathcal{H}})\) and inserting this identity into (13) yields

\[\partial_{t}\int_{\Omega}\tfrac{\mu}{2}|\boldsymbol{\mathcal{H}}|^{2}\,\mathrm{d}\boldsymbol{x}-\mu\int_{\Omega}(\boldsymbol{\mathcal{H}}\times\operatorname{curl}\boldsymbol{\mathcal{H}})\cdot\tfrac{\boldsymbol{m}}{\rho}\,\mathrm{d}\boldsymbol{x}-\mu\int_{\partial\Omega}[(\tfrac{\boldsymbol{m}}{\rho}\times\boldsymbol{\mathcal{H}})\times\boldsymbol{\mathcal{H}}]\cdot\boldsymbol{n}\,\mathrm{d}\boldsymbol{s}=0. \tag{14}\]

Finally, adding (14) to (12), and using properties of the triple product, yields the desired result.
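The last step of the proof relies on the scalar triple-product identity \(\operatorname{curl}\boldsymbol{\mathcal{H}}\cdot(\boldsymbol{v}\times\boldsymbol{\mathcal{H}})=\boldsymbol{v}\cdot(\boldsymbol{\mathcal{H}}\times\operatorname{curl}\boldsymbol{\mathcal{H}})\), which is purely algebraic and holds pointwise; it can be checked symbolically by treating \(\operatorname{curl}\boldsymbol{\mathcal{H}}\) as an arbitrary vector. A minimal SymPy sketch:

```python
import sympy as sp

# Arbitrary symbolic vectors: velocity v, magnetic field H, and C,
# which stands in for curl(H) (the identity is purely algebraic).
v = sp.Matrix(sp.symbols('v1 v2 v3'))
H = sp.Matrix(sp.symbols('H1 H2 H3'))
C = sp.Matrix(sp.symbols('C1 C2 C3'))

lhs = C.dot(v.cross(H))   # curl(H) . (v x H)
rhs = v.dot(H.cross(C))   # v . (H x curl(H))
assert sp.expand(lhs - rhs) == 0
```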
Tangent boundary conditions, such as \(\mathbf{\mathcal{H}}\times\mathbf{n}\equiv\mathbf{0}\), can be enforced naturally in a curl-conforming framework, that is, in the Sobolev space \(H(\operatorname{curl})\) that will be used to discretize \(\mathbf{\mathcal{H}}\); see for instance [30, 2]. Note that the normal boundary condition \((\mu\mathbf{\mathcal{H}})\cdot\mathbf{n}=0\), traditionally used for the divergence formulation (2), is not meaningful in the context of the non-divergence model (1). **Remark 2.2** (Simplifying assumptions).: In the remainder of the paper, in order to simplify arguments related to boundary conditions, we assume that periodic boundary conditions are used, or that the initial data is compactly supported and the final time is small enough to prevent waves from reaching the boundary. Alternatively, we could assume that the boundary conditions \(\mathbf{m}\cdot\mathbf{n}\equiv 0\) and \(\mathcal{H}\times\mathbf{n}\equiv\mathbf{0}\) are used on the entirety of the boundary. ### Euler's equation with forces Consider Euler's system subject to the effect of a force, that is, \[\tfrac{\partial}{\partial t}\mathbf{u}+\operatorname{div}\mathbb{f}(\mathbf{u})=\mathbf{s}(\mathbf{f}), \tag{15}\] with \[\mathbf{u}=\begin{bmatrix}\rho\\ \mathbf{m}\\ E\end{bmatrix},\ \ \mathbb{f}(\mathbf{u})=\begin{bmatrix}\mathbf{m}^{\top}\\ \rho^{-1}\mathbf{m}\mathbf{m}^{\top}+\mathbb{I}p\\ \rho^{-1}\mathbf{m}^{\top}(E+p)\end{bmatrix},\ \ \mathbf{s}(\mathbf{f})=\begin{bmatrix}0\\ \mathbf{f}\\ \rho^{-1}\mathbf{m}\cdot\mathbf{f}\end{bmatrix}. \tag{16}\] In particular, system (1a)-(1c) can be rewritten as in (15)-(16) with the choice of force \(\mathbf{f}=-\mu\mathcal{H}\times\operatorname{curl}\mathcal{H}\). For any force \(\mathbf{f}\) we have the property described in the following lemma. **Lemma 2.4** (Invariance of entropy-like functionals [26]).: _Let \(\mathbf{u}=[\rho,\mathbf{m},E]\in\mathbb{R}^{d+2}\) be the state of Euler's system. Let \(\Psi(\mathbf{u}):\mathbb{R}^{d+2}\to\mathbb{R}\) be an arbitrary functional of the state satisfying the functional dependence \(\Psi(\mathbf{u}):=\psi(\rho,\varepsilon(\mathbf{u}))\), where \(\varepsilon(\mathbf{u}):=E-\frac{|\mathbf{m}|^{2}}{2\rho}\) is the internal energy per unit volume.
Then we have that_ \[\nabla_{\mathbf{u}}\Psi(\mathbf{u})\cdot\mathbf{s}(\mathbf{f})\equiv 0, \tag{17}\] _where \(\nabla_{\mathbf{u}}\) is the gradient with respect to the state, i.e., \(\nabla_{\mathbf{u}}=[\frac{\partial}{\partial\rho},\frac{\partial}{\partial\mathbf{m}_{1}},...,\frac{\partial}{\partial\mathbf{m}_{d}},\frac{\partial}{\partial E}]^{\top}\)._ Proof.: Using the chain rule we observe that \(\nabla_{\mathbf{u}}\Psi(\mathbf{u})=\frac{\partial\psi}{\partial\rho}\nabla_{\mathbf{u}}\rho+\frac{\partial\psi}{\partial\varepsilon}\nabla_{\mathbf{u}}\varepsilon\), where \[\nabla_{\mathbf{u}}\rho =[1,0,...,0]^{\top}\in\mathbb{R}^{d+2},\] \[\nabla_{\mathbf{u}}\varepsilon =[\tfrac{|\mathbf{m}|^{2}}{2\rho^{2}},-\tfrac{\mathbf{m}_{1}}{\rho},...,-\tfrac{\mathbf{m}_{d}}{\rho},1]^{\top}\in\mathbb{R}^{d+2}.\] Taking the product with \(\mathbf{s}(\mathbf{f})\) we get \[\nabla_{\mathbf{u}}\Psi(\mathbf{u})\cdot\mathbf{s}(\mathbf{f}) =\frac{\partial\psi}{\partial\rho}\underbrace{\nabla_{\mathbf{u}}\rho\cdot\mathbf{s}(\mathbf{f})}_{=0}+\frac{\partial\psi}{\partial\varepsilon}\nabla_{\mathbf{u}}\varepsilon\cdot\mathbf{s}(\mathbf{f})\] \[=\frac{\partial\psi}{\partial\varepsilon}(-\rho^{-1}\mathbf{m}\cdot\mathbf{f}+\rho^{-1}\mathbf{m}\cdot\mathbf{f})=0.\] **Remark 2.3** (Colloquial interpretation).: Lemma 2.4 simply says that the evolution in time of an arbitrary functional \(\Psi(\mathbf{u})\) satisfying the functional dependence \(\Psi(\mathbf{u}):=\psi(\rho,\varepsilon(\mathbf{u}))\) is independent of the force \(\mathbf{f}\). This follows directly by taking the dot-product of (15) with \(\nabla_{\mathbf{u}}\Psi(\mathbf{u})\) to get \[\nabla_{\mathbf{u}}\Psi(\mathbf{u})\cdot\tfrac{\partial}{\partial t}\mathbf{u}\ =\ \tfrac{\partial}{\partial t}\Psi(\mathbf{u})\ =\ -\nabla_{\mathbf{u}}\Psi(\mathbf{u})\cdot\operatorname{div}\mathbb{f}(\mathbf{u})+\underbrace{\nabla_{\mathbf{u}}\Psi(\mathbf{u})\cdot\mathbf{s}(\mathbf{f})}_{\equiv 0}.\] In particular, this holds true when \(\Psi(\mathbf{u}):=\varepsilon(\mathbf{u})\). Similarly, we can apply Lemma 2.4 to the specific internal energy \(e(\mathbf{u})=\rho^{-1}\varepsilon(\mathbf{u})\), since \(e(\mathbf{u})\) satisfies the functional dependence \(e(\mathbf{u})=\psi(\rho,\varepsilon(\mathbf{u}))\) as well. We also note that condition (17) is related to the so-called "complementary degeneracy requirements" usually invoked in GENERIC systems, see [31]. A symbolic sanity check of identity (17) is sketched after Remark 2.4 below. ### Splitting of the differential operator **Remark 2.4** (Choice of splitting).: We split system (1) into two evolution operators: \[\text{Operator \#1}\left\{\begin{aligned} \partial_{t}\rho+\operatorname{div}\mathbf{m}&=0\,,\\ \partial_{t}\mathbf{m}+\operatorname{div}\left(\rho^{-1}\mathbf{mm}^{\top}+\mathbb{I}p\right)&=\mathbf{0}\,,\\ \partial_{t}E+\operatorname{div}\left(\frac{\mathbf{m}}{\rho}(E+p)\right)&=0\,,\\ \partial_{t}\mathcal{H}&=\mathbf{0}\,,\end{aligned}\right. \tag{18}\] and \[\text{Operator \#2}\left\{\begin{aligned} \partial_{t}\rho&=0\,,\\ \partial_{t}\mathbf{m}&=-\mu\mathcal{H}\times\operatorname{curl}\mathcal{H}\,,\\ \partial_{t}E&=-\mu(\mathcal{H}\times\operatorname{curl}\mathcal{H})\cdot\frac{\mathbf{m}}{\rho}\,,\\ \partial_{t}\mathcal{H}&=\operatorname{curl}\left(\mathbf{v}\times\mathcal{H}\right)\,.\end{aligned}\right. \tag{19}\] Given some initial data \(\mathbf{u}^{n}=[\rho^{n},\mathbf{m}^{n},E^{n},\mathcal{H}^{n}]\) for each one of these operators, we would like to know which properties are preserved by their evolution.
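As promised, here is the symbolic sanity check of identity (17). It is a minimal sketch of ours (not part of the original development), written with sympy for \(d=1\); the variable names are illustrative only:

```python
import sympy as sp

# Check identity (17) for d = 1: grad_u Psi(u) . s(f) vanishes
# for any functional with the dependence Psi(u) = psi(rho, eps(u)).
rho, m, E, f = sp.symbols('rho m E f', positive=True)
psi = sp.Function('psi')

eps = E - m**2 / (2 * rho)               # internal energy per unit volume
Psi = psi(rho, eps)                      # arbitrary functional dependence

grad_Psi = sp.Matrix([Psi.diff(var) for var in (rho, m, E)])
source = sp.Matrix([0, f, m * f / rho])  # source term s(f) from (16), d = 1

assert sp.simplify(grad_Psi.dot(source)) == 0  # identity (17) holds
```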
**Proposition 2.2** (Evolution of Operator #1: preservation of linear invariants and pointwise stability properties).: _Assume periodic boundary conditions. Assume that the initial data at time \(t_{n}\) is admissible, meaning \(\mathbf{u}^{n}(\mathbf{x})=[\rho^{n},\mathbf{m}^{n},E^{n}](\mathbf{x})\in\mathcal{A}\) for all \(\mathbf{x}\in\Omega\), with \(\mathcal{A}\) defined as_ \[\mathcal{A}=\left\{[\rho,\mathbf{m},E]^{\top}\in\mathbb{R}^{d+2}\,\big{|}\,\rho>0,\,E-\frac{1}{2}\frac{|\mathbf{m}|^{2}}{\rho}>0\right\}. \tag{20}\] _Then the evolution of Operator #1 from time \(t_{n}\) to \(t_{n+1}\) preserves the following linear invariants_ \[\int_{\Omega}\rho^{n+1}\,\mathrm{d}\mathbf{x}=\int_{\Omega}\rho^{n}\,\mathrm{d}\mathbf{x}\,\ \ \int_{\Omega}\mathbf{m}^{n+1}\,\mathrm{d}\mathbf{x}=\int_{\Omega}\mathbf{m}^{n}\,\mathrm{d}\mathbf{x}\, \tag{21}\] \[\int_{\Omega}E^{n+1}\,\mathrm{d}\mathbf{x}=\int_{\Omega}E^{n}\,\mathrm{d}\mathbf{x}\,\ \ \int_{\Omega}\mathcal{H}^{n+1}\,\mathrm{d}\mathbf{x}=\int_{\Omega}\mathcal{H}^{n}\,\mathrm{d}\mathbf{x}\,\] _as well as the pointwise stability properties_ \[\rho^{n+1}(\mathbf{x})\geq 0\,\ \ s(\mathbf{u}^{n+1}(\mathbf{x}))\geq\min_{\mathbf{x}\in\Omega}s(\mathbf{u}^{n}(\mathbf{x}))\,\ \varepsilon(\mathbf{u}^{n+1}(\mathbf{x}))\geq 0 \tag{22}\] _for all \(\mathbf{x}\in\Omega\)._ Note that \(\int_{\Omega}\mathcal{H}^{n+1}\,\mathrm{d}\mathbf{x}=\int_{\Omega}\mathcal{H}^{n}\,\mathrm{d}\mathbf{x}\) follows trivially from the fact that \(\partial_{t}\mathcal{H}\equiv\mathbf{0}\) for the case of Operator #1. Properties (21) are a consequence of the divergence theorem. On the other hand, establishing properties (22) is rather technical; the reader is referred to [36; 15]. We note in passing that positivity of the internal energy is not a direct property, but rather a consequence of the positivity of the density and the minimum principle of the specific entropy. **Corollary 2.1** (Energy-stability of Operator #1: Linear + Quadratic functional).: _Assume periodic boundary conditions; then the evolution described by Operator #1 satisfies the following energy-estimate_ \[\int_{\Omega}E^{n+1}+\tfrac{\mu}{2}|\mathcal{H}^{n+1}|^{2}\,\mathrm{d}\mathbf{x}=\int_{\Omega}E^{n}+\tfrac{\mu}{2}|\mathcal{H}^{n}|^{2}\,\mathrm{d}\mathbf{x}\,.\] Proof.: This follows from the conservation property \(\int_{\Omega}E^{n+1}\,\mathrm{d}\mathbf{x}=\int_{\Omega}E^{n}\,\mathrm{d}\mathbf{x}\) by adding \(\int_{\Omega}\tfrac{\mu}{2}|\mathcal{H}^{n}|^{2}\,\mathrm{d}\mathbf{x}\) to both sides of the equality and using the fact that \(\mathcal{H}^{n+1}\equiv\mathcal{H}^{n}\), since \(\partial_{t}\mathcal{H}\equiv\mathbf{0}\). Regarding Operator #2, we start by noting that, since \(\partial_{t}\rho\equiv 0\), we can rewrite (19) as \[\text{Operator }\#2\,\begin{cases}\partial_{t}\rho=0\\ \rho\partial_{t}\mathbf{v}=-\mu\mathcal{H}\times\operatorname{curl}\mathcal{H}\,,\\ \partial_{t}E=-\mu(\mathcal{H}\times\operatorname{curl}\mathcal{H})\cdot\mathbf{v}\,,\\ \partial_{t}\mathcal{H}=\operatorname{curl}\left(\mathbf{v}\times\mathcal{H}\right),\end{cases} \tag{23}\] and note that only the evolutions of \(\mathbf{v}\) and \(\mathcal{H}\) are actually coupled.
Assume periodic boundary conditions, multiply the evolution equation for \(\mathbf{v}\) by a smooth vector-valued test function \(\mathbf{z}\) and the evolution equation for \(\mathcal{H}\) by a smooth vector-valued test function \(\mathbf{\mathcal{X}}\), and integrate by parts; then we get \[\begin{split}(\rho\partial_{t}\mathbf{v},\mathbf{z})&=-\mu(\mathcal{H}\times\operatorname{curl}\mathcal{H},\mathbf{z})\,,\\ (\partial_{t}\mathcal{H},\mathbf{\mathcal{X}})&=(\mathcal{H}\times\operatorname{curl}\mathbf{\mathcal{X}},\mathbf{v})\,.\end{split} \tag{24}\] We will discretize (24) in space and time, see Section 3.3. It is clear that, in order to make sense of the integration by parts used to derive (24), the weak or distributional \(\operatorname{curl}\) of \(\mathcal{H}\) should be well defined. Therefore, it is natural to consider a \(\operatorname{curl}\)-conforming space discretization for \(\mathcal{H}\). Note that tangent boundary conditions \(\mathcal{H}\times\mathbf{n}\equiv\mathbf{0}\), which can be directly enforced in the \(\operatorname{curl}\)-conforming framework, are useful to achieve energy-isolation of the MHD system, see Remark 2.1. **Proposition 2.3** (Evolution of Operator #2: global energy stability and pointwise invariants).: _Let \(\mathbf{u}^{n}=[\rho^{n},\mathbf{m}^{n},E^{n},\mathcal{H}^{n}]^{\top}\) be the initial data and assume periodic boundary conditions. Then the evolution described by Operator #2, as defined in (23), preserves the following global quadratic invariant:_ \[\int_{\Omega}\tfrac{1}{2}\rho^{n+1}|\mathbf{v}^{n+1}|^{2}+\tfrac{\mu}{2}|\mathcal{H}^{n+1}|^{2}\,\mathrm{d}\mathbf{x}=\int_{\Omega}\tfrac{1}{2}\rho^{n}|\mathbf{v}^{n}|^{2}+\tfrac{\mu}{2}|\mathcal{H}^{n}|^{2}\,\mathrm{d}\mathbf{x}\,, \tag{25}\] _as well as pointwise invariance of the internal energy_ \[(E^{n+1}-\tfrac{1}{2}\rho^{n+1}|\mathbf{v}^{n+1}|^{2})(\mathbf{x})=(E^{n}-\tfrac{1}{2}\rho^{n}|\mathbf{v}^{n}|^{2})(\mathbf{x}) \tag{26}\] _for all \(\mathbf{x}\in\Omega\), with \(\rho^{n+1}(\mathbf{x})=\rho^{n}(\mathbf{x})\) since the density does not evolve for the case of Operator #2._ Proof.: Energy stability (25) follows by taking \(\mathbf{z}=\mathbf{v}\) and \(\mathbf{\mathcal{X}}=\mu\mathcal{H}\) in (24) and adding both lines. On the other hand, the invariance of the internal energy (26) is a direct consequence of (17) and Remark 2.3. **Corollary 2.2** (Energy-stability of Operator #2: Linear + Quadratic functional).: _Under the assumptions of Proposition 2.3, the evolution of Operator #2 satisfies the following energy-balance_ \[\int_{\Omega}E^{n+1}+\tfrac{\mu}{2}|\mathcal{H}^{n+1}|^{2}\,\mathrm{d}\mathbf{x}=\int_{\Omega}E^{n}+\tfrac{\mu}{2}|\mathcal{H}^{n}|^{2}\,\mathrm{d}\mathbf{x} \tag{27}\] Proof.: Integrating (26) with respect to space we get \[\int_{\Omega}E^{n+1}-\tfrac{1}{2}\rho^{n+1}|\mathbf{v}^{n+1}|^{2}\,\mathrm{d}\mathbf{x}=\int_{\Omega}E^{n}-\tfrac{1}{2}\rho^{n}|\mathbf{v}^{n}|^{2}\,\mathrm{d}\mathbf{x}. \tag{28}\] Identity (27) follows by adding (25) and (28), leading to the cancellation of the kinetic energy terms. **Remark 2.5** (Invariant domain preservation for the evolution described by Operator #2).: We note that the evolution described by Operator #2 is such that neither the density nor the internal energy evolves; consequently, the specific entropy and the mathematical entropy remain invariant.
Or equivalently, in terms of formulas, \[\tfrac{\partial\rho}{\partial t}\equiv 0\;\;\text{and}\;\;\tfrac{\partial}{\partial t}(E-\tfrac{1}{2\rho}|\mathbf{m}|^{2})\equiv 0\;\;\text{imply that}\;\;\tfrac{\partial s}{\partial t}\equiv 0\;\;\text{and}\;\;\tfrac{\partial\eta}{\partial t}\equiv 0, \tag{29}\] where \(\eta(\mathbf{u})=-\rho s(\mathbf{u})\) is the mathematical entropy. In conclusion: the evolution of Operator #2 cannot meaningfully affect the evolution of the density, internal energy, or specific entropy. Therefore, Operator #2 cannot affect the preservation of invariant set properties. Since the mathematical entropy remains constant during the evolution described by Operator #2, we also have the global estimate \[\int_{\Omega}\eta(\mathbf{u}(\mathbf{x},t_{n+1}))\,\mathrm{d}\mathbf{x}=\int_{\Omega}\eta(\mathbf{u}(\mathbf{x},t_{n}))\,\mathrm{d}\mathbf{x}\,.\] **Remark 2.6** (Discrete-time evolution of total mechanical energy for Operator #2).: From (23) we note that the evolution of the velocity \(\mathbf{v}\) and the magnetic field \(\mathcal{H}\) is independent of the evolution of the total mechanical energy \(E\). Therefore, in the time-discrete setting, given initial data \([\rho^{n},\mathbf{v}^{n},E^{n},\mathcal{H}^{n}](\mathbf{x})\) at time \(t_{n}\), we can compute \(\mathbf{v}^{n+1}\) and \(\mathcal{H}^{n+1}\) by integrating (24) in time while neglecting the evolution law for the total mechanical energy, \(\partial_{t}E=-\mu(\mathcal{H}\times\operatorname{curl}\mathcal{H})\cdot\tfrac{\mathbf{m}}{\rho}\). Once \(\mathbf{v}^{n+1}\) and \(\mathcal{H}^{n+1}\) are available, the constraint (26) identifies a unique function \(E^{n+1}(\mathbf{x})\). More precisely, we may rewrite (26) as \[E^{n+1}(\mathbf{x}):=(E^{n}-\tfrac{1}{2}\rho^{n}|\mathbf{v}^{n}|^{2}+\tfrac{1}{2}\rho^{n+1}|\mathbf{v}^{n+1}|^{2})(\mathbf{x}), \tag{30}\] and use it in order to compute \(E^{n+1}(\mathbf{x})\) from the data \(\rho^{n}\), \(\rho^{n+1}=\rho^{n}\), \(E^{n}\), \(\mathbf{v}^{n}\) and \(\mathbf{v}^{n+1}\). This means that there is no particular use for \(\partial_{t}E=-\mu(\mathcal{H}\times\operatorname{curl}\mathcal{H})\cdot\tfrac{\mathbf{m}}{\rho}\). Therefore, we use (30) in order to update the total mechanical energy in the time-discrete setting. Note that, by construction, (30) guarantees exact preservation of the internal energy, and of the specific internal energy, specific entropy and mathematical entropy as well (provided that \(\rho^{n+1}\equiv\rho^{n}\)). **Remark 2.7** (Induction equation as an independent object of study).: We note that the induction equation, either (1d) or (2d), is a very interesting object in its own right. Broadly speaking, the design of schemes for advective PDEs endowed with involution-like constraints is a challenging task that has received significant attention in recent years [11; 39; 5; 28; 23; 24; 20; 19; 21; 33]. However, unless we make very strong assumptions about the velocity field, the induction equation does not satisfy any global stability property (e.g., \(L^{2}\)-stability). Similarly, to the best of the authors' knowledge, the induction equation does not satisfy any pointwise stability property, such as max/min principles or invariant set properties. Since the induction equation has no natural notion of global or pointwise stability, numerical stability is not a well-defined concept for it. On the other hand, system (23) satisfies the global stability property (25) and the pointwise stability property (26), outlining quite clearly the properties numerical methods should preserve.
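In the time-discrete setting, (30) is a one-line nodewise operation. The following minimal sketch is ours, not the paper's implementation; it assumes nodal numpy arrays, with velocities stored as rows of an \((n,d)\) array:

```python
import numpy as np

def update_total_energy(E_old, rho, v_old, v_new):
    """Total mechanical energy update (30) for Operator #2.

    Since the density is frozen during Operator #2, this update preserves
    the internal energy eps = E - 0.5*rho*|v|^2 pointwise by construction.
    """
    kinetic_old = 0.5 * rho * np.sum(v_old**2, axis=-1)
    kinetic_new = 0.5 * rho * np.sum(v_new**2, axis=-1)
    return E_old - kinetic_old + kinetic_new
```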
## 3 Space and time discretization of the MHD system ### Space discretization In this subsection we outline the space discretization used for Euler's components \(\{\rho,\mathbf{m},E\}\) and the magnetic field \(\mathbf{\mathcal{H}}\). The ideas advanced in this manuscript work in two space dimensions (\(d=2\)) as well as in three space dimensions (\(d=3\)). Similarly, the scheme has no limitation on the choice of polynomial degree and formal accuracy in space. However, for the sake of concreteness, we focus on the case \(d=2\) and spatial discretizations capable of delivering second-order accuracy. In Remark 3.1 we also provide the proper generalization to the case of quadrilateral/tetrahedral meshes. We consider a simplicial mesh \(\mathcal{T}_{h}\) and a corresponding scalar-valued continuous finite element space \(\mathbb{V}_{h}\) for each component of Euler's system: \[\mathbb{V}_{h}=\{v_{h}(\mathbf{x})\in\mathcal{C}^{0}(\Omega)\ \big{|}\ v_{h}(\mathbf{T}_{K}(\widehat{\mathbf{x}}))\in\mathbb{P}^{1}(\widehat{K})\ \forall K\in\mathcal{T}_{h}\}. \tag{31}\] Here, \(\mathbf{T}_{K}(\widehat{\mathbf{x}}):\widehat{K}\to K\) denotes a diffeomorphism mapping the unit simplex \(\widehat{K}\) to the physical element \(K\in\mathcal{T}_{h}\), and \(\mathbb{P}^{1}(\widehat{K})\) is the space of polynomials of degree at most one on the reference element. We define \(\mathcal{V}_{\mathbb{V}}=\{1:\dim(\mathbb{V}_{h})\}\) as the index-set of global, scalar-valued degrees of freedom corresponding to \(\mathbb{V}_{h}\). Similarly, we introduce the set of global shape functions \(\{\phi_{i}(\mathbf{x})\}_{i\in\mathcal{V}_{\mathbb{V}}}\) and the set of collocation points \(\{\mathbf{x}_{i}\}_{i\in\mathcal{V}_{\mathbb{V}}}\) satisfying the property \(\phi_{i}(\mathbf{x}_{j})=\delta_{ij}\) for all \(i,j\in\mathcal{V}_{\mathbb{V}}\). We assume that the partition of unity property \(\sum_{i\in\mathcal{V}_{\mathbb{V}}}\phi_{i}(\mathbf{x})=1\) for all \(\mathbf{x}\in\Omega\) holds true. We introduce a number of matrices that will be used for the algebraic discretization. We define the consistent mass matrix entries \(m_{ij}\in\mathbb{R}\), the lumped mass matrix entries \(m_{i}\in\mathbb{R}\), and the discrete divergence-matrix entries \(\mathbf{\mathrm{c}}_{ij}\in\mathbb{R}^{d}\): \[m_{ij}=\int_{\Omega}\phi_{i}\phi_{j}\,\mathrm{d}\mathbf{x}\,\ \ m_{i}=\int_{\Omega}\phi_{i}\,\mathrm{d}\mathbf{x}\,\ \ \mathbf{\mathrm{c}}_{ij}=\int_{\Omega}\nabla\phi_{j}\phi_{i}\,\mathrm{d}\mathbf{x}\,. \tag{32}\] Note that the definition of \(m_{ij}\) and the partition of unity property \(\sum_{i\in\mathcal{V}_{\mathbb{V}}}\phi_{i}(\mathbf{x})=1\) imply that \(\sum_{j\in\mathcal{V}_{\mathbb{V}}}m_{ij}=m_{i}\). Given two scalar-valued finite element functions \(u_{h}=\sum_{i\in\mathcal{V}_{\mathbb{V}}}u_{i}\phi_{i}\in\mathbb{V}_{h}\) and \(v_{h}=\sum_{i\in\mathcal{V}_{\mathbb{V}}}v_{i}\phi_{i}\in\mathbb{V}_{h}\), we define the lumped inner product as \[\langle u_{h},v_{h}\rangle=\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}u_{i}v_{i}\,. \tag{33}\] For the case of vector-valued functions \(\mathbf{u}_{h}=\sum_{i\in\mathcal{V}_{\mathbb{V}}}\mathbf{u}_{i}\phi_{i}\in[\mathbb{V}_{h}]^{2}\) and \(\mathbf{v}_{h}=\sum_{i\in\mathcal{V}_{\mathbb{V}}}\mathbf{v}_{i}\phi_{i}\in[\mathbb{V}_{h}]^{2}\), the lumped inner product is defined as \(\langle\mathbf{u}_{h},\mathbf{v}_{h}\rangle=\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}\mathbf{u}_{i}\cdot\mathbf{v}_{i}\).
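The objects in (32)-(33) are easy to produce from any assembled consistent mass matrix. The following minimal sketch is ours (it assumes a SciPy CSR matrix `M` coming from some finite element assembly code) and relies on the partition-of-unity identity \(m_{i}=\sum_{j}m_{ij}\):

```python
import numpy as np
import scipy.sparse as sparse

def lumped_mass(M: sparse.csr_matrix) -> np.ndarray:
    # Partition of unity gives m_i = sum_j m_ij, i.e. the row sums of M.
    return np.asarray(M.sum(axis=1)).ravel()

def lumped_inner_product(m_lumped: np.ndarray, u: np.ndarray, v: np.ndarray) -> float:
    # Definition (33): <u_h, v_h> = sum_i m_i * u_i * v_i for nodal vectors.
    return float(np.sum(m_lumped * u * v))
```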
We define the finite dimensional space \[\mathbb{H}_{h}=\{\mathbf{\mathcal{X}}_{h}\in H(\mathrm{curl},\Omega)\ \big{|}\ [\nabla_{\widehat{\mathbf{x}}}\mathbf{T}_{K}(\widehat{\mathbf{x}})]^{\top}\mathbf{\mathcal{X}}_{h}(\mathbf{T}_{K}(\widehat{\mathbf{x}}))\in[\mathbb{P}^{1}(\widehat{K})]^{2}\ \forall K\in\mathcal{T}_{h}\} \tag{34}\] which will be used to discretize the magnetic field \(\mathcal{H}\). The finite element space \(\mathbb{H}_{h}\) is known as the "rotated" or curl-conforming BDM\({}_{1}\) space. The primary motivation for using this space is that it is the simplest curl-conforming finite element that spans the full vector-valued polynomial space \([\mathbb{P}_{1}]^{2}\); therefore full second-order accuracy should be expected in the \(L^{p}\)-norms when using this element. Finally, we define the space \[\mathbb{W}_{h}=\left\{\omega_{h}\in\mathcal{C}^{0}(\Omega)\,\middle|\,\omega_{h}(\mathbf{T}_{K}(\widehat{\mathbf{x}}))\in\mathbb{P}_{2}(\widehat{K})\;\forall K\in\mathcal{T}_{h}\right\}. \tag{35}\] It is easy to prove that the space \(\mathbb{W}_{h}\) satisfies the inclusion \(\nabla\mathbb{W}_{h}\subset\mathbb{H}_{h}\); more precisely, these two spaces are part of a discrete exact sequence, see [1]. The space \(\mathbb{W}_{h}\) is used to define the weak divergence-free property, see Proposition 3.1. **Remark 3.1** (Quadrilateral and tetrahedral meshes).: In the context of tensor product elements, we have to use different definitions for the spaces \(\mathbb{V}_{h}\), \(\mathbb{H}_{h}\) and \(\mathbb{W}_{h}\) defined in (31), (34) and (35) respectively. The simplest space we can use in order to discretize the components of Euler's system is \[\mathbb{V}_{h}=\{v_{h}(\mathbf{x})\in\mathcal{C}^{0}(\Omega)\;\middle|\;v_{h}(\mathbf{T}_{K}(\widehat{\mathbf{x}}))\in\mathbb{Q}^{k}(\widehat{K})\;\forall K\in\mathcal{T}_{h}\} \tag{36}\] for \(k\geq 1\). On the other hand, the natural candidates for the spaces \(\mathbb{H}_{h}\) and \(\mathbb{W}_{h}\) are \[\mathbb{H}_{h} =\left\{\mathbf{\mathcal{X}}_{h}\in H(\text{curl},\Omega)\,\middle|\,[\nabla_{\widetilde{\mathbf{x}}}\mathbf{T}_{K}(\widehat{\mathbf{x}})]^{\top}\mathbf{\mathcal{X}}_{h}(\mathbf{T}_{K}(\widehat{\mathbf{x}}))\in\mathcal{N}_{k}(\widehat{K})\;\forall K\in\mathcal{T}_{h}\right\} \tag{37}\] \[\mathbb{W}_{h} =\left\{\omega_{h}\in\mathcal{C}^{0}(\Omega)\,\middle|\,\omega_{h}(\mathbf{T}_{K}(\widehat{\mathbf{x}}))\in\mathbb{Q}^{k+1}(\widehat{K})\;\forall K\in\mathcal{T}_{h}\right\}, \tag{38}\] with \(\mathcal{N}_{k}(\widehat{K})=[\mathbb{P}_{k,k+1,k+1}(\widehat{K}),\mathbb{P}_{k+1,k,k+1}(\widehat{K}),\mathbb{P}_{k+1,k+1,k}(\widehat{K})]\), where \(\mathbb{P}_{p,q,r}\) denotes the space of scalar-valued polynomials of degree at most \(p\) in the \(x\)-variable, \(q\) in the \(y\)-variable and \(r\) in the \(z\)-variable. The vector-valued polynomial family \(\mathcal{N}_{k}(\widehat{K})\) is the celebrated Nédélec space of the first kind, see [6]. The choice of spaces described in (36)-(38) generalizes straightforwardly to arbitrary polynomial degree in both two and three space dimensions.
An alternative to the choices described in (37)-(38), for the specific case of two space dimensions and a target of second-order accuracy, is to use the BDM\({}_{1}\) space on quadrilaterals for \(\mathbb{H}_{h}\), also denoted \(\mathcal{S}_{1}\Lambda^{1}\) in the context of finite element exterior calculus, and the serendipity element \(\mathcal{S}_{1}\Lambda^{0}\) for \(\mathbb{W}_{h}\), see [1]. However, the implementation of elements from the \(\mathcal{S}_{k}\Lambda^{r}\) family, and their generalization to higher-order polynomial degrees, is slightly more technical. ### Discretization of the Operator #1: minimal assumptions The central ideas advanced in this paper are compatible with most of the existing numerical methods used to solve Euler's equation of gas dynamics. In this subsection we limit ourselves to outlining the minimal assumptions made about the numerical scheme used to approximate the solution of Operator #1. We assume that, given some initial data \(\mathbf{u}_{h}^{n}=[\rho_{h}^{n},\mathbf{m}_{h}^{n},E_{h}^{n}]^{\top}\), a numerical approximation at time \(t_{n}\) to the solution \(\mathbf{u}(\mathbf{x},t)=[\rho(\mathbf{x},t),\mathbf{m}(\mathbf{x},t),E(\mathbf{x},t)]^{\top}\), we have at hand a numerical procedure to compute the updated state as \[\{\rho_{h}^{n+1},\mathbf{m}_{h}^{n+1},E_{h}^{n+1},\tau_{n}\}:=\texttt{euler\_system\_update}(\{\rho_{h}^{n},\mathbf{m}_{h}^{n},E_{h}^{n}\}), \tag{39}\] where \(\{\rho_{h}^{n+1},\mathbf{m}_{h}^{n+1},E_{h}^{n+1}\}\) is the approximate solution at time \(t_{n}+\tau_{n}\). Note that, as described in (39), \(\tau_{n}\) is a return argument of the procedure euler_system_update. In other words, euler_system_update determines the time-step size on its own. We may at times need to prescribe the time-step size used by euler_system_update; in such a case the interface of the method might look like \[\{\rho_{h}^{n+1},\mathbf{m}_{h}^{n+1},E_{h}^{n+1}\}:=\texttt{euler\_system\_update}(\{\rho_{h}^{n},\mathbf{m}_{h}^{n},E_{h}^{n},\tau_{n}\}),\] where \(\tau_{n}\) is supplied to euler_system_update. The internals of euler_system_update are not of much relevance. However, we assume that euler_system_update is formally second-order accurate and, most importantly, that it satisfies the following structural properties: * _Collocated discretization._ We assume that all the components of Euler's system (18) are discretized in a collocated fashion, meaning \[\rho_{h}(\mathbf{x})=\sum_{i\in\mathcal{V}_{\mathbb{V}}}\rho_{i}\phi_{i}(\mathbf{x})\,,\ \mathbf{m}_{h}(\mathbf{x})=\sum_{i\in\mathcal{V}_{\mathbb{V}}}\mathbf{m}_{i}\phi_{i}(\mathbf{x})\,,\ E_{h}(\mathbf{x})=\sum_{i\in\mathcal{V}_{\mathbb{V}}}E_{i}\phi_{i}(\mathbf{x})\,,\] where \(\rho_{i}\in\mathbb{R}\), \(\mathbf{m}_{i}\in\mathbb{R}^{2}\), \(E_{i}\in\mathbb{R}\), and \(\{\phi_{i}(\mathbf{x})\}_{i\in\mathcal{V}_{\mathbb{V}}}\) is the basis of the scalar-valued finite element space \(\mathbb{V}_{h}\) defined in (31). * _Conservation of linear invariants._ In the context of periodic boundary conditions the hyperbolic solver preserves the linear invariants: \[\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}\rho_{i}^{n+1}=\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}\rho_{i}^{n}\,\ \ \sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}\mathbf{m}_{i}^{n+1}=\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}\mathbf{m}_{i}^{n}\,\ \ \sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}E_{i}^{n+1}=\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}E_{i}^{n}, \tag{40}\] where \(m_{i}\) was defined in (32).
* _Admissibility._ Assume that the initial data \(\mathbf{u}_{i}^{n}=[\rho_{i}^{n},\mathbf{m}_{i}^{n},E_{i}^{n}]^{\top}\) is admissible, meaning \(\mathbf{u}_{i}^{n}\in\mathcal{A}\) for all \(i\in\mathcal{V}_{\mathbb{V}}\), where the set \(\mathcal{A}\) was defined in (20). Then the updated state \(\mathbf{u}_{i}^{n+1}=[\rho_{i}^{n+1},\mathbf{m}_{i}^{n+1},E_{i}^{n+1}]^{\top}\) is admissible for all \(i\in\mathcal{V}_{\mathbb{V}}\) as well. We highlight that this is a rather minimal requirement on the preservation of pointwise properties. In general, positivity properties are not enough, and we might be interested in stronger properties, such as the preservation of the local minimum principle of the specific entropy, see [14, 18]. * _Entropy dissipation inequality._ We _may_ assume that the scheme preserves a global entropy inequality, meaning \[\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}\eta(\mathbf{u}_{i}^{n+1})\leq\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}\eta(\mathbf{u}_{i}^{n}), \tag{41}\] in the context of periodic boundary conditions. For the sake of completeness, in Appendix A we make precise the implementation details of the hyperbolic solver used in all our computations. For any practical purpose, we may simply regard euler_system_update as the user's favorite choice of Euler scheme (a black box) that is consistent and conservative, is mathematically guaranteed not to crash, and may have some entropy-dissipation properties. **Remark 3.2** (Partition of unity and consistent mass-matrix).: We note that hyperbolic solvers using a consistent mass matrix do not satisfy conservation property (40) directly, but rather the identities \[\sum_{j\in\mathcal{V}_{\mathbb{V}}}m_{ij}\varrho_{j}^{n+1}=\sum_{j\in\mathcal{V}_{\mathbb{V}}}m_{ij}\varrho_{j}^{n}, \tag{42}\] where \(\{\varrho_{i}^{n}\}_{i\in\mathcal{V}_{\mathbb{V}}}\) represents a quantity of interest such as the density, momentum or total mechanical energy, with \(m_{ij}\) as defined in (32). In this context, we apply the summation \(\sum_{i\in\mathcal{V}_{\mathbb{V}}}\) to both sides of identity (42); using the partition of unity property \(\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{ij}=m_{j}\) (see Section 3.1) we get \[\sum_{i\in\mathcal{V}_{\mathbb{V}}}\sum_{j\in\mathcal{V}_{\mathbb{V}}}m_{ij}\varrho_{j}^{n+1}=\sum_{i\in\mathcal{V}_{\mathbb{V}}}\sum_{j\in\mathcal{V}_{\mathbb{V}}}m_{ij}\varrho_{j}^{n}\ \ \Rightarrow\ \ \sum_{j\in\mathcal{V}_{\mathbb{V}}}\underbrace{(\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{ij})}_{=\,m_{j}}\varrho_{j}^{n+1}=\sum_{j\in\mathcal{V}_{\mathbb{V}}}\underbrace{(\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{ij})}_{=\,m_{j}}\varrho_{j}^{n},\] recovering the usual conservation identity. ### Discretization of the Operator #2 This section concerns the spatial discretization of Operator #2, see (19).
We consider the following semi-discretization of (24): find \(\mathbf{v}_{h}\in[\mathbb{V}_{h}]^{2}\) and \(\mathbf{\mathcal{H}}_{h}\in\mathbb{H}_{h}\) such that \[\begin{split}\langle\rho_{h}\partial_{t}\mathbf{v}_{h},\mathbf{z}_{h}\rangle&=-\mu(\mathbf{\mathcal{H}}_{h}\times\text{curl}\,\mathbf{\mathcal{H}}_{h},\mathbf{z}_{h})\,,\\ (\partial_{t}\mathbf{\mathcal{H}}_{h},\mathbf{\mathcal{X}}_{h})&=(\mathbf{\mathcal{H}}_{h}\times\text{curl}\,\mathbf{\mathcal{X}}_{h},\mathbf{v}_{h})\,,\end{split} \tag{43}\] for all \(\mathbf{z}_{h}\in[\mathbb{V}_{h}]^{2}\) and \(\mathbf{\mathcal{X}}_{h}\in\mathbb{H}_{h}\); more precisely, \[\mathbf{v}_{h}=\sum_{i\in\mathcal{V}_{\mathbb{V}}}\mathbf{v}_{i}\phi_{i}\ \text{ and }\ \mathbf{\mathcal{H}}_{h}=\sum_{i\in\mathcal{V}_{\mathbb{H}}}\mathcal{H}_{i}\mathbf{\varphi}_{i},\] where \(\{\phi_{i}\}_{i\in\mathcal{V}_{\mathbb{V}}}\) is a basis of the scalar-valued space \(\mathbb{V}_{h}\), while \(\{\mathbf{\varphi}_{i}\}_{i\in\mathcal{V}_{\mathbb{H}}}\) is a vector-valued basis for the space \(\mathbb{H}_{h}\); see Section 3.1 for the definition of the finite element spaces. Note that the bilinear form containing the time-derivative of the velocity in (43) is lumped, see (33) for the definition of lumping. This lumping is second-order accurate for the case of first-order simplices (used in our computations); see Remark 3.3 for the case of quadrilateral elements. **Remark 3.3** (Lumping and higher order elements).: In all our computations we use simplices. However, if we were to use tensor product elements (e.g., quadrilaterals), mass lumping has consistency order \(2k-3\) when using \(\mathbb{Q}^{k}\) elements with Gauss-Lobatto interpolation points. Therefore, mass-lumping preserves the formal consistency order of the method and is compatible with arbitrarily high polynomial degree. **Lemma 3.1** (Conserved quantities).: _The semi-discrete scheme (43), as well as the fully discrete scheme using the Crank-Nicolson method_ \[\langle\rho_{h}^{n}(\mathbf{v}_{h}^{n+1}-\mathbf{v}_{h}^{n}),\mathbf{z}_{h}\rangle =-\tau_{n}\mu\int_{\Omega}(\mathbf{\mathcal{H}}_{h}^{n+\frac{1}{2}}\times\text{curl}\,\mathbf{\mathcal{H}}_{h}^{n+\frac{1}{2}})\cdot\mathbf{z}_{h}\,\mathrm{d}\mathbf{x}\,, \tag{44a}\] \[\int_{\Omega}(\mathbf{\mathcal{H}}_{h}^{n+1}-\mathbf{\mathcal{H}}_{h}^{n})\cdot\mathbf{\mathcal{X}}_{h}\,\mathrm{d}\mathbf{x} =\tau_{n}\int_{\Omega}(\mathbf{\mathcal{H}}_{h}^{n+\frac{1}{2}}\times\text{curl}\,\mathbf{\mathcal{X}}_{h})\cdot\mathbf{v}_{h}^{n+\frac{1}{2}}\,\mathrm{d}\mathbf{x}\,, \tag{44b}\] _where \(\mathbf{v}_{h}^{n+\frac{1}{2}}:=\frac{1}{2}(\mathbf{v}_{h}^{n}+\mathbf{v}_{h}^{n+1})\) and \(\mathbf{\mathcal{H}}_{h}^{n+\frac{1}{2}}:=\frac{1}{2}(\mathbf{\mathcal{H}}_{h}^{n}+\mathbf{\mathcal{H}}_{h}^{n+1})\), preserve the energy identity_ \[\sum_{i\in\mathcal{V}_{\mathbb{V}}}\tfrac{m_{i}}{2}\rho_{i}^{n+1}|\mathbf{v}_{i}^{n+1}|^{2}+\tfrac{\mu}{2}\|\mathbf{\mathcal{H}}_{h}^{n+1}\|^{2}_{L^{2}(\Omega)}=\sum_{i\in\mathcal{V}_{\mathbb{V}}}\tfrac{m_{i}}{2}\rho_{i}^{n}|\mathbf{v}_{i}^{n}|^{2}+\tfrac{\mu}{2}\|\mathbf{\mathcal{H}}_{h}^{n}\|^{2}_{L^{2}(\Omega)}. \tag{45}\] Proof.: We give the proof for the fully discrete scheme (44).
We take \(\mathbf{z}_{h}=\mathbf{v}_{h}^{n+\frac{1}{2}}\) and \(\mathbf{\mathcal{X}}_{h}=\mu\mathbf{\mathcal{H}}_{h}^{n+\frac{1}{2}}\) in (44); the result follows by noting that \[\langle\rho_{h}^{n}(\mathbf{v}_{h}^{n+1}-\mathbf{v}_{h}^{n}),\mathbf{v}_{h}^{n+\frac{1}{2}}\rangle=\sum_{i\in\mathcal{V}_{\mathbb{V}}}\tfrac{m_{i}}{2}\rho_{i}^{n}(\mathbf{v}_{i}^{n+1}-\mathbf{v}_{i}^{n})\cdot(\mathbf{v}_{i}^{n+1}+\mathbf{v}_{i}^{n}),\] using the difference of squares, and adding both lines, which leads to the cancellation of the right-hand side terms. **Proposition 3.1** (Preservation of the weak divergence).: _Assume that \(\mathbb{H}_{h}\) is the curl-conforming BDM\({}_{1}\) space, as defined in (34). Then the solutions of (43) and (44) satisfy_ \[(\mathbf{\mathcal{H}}_{h}^{n+1},\nabla\omega_{h})=(\mathbf{\mathcal{H}}_{h}^{n},\nabla\omega_{h}) \tag{46}\] _for all \(\omega_{h}\in\mathbb{W}_{h}\), with \(\mathbb{W}_{h}\) as defined in (35). Note that (46) is nothing else than the discrete counterpart of the weak divergence property._ Proof.: The proof follows from the fact that the inclusion \(\nabla\mathbb{W}_{h}\subset\mathbb{H}_{h}\) holds true; therefore \(\nabla\omega_{h}\) is a valid test function in (44b) (for all \(\omega_{h}\in\mathbb{W}_{h}\)). Inserting this test function into (44b) we get \[\int_{\Omega}(\mathbf{\mathcal{H}}_{h}^{n+1}-\mathbf{\mathcal{H}}_{h}^{n})\cdot\nabla\omega_{h}\,\mathrm{d}\mathbf{x}=\tau_{n}\int_{\Omega}(\mathbf{\mathcal{H}}_{h}^{n+\frac{1}{2}}\times\operatorname{curl}\nabla\omega_{h})\cdot\mathbf{v}_{h}^{n+\frac{1}{2}}\,\mathrm{d}\mathbf{x}\,,\] where the right-hand side is zero since \(\operatorname{curl}\nabla\omega_{h}\equiv\mathbf{0}\). Scheme (44) defines the numerical procedure used to update the momentum and the magnetic field during the evolution of Stage #2; we summarize its implementation in Algorithm 1 as a function with input and return arguments. However, Algorithm 1 does not prescribe the evolution of the density \(\rho\) and the total mechanical energy \(E\). We observe in (23) that the density does not evolve during the evolution of Stage #2. On the other hand, we use (30) in order to update the total mechanical energy. We summarize the entire update for Stage #2 in Algorithm 2.
```
Define: \(\mathbf{v}_{h}^{n}:=\sum_{i\in\mathcal{V}_{\mathbb{V}}}\tfrac{\mathbf{m}_{i}^{n}}{\rho_{i}^{n}}\phi_{i}\)
Find: \(\{\mathbf{v}_{h}^{n+1},\mathbf{\mathcal{H}}_{h}^{n+1}\}\in[\mathbb{V}_{h}]^{2}\times\mathbb{H}_{h}\) such that
  \[\langle\rho_{h}^{n}(\mathbf{v}_{h}^{n+1}-\mathbf{v}_{h}^{n}),\mathbf{z}_{h}\rangle=-\tau_{n}\mu\int_{\Omega}(\mathbf{\mathcal{H}}_{h}^{n+\frac{1}{2}}\times\operatorname{curl}\mathbf{\mathcal{H}}_{h}^{n+\frac{1}{2}})\cdot\mathbf{z}_{h}\,\mathrm{d}\mathbf{x}\,,\]
  \[\int_{\Omega}(\mathbf{\mathcal{H}}_{h}^{n+1}-\mathbf{\mathcal{H}}_{h}^{n})\cdot\mathbf{\mathcal{X}}_{h}\,\mathrm{d}\mathbf{x}=\tau_{n}\int_{\Omega}(\mathbf{\mathcal{H}}_{h}^{n+\frac{1}{2}}\times\operatorname{curl}\mathbf{\mathcal{X}}_{h})\cdot\mathbf{v}_{h}^{n+\frac{1}{2}}\,\mathrm{d}\mathbf{x}\,,\]
  where \(\mathbf{v}_{h}^{n+\frac{1}{2}}:=\tfrac{1}{2}(\mathbf{v}_{h}^{n}+\mathbf{v}_{h}^{n+1})\) and \(\mathbf{\mathcal{H}}_{h}^{n+\frac{1}{2}}:=\tfrac{1}{2}(\mathbf{\mathcal{H}}_{h}^{n}+\mathbf{\mathcal{H}}_{h}^{n+1})\)
Define: \(\mathbf{m}_{h}^{n+1}:=\sum_{i\in\mathcal{V}_{\mathbb{V}}}(\mathbf{v}_{i}^{n+1}\rho_{i}^{n})\phi_{i}\)
Return: \(\{\mathbf{m}_{h}^{n+1},\mathbf{\mathcal{H}}_{h}^{n+1}\}\)
```
**Algorithm 1** momentum_and_h_field_update(\(\{\rho_{h}^{n},\mathbf{m}_{h}^{n},\mathbf{\mathcal{H}}_{h}^{n},\tau_{n}\}\))

```
Compute: \(\{\mathbf{m}_{h}^{n+1},\mathbf{\mathcal{H}}_{h}^{n+1}\}:=\texttt{momentum\_and\_h\_field\_update}(\{\rho_{h}^{n},\mathbf{m}_{h}^{n},\mathbf{\mathcal{H}}_{h}^{n},\tau_{n}\})\)
Define: \[\rho_{i}^{n+1}:=\rho_{i}^{n}\ \text{ for all }i\in\mathcal{V}_{\mathbb{V}}, \tag{47}\]
Define: \[E_{i}^{n+1}:=E_{i}^{n}-\tfrac{1}{2\rho_{i}^{n}}|\mathbf{m}_{i}^{n}|^{2}+\tfrac{1}{2\rho_{i}^{n+1}}|\mathbf{m}_{i}^{n+1}|^{2}\ \text{ for all }i\in\mathcal{V}_{\mathbb{V}}, \tag{48}\]
Return: \(\{\rho_{h}^{n+1},\mathbf{m}_{h}^{n+1},E_{h}^{n+1},\mathbf{\mathcal{H}}_{h}^{n+1}\}\)
```
**Algorithm 2** source_update(\(\{\rho_{h}^{n},\mathbf{m}_{h}^{n},E_{h}^{n},\mathbf{\mathcal{H}}_{h}^{n},\tau_{n}\}\))

**Lemma 3.2** (Properties preserved by Algorithm 2).: _The scheme source_update, described by Algorithm 2, preserves the global energy_ \[\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}E_{i}^{n+1}+\tfrac{\mu}{2}\|\mathbf{\mathcal{H}}_{h}^{n+1}\|_{L^{2}(\Omega)}^{2}=\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}E_{i}^{n}+\tfrac{\mu}{2}\|\mathbf{\mathcal{H}}_{h}^{n}\|_{L^{2}(\Omega)}^{2}, \tag{49}\] _as well as the pointwise properties_ \[\varepsilon(\mathbf{u}_{i}^{n+1})=\varepsilon(\mathbf{u}_{i}^{n}),\ s(\mathbf{u}_{i}^{n+1})=s(\mathbf{u}_{i}^{n}),\ \eta(\mathbf{u}_{i}^{n+1})=\eta(\mathbf{u}_{i}^{n})\text{ for all }i\in\mathcal{V}_{\mathbb{V}}\,,\] _with \(\mathbf{u}_{i}^{n}=[\rho_{i}^{n},\mathbf{m}_{i}^{n},E_{i}^{n}]^{\top}\). In particular, this implies the following global property for the mathematical entropy:_ \[\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}\eta(\mathbf{u}_{i}^{n+1})=\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}\eta(\mathbf{u}_{i}^{n}). \tag{50}\] Proof.: From Lemma 3.1 we know that \[\sum_{i\in\mathcal{V}_{\mathbb{V}}}\tfrac{m_{i}}{2\rho_{i}^{n+1}}|\mathbf{m}_{i}^{n+1}|^{2}+\tfrac{\mu}{2}\|\mathbf{\mathcal{H}}_{h}^{n+1}\|_{L^{2}(\Omega)}^{2}=\sum_{i\in\mathcal{V}_{\mathbb{V}}}\tfrac{m_{i}}{2\rho_{i}^{n}}|\mathbf{m}_{i}^{n}|^{2}+\tfrac{\mu}{2}\|\mathbf{\mathcal{H}}_{h}^{n}\|_{L^{2}(\Omega)}^{2}. \tag{51}\] Multiplying (48) by \(m_{i}\), reorganizing, and adding over all nodes we get \[\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}(E_{i}^{n+1}-\tfrac{1}{2\rho_{i}^{n+1}}|\mathbf{m}_{i}^{n+1}|^{2})=\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}(E_{i}^{n}-\tfrac{1}{2\rho_{i}^{n}}|\mathbf{m}_{i}^{n}|^{2}).\] Adding this last result to both sides of (51) yields (49). Note that (48) implies pointwise invariance of the internal energy \(\varepsilon(\mathbf{u})\) by construction, which, combined with the invariance of the density (47), is enough to guarantee pointwise preservation of the specific and mathematical entropy. Finally, (50) follows from the pointwise preservation of the mathematical entropy.
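To fix ideas, the following minimal sketch (ours, not the paper's implementation) shows how Algorithms 1 and 2 compose in code, anticipating the Marchuk-Strang update of Algorithm 3 described in the next subsection. The array layout and the two solver calls are assumptions: `momentum_and_h_field_update` stands for the Crank-Nicolson solve (44) of Algorithm 1, and `euler_system_update` for the black-box Euler solver of Section 3.2.

```python
import numpy as np

def source_update(rho, m, E, H, tau):
    """Algorithm 2 (sketch): Operator #2 update with frozen density."""
    v_old = m / rho[:, None]
    m_new, H_new = momentum_and_h_field_update(rho, m, H, tau)  # solves (44)
    v_new = m_new / rho[:, None]
    # Energy update (48) = (30): internal energy is pointwise invariant.
    E_new = E - 0.5 * rho * np.sum(v_old**2, axis=1) \
              + 0.5 * rho * np.sum(v_new**2, axis=1)
    return rho, m_new, E_new, H_new

def mhd_update(rho, m, E, H, tau):
    """Algorithm 3 (sketch): one Marchuk-Strang step over the interval 2*tau."""
    rho, m, E = euler_system_update(rho, m, E, tau)        # Operator #1
    rho, m, E, H = source_update(rho, m, E, H, 2.0 * tau)  # Operator #2
    rho, m, E = euler_system_update(rho, m, E, tau)        # Operator #1
    return rho, m, E, H
```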
### The MHD update scheme The Marchuk-Strang splitting scheme involves three steps: a first step using a full time-step \(\tau_{n}\), advancing Operator #1 described in (18); a second step using a double-size time-step \(2\tau_{n}\), evolving in time Operator #2 described in (23); and a third step using a full time-step \(\tau_{n}\), evolving Operator #1 again. We summarize the scheme in Algorithm 3. **Proposition 3.2** (Properties preserved by mhd_update).: _Assume periodic boundary conditions and that the Euler scheme underlying euler_system_update satisfies the assumptions described in Section 3.2. Then the procedure mhd_update described by Algorithm 3 preserves the following global estimate_ \[\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}E_{i}^{n+1}+\tfrac{\mu}{2}\|\mathcal{H}_{h}^{n+1}\|_{L^{2}(\Omega)}^{2}=\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}E_{i}^{n}+\tfrac{\mu}{2}\|\mathcal{H}_{h}^{n}\|_{L^{2}(\Omega)}^{2}\,,\] _as well as pointwise admissibility \(\mathbf{u}_{i}^{n+1}\in\mathcal{A}\) for all \(i\in\mathcal{V}_{\mathbb{V}}\), with \(\mathcal{A}\) as defined in (20). The scheme also preserves the weak divergence property \((\mathcal{H}_{h}^{n+1},\nabla\omega_{h})=(\mathcal{H}_{h}^{n},\nabla\omega_{h})\) for all \(\omega_{h}\in\mathbb{W}_{h}\). If, in addition, we assume that property (41) holds for euler_system_update, then we also have the global entropy estimate:_ \[\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}\eta(\mathbf{u}_{i}^{n+1})\leq\sum_{i\in\mathcal{V}_{\mathbb{V}}}m_{i}\eta(\mathbf{u}_{i}^{n}).\] Proof.: The key results were proven in Lemma 3.1, Proposition 3.1, and Lemma 3.2. Under the assumptions of the proposition, each stage of the Strang splitting preserves energy-stability, admissibility, weak divergence, and entropy stability. The proof follows from the sequential nature of operator splitting and the assumptions on euler_system_update described in Section 3.2. ## 4 Numerical experiments In this section, we demonstrate the capability of the proposed scheme through several numerical experiments. Second-order accuracy for smooth problems is confirmed in Section 4.1. Results for the popular 1D Brio-Wu Riemann problem [7] are reported in Section 4.2. In Sections 4.3 and 4.4, we look at two challenging MHD benchmarks: the blast problem [3] and the astrophysical jet problem [40]. ### Accuracy test: smooth isentropic vortex [40] The computational domain is the square \([-10,10]\times[-10,10]\). We start with the ambient solution: the velocity \(\mathbf{v}_{\infty}=(1,1)^{\top}\), the magnetic field \(\mathbf{\mathcal{B}}_{\infty}=(0,0)^{\top}\), and the pressure \(p_{\infty}=1\). At each spatial point \((x_{0},x_{1})^{\top}\), we define the radius to the origin \(r=\sqrt{x_{0}^{2}+x_{1}^{2}}\). The vortex is initialized by adding smooth perturbations to the velocity, magnetic field, and pressure of the ambient solution, \[\mathbf{v}=\mathbf{v}_{\infty}+(-x_{1},x_{0})\delta v,\quad\mathbf{\mathcal{B}}=\mathbf{\mathcal{B}}_{\infty}+(-x_{1},x_{0})\delta B,\quad p=p_{\infty}+\delta p,\] where \[\delta v=\tfrac{\kappa}{2\pi}\mathrm{e}^{0.5(1-r^{2})},\quad\delta B=\tfrac{\mu}{2\pi}\mathrm{e}^{0.5(1-r^{2})},\quad\delta p=\tfrac{\mu^{2}(1-r^{2})-\kappa^{2}}{8\pi^{2}}\mathrm{e}^{1-r^{2}}.\] The real numbers \(\mu\) and \(\kappa\) are vortex strength parameters. In this test, we set \(\kappa=\sqrt{2}\mu\), similar to [40]. The final time is \(T=0.05\). The CFL number is chosen to be \(0.1\). The adiabatic gas constant is \(\gamma=\tfrac{5}{3}\).
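For reproducibility, the initialization above can be written in a few lines. The following sketch is ours (the function name and the array-based signature are assumptions, not from the paper); note that here \(\mu\) denotes the vortex strength parameter, not the magnetic permeability:

```python
import numpy as np

def isentropic_vortex(x0, x1, mu, kappa=None):
    """Initial data of Section 4.1; kappa defaults to sqrt(2)*mu as in [40]."""
    kappa = np.sqrt(2.0) * mu if kappa is None else kappa
    r2 = x0**2 + x1**2
    dv = kappa / (2.0 * np.pi) * np.exp(0.5 * (1.0 - r2))
    dB = mu / (2.0 * np.pi) * np.exp(0.5 * (1.0 - r2))
    dp = (mu**2 * (1.0 - r2) - kappa**2) / (8.0 * np.pi**2) * np.exp(1.0 - r2)
    v = np.stack([1.0 - x1 * dv, 1.0 + x0 * dv], axis=-1)  # v_inf = (1, 1)
    B = np.stack([-x1 * dB, x0 * dB], axis=-1)             # B_inf = (0, 0)
    p = 1.0 + dp                                           # p_inf = 1
    return v, B, p
```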
The convergence results with \(\mu=1\) are presented in Table 1, with \(\mu=5.38948943\) in Table 2, and with \(\mu=5.389489439\) in Table 3. In the second and third cases, the pressure at the vortex centre is very close to zero: \(3.3\times 10^{-9}\) in the second case, and \(5.3\times 10^{-12}\) in the third case. We want to examine how this affects the convergence rates. Overall, we obtain second-order accuracy in both the \(L^{1}\)- and \(L^{2}\)-norms. However, we note that the \(L^{\infty}\) rates in Table 3 are not sharp. This is expected, since the results of Table 3, with a vacuum of \(\mathcal{O}(10^{-12})\), are at the limit of what can be meaningfully computed using double precision accuracy. For instance, with such a strong vacuum, the accuracy of the map \(\mathbf{m}\mapsto\mathbf{v}:=\frac{\mathbf{m}}{\rho}\), or even the computation of the internal energy, is a big stretch from reasonable expectations. We also notice that numerical linear algebra technology starts to break down at such limits as well. For instance, computational practice shows that, in the context of large non-symmetric systems, it is nearly impossible to enforce a relative tolerance of Krylov methods much smaller than \(\mathcal{O}(10^{-13})\). These errors propagate from the solution of the source-system (44) into the rest of the scheme. We have verified that, by setting the slightly weaker vacuum of \(\mathcal{O}(10^{-11})\), we immediately recover sharp second-order rates in the \(L^{\infty}\)-norm.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & \#DOFs & L\({}^{1}\) & Rate & L\({}^{2}\) & Rate & L\({}^{\infty}\) & Rate \\ \hline \hline \multirow{4}{*}{velocity} & 1922 & 4.27E-04 & – & 2.34E-03 & – & 2.70E-02 & – \\ & 7442 & 1.07E-04 & 2.04 & 5.98E-04 & 2.01 & 7.33E-03 & 1.93 \\ & 29282 & 2.63E-05 & 2.05 & 1.47E-04 & 2.04 & 1.84E-03 & 2.02 \\ & 116162 & 6.30E-06 & 2.08 & 3.55E-05 & 2.06 & 4.47E-04 & 2.05 \\ \hline \multirow{4}{*}{magnetic field} & 5520 & 6.47E-02 & – & 7.41E-02 & – & 2.77E-02 & – \\ & 21840 & 1.62E-02 & 2.01 & 1.88E-02 & 1.99 & 7.43E-03 & 1.91 \\ & 86880 & 4.06E-03 & 2.01 & 4.72E-03 & 2.00 & 1.85E-03 & 2.02 \\ & 346560 & 1.02E-03 & 2.00 & 1.18E-03 & 2.00 & 4.59E-04 & 2.01 \\ \hline \end{tabular} \end{table} Table 1: Convergence of velocity and magnetic field on smooth solutions. Parameter \(\mu=1.0\).

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & \#DOFs & L\({}^{1}\) & Rate & L\({}^{2}\) & Rate & L\({}^{\infty}\) & Rate \\ \hline \hline \multirow{4}{*}{velocity} & 1922 & 2.99E-03 & – & 1.63E-02 & – & 1.18E-01 & – \\ & 7442 & 7.69E-04 & 2.00 & 4.54E-03 & 1.89 & 3.92E-02 & 1.63 \\ & 29282 & 1.73E-04 & 2.18 & 1.03E-03 & 2.17 & 1.27E-02 & 1.64 \\ & 116162 & 3.92E-05 & 2.15 & 2.22E-04 & 2.23 & 3.15E-03 & 2.02 \\ \hline \multirow{4}{*}{magnetic field} & 5520 & 6.48E-02 & – & 7.41E-02 & – & 3.15E-02 & – \\ & 21840 & 1.63E-02 & 2.01 & 1.89E-02 & 1.99 & 8.88E-03 & 1.84 \\ & 86880 & 4.07E-03 & 2.01 & 4.73E-03 & 2.00 & 2.07E-03 & 2.11 \\ & 346560 & 1.02E-03 & 2.00 & 1.18E-03 & 2.00 & 5.10E-04 & 2.02 \\ \hline \end{tabular} \end{table} Table 2: Convergence of velocity and magnetic field on smooth solutions. Parameter \(\mu=5.38948943\). The minimum pressure is \(3.3\times 10^{-9}\).

### 1D Riemann problem: Brio-Wu [7] The Brio-Wu problem is a popular 1D benchmark for MHD schemes. The domain is \([0,1]\).
The initial solution is \[[\rho,\mathbf{v},p,\mathcal{B}]^{\top}=\begin{cases}[1,0,0,1,0.75,1]^{\top},&x\in[0,0.5),\\ [0.125,0,0,0.1,0.75,-1]^{\top},&x\in[0.5,1].\end{cases}\] The adiabatic gas constant is \(\gamma=2\). The final time is \(T=0.1\). A CFL number of \(0.1\) is used. Despite the nonuniqueness of the Riemann solution, the solutions obtained by almost all numerical schemes converge to a specific irregular solution. We refer the interested reader to [37] and references therein. To calculate the convergence rates, we compute a reference solution using the Athena code [35] with 10000 grid intervals. The density solution and the \(y\)-component of the magnetic solution when first-order viscosity is used are shown in Figure 1. The obtained solution is smooth in all the components, including the magnetic field, although no regularization is added to it. The high-order entropy-based viscosity is then employed to lower the error. The convergence behavior of our numerical scheme is shown in Table 4. The density solution and the \(y\)-component of the magnetic solution are shown in Figure 2. From the convergence rates in Table 4 and the solution plots in Figure 2, our solution converges towards the reference solution.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \#nodes & L\({}^{1}\) & Rate & L\({}^{2}\) & Rate \\ \hline \hline 100 & 3.11E-02 & – & 5.36E-02 & – \\ 200 & 1.89E-02 & 0.73 & 3.91E-02 & 0.46 \\ 400 & 1.17E-02 & 0.69 & 2.95E-02 & 0.41 \\ 800 & 7.17E-03 & 0.71 & 2.19E-02 & 0.43 \\ 1600 & 4.43E-03 & 0.69 & 1.62E-02 & 0.44 \\ \hline \end{tabular} \end{table} Table 4: Convergence rates of the density solution of the Brio-Wu problem using the high-order entropy viscosity method.

Figure 1: Density solution \(\rho_{h}\) and the \(y\)-component of the magnetic field \(\mathcal{B}_{y,h}\) of the solution to the Brio-Wu problem using first-order viscosity. Note that, even though the magnetic field has no viscous stabilization, the solution shows no significant unphysical oscillation.

Figure 2: Density solution \(\rho_{h}\) and the \(y\)-component of the magnetic field \(\mathcal{B}_{y,h}\) of the solution to the Brio-Wu problem using the high-order entropy viscosity method. Just like with any other scheme of second-order (or higher) accuracy, some oscillations are to be expected. However, the natural expectation is that unphysical oscillations do not persist under further refinement.

### Blast problem [3]

In the circle centered at the origin with radius \(R=0.1\), the pressure is initialized at \(p=1000\). The 10 000-fold pressure difference creates a strong blast effect that is difficult for numerical methods to capture. That is because the pressure can easily become negative due to approximation errors. The adiabatic gas constant is \(\gamma=1.4\). The solution at \(T=0.01\) is plotted in Figure 3. The obtained solutions agree well with the existing references, e.g., [3; 40; 41]. Detailed structures of the solution are visible, and no oscillatory behaviors are observed.

Figure 3: Solution to the blast problem at time \(t=0.01\) using 290147 nodal points.

### Astrophysical jet [40] The last benchmark is proposed by [40]. The domain is \([-0.5,0.5]\times[0,1.5]\). The initial ambient fluid is given by \[[\rho,\mathbf{v},p,\mathbf{\mathcal{B}}]^{\top}=[0.1\gamma,0,0,1.0,0,B_{a}]^{\top}.\] A Mach 800 inflow is set on the inlet boundary \(\mathbf{x}\in(-0.05,0.05)\times\{0\}\): \[[\rho,\mathbf{v}]=[\gamma,0,800]^{\top}.\] The adiabatic gas constant is \(\gamma=1.4\). The solution is simulated on half the domain and a reflecting boundary condition is imposed on the line \(x=0\).
The Euler counterpart of this Mach 800 jet is already a very difficult test of the positivity preservation of numerical schemes. Fortunately, we have a good Euler solver which can overcome this difficulty. Since the magnetic component is not present in the mechanical pressure, the positivity of the pressure in the split form is not affected by the behavior of the magnetic field. We observe that our method performs very well regardless of how extreme the magnetic field is. The solution is shown in Figure 4 when setting \(B_{a}=\sqrt{200}\) and in Figure 5 when setting \(B_{a}=\sqrt{20000}\). In Figure 4(b), we notice that the magnetic pressure is sharp but less smooth in some regions. This can be due to the fact that the magnetic field is not regularized. Since we did not implement outflow boundary conditions, the domain is extended in the directions of the bow shocks. The extended parts of the domain are cut from the final plots.

Figure 4: Solution to the astrophysical jet problem at time \(t=0.002\), \(B_{a}=\sqrt{200}\), using 136173 nodal points.

Figure 5: Solution to the astrophysical jet problem at time \(t=0.002\), \(B_{a}=\sqrt{20000}\), using 136173 nodal points.

**Remark 4.1** (Solver performance: why is the scheme competitive?).: Implicit time-integration requires the execution of Newton iterations, with each iteration involving the inversion of a Jacobian. In addition, the inverse of the Jacobian is applied using a Krylov method, with each iteration involving two matrix-vector products. The fundamental question of whether implicit time-integration is competitive (or not) boils down to having a low count of (linear and nonlinear) iterations. For the method advanced in this paper we have rather exceptional linear and nonlinear solver performance. For starters, the nonlinear system (44) is solved with at most 4 Newton iterations: we hardcoded the logic to stop the whole computation if more than 4 iterations are needed. This is in large part because we are using the solution from the previous time-step as an initial guess. On the other hand, even though the method does not have to respect the CFL of the MHD system, and magnetosonic waves can be ignored, we still have to respect the CFL of the Euler system. Therefore the time-step sizes are still moderate and the resulting Jacobian is just a perturbation of the mass matrix. This means that an inexpensive Krylov method, such as BiCGStab without any form of preconditioning, can be used in practice. Usually fewer than a dozen matrix-vector products are needed in order to apply the inverse of the Jacobian. We believe that the scheme is quite competitive, and that the incorporation of matrix-free linear algebra for system (44) would make the current implementation suitable for three-dimensional computations. ## Appendix A Hyperbolic solver used in this paper This section provides a brief outline of the numerical methods used to solve Euler's system in all the computational experiments of Section 4. The Euler solver presented in this section fills the role of euler_system_update invoked in Algorithm 3, and it complies with the properties described in Section 3.2. This section does not introduce any novel concept, idea, or numerical scheme; it is only provided for the sake of completeness. The main ideas advanced in this section were originally developed in the sequence of papers [17; 16; 14] and references therein.
### Low-order scheme The low-order scheme is obtained using the first-order graph viscosity method first suggested in [17]. Let \(t_{n}\) be the current time and let \(\tau_{n}\) be the current time-step size; we advance in time by setting \(t_{n+1}=t_{n}+\tau_{n}\). Let \(\mathbf{u}_{h}^{n}=\sum_{i\in\mathcal{V}_{\mathbb{V}}}\mathbf{u}_{i}^{n}\phi_{i}(\mathbf{x})\) be the finite element approximation at time \(t_{n}\). The first-order approximation at time \(t_{n+1}\) is computed as \[m_{i}\frac{\mathbf{u}_{i}^{\mathrm{L},n+1}-\mathbf{u}_{i}^{n}}{\tau_{n}}+\sum_{j\in\mathcal{I}(i)}\mathbb{f}(\mathbf{u}_{j}^{n})\mathbf{c}_{ij}-d^{\mathrm{L}}_{ij}(\mathbf{u}_{j}^{n}-\mathbf{u}_{i}^{n})=\mathbf{0},\] (A.1) where \(m_{i}\) and \(\mathbf{c}_{ij}\) were defined in (32), the set \(\mathcal{I}(i)\) is the so-called stencil, defined as \(\mathcal{I}(i)=\{j\in\mathcal{V}_{\mathbb{V}}\,|\,\mathbf{c}_{ij}\neq\mathbf{0}\}\), and the low-order graph viscosity \(d^{\mathrm{L}}_{ij}\) is computed as \[d^{\mathrm{L}}_{ij} :=\max(\lambda_{\max}(\mathbf{u}_{i}^{n},\mathbf{u}_{j}^{n},\mathbf{n}_{ij})\,|\mathbf{c}_{ij}|_{\ell_{2}},\ \lambda_{\max}(\mathbf{u}_{j}^{n},\mathbf{u}_{i}^{n},\mathbf{n}_{ji})\,|\mathbf{c}_{ji}|_{\ell_{2}}),\quad\forall i\neq j,\] (A.2) \[d^{\mathrm{L}}_{ii} :=-\sum_{i\neq j\in\mathcal{V}_{\mathbb{V}}}d^{\mathrm{L}}_{ji}\ \ \text{and}\ \ \mathbf{n}_{ij}=\frac{\mathbf{c}_{ij}}{|\mathbf{c}_{ij}|_{\ell_{2}}}.\] Here \(\lambda_{\max}(\mathbf{u}_{L},\mathbf{u}_{R},\mathbf{n})\) is the maximum wave speed of the one-dimensional Riemann problem \(\partial_{t}\mathbf{u}+\partial_{x}(\mathbb{f}(\mathbf{u})\mathbf{n})=0\), where \(x=\mathbf{x}\cdot\mathbf{n}\), with initial condition \(\mathbf{u}(x,0)=\mathbf{u}_{L}=[\rho_{L},\mathbf{m}_{L},E_{L}]^{\top}\) if \(x<0\), and \(\mathbf{u}(x,0)=\mathbf{u}_{R}=[\rho_{R},\mathbf{m}_{R},E_{R}]^{\top}\) if \(x\geq 0\). The maximum wave speed of this Riemann problem can be computed exactly [38, Chap. 4]; however, this comes at the expense of solving a nonlinear problem. In theory and in practice, any upper bound of the maximum wave speed of the Riemann problem can be used in formula (A.2) while still preserving the rigorous mathematical properties of the scheme [17, 18]. For the specific case of the covolume equation of state, \(p(1-b\rho)=(\gamma-1)e\rho\) with \(b\geq 0\), we can use \(\lambda^{\mathfrak{g}}(\mathbf{u}_{L},\mathbf{u}_{R},\mathbf{n})\), which is defined by \[\lambda^{\mathfrak{g}}(\mathbf{u}_{L},\mathbf{u}_{R},\mathbf{n})=\max((\lambda_{1}^{-}(p^{\mathfrak{g}}))_{-},\ (\lambda_{3}^{+}(p^{\mathfrak{g}}))_{+}),\] \[\lambda_{1}^{-}(p^{\mathfrak{g}})=v_{L}-c_{L}\left(1+\frac{\gamma+1}{2\gamma}\left(\frac{p^{\mathfrak{g}}-p_{L}}{p_{L}}\right)_{+}\right)^{\frac{1}{2}},\] \[\lambda_{3}^{+}(p^{\mathfrak{g}})=v_{R}+c_{R}\left(1+\frac{\gamma+1}{2\gamma}\left(\frac{p^{\mathfrak{g}}-p_{R}}{p_{R}}\right)_{+}\right)^{\frac{1}{2}},\] \[p^{\mathfrak{g}}:=\left(\frac{c_{L}(1-b\rho_{L})+c_{R}(1-b\rho_{R})-\frac{\gamma-1}{2}(v_{R}-v_{L})}{c_{L}(1-b\rho_{L})\ p_{L}^{-\frac{\gamma-1}{2\gamma}}+c_{R}(1-b\rho_{R})\ p_{R}^{-\frac{\gamma-1}{2\gamma}}}\right)^{\frac{2\gamma}{\gamma-1}},\] (A.3) where \(z_{-}:=\max(0,-z)\), \(z_{+}:=\max(0,z)\), \(v_{L}=\mathbf{v}_{L}\cdot\mathbf{n}\), \(v_{R}=\mathbf{v}_{R}\cdot\mathbf{n}\), \(p_{L}\) and \(p_{R}\) are the left and right pressures, and \(c_{L}\) and \(c_{R}\) are the left and right sound speeds. The formula (A.3) is often referred to as the two-rarefaction estimate [38].
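The bound (A.3) is cheap to evaluate. The following minimal sketch is ours (the function name is illustrative); it specializes (A.3) to the ideal-gas case \(b=0\), taking the left/right primitive states already projected onto \(\mathbf{n}\):

```python
import numpy as np

def lambda_two_rarefaction(rhoL, vL, pL, rhoR, vR, pR, gamma):
    """Upper bound lambda^g of (A.3) on the maximum wave speed (b = 0)."""
    cL = np.sqrt(gamma * pL / rhoL)
    cR = np.sqrt(gamma * pR / rhoR)
    a = (gamma - 1.0) / (2.0 * gamma)
    # Two-rarefaction pressure estimate p^g; clamp numerator for near-vacuum.
    num = max(cL + cR - 0.5 * (gamma - 1.0) * (vR - vL), 0.0)
    p_star = (num / (cL * pL**(-a) + cR * pR**(-a))) ** (1.0 / a)
    lam1 = vL - cL * np.sqrt(1.0 + (gamma + 1.0) / (2.0 * gamma)
                             * max((p_star - pL) / pL, 0.0))
    lam3 = vR + cR * np.sqrt(1.0 + (gamma + 1.0) / (2.0 * gamma)
                             * max((p_star - pR) / pR, 0.0))
    # lambda^g = max((lam1)_-, (lam3)_+), cf. the notation below (A.3).
    return max(max(0.0, -lam1), max(0.0, lam3))
```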
It is possible to show that \(\lambda_{\max}(\mathbf{u}_{L},\mathbf{u}_{R},\mathbf{n})\leq\lambda^{\mathfrak{g}}(\mathbf{u}_{L},\mathbf{u}_{R},\mathbf{n})\) for \(1<\gamma\leq\frac{5}{3}\), see [16]. For all computations presented in this paper, \(\lambda^{\mathfrak{g}}(\mathbf{u}_{L},\mathbf{u}_{R},\mathbf{n})\) is used instead of \(\lambda_{\max}(\mathbf{u}_{L},\mathbf{u}_{R},\mathbf{n})\) in order to compute the algebraic viscosities described in (A.2). We finally mention that scheme (A.1), equipped with the viscosity (A.2), is compatible with assumption (41). **Remark A.1** (Convex reformulation and CFL condition).: The scheme (A.1) can be rewritten as \[\mathbf{u}_{i}^{n+1}=\left(1-\sum_{j\in\mathcal{I}(i)\backslash\{i\}}\frac{2\tau_{n}d_{ij}^{\mathrm{L},n}}{m_{i}}\right)\mathbf{u}_{i}^{n}+\sum_{j\in\mathcal{I}(i)\backslash\{i\}}\left(\frac{2\tau_{n}d_{ij}^{\mathrm{L},n}}{m_{i}}\right)\overline{\mathbf{u}}_{ij}^{n},\] (A.4) where \[\overline{\mathbf{u}}_{ij}^{n}=\tfrac{1}{2}(\mathbf{u}_{j}^{n}+\mathbf{u}_{i}^{n})-\tfrac{|\mathbf{c}_{ij}|_{\ell_{2}}}{2d_{ij}^{\mathrm{L},n}}(\mathbb{f}(\mathbf{u}_{j}^{n})-\mathbb{f}(\mathbf{u}_{i}^{n}))\mathbf{n}_{ij}\] (A.5) are the so-called bar-states. We note that the states \(\{\overline{\mathbf{u}}_{ij}^{n}\}_{j\in\mathcal{I}(i)}\) are admissible provided that \(\mathbf{u}_{i}^{n}\) and \(\mathbf{u}_{j}^{n}\) are admissible and that \(d_{ij}^{\mathrm{L},n}\geq\max(\lambda_{\max}(\mathbf{u}_{i}^{n},\mathbf{u}_{j}^{n},\mathbf{n}_{ij})\|\mathbf{c}_{ij}\|_{\ell_{2}},\ \lambda_{\max}(\mathbf{u}_{j}^{n},\mathbf{u}_{i}^{n},\mathbf{n}_{ji})\|\mathbf{c}_{ji}\|_{\ell_{2}})\), see [17, 18]. We note that \(\mathbf{u}_{i}^{n+1}\) is a convex combination of the bar-states \(\{\overline{\mathbf{u}}_{ij}^{n}\}_{j\in\mathcal{I}(i)}\) provided the condition \(\left(1-\sum_{j\in\mathcal{I}(i)\backslash\{i\}}\frac{2\tau_{n}d_{ij}^{\mathrm{L},n}}{m_{i}}\right)\geq 0\) holds. Therefore, we define the largest admissible time-step size as \[\tau_{n}=\text{CFL}\cdot\min_{i\in\mathcal{V}_{\mathbb{V}}}\big{(}-\frac{m_{i}}{2d_{ii}^{\mathrm{L},n}}\big{)},\] where \(\text{CFL}\in(0,1)\) is a user-defined parameter. ### High-order scheme We note that the scheme (A.1) can only be first-order accurate. Therefore we consider the high-order scheme \[\sum_{j\in\mathcal{I}(i)}m_{ij}\frac{\mathbf{u}_{j}^{\mathrm{H},n+1}-\mathbf{u}_{j}^{n}}{\tau_{n}}+\sum_{j\in\mathcal{I}(i)}\mathbb{f}(\mathbf{u}_{j}^{n})\mathbf{c}_{ij}-d_{ij}^{\mathrm{H}}(\mathbf{u}_{j}^{n}-\mathbf{u}_{i}^{n})=\mathbf{0}.\] (A.6) Here \(\{d_{ij}^{\mathrm{H}}\}_{j\in\mathcal{I}(i)}\) are the high-order viscosities, which are meant to be such that \(d_{ij}^{\mathrm{H}}\approx 0\) in smooth regions of the domain, while \(d_{ij}^{\mathrm{H}}\approx d_{ij}^{\mathrm{L}}\) near shocks and discontinuities. In addition, \(d_{ij}^{\mathrm{H}}\) must be symmetric and conservative, i.e., \(d_{ij}^{\mathrm{H}}=d_{ji}^{\mathrm{H}}\) and \(d_{ii}^{\mathrm{H}}:=-\sum_{i\neq j\in\mathcal{V}_{\mathbb{V}}}d_{ji}^{\mathrm{H}}\). In this paper, we use a high-order viscosity that is proportional to the entropy residual (i.e. entropy production) of the unstabilized scheme.
Let us start by considering the Galerkin solution \(\mathbf{u}_{h}^{\mathrm{G}}\) defined as
\[m_{i}\frac{\mathbf{u}_{i}^{\mathrm{G}}-\mathbf{u}_{i}^{n}}{\tau_{n}}+\sum_{j\in\mathcal{I}(i)}\mathbb{f}(\mathbf{u}_{j}^{n})\mathbf{c}_{ij}=\mathbf{0}\ \ \text{for all}\ i\in\mathcal{V}.\] (A.7)
Let \(\{\eta(\mathbf{u}),\mathfrak{G}(\mathbf{u})\}\) be an entropy pair of the Euler system. We define the entropy residual function \(R_{h}^{n}(\mathbf{u}_{h})=\sum_{i\in\mathcal{V}}R_{i}^{n}\phi_{i}\in\mathbb{V}_{h}\) with nodal values \(R_{i}^{n}\) defined by
\[R_{i}^{n}:=m_{i}\frac{\mathbf{u}_{i}^{\mathrm{G}}-\mathbf{u}_{i}^{n}}{\tau_{n}}\cdot\nabla_{\mathbf{u}}\eta(\mathbf{u}_{i}^{n})+\sum_{j\in\mathcal{I}(i)}\mathfrak{G}(\mathbf{u}_{j}^{n})\mathbf{c}_{ij}\ \ \text{for all}\ i\in\mathcal{V}.\] (A.8)
Here \(R_{i}^{n}\) is proportional to the entropy production of the unstabilized scheme (A.7). However, formula (A.8) is not practical, since it requires computing \(\mathbf{u}_{i}^{\mathrm{G}}\). We therefore derive a formula for \(R_{i}^{n}\) that does not invoke \(\mathbf{u}_{i}^{\mathrm{G}}\): multiplying (A.7) by \(\nabla_{\mathbf{u}}\eta(\mathbf{u}_{i}^{n})\) we get that
\[m_{i}\frac{\mathbf{u}_{i}^{\mathrm{G}}-\mathbf{u}_{i}^{n}}{\tau_{n}}\cdot\nabla_{\mathbf{u}}\eta(\mathbf{u}_{i}^{n})=-\sum_{j\in\mathcal{I}(i)}(\mathbb{f}(\mathbf{u}_{j}^{n})\mathbf{c}_{ij})\cdot\nabla_{\mathbf{u}}\eta(\mathbf{u}_{i}^{n}),\]
which we use to replace the first term in (A.8):
\[R_{i}^{n}=\sum_{j\in\mathcal{I}(i)}-(\mathbb{f}(\mathbf{u}_{j}^{n})\mathbf{c}_{ij})\cdot\nabla_{\mathbf{u}}\eta(\mathbf{u}_{i}^{n})+\mathfrak{G}(\mathbf{u}_{j}^{n})\mathbf{c}_{ij}\ \ \text{for all}\ i\in\mathcal{V}.\] (A.9)
In practice, we use (A.9) to compute the entropy-viscosity indicators. We are now ready to define the high-order nonlinear viscosity as
\[d_{ij}^{\mathrm{H}}:=\min\Big{(}d_{ij}^{\mathrm{L}},c_{\mathrm{EV}}\max(\overline{R}_{i}^{n},\overline{R}_{j}^{n})\Big{)},\]
where \(c_{\mathrm{EV}}\) is a tunable constant, set to \(1\) in the numerical examples of this manuscript, and \(\overline{R}_{i}^{n}\) is the normalized entropy residual:
\[\overline{R}_{i}^{n}:=\frac{R_{i}^{n}}{\max\Big{(}\rho_{i}^{\max,n}s_{i}^{\max,n}-\rho_{i}^{\min,n}s_{i}^{\min,n},\epsilon\|\eta_{h}^{n}\|_{L^{\infty}(\Omega)}\Big{)}},\]
where \(w_{i}^{\max,n}:=\max_{j\in\mathcal{I}(i)}w_{j}^{n}\) and \(w_{i}^{\min,n}:=\min_{j\in\mathcal{I}(i)}w_{j}^{n}\), for \(w\) being \(\rho\) or \(s\). Recall that the mathematical entropy is computed as \(\eta(\mathbf{u})=-\rho s(\mathbf{u})\), where \(s(\mathbf{u})=\frac{1}{\gamma-1}\log(e)-\log(\rho)\) is the specific entropy. A small safety factor \(\epsilon=10^{-8}\) is used to avoid division by zero.
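A minimal Python sketch of (A.9) and of the resulting high-order viscosity follows. The dictionary-based sparsity layout is an illustrative assumption, and we take the absolute value of \(R_{i}^{n}\) in the normalization, a sign convention the text leaves implicit.

```python
import numpy as np

def entropy_residuals(U, flux, G, grad_eta, c, stencils):
    """Entropy residual (A.9) at every node, avoiding the Galerkin update.
    flux(u): (k, d) flux matrix; G(u): (d,) entropy flux; grad_eta(u): (k,);
    stencils[i]: list of neighbors in I(i), node i included."""
    R = np.zeros(len(stencils))
    for i, nbrs in stencils.items():
        for j in nbrs:
            cij = c[(i, j)]
            R[i] += -(flux(U[j]) @ cij) @ grad_eta(U[i]) + G(U[j]) @ cij
    return R

def high_order_viscosity(U, R, d_L, rho, spec_s, stencils, eta_inf,
                         c_EV=1.0, eps=1e-8):
    """d_ij^H = min(d_ij^L, c_EV * max(Rbar_i, Rbar_j)), with the normalized
    residual Rbar_i = |R_i| / max(rho_max*s_max - rho_min*s_min, eps*||eta||)."""
    Rbar = {}
    for i, nbrs in stencils.items():
        rho_v = [rho(U[j]) for j in nbrs]
        s_v = [spec_s(U[j]) for j in nbrs]
        denom = max(max(rho_v) * max(s_v) - min(rho_v) * min(s_v),
                    eps * eta_inf)
        Rbar[i] = abs(R[i]) / denom
    return {(i, j): min(dl, c_EV * max(Rbar[i], Rbar[j]))
            for (i, j), dl in d_L.items() if i != j}
```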
### Convex limiting

The low-order and high-order methods can be conveniently rewritten as
\[m_{i}(\mathbf{u}_{i}^{\mathrm{L},n+1}-\mathbf{u}_{i}^{n})+\sum_{j\in\mathcal{I}(i)}\mathbf{F}_{ij}^{\mathrm{L}}=\mathbf{0}\;\;\;\text{and}\;\;\;m_{i}(\mathbf{u}_{i}^{\mathrm{H},n+1}-\mathbf{u}_{i}^{n})+\sum_{j\in\mathcal{I}(i)}\mathbf{F}_{ij}^{\mathrm{H}}=\mathbf{0},\]
where the algebraic fluxes \(\mathbf{F}_{ij}^{\mathrm{L}}\) and \(\mathbf{F}_{ij}^{\mathrm{H}}\) are defined as
\[\mathbf{F}_{ij}^{\mathrm{L}}=\tau_{n}(\mathbb{f}(\mathbf{u}_{j}^{n})+\mathbb{f}(\mathbf{u}_{i}^{n}))\mathbf{c}_{ij}-\tau_{n}d_{ij}^{\mathrm{L},n}(\mathbf{u}_{j}^{n}-\mathbf{u}_{i}^{n}),\]
\[\mathbf{F}_{ij}^{\mathrm{H}}=\tau_{n}(\mathbb{f}(\mathbf{u}_{j}^{n})+\mathbb{f}(\mathbf{u}_{i}^{n}))\mathbf{c}_{ij}-\tau_{n}d_{ij}^{\mathrm{H}}(\mathbf{u}_{j}^{n}-\mathbf{u}_{i}^{n})+(m_{ij}-\delta_{ij}m_{i})(\mathbf{u}_{j}^{\mathrm{H},n+1}-\mathbf{u}_{j}^{n}-\mathbf{u}_{i}^{\mathrm{H},n+1}+\mathbf{u}_{i}^{n}).\]
We define the algebraic flux corrections \(\mathbf{A}_{ij}=\mathbf{F}_{ij}^{\mathrm{L}}-\mathbf{F}_{ij}^{\mathrm{H}}\) and set the final flux-limited solution to be
\[m_{i}\mathbf{u}_{i}^{n+1}=m_{i}\mathbf{u}_{i}^{\mathrm{L},n+1}+\sum_{j\in\mathcal{I}(i)}\ell_{ij}\mathbf{A}_{ij},\] (A.10)
where \(\ell_{ij}\in[0,1]\) are the limiters. If \(\ell_{ij}\equiv 0\) for all \(i\) and \(j\), then (A.10) recovers \(\mathbf{u}_{i}^{n+1}=\mathbf{u}_{i}^{\mathrm{L},n+1}\). Similarly, if \(\ell_{ij}\equiv 1\) for all \(i\) and \(j\), then (A.10) leads to \(\mathbf{u}_{i}^{n+1}=\mathbf{u}_{i}^{\mathrm{H},n+1}\). The goal is to select the limiters as large as possible while preserving the relevant bounds. We want to enforce local bounds on the density and a local minimum principle on the specific entropy. However, logarithmic entropies, such as \(s(\mathbf{u})=\ln\frac{p(\mathbf{u})}{\rho^{\gamma}}\), are not well suited to Newton-like line-search iterative methods. Therefore, we use \(\tilde{s}(\mathbf{u})=\rho^{-\gamma}\varepsilon(\mathbf{u})\), which leads to an entirely equivalent minimum principle since
\[s(\mathbf{u})\leq s(\mathbf{v})\;\Leftrightarrow\;\tilde{s}(\mathbf{u})\leq\tilde{s}(\mathbf{v})\;\text{for all}\;\mathbf{u},\mathbf{v}\in\mathcal{A},\]
due to the monotonicity of \(\ln x\). Therefore, at each node \(i\in\mathcal{V}\) we compute the bounds
\[\rho_{i}^{\min}:=\mathbb{1}_{h}^{-}\min_{j\in\mathcal{I}(i)}\min\{\rho_{j}^{n},\overline{\rho}_{ij}^{n}\},\qquad\rho_{i}^{\max}:=\mathbb{1}_{h}^{+}\max_{j\in\mathcal{I}(i)}\max\{\rho_{j}^{n},\overline{\rho}_{ij}^{n}\},\qquad\tilde{s}_{i}^{\min}:=\mathbb{1}_{h}^{-}\min_{j\in\mathcal{I}(i)}\min\{\tilde{s}_{j}^{n},\overline{\tilde{s}}_{ij}^{n}\},\]
where \(\overline{\rho}_{ij}^{n}\) denotes the density of the bar-state \(\overline{\mathbf{u}}_{ij}^{n}\) (see expression (A.5)), while \(\overline{\tilde{s}}_{ij}^{n}:=\tilde{s}(\overline{\mathbf{u}}_{ij}^{n})\). Here \(\mathbb{1}_{h}^{-}\) and \(\mathbb{1}_{h}^{+}\) are ad-hoc relaxations of unity with a prescribed decay rate with respect to the local meshsize \(h\). More precisely, we consider
\[\mathbb{1}_{h}^{-}=1-\kappa\big{(}\tfrac{m_{i}}{|\Omega|}\big{)}^{\frac{p}{d}}\;\;\text{and}\;\;\mathbb{1}_{h}^{+}=1+\kappa\big{(}\tfrac{m_{i}}{|\Omega|}\big{)}^{\frac{p}{d}}\;\;\text{with}\;p=1.50,\;d=2.0,\;\text{and}\;\kappa=4.0\,.\]
We mention in passing that, asymptotically for \(h\to 0\), the precise value of \(\kappa\) has no importance and we may use any other \(\kappa=\mathcal{O}(1)\).
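The bound computation translates directly into code. The following sketch, with container layout and function signatures as illustrative assumptions, returns the relaxed bounds at one node.

```python
def local_bounds(i, U, U_bar, stencil, m_i, omega_vol, rho, s_tilde,
                 kappa=4.0, p=1.5, d=2.0):
    """Relaxed local bounds rho_i^min, rho_i^max, s_tilde_i^min at node i.
    U[j]: nodal states; U_bar[(i, j)]: bar states (A.5); stencil: I(i)."""
    relax = kappa * (m_i / omega_vol) ** (p / d)   # so 1_h^- = 1 - relax, etc.
    rhos = [rho(U[j]) for j in stencil]
    ss = [s_tilde(U[j]) for j in stencil]
    for j in stencil:
        if j != i:                                  # bar states exist for j != i
            rhos.append(rho(U_bar[(i, j)]))
            ss.append(s_tilde(U_bar[(i, j)]))
    return ((1.0 - relax) * min(rhos),              # rho_i^min
            (1.0 + relax) * max(rhos),              # rho_i^max
            (1.0 - relax) * min(ss))                # s_tilde_i^min
```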
At each node \(i\in\mathcal{V}\) we define the set
\[\mathcal{B}_{i}=\{\mathbf{u}=[\rho,\mathbf{m},E]^{\top}\in\mathbb{R}^{d+2}\,\big{|}\,\rho_{i}^{\min}\leq\rho\leq\rho_{i}^{\max},\,\tilde{s}(\mathbf{u})\geq\tilde{s}_{i}^{\min}\}.\]
We note that (A.10) can be conveniently rewritten as
\[\mathbf{u}_{i}^{n+1}=\sum_{j\in\mathcal{I}(i)\backslash\{i\}}\lambda_{i}(\mathbf{u}_{i}^{\mathrm{L},n+1}+\ell_{ij}\mathbf{P}_{ij})\;\;\text{where}\;\;\lambda_{i}=\tfrac{1}{\mathsf{card}\,\mathcal{I}(i)-1}\;\;\text{and}\;\;\mathbf{P}_{ij}=\tfrac{1}{\lambda_{i}m_{i}}\mathbf{A}_{ij}.\] (A.11)
Convex limiting is built on the observation that the condition \(\mathbf{u}_{i}^{n+1}\in\mathcal{B}_{i}\) will hold if \(\mathbf{u}_{i}^{\mathrm{L},n+1}+\ell_{ij}\mathbf{P}_{ij}\in\mathcal{B}_{i}\) for all \(j\in\mathcal{I}(i)\). Therefore, at each node \(i\) we compute the preliminary limiters \(l_{ij}\) as
\[l_{ij}:=\texttt{compute\_line\_search}(\mathbf{u}_{i}^{\mathrm{L},n+1},\mathbf{P}_{ij},\rho_{i}^{\min},\rho_{i}^{\max},\tilde{s}_{i}^{\min}),\]
with compute_line_search as defined in Algorithm 4, while the final limiters are computed as \(\ell_{ij}=\min\{l_{ij},l_{ji}\}\) in order to guarantee the conservation properties of the scheme; see [13, 14, 25] for both theory and implementation details.
```
\[\ell^{\rho,min}:=\max\{\ell\in[0,1]\,|\,\rho(\mathbf{u}+\ell\mathbf{P})\geq\varrho^{min}\}\]
\[\ell^{\rho,max}:=\max\{\ell\in[0,\ell^{\rho,min}]\,|\,\rho(\mathbf{u}+\ell\mathbf{P})\leq\varrho^{max}\}\]
\[\ell^{s}:=\max\{\ell\in[0,\ell^{\rho,max}]\,|\,\tilde{s}(\mathbf{u}+\ell\mathbf{P})\geq\tilde{s}^{min}\}\]
Return: \(\ell^{s}\)
Comments: input arguments are \(\mathbf{u},\mathbf{P}\in\mathbb{R}^{m}\) and \(\varrho^{min},\varrho^{max},\tilde{s}^{min}\in\mathbb{R}^{+}\).
```
**Algorithm 4** compute_line_search\((\mathbf{u},\mathbf{P},\varrho^{min},\varrho^{max},\tilde{s}^{min})\)
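As a complement to Algorithm 4, here is a minimal Python sketch of the line search. It assumes, as in the convex-limiting literature, that each constraint's feasible set along the segment \(\{\mathbf{u}+\ell\mathbf{P}\}\) is an interval containing \(\ell=0\) (the low-order state satisfies the relaxed bounds by construction), in which case a simple bisection on the joint admissibility indicator returns the same limiter as the three sequential maximizations.

```python
def compute_line_search(u, P, rho_min, rho_max, s_tilde_min,
                        rho, s_tilde, iters=40):
    """Largest ell in [0, 1] such that u + ell*P stays in the set B_i:
    rho_min <= rho <= rho_max and s_tilde >= s_tilde_min."""
    def admissible(ell):
        w = u + ell * P
        return (rho_min <= rho(w) <= rho_max) and (s_tilde(w) >= s_tilde_min)
    if admissible(1.0):
        return 1.0                          # no limiting needed
    lo, hi = 0.0, 1.0                       # admissible(0) holds by assumption
    for _ in range(iters):                  # bisection on the feasible interval
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if admissible(mid) else (lo, mid)
    return lo
```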
2310.17032
Quantum Long Short-Term Memory (QLSTM) vs Classical LSTM in Time Series Forecasting: A Comparative Study in Solar Power Forecasting
Accurate solar power forecasting is pivotal for the global transition towards sustainable energy systems. This study conducts a meticulous comparison between Quantum Long Short-Term Memory (QLSTM) and classical Long Short-Term Memory (LSTM) models for solar power production forecasting. The primary objective is to evaluate the potential advantages of QLSTMs, leveraging their exponential representational capabilities, in capturing the intricate spatiotemporal patterns inherent in renewable energy data. Through controlled experiments on real-world photovoltaic datasets, our findings reveal promising improvements offered by QLSTMs, including accelerated training convergence and substantially reduced test loss within the initial epoch compared to classical LSTMs. These empirical results demonstrate QLSTM's potential to swiftly assimilate complex time series relationships, enabled by quantum phenomena like superposition. However, realizing QLSTM's full capabilities necessitates further research into model validation across diverse conditions, systematic hyperparameter optimization, hardware noise resilience, and applications to correlated renewable forecasting problems. With continued progress, quantum machine learning can offer a paradigm shift in renewable energy time series prediction, potentially ushering in an era of unprecedented accuracy and reliability in solar power forecasting worldwide. This pioneering work provides initial evidence substantiating quantum advantages over classical LSTM models while acknowledging present limitations. Through rigorous benchmarking grounded in real-world data, our study illustrates a promising trajectory for quantum learning in renewable forecasting.
Saad Zafar Khan, Nazeefa Muzammil, Salman Ghafoor, Haibat Khan, Syed Mohammad Hasan Zaidi, Abdulah Jeza Aljohani, Imran Aziz
2023-10-25T22:19:05Z
http://arxiv.org/abs/2310.17032v3
Quantum Long Short-Term Memory (QLSTM) vs Classical LSTM in Time Series Forecasting: A Comparative Study in Solar Power Forecasting

###### Abstract

Accurately forecasting solar power generation is crucial in the global progression towards sustainable energy systems. In this study, we conduct a meticulous comparison between Quantum Long Short-Term Memory (QLSTM) and classical Long Short-Term Memory (LSTM) models for solar power production forecasting. Our controlled experiments reveal promising advantages of QLSTMs, including accelerated training convergence and substantially reduced test loss within the initial epoch compared to classical LSTMs. These empirical findings demonstrate QLSTM's potential to swiftly assimilate complex time series relationships, enabled by quantum phenomena like superposition. However, realizing QLSTM's full capabilities necessitates further research into model validation across diverse conditions, systematic hyperparameter optimization, hardware noise resilience, and applications to correlated renewable forecasting problems. With continued progress, quantum machine learning can offer a paradigm shift in renewable energy time series prediction. This pioneering work provides initial evidence substantiating quantum advantages over classical LSTM, while acknowledging present limitations. Through rigorous benchmarking grounded in real-world data, our study elucidates a promising trajectory for quantum learning in renewable forecasting. Additional research and development can further actualize this potential to achieve unprecedented accuracy and reliability in predicting solar power generation worldwide.

Keywords: Quantum Machine Learning, Forecasting, Quantum Neural Networks, Renewable energy systems

## 1 Introduction

The ongoing global shift towards sustainable energy solutions has elevated solar power to a pivotal role in the reshaping landscape of energy production and consumption. However, the integration of renewable sources comes with challenges. Wind and solar forecast errors can lead to significant deviations from planned electricity schedules, with one study finding a positive relationship between wind forecast errors and imbalance volumes [11]. Such inaccuracies impose costs and risks that policy incentives seek to mitigate through improved renewable forecasting. Merging the arenas of quantum information (QI) and machine learning (ML) brings forth a transformative approach to data analytics. As expounded upon by Dunjko et al. [9], the interplay between quantum computing and machine learning has recently witnessed significant breakthroughs. Quantum Machine Learning (QML) strives to harness techniques from both quantum computing and machine learning to address challenges inherent in each domain. This synthesis of technologies has implications in diverse fields, including renewable energy, potentially offering innovative solutions to longstanding challenges, as explored by Biamonte et al. [4]. It is pivotal to note that quantum machine learning extends beyond mere energy minimization tasks, offering a broader scope in problem-solving paradigms, as per Bauch et al. [2]. In this transition, precise solar power predictions are becoming vitally important, aiding stakeholders and grid operators in maintaining the delicate equilibrium between energy creation and utilization.
Unlike instantaneous power demand forecasting, solar production forecasting over longer time horizons does not require real-time predictions, providing an avenue where slower quantum model inference may be acceptable in return for substantially improved accuracy. Recent work by Rivera-Ruiz et al. [20] demonstrates that quantum machine learning architectures exhibit competitive performance in a range of forecasting problems, thereby affirming the potential of quantum computational models in predictive analytics. Thus, the integration of innovative machine learning technologies, specifically to enhance the accuracy of traditional forecasting methods, is of utmost significance. This study ventures into this critical sector, investigating the advantages Quantum Long Short-Term Memory (QLSTM) networks might offer over their classical LSTM counterparts in the domain of solar power forecasting, known for its intricate non-linear spatiotemporal patterns. In the evolving sphere of sustainable energy solutions, solar power emerges as a pivotal anchor in the rapidly transitioning energy landscape. As the large-scale penetration of renewable sources necessitates a shift towards proactive management of electrical grids, advanced prediction methodologies for renewable energy sources, particularly for photovoltaic plants known for their intermittency, are paramount. Succetti et al. [25] emphasized this transition, highlighting the critical role of deep neural networks, especially models based on Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs), in multivariate prediction of energy time series. Their work showcased the efficacy of these models in real-world scenarios characterized by high data variability, positioning them as efficient tools for grid management and resilience, even at the prosumer's level. This shift towards greener energy models underscores the significance of precise solar power predictions [18], vital for stakeholders and power network managers to maintain energy equilibrium. Parallel to this, the growing prominence of renewables, especially solar and wind, has spurred innovative forecasting methodologies to integrate these sources into the smart grid's fabric. Meenal et al. [16] spotlight an array of forecasting techniques, from statistical to AI-based. Notably, artificial intelligence, especially machine learning and deep learning, showcases a robust capability for weather predictions, heralding transformative implications for renewable energy assimilation, even amidst the challenges of atmospheric intricacies. Consequently, the adoption of groundbreaking machine learning methodologies, devised to enhance the acuity of customary forecasting approaches, takes a position of critical significance [22]. This investigation delves deeper into this pivotal area, systematically examining the potential merits of Quantum Long Short-Term Memory (QLSTM) networks compared to their classical LSTM contemporaries in the context of solar power forecasting, a domain characterized by its nuanced non-linear spatiotemporal patterns. LSTM neural networks have historically demonstrated significant efficacy in leveraging long-term temporal relationships for accurate forecasting, a notion corroborated by substantial research [7]. Lindemann et al. [14] also acknowledged their prowess in modeling and predicting nonlinear, time-variant system dynamics.
However, the advent of quantum computational elements offers a fertile ground for augmenting the capabilities of LSTM networks, potentially addressing their restricted ability to represent complex time series forecasting dynamics. Figure 1 shows a general overview of time series forecasting applications to energy grids and power management systems. In recent years, there has been a noticeable increase in interest in quantum machine learning, particularly in the development and refinement of quantum adaptations of recurrent neural networks (RNNs) for time series forecasting [15]. Marking a pivotal shift, Chen et al. [8] pioneered the introduction of QLSTM architectures, which adeptly amalgamate variational quantum circuits [5] with the conventional LSTM framework, thereby harnessing an exponentially larger Hilbert space for data representation and computation. This advancement, coupled with faster convergence times and heightened noise resilience as observed by Emmanoulopoulos et al. [10], potentially paves the way for more precise and reliable forecasting systems, especially pertinent to solar power production. However, the nascent field of quantum machine learning poses its own set of challenges and opportunities [6]. These challenges include the current hardware limitations and the potential trade-offs between quantum advantages and computational overhead, factors that this study meticulously considers in its comparative analysis. Despite these promising developments, the existing body of literature significantly lacks a comprehensive comparison between QLSTM and classical LSTM models grounded in empirical solar production data. This research aims to bridge this gap, offering a thorough comparative analysis in this domain. We seek to determine whether QLSTMs, with their exponential representational capabilities, can indeed set new standards of accuracy in renewable forecasting tasks worldwide, transcending theoretical and synthetic benchmarks. Building upon the foundational work of Chen et al. [8] that elucidated the potential efficacy of QLSTM in sequence modeling, a growing corpus of studies has ventured to investigate the potential of hybrid quantum-classical architectures in general time series forecasting [10, 7]. Lindsay et al. [15] emphasize the ability of quantum models to either match or surpass the accuracy levels achieved by classical LSTMs. Moreover, an expanding community of researchers is concentrating their efforts on incorporating quantum neural networks into solar power forecasting [1], with Ceschini et al. [7] signaling substantial improvements in accuracy compared to traditional neural networks. Nevertheless, the substantial advancements achieved by classical deep-learning techniques in this domain cannot be ignored. Conventional LSTM and hybrid deep networks have consistently established reliable benchmarks, demonstrating their efficacy in solar power prediction tasks [26, 24]. This trajectory is highlighted by the latest developments in hybrid structures, such as convolutional LSTM, which amalgamate various strengths to attain unprecedented results in forecasting challenges [19]. Given the rapid advancements in the quantum computing domain, this research aims to provide a nuanced understanding of the capabilities and potential advantages of QLSTM models in solar power predictions.
The analysis of quantum data, as posited by Cerezo et al. [6], offers novel avenues for accelerating data-driven decision-making in renewable energy sectors, a frontier this research aims to explore in depth. We hypothesize that QLSTMs, by leveraging their vastly superior representational capacities, might pave new pathways in understanding nuanced spatiotemporal relationships inherent in solar power data, potentially ushering in an era of heightened accuracy and reliability in renewable energy forecasts. To substantiate these anticipated advantages, this study proposes to conduct extensive controlled experiments, comparing the performance of QLSTM and classical LSTM models utilizing established photovoltaic plant datasets. This initiative, taking place amidst swift developments in quantum computing, extends beyond academic interest, holding considerable practical implications that could redefine global strategies related to energy storage, grid maintenance, and integration costs.

Figure 1: General overview of time series forecasting application to energy grids and power management systems

Navigating this promising yet nascent research domain necessitates addressing pivotal questions concerning the feasibility of quantum models in limited data scenarios and the implications of existing hardware limitations on model complexity. A comprehensive investigation into the design, training, and deployment of quantum models, considering the current hardware constraints, becomes essential. This study commits to undertaking these pressing investigations, cultivating a deeper understanding of the quantum machine learning domain and its prospective role in revolutionizing renewable energy forecasts.

### Contributions

This pioneering research bridges the gap between quantum machine learning advancements and their real-world applications in renewable energy forecasting. The primary contributions of this study include:
* **Empirical Validation**: Providing the first empirical evidence that supports the utility of QLSTMs in solar power forecasting, transitioning from synthetic benchmarks to real-world renewable time series data.
* Demonstrating marked improvements of QLSTM in accuracy and convergence when benchmarked against classical LSTM models.
* **Representation Advantages**: Confirming, through practical data from solar farms, the hypothesized representational strengths of quantum architectures in mapping the intricate spatiotemporal relationships typical of renewable forecasting tasks.
* **Design Optimization**: Tailoring QLSTM architectures in light of current quantum hardware challenges, offering actionable insights into real-world implementation, considering limitations in optimization, noise, and computational overhead.
* **Performance Analysis**: Conducting thorough experiments and ablation studies to identify the factors attributing to QLSTM's superior performance over LSTM models. This insight aids in guiding future modifications in quantum neural network designs.
* **Forecasting Implications**: Establishing QLSTMs as a credible alternative to outdo traditional forecasting methods.
* Through benchmarking against conventional techniques using actual photovoltaic data, this research accentuates the potential of quantum machine learning in fortifying accuracy and reliability essential for sustainable energy infrastructure planning and execution.
* **Future Potential**: While challenges persist, this investigation paints a picture of a future where QLSTMs revolutionize renewable forecasting.
The presented evidence supports the premise of QLSTMs offering unmatched accuracy and adaptability in capturing the nuances of renewable energy generation.
* **Interdisciplinary Milestone**: This study stands as a notable juncture of quantum computing and machine learning. By detailing the capabilities of QLSTMs for genuine energy forecasting, it lights the path for a broader embrace of quantum strategies in this vital field.

## 2 Methodology

Figure 2 describes the methodology of our research work.

Figure 2: Methodology Flowchart

### Data Description

This study employs two comprehensive datasets tailored for an exhaustive comparative analysis between Quantum Long Short-Term Memory (QLSTM) and Classical LSTM models. The solar dataset of real operational data empowers the modeling process by simulating genuine conditions prevalent in solar farms. The simulated dataset, a merger of high-resolution power generation data and corresponding weather conditions, presents a granular view of power fluctuations and is emblematic of the variations characteristic of real-world solar power environments. The amalgamation of both actual and high-fidelity simulated datasets presents a broad spectrum of temporal granularity and geographic variability. This fusion is meticulously curated to offer a versatile platform for model development, ensuring accurate, generalizable, and comprehensive solar forecasting paradigms.

#### 2.1.1 Dataset Justification

The specific choice of a real-world operational solar plant dataset and a high-fidelity simulated dataset spanning an entire year provides a robust platform for comparative assessment between QLSTM and classical LSTM models. The real-world data enables evaluation on genuine solar farm conditions with intrinsic noise, while the simulated data allows examination across diverse weather scenarios over an extended duration. Together, these datasets present the variance, noise, and long-term temporal patterns crucial for rigorously examining the capabilities of QLSTM against classical LSTM in solar forecasting tasks.

### Pre-processing

In the realm of time series forecasting, particularly when applying Quantum Long Short-Term Memory (QLSTM) and Classical Long Short-Term Memory (LSTM) models to analyze solar power production, meticulous data preprocessing is of paramount importance. These procedures are not only pivotal for safeguarding data integrity but are also instrumental in elevating the dependability and precision of forecasting outcomes. In our study, we have meticulously executed a comprehensive data preprocessing pipeline, encompassing data originating from a 200 MW solar photovoltaic (PV) facility located near Daggett, California, USA, spanning the entire year of 2006.

#### 2.2.1 Initial Data Loading and Transformation

* **Solar Power Data**: The dataset, encompassing AC power output readings with a 5-minute resolution, was ingested into a Pandas DataFrame. Subsequently, we executed a seamless transition of the timestamp column into a DatetimeIndex, a step that greatly facilitated time-based operations.
* **Weather Data**: This dataset featured a range of readings, including air temperature, humidity, and solar irradiance, collected at a 30-minute resolution throughout 2006. We diligently assimilated this data, harmonizing its date-time columns to establish a cohesive Datetime index, mirroring that of the solar data. It is noteworthy that both datasets underwent a rigorous integrity check, effectively confirming the absence of any missing values.
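A minimal pandas sketch of this loading and harmonization step follows; file names and column labels are illustrative assumptions, not those of the original datasets, and the final interpolation line anticipates the granularity-enhancement step described in the next subsection.

```python
import pandas as pd

# hypothetical file names; the originals are not specified in the paper
power = pd.read_csv("solar_power_5min.csv", parse_dates=["timestamp"],
                    index_col="timestamp")        # 5-minute AC power readings
weather = pd.read_csv("weather_30min.csv", parse_dates=["timestamp"],
                      index_col="timestamp")      # 30-minute weather readings

# integrity checks: monotone DatetimeIndex and no missing values
assert power.index.is_monotonic_increasing
assert not power.isna().any().any() and not weather.isna().any().any()

# upsample weather to the 5-minute grid of the power data (Section 2.2.2)
weather_5min = weather.resample("5min").interpolate(method="linear")
merged = power.join(weather_5min, how="inner")
```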
#### 2.2.2 Enhancement of Data Granularity

To enhance the model's sensitivity to potential power fluctuations, we proceeded to increase the granularity of the weather dataset. Leveraging a linear interpolation method, the 30-minute intervals were smoothly transitioned into 5-minute intervals, thus establishing a synchronized time series platform for model training and analysis.

\begin{table}
\begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline
**Attribute** & **Real-World Solar Plant Data** & **Simulated Solar Power Data** \\ \hline
**Source** & Kaggle [12] & NREL's Solar Power Data for Integration Studies [13] \\
**Geographical Coordinates** & Operational data from two PV plants in India & 33.75\({}^{\circ}\) N, 116.65\({}^{\circ}\) W (near Daggett, California) \\
**Capacity** & Max DC Power: \(\sim\)298.94 kW & 200 MW Utility Scale PV \\
**Duration** & May 15, 2020 - June 17, 2020 (34 days) & Full year of 2006 \\
**Resolution** & 15-minute intervals & Power: 5-minute intervals [13] \\
**Attributes** & Power Output Variables: DC power, AC power, Daily Yield, Total Yield & Power Output Variables: Power (MW) \\
 & Weather Variables: Ambient temperature, module temperature, irradiation & Weather Variables: Temperature, DAH, cloud type, relative humidity, dew point, pressure, windspeed, solar angle \\
 & Metadata: Timestamp, plant ID, sensor/inverter ID & Metadata: Datetime \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Comparison of Real-World and Simulated Solar Power Data

#### 2.2.3 Feature Engineering and Selection

* **Temporal Features**: Acknowledging the significance of temporal attributes in predicting solar power generation, we embarked on a journey of feature engineering that saw the inclusion of numerous time-related variables, such as hour, day, month, and day of the week, among others.
* **Lagged Features**: In an effort to augment the model's predictive prowess, we introduced lagged features that encapsulated preceding weather and power data points, thus offering an enriched contextual background for forecasting.
* **Data Normalization**: Prior to model training, we executed a stringent normalization process, effectively ensuring a uniform data scale, thereby facilitating the seamless training of LSTM models.

#### 2.2.4 Integration of Datasets

* **Data Storage**: In order to ensure a smooth model training process free from data leakage concerns, we stored the consolidated dataset in CSV formats. This step, although seemingly mundane, is of utmost importance in maintaining data integrity.
* **Data Partitioning and Standardization**: Adhering to established machine learning norms, we divided our dataset into an 80-20 ratio. This strategy provides a substantial training dataset while retaining an ample portion for model validation. Subsequently, we standardized all attributes within the range of 0 to 1 using min-max scaling, thereby enhancing model convergence rates.
* **Temporal Windowing**: To capture the underlying temporal dynamics in our data, we adopted a rolling window approach. Preliminary experiments revealed the efficacy of using the preceding 8 time steps as predictors, with the subsequent time step serving as the target variable. This structured data was adeptly transformed into PyTorch tensors, a crucial step to facilitate batch training. Notably, to preserve data consistency, the training data underwent shuffling, whereas the test data was left in chronological order.
* **Batch Data Configuration**: To streamline our model training process, we encapsulated the windowed training and test tensors into Dataset objects. Using DataLoaders enabled us to process data iteratively in batches, while preserving its temporal architecture. It is worth noting that we chose a batch size of 32, keeping computational constraints in mind.

This rigorous pre-processing regimen plays a pivotal role in enhancing the efficacy of our time series modeling. It guarantees a direct and unbiased comparison between QLSTM and classical LSTM models in the context of solar power forecasting.

### Simulation Framework

Our exploration of Quantum Long Short-Term Memory (QLSTM) models was made possible using the PennyLane quantum machine learning framework [3]. PennyLane, at its core, blends quantum and traditional computing to help build and refine models, benefiting from its ability to automatically adjust model parameters. A standout feature of PennyLane is its capacity to smoothly combine quantum elements, based on variational circuits, with regular neural network parts. This allows the creation of advanced structures like QLSTMs. These models mix traditional repeatable patterns with quantum behaviors such as superposition and entanglement. For our QLSTM model, PennyLane's qml.QNode feature was crucial. It helped set up the quantum node of the model. These quantum nodes, designed with time series data in mind, use specific rotation and entanglement actions, namely RY, RZ, and CNOT gates. Bridging the gap between the quantum and regular sections, PennyLane's qml.qnn.TorchLayer connects the quantum elements with the regular PyTorch framework [17]. This ensures a smooth flow of adjustments during the optimization phase. For faster results during quantum simulations, we mainly used the DefaultQubit device from PennyLane, which runs on the CPU. To further boost the speed, we tried PennyLane's lightning.gpu simulator to run natively on CUDA-enabled GPUs using the NVIDIA cuQuantum SDK. This tool, backed by NVIDIA technology, moves the quantum simulation to high-speed GPUs. During model development, we found the lightning.gpu device provided up to a 5 times speedup for batched inference of quantum circuits on our test system with an NVIDIA Tesla V100 GPU compared to DefaultQubit. However, the training time reduction was not as significant. As QLSTM models grow more complex with more quantum bits and detailed circuits, faster simulations using GPUs become more crucial. Luckily, PennyLane offers multiple tools, making it easier to switch between different simulation methods for the best results. In short, with the help of both DefaultQubit and lightning.gpu tools, we were able to design, refine, and test our QLSTM model and compare it with regular LSTM models in the PyTorch setting. To conclude, PennyLane, with its integration with PyTorch and NVIDIA technology, was essential for our study. It allowed us to effortlessly combine quantum and traditional modeling while ensuring relatively fast simulations as compared to the classical CPU-based device. This provided us with the perfect platform to compare the potentials of quantum and traditional LSTM models.

### Architecture

The LSTM and QLSTM architectures are detailed in Appendix A. This research encompasses the design and deployment of both LSTM and QLSTM architectures. These architectures were judiciously crafted to allow a fair comparison between classical and quantum techniques, with a focal point on solar power forecasting.
The LSTM model adopts a stacked configuration, constituting two recurrent hidden layers. Each layer houses classical LSTM cells, encapsulating the conventional input (\(i_{\text{LSTM}}\)), output (\(o_{\text{LSTM}}\)), forget (\(f_{\text{LSTM}}\)) gates, and a memory cell (\(c_{\text{LSTM}}\)). This multi-layered design empowers the model to discern and remember long-range temporal dependencies in the time series data, an attribute indispensable for precise renewable energy forecasting. To augment generalization and curb overfitting, dropout layers with a rate of 0.2 are judiciously placed between the LSTM layers. In contrast, the QLSTM model replaces the classical LSTM cells with variational quantum circuits (VQCs), an implementation adapted and enhanced from the qlstm repository for part-of-speech tagging [23]. This substitution aims to harness the computational advantages unique to quantum mechanisms. Echoing the LSTM's design, the QLSTM stacks two layers of these quantum circuits. The VQCs, in their capacity as quantum feature extractors, exploit the deep representational capabilities of quantum states, encoding intricate time series dynamics. These parametric circuits alternate between rotation and entanglement gates, ensuring a concise representation of temporal patterns within the exponentially expansive Hilbert space. The qubit quantity and circuit depth were adapted in alignment with the specific characteristics intrinsic to the solar forecasting data. Notably, outside of its quantum encoding, the QLSTM's broader workflow aligns seamlessly with the LSTM's, facilitating a direct comparison. A dropout rate of 0.2 is also infused between the QLSTM layers, maintaining consistency.

**Core QLSTM Cell Components:**
* **Quantum Gates:** This involves the Hadamard gate (qml.Hadamard) responsible for quantum superposition, rotation gates like RX, RY, and RZ for feature encoding, and CNOT gates (qml.CNOT) ensuring quantum entanglement.
* **Quantum Variational Circuit (VQC):** Data is encoded into the quantum circuit using rotation operations, followed by alternating entanglement and variational rotation layers detailed in Appendix A.
* **Quantum Nodes:** Four distinct quantum nodes represent the four LSTM gates: forget, input, update, and output.
* **Quantum Feature Extractor:** A classical linear layer converts data to match qubit dimensions before being processed by the QLSTM cells.
* **Quantum-to-Classical Transformation:** Outputs from QLSTM cells are transformed to classical data using a linear layer, ensuring a smooth transition between quantum and classical realms.

Both models converge at a linear output layer, producing the ultimate forecast. Their optimization leverages the ADAM algorithm, zeroing in on minimizing the mean squared error (MSE). With structural symmetry between LSTM and QLSTM, while differing in their core computational elements, this architecture sets the stage for evaluating enhancements attributed solely to quantum encoding. The models are intricately molded to resonate with the spatiotemporal subtleties inherent to solar forecasting data, which encompasses recurring weather patterns and energy variations. By employing the 2006 NREL dataset, a comprehensive archive detailing diverse weather conditions over a year, this research is positioned to deliver a rigorous evaluation. This meticulous comparison seeks to shine a light on the quantum methodologies' adeptness in encapsulating real-world solar phenomena.
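To make the quantum side concrete, here is a minimal PennyLane/PyTorch sketch of one gate's variational circuit in the style described above: Hadamard plus RY/RZ encoding, alternating CNOT entanglement and trainable rotations, and one TorchLayer per gate. The exact circuit shape (ring entanglement, two variational angles per qubit per layer) and the qubit count are illustrative assumptions, not the paper's exact configuration; a classical linear layer is assumed to have already mapped the input features to n_qubits values.

```python
import pennylane as qml
import torch

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

def circuit(inputs, weights):
    # data encoding: Hadamard for superposition, then RY/RZ feature rotations
    for w in range(n_qubits):
        qml.Hadamard(wires=w)
    qml.AngleEmbedding(inputs, wires=range(n_qubits), rotation="Y")
    qml.AngleEmbedding(inputs, wires=range(n_qubits), rotation="Z")
    # variational block: alternating CNOT entanglement and trainable rotations
    for l in range(n_layers):
        for w in range(n_qubits):
            qml.CNOT(wires=[w, (w + 1) % n_qubits])
        for w in range(n_qubits):
            qml.RY(weights[l, w, 0], wires=w)
            qml.RZ(weights[l, w, 1], wires=w)
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (n_layers, n_qubits, 2)}
# one quantum node per LSTM gate: forget, input, update (candidate), output
gates = torch.nn.ModuleDict({
    name: qml.qnn.TorchLayer(qml.QNode(circuit, dev, interface="torch"),
                             weight_shapes)
    for name in ("forget", "input", "update", "output")
})
```

In a full cell, the four outputs would pass through sigmoid/tanh activations exactly as in the classical gate equations of Appendix A, and the `lightning.gpu` device can be swapped in for `default.qubit` on CUDA hardware.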
### Model Training and Hyperparameters

In order to foster a robust comparative analysis between LSTM and QLSTM models, a rigorous hyperparameter optimization phase was implemented, specifically tailored for solar forecasting applications. Initially, a grid search method was employed to delineate appropriate parameter ranges, encapsulating pivotal variables such as window size, batch size, learning rate, epochs, and model-specific parameters including quantum circuit shape. This preliminary exploration paved the way for the identification of prospective parameter values. Following this, a more refined tuning process was undertaken utilizing the Optuna framework, thus automating and enhancing the hyperparameter optimization procedure. The objective function facilitated the evaluation of parameter combinations by training models on a validation dataset and quantifying their performance through the metric of mean squared error loss. The LSTM model witnessed a comprehensive parameter exploration, incorporating window sizes ranging from 5 to 50 timesteps, batch sizes varying from 16 to 128, logarithmically scaled learning rates between 0.0001 and 0.1, and epochs extending from 10 to 100. Over 180 trials were conducted, with Optuna's Tree-Parzen Estimator sampler adaptively selecting new configurations based on previous results, ultimately identifying optimal hyperparameters including a window size of 8, a batch size of 32, a learning rate of 0.001, and 20 epochs. Interestingly, these findings corroborated our initial manual tuning experiments, affirming the efficacy of our automated optimization strategy. A similar extent of optimization was conducted for the QLSTM, encompassing over 150 trials that scrutinized various parameters including the number of qubits (ranging from 2 to 8), circuit layers (varying from 1 to 4), learning rates (between 0.0001 and 0.1), batch sizes (from 16 to 128), and epochs (between 10 and 100). Notably, the optimal configuration closely mirrored the top-performing LSTM hyperparameters, fostering a fair and balanced evaluation process. This meticulous optimization procedure methodically investigated a broad parameter space, empirically pinpointing optimal model configurations. By maintaining a consistent tuning approach for both LSTM and QLSTM models, the integrity of our comparison was upheld, critically evaluating their representational capabilities. The recurrent convergence noted in our experiments stands as a potent validation of our methodology, affirming our model design choices, particularly within the domain of real-world solar forecasting applications.

## 3 Results

### Statistical Analysis

Train Loss: The statistical analysis of the train loss is depicted in Table 3. Despite the p-value slightly exceeding the conventional 0.05 threshold for statistical significance, it hints at a potential trend where the QLSTM model is gradually outperforming its classical counterpart in terms of train loss. The moderate effect size (Cohen's d = 0.626952) further indicates a noticeable, albeit not substantial, divergence between the two models, suggesting that with further optimizations, the QLSTM could potentially demonstrate significantly superior performance.

Test Loss: The statistical analysis of the test loss is depicted in Table 4. The test loss results unequivocally point to a significant superiority of the QLSTM model.
The highly significant p-value (0.000002) accentuates a substantial difference in the performance of the two models, with the QLSTM model achieving markedly lower test loss values. Furthermore, the pronounced effect size (Cohen's d = -1.760950) substantiates this claim, portraying the QLSTM as a promising avenue for advancing the state of the art in time series forecasting, specifically in the context of solar power production.

\begin{table}
\begin{tabular}{c l l} \hline \hline
**Layer \#** & **LSTM Architecture** & **QLSTM Architecture** \\ \hline
1 & Input layer & Input layer \\
2 & LSTM layer (with input, output, forget gates and memory cell) & QLSTM layer (with VQCs featuring rotation and entanglement gates) \\
3 & Dropout (0.2 rate) & Dropout (0.2 rate) \\
4 & LSTM layer (with input, output, forget gates and memory cell) & QLSTM layer (with VQCs featuring rotation and entanglement gates) \\
5 & Dropout (0.2 rate) & Dropout (0.2 rate) \\
6 & Linear output layer & Linear output layer \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Comparison of LSTM and QLSTM Architectures

### Performance Analysis

Predictive Accuracy: The QLSTM model exhibited superior predictive accuracy compared to the classical LSTM model, as illustrated in Table 5. The lower values of MAE, MSE, and RMSE for the QLSTM model indicate a higher predictive accuracy, suggesting a promising avenue in the advancement of time series forecasting in the domain of solar power production.

Rate of Convergence: The QLSTM model showcased a remarkably rapid convergence rate, reaching its nadir of test loss as early as the inaugural epoch, thereby exemplifying efficiency and computational frugality. This swift convergence is indicative of the model's adeptness at quickly adapting to the underlying patterns in the data, a trait that stands in stark contrast to the classical LSTM model, which required seven epochs to attain a similar state of optimization. This attribute can be particularly advantageous in real-time forecasting applications where timely insights are pivotal.

Stability of Learning: The analysis of learning stability is presented in Table 6. The QLSTM model demonstrated a heightened stability in learning, characterized by lower variance in train and test loss metrics across epochs compared to the classical LSTM model, indicating a more stable learning trajectory. The inclusion of box plots could further elucidate the distribution of loss values, providing visual evidence of the reduced spread and outlier values in the QLSTM model, thus underlining its robustness and stability.

Generalization Performance: The analysis of generalization performance is illustrated in Table 7. The QLSTM model demonstrated superiority in terms of generalization, substantiated by lower mean and median test loss values over all epochs compared to the classical LSTM model. This performance, coupled with more accurate and reliable predictions, holds the potential to revolutionize solar power production forecasting.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Metric** & **QLSTM** & **Classical LSTM** \\ \hline
Train Loss SD & 0.0028 & 0.0044 \\
Test Loss SD & 0.0012 & 0.0026 \\ \hline \hline
\end{tabular}
\end{table}
Table 6: Stability of Learning Analysis

Figure 4: Residual Boxplot

Figure 5: Comparative Analysis over train and test

\begin{table}
\begin{tabular}{l l l} \hline \hline
**Metric** & **QLSTM** & **Classical LSTM** \\ \hline
Mean Test Loss & 0.0579 & 0.0614 \\
Median Test Loss & 0.0580 & 0.0612 \\ \hline \hline
\end{tabular}
\end{table}
Table 7: Generalization Performance Analysis

## 4 Discussion and Limitations

Upon comparing classical LSTM with QLSTM, our findings strongly indicate a potential advantage of quantum machine learning in enhancing accuracy and convergence rates for solar power forecasting. However, further investigations are imperative to substantiate these preliminary conclusions. An intriguing characteristic of QLSTM, although currently viewed as a limitation, lies in its extended runtime. While certain forecasting scenarios might accommodate this delay, it is undeniable that improving efficiency is pivotal for enabling real-time applications and cost reductions. Specifically, the longer inference times of QLSTMs may present challenges in operational contexts requiring instantaneous solar power forecasting to optimize storage or grid distribution strategies. The evolving trajectory of quantum computing instills optimism, with the aspiration of rivaling the inferential speed of classical LSTMs. Our study carries inherent limitations:
1. Dataset Scope: We constrained our study to two solar sites over the span of one month and a synthetic dataset spanning a year. Further testing on broader datasets encompassing diverse conditions is warranted.
2. Model Design: While we optimized specific hyperparameters and explored architectural variations, a more comprehensive exploration is essential. Factors such as input lengths, quantum circuit designs, and classical layer structures warrant deeper investigation.
3. Simulation Limitations: Our simulations did not account for quantum noise. Assessing QLSTM on actual quantum devices would offer a more precise evaluation of real-world performance.

Beyond the realm of solar power, the capabilities of QLSTMs suggest potential applicability in other renewable energy sectors, such as wind or hydro power forecasting. QLSTMs also hold promise in industries dealing with intricate time-series data, including finance and equipment maintenance, where they can provide substantial benefits. In summary, this preliminary study presents promising enhancements in accuracy and convergence rates achieved by QLSTMs in the context of solar forecasting. Ongoing advancements in quantum software, algorithms, and hardware are poised to unlock this potential fully. Addressing the outlined limitations, including broader validation, optimization, and robustness assessment, has the potential to pave the way for high-impact publications that offer definitive evidence of quantum advantages in real-world renewable and industrial forecasting. Simultaneous research endeavors, focusing on hybrid quantum-classical paradigms and quantum-inspired architectures, hold the promise of bridging efficiency gaps.

## 5 Conclusion

In this significant study, our objective was to determine whether Quantum Long Short-Term Memory (QLSTM) models present substantial advantages over their classical LSTM counterparts in the field of solar power production forecasting.
Our central hypothesis was rooted in the belief that QLSTMs, with their vastly expanded representational capabilities, might unveil new insights into the intricate spatiotemporal relationships inherent in solar power data. Our experiments brought to light several key advantages of QLSTM, marked by notably faster training convergence and early-stage improvements in accuracy when compared to classical LSTMs. Notably, after just a single epoch, QLSTM achieved superior test accuracy, underscoring its remarkable ability to swiftly capture complex time-series relationships through quantum phenomena such as superposition. However, it is important to acknowledge the extended runtimes and the imperative for broader dataset evaluations, both of which currently restrict real-world applications. Evaluating QLSTM performance in the presence of quantum noise remains a critical step to gain a comprehensive understanding of its potential. Nonetheless, this study represents a significant step toward the convergence of quantum and classical paradigms, potentially leading to unprecedented advancements in forecasting accuracy. Our central hypothesis regarding the enhanced representational capabilities of QLSTM stands provisionally confirmed, aligning seamlessly with the rapid progress in quantum computing. The findings from this research could have meaningful real-world impacts. More accurate solar power forecasting enabled by QLSTMs can help grid operators better predict renewable energy supply, reducing the need for expensive supplemental generation. Utilities could leverage heightened forecasting accuracy to optimize energy storage and transmission infrastructure, facilitating the integration of solar into the grid. Overall, translating these quantum machine learning advancements to practice could accelerate the sustainable energy transition through improved renewable energy forecasting capabilities. Looking forward, ongoing research should prioritize extending evaluations across diverse energy domains, refining quantum circuits and architectural designs, and pioneering the development of hybrid quantum-classical models capable of overcoming present efficiency limitations. With continued progress, QLSTM-driven forecasting could provide the precision and reliability necessary for integrating solar power and other renewables onto energy grids worldwide. To sum up, this body of work sheds light on a promising trajectory where quantum machine learning holds the potential to revolutionize the field of renewable and industrial time series forecasting, ushering in a new era of precision insights.

## 6 Future Work

Quantum machine learning presents a realm of potential advancements:
* Quantum Hardware Testing: Direct evaluations on emerging quantum devices will offer insights into real-world challenges and performance.
* Optimization: Continuous refinements in hyperparameters, quantum circuits, and architectures could enhance forecast accuracy.
* Hybrid Approaches: Marrying quantum and classical techniques may harness the strengths of both worlds.
* Broader Applications: Expanding to other renewable energy sources and industrial time series offers a broader testing ground for QLSTMs.
* Quantum-Inspired Models: Focusing on these models may provide interim solutions addressing efficiency issues.

Additionally, formulating specific research questions around noise resilience and scalability of QLSTMs could guide subsequent studies and advancements.
Fostering collaboration between quantum physicists, computer scientists, and machine learning experts will be key in translating theoretical potential into real-world impact. Practical renewable energy implementations, such as wind and solar forecasting for grid operators, represent promising near-term applications of QLSTM advancements. With ongoing developments in quantum computing, quantum machine learning could reshape forecasting in the coming decade.

## Declaration

### Conflict of interest/Competing interests

The authors declare no conflict of interest or competing interests.

#### Availability of data and materials

Data is available upon request.

#### Authors' contribution

All authors contributed equally to this work.

## Appendix A LSTM and QLSTM Details

This appendix provides details on the LSTM and QLSTM model architectures used in the study.

### LSTM

The LSTM architecture used in this study stacks multiple LSTM cells to model long-term dependencies. The information flow in an LSTM cell is described by the equations:
\[f_{t} =\sigma(W_{f}\cdot v_{t}+b_{f}),\]
\[i_{t} =\sigma(W_{i}\cdot v_{t}+b_{i}),\]
\[C_{t} =\tanh(W_{C}\cdot v_{t}+b_{C}),\]
\[o_{t} =\sigma(W_{o}\cdot v_{t}+b_{o}),\]
\[c_{t} =f_{t}\cdot c_{t-1}+i_{t}\cdot C_{t},\]
\[h_{t} =o_{t}\cdot\tanh(c_{t}),\]
where \(\sigma\) denotes the sigmoid activation function, \(W\) and \(b\) are learnable parameters, \(f\) is the forget gate, \(i\) is the input gate, \(o\) is the output gate, \(C\) is the candidate cell state, \(c\) is the cell state, and \(h\) is the hidden state. The LSTM was chosen due to its proven ability to model sequence data across various domains. The LSTM cell architecture is illustrated as follows:

Figure 6: LSTM Circuit [8]

### QLSTM

The QLSTM replaces LSTM cells with 6 variational quantum circuits (VQCs) to form a quantum LSTM cell. VQCs leverage a small number of qubits and gates to represent complex functions. This quantum layer showed quicker convergence and more stable loss than the classical LSTM [8]. The information flow in a quantum LSTM cell is described by the equations:
\[f_{t} =\sigma(\text{VQC}_{1}(v_{t})),\]
\[i_{t} =\sigma(\text{VQC}_{2}(v_{t})),\]
\[C_{t} =\tanh(\text{VQC}_{3}(v_{t})),\]
\[o_{t} =\sigma(\text{VQC}_{4}(v_{t})),\]
\[c_{t} =f_{t}\cdot c_{t-1}+i_{t}\cdot C_{t},\]
\[h_{t} =\text{VQC}_{5}(o_{t}\cdot\tanh(c_{t})).\]
The \(\text{VQC}_{x}\) represent different quantum circuits used in the hybrid model. The QLSTM cell architecture is depicted as follows:

Figure 7: Generic VQC architecture for QLSTM. It consists of three layers: the data encoding layer (with the H, Ry, and Rz gates), the variational layer (dashed box), and the quantum measurement layer. [8]

Figure 8: QLSTM Circuit [8]

## Appendix B Evaluation Methodology

This appendix provides specifics on the quantitative metrics and procedures used to evaluate the LSTM and QLSTM model performance on the solar forecasting task.

### Evaluation Metrics

The following quantitative metrics were computed to assess model accuracy:
The formula is given by: Figure 8: QLSTM Circuit [8] Figure 7: Generic VQC architecture for QLSTM. It consists of three layers: the data encoding layer (with the H, Ry, and Rz gates), the variational layer (dashed box), and the quantum measurement layer. [8] \[T=\frac{\bar{X}_{1}-\bar{X}_{2}}{s_{p}\sqrt{\frac{2}{n}}}\] Where \(\bar{X}_{1}\) and \(\bar{X}_{2}\) are the sample means, \(s_{p}\) is the pooled standard deviation, and \(n\) is the sample size for each group. * **P-value**: The p-value is a fundamental concept in hypothesis testing. It represents the probability that the observed data (or something more extreme) would occur if the null hypothesis were true. A smaller p-value typically indicates stronger evidence against the null hypothesis. Conventionally, a p-value below 0.05 is considered statistically significant. * **Effect Size (Cohen's \(d\))**: While the T-statistic tells us if there is a statistically significant difference between groups, effect size quantifies the size of this difference. One commonly used measure is Cohen's \(d\), calculated as: \[d=\frac{\bar{X}_{1}-\bar{X}_{2}}{s_{p}}\] Where \(s_{p}\) is the pooled standard deviation. Cohen's \(d\) values can be interpreted as small (0.2), medium (0.5), and large (0.8) effects. These metrics were selected as standard measures of predictive accuracy for time series forecasting problems. MAPE was included due to its interpretability for solar power production. RMSE and \(R^{2}\) were used as primary metrics for model comparison. ### Evaluation Procedure Metrics were computed on scaled predictions compared to scaled actual values for both the training and test sets. This enabled directly evaluating model generalization. Statistical significance testing using a paired t-test on RMSE values was also conducted to assess whether differences in LSTM and QLSTM errors were statistically significant. Model loss curves, prediction plots, and other visualizations were generated to provide qualitative evaluation. By leveraging both quantitative metrics and qualitative assessments on scaled holdout data, this methodology enabled thoroughly evaluating how effectively the models learned to generalize. The comparative analysis focused on assessing whether the QLSTM architecture demonstrated significantly improved accuracy over classical LSTM for real-world solar forecasting.
2301.01111
Fluctuating landscapes and heavy tails in animal behavior
Animal behavior is shaped by a myriad of mechanisms acting on a wide range of scales, which hampers quantitative reasoning and the identification of general principles. Here, we combine data analysis and theory to investigate the relationship between behavioral plasticity and heavy-tailed statistics often observed in animal behavior. Specifically, we first leverage high-resolution recordings of C. elegans locomotion to show that stochastic transitions among long-lived behaviors exhibit heavy-tailed first passage time distributions and correlation functions. Such heavy tails can be explained by slow adaptation of behavior over time. This particular result motivates our second step of introducing a general model where we separate fast dynamics on a quasi-stationary multi-well potential, from non-ergodic, slowly varying modes. We then show that heavy tails generically emerge in such a model, and we provide a theoretical derivation of the resulting functional form, which can become a power law with exponents that depend on the strength of the fluctuations. Finally, we provide direct support for the generality of our findings by testing them in a C. elegans mutant where adaptation is suppressed and heavy tails thus disappear, and recordings of larval zebrafish swimming behavior where heavy tails are again prevalent.
Antonio Carlos Costa, Gautam Sridhar, Claire Wyart, Massimo Vergassola
2023-01-03T14:11:31Z
http://arxiv.org/abs/2301.01111v4
# Emergent complexity in slowly driven stochastic processes

###### Abstract

We consider the distribution of first passage time events in the presence of non-ergodic modes that drive otherwise ergodic dynamics on a potential landscape. We find that in the limit of slow and large enough fluctuations the distribution of first passage time events, \(f(t)\), exhibits heavy tails dominated by a power law \(f(t)\sim t^{-2}\), with corrections that depend on the strength and the nature of the fluctuations. We support our theoretical findings through direct numerical simulations in illustrative examples.

_Introduction.--_Complex dynamics are ubiquitous in the natural world. Despite their intrinsic irregularity and unpredictability, they can nonetheless exhibit coherent and universal emergent properties. Of particular importance in the study of complex systems is the understanding of the time it takes for rare events to occur [1, 2, 3]. Notable examples include natural disasters [4] or the spreading of a virus [5]. In fact, first passage times are central to many fields within physics and beyond, with important examples stemming from chemistry, biology and finance (see, e.g., [6, 7, 8, 9, 10, 11, 12] and references therein). Biology in particular is rife with examples where time is of the essence [13], such as fertilization [14], intracellular events [15, 16, 17, 18, 19, 20, 21], search processes [22, 23, 24], neural activity [25, 26] or population dynamics [27].

We here consider the estimation of first passage time distributions (FPTDs) from finite-time observations in an experimental context. In particular, we are interested in systems with intrinsic time scales comparable to the observation time, for which weak ergodicity breaking becomes evident [28, 29]. Such dynamics can be found for instance in glassy systems [30, 31, 32, 33], where the time scales of equilibration are so long that one can decompose the dynamics into a stationary component and an "aging" component that breaks time-translation invariance.

Our main inspiration comes from the less traditional branch of the physics of animal behavior [34, 35]. Remarkably, recent advances in machine vision (see, e.g., [36, 37, 38, 39]) have resulted in an explosion of high spatio-temporal resolution behavioral data. Analysis of fine-scale posture movements shows that, much like the run-and-tumble behavior of bacteria [40], more complex organisms also exhibit stereotyped behaviors, albeit with a more intricate structure [41, 42, 43, 44, 45, 46, 47, 48]. The notion of stereotypy in behavior inherently stems from the time scale separation between variations within what is defined as a behavioral state and the transitions between behavioral states, much like a particle hopping between wells in a potential landscape. For example, while foraging for food the nematode worm _C. elegans_ transitions between coarse-grained "runs" and "pirouettes", which are stereotyped sequences of finer scale movements [49, 48]. However, unlike the particle hopping among potential wells, which has a characteristic exponential distribution of transition times, the time spent in a given behavior can be heavy-tailed (see, e.g., Fig. 4E of [48] or Fig. 3 of [50]). We here hypothesize that such heavy-tailed distributions reflect the slow continuous modulation of behavior on longer time scales, resulting from environmental factors or fluctuating internal states driven by neuromodulation, such as hunger, stress or arousal (see, e.g., [51, 52, 53]).
Indeed, it has been shown that _C. elegans_ continuously modulates its rate of reorientation events to explore larger and larger arenas in search of food [54]. In order to truly capture the multiscale nature of behavior, we therefore need to account for the fact that it can be modulated on time scales comparable to the observation time. In this Letter, we introduce a general model of behavior in which the pose dynamics evolves in potential landscapes that fluctuate over time. We then study how these dynamics impact the estimation of the distribution of times spent in a given behavior. In the first section, we introduce our phenomenological description of the behavioral dynamics, decomposing it into ergodic dynamics on a potential landscape and the non-ergodic modulation of the landscape. We then derive a general result for the distribution of first passage times, and illustrate it through direct numerical simulations in three example systems.

_Slowly driven ergodic dynamics.--_Given a set of observations of animal locomotion (e.g., from video imaging), we consider that the dynamics can be decomposed into ergodic and non-ergodic components. The former are the state-space variables that mix sufficiently well and define the potential wells that correspond to the stereotyped behaviors; the latter non-ergodic components evolve on time scales comparable to the observation time and slowly modulate the potential landscape. The full dynamics is thus given by

\[\begin{cases}\dot{\vec{X}}=F(\vec{X},\vec{\lambda})\\ \tau_{\lambda}\dot{\vec{\lambda}}=G(\vec{X},\vec{\lambda})\end{cases}\quad, \tag{1}\]

where \(\vec{X}\in\mathbb{R}^{D}\) represents the ergodic components, \(\vec{\lambda}\in\mathbb{R}^{D_{\lambda}}\) represents the non-ergodic degrees of freedom, \(F\) and \(G\) are nonlinear, possibly noisy, functions, and \(\tau_{\lambda}\) is assumed to be of the order of the measurement time \(T_{\rm exp}\), \(\tau_{\lambda}={\cal O}(T_{\rm exp})\), such that the \(\vec{\lambda}\) dynamics do not mix. Given the time scale separation between the dynamics of \(\vec{X}\) and \(\vec{\lambda}\), we assume that the dynamics of \(\vec{X}\) is well approximated by quasi-stationary Fokker-Planck dynamics \(\dot{\rho}={\cal L}\rho\), where \({\cal L}\) represents the Fokker-Planck operator. Since we are primarily interested in the long time scale behavior of the system, we consider a projection of the dynamics onto the slowest mode of \({\cal L}\), yielding a generalized Langevin equation [55; 56] with history-dependent friction and fluctuations. Assuming that we can sample the system on a time scale longer than the noise correlation time, we obtain an effective overdamped description:

\[\dot{\vec{X}}=F(\vec{X},\vec{\lambda})\Rightarrow\dot{x}=-\partial_{x}U(x,\lambda)+\sqrt{2T_{x}}\eta_{x}(t)\,, \tag{2}\]

where \(T_{x}\) captures the effective temperature, \(\eta_{x}\) is Gaussian white noise, and \(\lambda\) is a slow control parameter that modulates the effective potential landscape on slow time scales. Similarly, we consider that \(\lambda\) also obeys an effective overdamped Langevin equation,

\[\dot{\lambda}=-\tau_{\lambda}^{-1}\partial_{\lambda}V(\lambda)+\sqrt{2T_{\lambda}\tau_{\lambda}^{-1}}\eta_{\lambda}(t), \tag{3}\]

where \(V\) is assumed to be uncoupled from the dynamics of \(x\) for simplicity, \(T_{\lambda}\) captures the degree of fluctuations in \(\lambda\), and \(\eta_{\lambda}\) is Gaussian white noise.
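As a sketch of how Eqs. 2 and 3 can be integrated numerically, the following Euler-Maruyama loop evolves the fast variable on a landscape modulated by the slow mode; the potential gradients, temperatures, and time scales passed in (here the harmonic example discussed below, with \(x_{f}=1\)) are illustrative choices.

```python
import numpy as np

def simulate(dU_dx, dV_dl, T_x, T_lam, tau_lam,
             dt=1e-3, n_steps=1_000_000, x0=0.0, lam0=0.0, seed=0):
    """Euler-Maruyama integration of Eqs. (2)-(3): a fast variable x on a
    potential U(x, lam) whose shape is modulated by a slow variable lam."""
    rng = np.random.default_rng(seed)
    x, lam = x0, lam0
    traj = np.empty((n_steps, 2))
    for i in range(n_steps):
        # Fast, quasi-ergodic dynamics (Eq. 2).
        x += -dU_dx(x, lam) * dt + np.sqrt(2 * T_x * dt) * rng.standard_normal()
        # Slow, weakly non-ergodic modulation of the landscape (Eq. 3).
        lam += -dV_dl(lam) * dt / tau_lam \
               + np.sqrt(2 * T_lam * dt / tau_lam) * rng.standard_normal()
        traj[i] = x, lam
    return traj

# Harmonic example with x_f = 1: U = (x - lam)^2 / 2 and V = lam^2 / 2.
traj = simulate(dU_dx=lambda x, lam: x - lam,
                dV_dl=lambda lam: lam,
                T_x=0.1, T_lam=1.0, tau_lam=1e4)
```

First passage times are then read off as the waiting times between crossings of \(x_{f}\) starting from the instantaneous potential minimum.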
_First passage time distributions._--We are primarily interested in studying the time spent in a given behavioral state. Within the context of the Langevin dynamics of Eq. 2, this is given by the first passage time to reach an energy barrier \(x_{f}\) from the bottom of the potential \(x_{0}\), defined as

\[\tau_{x_{0},x_{f}}(\lambda)=\inf\left\{\tau:x(t+\tau,\lambda)=x_{f}|x(t,\lambda)=x_{0}\right\}\,. \tag{4}\]

Despite the general interest in this concept, finding analytical expressions for the density of first passage time events is generally a formidable task [57]. Remarkably few closed-form expressions for the FPTD are known, with most results concerning only the mean first passage time (MFPT), which is more tractable (see, e.g., [1; 6; 9]). However, the MFPT provides only limited information, especially when multiple time scales are involved [15]. Here, we are interested in studying the behavior of the full first passage time distribution, with particular focus on its long time behavior in the presence of weakly non-ergodic dynamics, Eqs. 2 and 3.

As previously discussed, the measurement time \(T_{\rm exp}\) essentially separates ergodic from non-ergodic dynamics. In addition, it also sets a lower bound on the slowest observed hopping rates, \(\omega_{\rm min}\sim T_{\rm exp}^{-1}\), such that when \(\tau_{\lambda}={\cal O}(T_{\rm exp})\) we can make an adiabatic approximation and assume that transition events occur within a nearly static potential. For a given hopping rate \(\omega\), the first passage time distribution is given by

\[f(t,\omega)=\omega e^{-\omega t}\,,\]

where \(\omega(\lambda)=1/\tau_{x_{0},x_{f}}(\lambda)\) is the dominating slow kinetic transition rate, which implicitly depends on the dynamics of \(\lambda\). When we allow \(\lambda\) to fluctuate slowly, the distribution of first passage times \(f(t)\) is given by the expectation value of \(f(t,\omega)\) over the distribution of \(\omega\), \(p(\omega)\), weighted by the effective number of transitions observed within \(T_{\rm exp}\), which is proportional to \(\omega\). Marginalizing over \(\omega\) we get

\[f(t)\sim\int_{\omega_{\rm min}}^{\omega_{\rm max}}p(\omega)\omega^{2}e^{-\omega t}d\omega. \tag{5}\]

While the barrier height is going to depend on the dynamics of a slow control parameter \(\lambda\), the tail of the distribution is going to be dominated by instances in which the barrier height is the largest, motivating the use of the Kramers approximation (see, e.g., [2]),

\[\omega(\lambda)=\omega_{0}\exp\left\{-\frac{\Delta U(\lambda)}{T_{x}}\right\}, \tag{6}\]

where \(\Delta U(\lambda)=U(x_{f},\lambda)-U(x_{0},\lambda)\) and \(\omega_{0}\) is a constant. For multiple realizations of Eq. 3 with different initial conditions, the distribution of \(\lambda\) is given by the Boltzmann weight [58],

\[p(\lambda)\sim\exp\left\{-\frac{V(\lambda)}{2T_{\lambda}}\right\}. \tag{7}\]

Leveraging Eqs. 5-7 we can obtain an asymptotic approximation of the FPTD in the large \(t\) limit (see Supplemental Material),

\[f(t)\sim t^{-2}\exp\left\{-\frac{V(\Delta U^{-1}(T_{x}\log(\omega_{0}t)))}{2T_{\lambda}}\right\}\,, \tag{8}\]

where \(\Delta U^{-1}(\cdot)\) represents the inverse function of the potential difference defined by Eq. 6 and we have kept only the dominant order of the asymptotic approximation (see Supplemental Material). For very general conditions on \(V(\lambda)\) and \(U(x,\lambda)\), we thus get \(f(t)\sim t^{-2}\) for \(t\rightarrow\infty\) and \(T_{\lambda}\gg 1\).
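The asymptotic result in Eq. 8 can be checked directly by a Monte Carlo marginalization of Eq. 5: sample \(\lambda\) from the Boltzmann weight of Eq. 7, convert it to a Kramers rate via Eq. 6, and histogram \(\omega\)-weighted exponential waiting times. The sketch below assumes the quadratic case \(\Delta U(\lambda)=\lambda^{2}/2\) and \(V(\lambda)=\lambda^{2}/2\), for which Eq. 8 predicts a tail exponent of \(-2-T_{x}/(2T_{\lambda})\); all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
T_x, T_lam, omega0 = 1.0, 2.0, 1.0
n = 2_000_000

# Slow mode sampled from its Boltzmann weight (Eq. 7) with V(lam) = lam^2/2,
# i.e. a Gaussian with variance 2*T_lam.
lam = rng.normal(0.0, np.sqrt(2.0 * T_lam), size=n)
# Kramers rate (Eq. 6) for a quadratic barrier Delta_U(lam) = lam^2/2.
omega = omega0 * np.exp(-lam**2 / (2.0 * T_x))
# One exponential waiting time per quasi-static landscape.
t = rng.exponential(1.0 / omega)

# Events are observed in proportion to omega (Eq. 5), hence the weights.
bins = np.logspace(0, 6, 61)
hist, edges = np.histogram(t, bins=bins, weights=omega, density=True)
centers = np.sqrt(edges[1:] * edges[:-1])

# Fit the tail: the predicted slope is -2 - T_x / (2 * T_lam) = -2.25 here.
mask = (hist > 0) & (centers > 1e2) & (centers < 1e5)
slope = np.polyfit(np.log(centers[mask]), np.log(hist[mask]), 1)[0]
print(f"measured tail exponent: {slope:.2f}")
```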
In the following section we will demonstrate the validity of this result in three illustrative examples.

_Illustrative examples: Slowly-driven harmonic oscillator._--Consider that \(x\) evolves in a harmonic potential, \(U(x,s)=(x-sx_{f})^{2}/2\), that is driven by a slow parameter \(s\) that fluctuates within \(V(s)=s^{2}/2\), pushing the minimum of \(U(x,s)\) closer to or further from \(x_{f}\) on a time scale \(\tau_{s}\), Fig. 1(a). The equations of motion are given by a set of Ito stochastic differential equations, corresponding to coupled Ornstein-Uhlenbeck processes,

\[\begin{cases}dx_{t}=-(x_{t}-s_{t}x_{f})dt+\sqrt{2T_{x}}dW_{t}\\ ds_{t}=-\tau_{s}^{-1}s_{t}\,dt+\sqrt{2T_{s}\tau_{s}^{-1}}dW_{t}\end{cases}\,, \tag{9}\]

where \(T_{x}\) and \(T_{s}\) capture the degree of fluctuations and \(dW_{t}\) denotes the increments of a Wiener process (Gaussian white noise). We are interested in the density of first passage time events from the minimum of the potential \(x_{0}=s\) to \(x_{f}=1\), for which it is challenging to find a closed-form analytical expression, even when \(s(t)=s\in\mathbb{R}\) [57]. In the Supplemental Material, we derive the FPTD in Laplace space [59] and leverage it to estimate the FPTD through numerical inversion [60] for varying values of \(\tau_{s}\) (as in Ref. [61]), see Fig. S2.

We find that when \(s\) fluctuates fast enough, \(\tau_{s}\to 0\), we can average out \(s\) and get the simpler dynamics \(dx_{t}=-\left(x_{t}-\langle s\rangle x_{f}\right)dt+\sqrt{2T_{x}}dW_{t}\). In this case, the FPTD is well approximated by \(f(t)\approx f(t,\langle\omega\rangle)=\langle\omega\rangle e^{-\langle\omega\rangle t}\), where \(\langle\omega\rangle\) is the average hopping rate, which is set by \(\langle s\rangle\). Even when \(\tau_{s}>0\) but short, it is possible to obtain a self-consistent Markovian dynamics for \(x(t)\) (see, e.g., [1]). In this case, the distribution of first passage times is still dominantly exponential, but with a corrected first passage time which depends on the ratio of temperatures \(T_{s}/T_{x}\) and the slow time scale \(\tau_{s}\). However, as we have shown in the previous section, when \(\tau_{s}\) is large enough, \(\tau_{s}\sim T_{\rm exp}\), the distribution of first passage times becomes heavy-tailed. In this limit, we can leverage Eq. 8 to derive an asymptotic approximation to the distribution of first passage times. The tail of the distribution will be dominated by low \(\omega\) values, which correspond to \(|s|\gg 1\). In this limit, the barrier height primarily behaves as \(\Delta U(s)=s^{2}/2+\mathcal{O}(s)\). In addition, since \(V(s)=s^{2}/2\), we see that \(V(\Delta U^{-1}(x))=x\) and Eq. 8 yields (see Supplemental Material)

\[f(t)\sim t^{-2-\frac{T_{x}}{2T_{s}}}\,, \tag{10}\]

which matches what we obtain from direct numerical simulations of Eq. 9, Figs. 1(b), S2, S3(a).

_Illustrative examples: Slowly-driven double-well potential._--We now consider a symmetric double-well potential in which the barrier height is slowly modulated according to an Ornstein-Uhlenbeck process, Fig. 2(a),

\[\begin{cases}dx_{t}=-4s_{t}^{2}x_{t}(x_{t}^{2}-1)dt+\sqrt{2T_{x}}dW_{t}\\ ds_{t}=-\tau_{s}^{-1}(s_{t}-\mu_{s})dt+\sqrt{2T_{s}\tau_{s}^{-1}}dW_{t}\end{cases}\,, \tag{11}\]

where all the parameters are the same as in Eq. 9, with an extra \(\mu_{s}\) that represents the expectation value of \(s\), which we set as \(\mu_{s}=1\). In this case, we have a quartic potential for \(x\), \(U(x,s)=s^{2}(x^{2}-1)^{2}\), which yields \(\Delta U(s)=s^{2}\).
Since \(V(s)=s^{2}/2\), we see that \(V(\Delta U^{-1}(x))=x/2\) and Eq. 8 yields (see Supplemental Material)

\[f(t)\sim t^{-2-\frac{T_{x}}{4T_{s}}}\,, \tag{12}\]

matching what we find through direct numerical simulations of Eq. 11, Figs. 2(b), S3(b).

_Illustrative examples: Slowly-driven rugged parabolic potential._--Finally, we consider a rugged parabolic potential as a simple model of the rough energy landscapes found across complex systems, from glasses to proteins (see, e.g., [19; 20; 62]). We construct a rugged landscape by superimposing a sinusoidal perturbation onto a harmonic potential [63], \(U(x,s)=U_{0}(x,s)+U_{1}(x)\), where \(U_{0}(x,s)=(x-s)^{2}/2\) and \(U_{1}(x)=-\cos(2\pi kx)/(k\pi)\). The corresponding dynamics are given by

\[\begin{cases}dx_{t}=-\left(x_{t}-s_{t}+2\sin(2\pi kx_{t})\right)dt+\sqrt{2T_{x}}dW_{t}\\ ds_{t}=-\tau_{s}^{-1}s_{t}\,dt+\sqrt{2T_{s}\tau_{s}^{-1}}dW_{t}\end{cases}, \tag{13}\]

where \(k\) sets the number of smaller barriers between the global minimum of the potential and \(x_{f}=1\). We set \(k=10\), resulting in a rugged potential as illustrated in Fig. 3(a). In this case, since \(U(x,s)\) is not as simple as before, it is more challenging to derive the correction terms to the power law. However, it has been shown [63] that by spatially averaging \(U_{1}(x)=-\cos(2\pi kx)/(k\pi)\) over one period, the resulting hopping rate is simply corrected by a constant prefactor, \(\omega=I_{0}^{-2}(k^{-1}\pi^{-1}T_{x}^{-1})\omega_{0}\), where \(I_{0}\) is the modified Bessel function and \(\omega_{0}\) is the hopping rate in the absence of the sinusoidal perturbation (from \(U_{0}(x,s)=(x-s)^{2}/2\)). As such, we expect the asymptotic behavior of \(f(t)\) to be the same as for the slowly driven harmonic potential, Eq. 10. Indeed, this is what we observe in Figs. 3(b), S3(c).

Figure 1: Heavy-tailed first passage time distributions for a slowly-driven overdamped harmonic oscillator. (a) We simulate the dynamics of a particle in a harmonic oscillator while slowly driving the potential landscape, and estimate the distribution of times it takes to reach \(x_{f}\). The gray line represents the minimum of the potential, \(x_{0}=s\), and the color scheme different values of \(s\). (b) FPTDs obtained from direct numerical simulations of Eq. 9 for different values of the temperature \(T_{s}\) that controls the level of fluctuations for the parameter driving the slow variations of the potential landscape. As predicted, the tail of the distribution behaves as a power law, \(f(t)\sim t^{-2-\alpha}\), with \(\alpha=\frac{T_{x}}{2T_{s}}\). The color scheme represents different ratios of temperatures, and the black dashed line the \(T_{s}\to\infty\) limit.

Figure 2: Heavy-tailed first passage time distribution of a slowly-driven double-well potential. (a) Schematic of the variation in the double-well potential with \(s\) (colored from blue to red; the black line represents \(s=\mu_{s}\)). (b) FPTDs from direct numerical simulations of Eq. 11 for different values of \(T_{s}\). As expected, the tail of the distribution behaves as a power law, \(f(t)\sim t^{-2-\alpha}\), where \(\alpha=\frac{T_{x}}{4T_{s}}\) (colored line). The black dashed line represents the \(T_{s}\to\infty\) limit.

_Discussion._--Inspired by quantitative analysis of animal behavior, we here examined how the existence of slow non-ergodic modes impacts the statistics collected experimentally, focusing on the distribution of first passage time events.
Our results show the emergence of heavy-tailed distributions. In particular, we find that the distribution asymptotes to a power law, \(f(t)\sim t^{-2}\), in the limit of large fluctuations, regardless of the details of the dynamics. As remarked in the Introduction, our results have important implications for a wide variety of fields, and we here discuss some of these in detail.

In the context of animal behavior, heavy-tailed first passage times with \(f(t)\approx t^{-2}\) have been found extensively across multiple species, from bacteria [64], termites [65] and rats [66] to marine animals [67; 68], humans [69] and even fossil records [70]. In the context of search behaviors (e.g., when foraging for food), such observations have led researchers to hypothesize that Lévy flights (power-law distributed run lengths) are efficient search strategies and thus evolutionarily favorable [71; 72; 73; 74; 75]. However, we here show that such fat tails may emerge when the animal is continuously adapting its behavior (slowly modulating the potential landscape), even in the absence of external drives. We therefore predict that disrupting the internal mechanisms for slow modulation of behavior (e.g., neuromodulatory pathways) should result in distributions of first passage times that have exponential tails.

Power laws have been observed in a wide variety of systems, from solar flares [76; 77] to the brain [78], and different hypotheses have been put forward to explain their emergence (for a review see, e.g., [79]). Among these, work inspired by phase transitions in statistical mechanics associates power laws with "criticality", mostly due to the fact that models inferred from the data appear to require fine-tuning of the model parameters to a special regime between two qualitatively different "phases" (see, e.g., [80]). However, as we have shown here, power laws can emerge without fine-tuning and far from "criticality". Indeed, slow modes that evolve on time scales comparable to the observation time are challenging to infer from data, and can give rise to best-fit models that appear "critical". While some of the arguments we have put forward have also been proposed in other contexts [81; 82; 83; 84; 85; 22], we here place them into the framework of out-of-equilibrium statistical mechanics, explicitly connecting the long time scale emergent behavior with the underlying effective fluctuations. In addition, unlike other approaches [82; 84], our framework does not require explicit external drives, but simply collective modes that evolve in a weakly non-ergodic fashion.

Our starting point is an effective description of the long time scale dynamics, and further work will be required to fully bridge between microscopic dynamics and the emergent long time behavior of the first passage time distribution that we uncovered. For example, we find that for intermediate values of \(1\ll\tau_{\lambda}\ll T_{\rm exp}\) the FPTD behaves as a truncated power law with an effective exponent that is slightly smaller than \(-2\) (see Supplemental Material), which goes beyond the arguments presented here. What are the minimum \(\tau_{\lambda}\) and \(T_{\lambda}\) for power laws to be measurable, and how do simple exponentials (\(\tau_{\lambda}\ll T_{\rm exp}\)) transition to power law behavior? These are important questions if one hopes to test our predictions in an experimental context (using for example a set-up akin to the ones used to test stochastic resonance [85; 86]).
Additionally, we note that when \(\tau_{\lambda}\gg T_{\rm exp}\), the distribution of initial conditions determines the emergent behavior, see Fig. S4. Inspired by experiments in animal behavior, which are typically done with multiple animals, we here assume that the initial condition for the slow mode is sampled according to its Boltzmann distribution, \(\lambda(t=0)\sim e^{-\frac{V(\lambda)}{2T_{\lambda}}}\), reflecting the variability across individuals. In this case, the emergent behavior we have derived holds true from \(\tau_{\lambda}\sim T_{\text{exp}}\) to \(\tau_{\lambda}\rightarrow\infty\). However, if the variability across experiments is smaller than that of the Boltzmann distribution, the \(\tau_{\lambda}\rightarrow\infty\) limit will differ from the behavior at \(\tau_{\lambda}\sim T_{\text{exp}}\). Indeed, if the variance of the initial distribution of \(\lambda\) is smaller than that of the Boltzmann distribution, the temperature \(T_{\lambda}\) in our derivation should be changed to a new effective temperature \(T_{\lambda}^{0}<T_{\lambda}\) reflecting the lower variance of the initial conditions. Making this transformation we still get a power law distribution of first passage times, but with a modified exponent that reflects the lower variance (see Supplemental Material).

Figure 3: Heavy-tailed first passage time distribution in a slowly driven rugged parabolic potential. (a) We estimate the first passage time to reach \(x_{f}\) from the global minimum of a rugged parabolic potential. (b) FPTDs from direct numerical simulations of Eq. 13 for different values of \(T_{s}\). As expected, the tail of the distribution behaves as a power law \(f(t)\sim t^{-2-\alpha}\) (colored lines) with \(\alpha=\frac{T_{x}}{2T_{s}}\). The black dashed line corresponds to the \(T_{s}\to\infty\) limit.

To conclude, we have considered the effect of slow non-ergodic modulations and theoretically captured their effects on the distribution of first passage times, a result that we believe is widely relevant to a range of natural systems.

We thank Adrian van Kan, Stephan Fauve, Federica Ferretti, Tosif Ahamed and Arghyadip Mukherjee for comments. This work was partially supported by the LabEx ENS-ICFP: ANR-10-LABX-0010/ANR-10-IDEX-0001-02 PSL* and by the NIH Grant 1RF1NS128865-01. AC also acknowledges useful discussions at the Aspen Center for Physics, which is supported by National Science Foundation Grant PHY-1607611.
2310.18301
Interactive Joint Planning for Autonomous Vehicles
In highly interactive driving scenarios, the actions of one agent greatly influence those of its neighbors. Planning safe motions for autonomous vehicles in such interactive environments, therefore, requires reasoning about the impact of the ego's intended motion plan on nearby agents' behavior. Deep-learning-based models have recently achieved great success in trajectory prediction and many models in the literature allow for ego-conditioned prediction. However, leveraging ego-conditioned prediction remains challenging in downstream planning due to the complex nature of neural networks, limiting the planner structure to simple ones, e.g., sampling-based planners. Despite their ability to generate fine-grained high-quality motion plans, it is difficult for gradient-based planning algorithms, such as model predictive control (MPC), to leverage ego-conditioned prediction due to their iterative nature and need for gradients. We present Interactive Joint Planning (IJP) that bridges MPC with learned prediction models in a computationally scalable manner to provide us the best of both worlds. In particular, IJP jointly optimizes over the behavior of the ego and the surrounding agents and leverages deep-learned prediction models as prediction priors that the joint trajectory optimization tries to stay close to. Furthermore, by leveraging homotopy classes, our joint optimizer searches over diverse motion plans to avoid getting stuck at local minima. Closed-loop simulation results show that IJP significantly outperforms the baselines that either forgo joint optimization or run sampling-based planning.
Yuxiao Chen, Sushant Veer, Peter Karkus, Marco Pavone
2023-10-27T17:48:25Z
http://arxiv.org/abs/2310.18301v4
# Interactive Joint Planning for Autonomous Vehicles

###### Abstract

In highly interactive driving scenarios, the actions of one agent greatly influence those of its neighbors. Planning safe motions for autonomous vehicles in such interactive environments, therefore, requires reasoning about the impact of the ego's intended motion plan on nearby agents' behavior. Deep-learning-based models have recently achieved great success in trajectory prediction, and many models in the literature allow for ego-conditioned prediction. However, leveraging ego-conditioned prediction remains challenging in downstream planning due to the complex nature of neural networks, limiting the planner structure to simple ones, e.g., sampling-based planners. Despite their ability to generate fine-grained, high-quality motion plans, it is difficult for gradient-based planning algorithms, such as model predictive control (MPC), to leverage ego-conditioned prediction due to their iterative nature and need for gradients. We present Interactive Joint Planning (IJP), which bridges MPC with learned prediction models in a computationally scalable manner to provide the best of both worlds. In particular, IJP jointly optimizes over the behavior of the ego and the surrounding agents and leverages deep-learned prediction models as prediction priors that the joint trajectory optimization tries to stay close to. Furthermore, by leveraging homotopy classes, our joint optimizer searches over diverse motion plans to avoid getting stuck at local minima. Closed-loop simulation results show that IJP significantly outperforms the baselines that either forgo joint optimization or run sampling-based planning.

## I Introduction

A cornerstone for safe motion planning for autonomous vehicles is the ability to reason about interactions between the ego vehicle and other traffic agents, such as human-driven vehicles and pedestrians. A standard approach to deal with interactive scenarios is to leverage prediction models--heuristic [1] or data-driven [2, 3]--to generate predictions of the traffic agents' future motions and plan the ego motion accordingly. In particular, various deep-learned prediction models now represent the state of the art in prediction [4, 5, 6]. Modern deep learning prediction models widely use ego-conditioning, i.e., they condition the prediction of adjacent agents' motion on the ego's future motion, to improve the prediction quality and capture the interaction between the ego and the agents. The resulting prediction is then consumed by a planner that aims to generate an ego motion plan that avoids collisions and makes progress towards the goal.

Depending on how the prediction is consumed, there are two typical styles of planners: sampling-based planners and iterative planners. The former takes a set of ego motion samples, calls the prediction model to generate ego-conditioned predictions, and searches for a motion plan [3, 7]. An iterative planner, on the other hand, iteratively refines the ego motion plan, with, e.g., gradients [8] or Bayesian optimization [9]. While the latter may achieve finer granularity for the ego motion due to iterative refinement, it needs to evaluate the ego motion plan significantly more times than a sampling-based counterpart, and the evaluation cannot be parallelized.
As a result, the computational complexity prohibits the use of complex deep-learned ego-conditioned prediction models together with an iterative planner--when a prediction model is used, it is typically limited to simple analytical models [10]. In this paper, we propose a computationally tractable approach, called Interactive Joint Planning (IJP), which reasons about interactivity by combining deep-learned prediction models with iterative planners. IJP significantly outperforms other baselines, yielding safer motion plans without sacrificing liveness or being overly conservative.

**Contributions and paper organization.** We propose IJP, which is a model predictive control (MPC)-based planner that is compatible with any (deep-learned) prediction model. The two main novelties are: (i) IJP jointly optimizes over the ego vehicle's and the nearby agents' motion with collision avoidance constraints while penalizing deviation from the unconditioned predicted trajectories of the agents. The "planned" motions for the agents then serve as the ego-conditioned trajectory predictions for those agents and are integrated into the gradient-based planner. (ii) To remedy the local minimum issue of nonconvex optimization, we introduce the novel concept of free-end homotopy, which allows us to efficiently explore a diverse range of motions. In particular, free-end homotopy is an extension of homotopy to trajectories that do not share the same end point. We empirically show that IJP significantly outperforms a baseline without joint optimization and is superior to a sampling-based planner baseline in both performance and computation complexity.

## II Related Works

**Interactive Planning.** Interactive / social-aware planning has been studied extensively in the literature. Some of the early approaches modeled the uncontrolled agents' behavior as Gaussian uncertainty [11] without consideration for the impact of ego behavior on nearby agents. Ignoring the ego's impact can lead to overly conservative motion plans, as was famously shown in the freezing robot problem [12]. This led to a plethora of research on navigating crowds while accounting for the reactivity of other agents, such as the joint optimization via Gaussian Process (GP) approach in [12, 13] and the reinforcement learning (RL) approach in [14].

**Reactive Behavior Modeling.** The crux of interactive planning is to properly model other agents' reactive behavior. Inverse reinforcement learning (IRL) [15] is an obvious choice, e.g., in [16, 17], and it is used subsequently in optimization over the ego motion [18, 19]. Another popular formulation leverages game theory, which assumes that every agent tries to maximize its own utility [20, 21]; however, the computational complexity of equilibrium solving remains a challenge, and it is not straightforward to combine game theory with data-driven methods. Partially-observable Markov Decision Process (POMDP)-based methods were applied to interactive planning and inferring the hidden intention of surrounding agents [22, 23], but similar to the game-theoretic approaches, POMDPs also suffer from high computational complexity and are typically hand-crafted, making them difficult to scale. Other analytical models such as Intelligent Driver Models (IDM) [24] and Probabilistic Graphical Models (PGM) [25] have also been applied to intention estimation and interactive planning; however, they are limited to simple scenarios, such as highway driving.
The idea of joint optimization has been studied for conflict resolution [26], yet it assumes knowledge about the other agent's cooperativeness.

**Deep-Learned Prediction.** The above-mentioned methods, though very different in nature, all make assumptions (e.g., rationality) about the surrounding agents' decision processes, and the planner then leverages these assumptions to make the planning problem tractable. In contrast, modern prediction methods are predominantly deep-learned phenomenological models [6, 27, 28, 4, 5], i.e., models trained with data to match the ground truth without a clear explanation of the decision process. While they achieve good prediction accuracy and are capable of ego-conditioned prediction, working with a downstream interactive planner remains difficult, as pointed out previously. The expensive inference of ego-conditioned prediction under a large number of ego plans makes it prohibitive to evaluate fine-grained ego plans. In [29] the authors use a linear system to represent the ego-conditioned prediction compactly, but the performance is limited by the simplicity of linear systems.

**Homotopy Planning.** Homotopy planning has been widely studied for motion planning of autonomous systems. To distinguish among homotopy classes of trajectories, [30] uses the relative lateral position (i.e., left or right) between two vehicles, [31, 32] partition the free space into sub-regions, [33, 34] construct homotopy-invariant words, while [35, 36] use a magnetic-field inspired approach. All these approaches require the start and end points of all candidate trajectories to coincide for homotopy classes to be well-defined--there exists no concept in the literature on homotopy that accommodates distinguishing trajectories that do not share the same end point. In this paper, we generalize homotopy to rigorously develop the notion of free-end homotopy, which provides the same benefits as homotopy to motion planning, but for trajectories that _do not_ share the same end point.

## III Free-end Homotopy Classes

In this section, we will introduce the notion of free-end homotopies, which will facilitate faster planning by reducing the number of trajectory initializations for the joint optimization.

### _Background: Introduction to Homotopy_

Homotopy classes are defined as follows:

**Definition 1** (Homotopy Class [35]): _Two continuous trajectories \(\mathbf{x}_{1}:\mathbb{R}\rightarrow\mathcal{X}\) and \(\mathbf{x}_{2}:\mathbb{R}\rightarrow\mathcal{X}\) connecting the same start and end coordinates \(x_{s}\) and \(x_{g}\), respectively, belong to the same homotopy class if and only if one can be continuously deformed into the other without intersecting any obstacles._

**Remark 1**: _It should be clarified that homotopy is not enforced in an open-ended trajectory optimization problem; it is the limitation of gradient-based optimization that causes the local minimum issue. Nonetheless, studying homotopy offers an intuitive way to partition the solution space into disjoint subsets._

Directly checking Definition 1 to verify if two trajectories belong to the same homotopy class is not straightforward. Multiple approximation methods and work-arounds have been proposed in the literature. [30] uses the relative lateral position (left or right) when the two vehicles' longitudinal locations coincide to determine the homotopy class, yet the criterion is ambiguous, as the longitudinal and lateral directions are not well defined in scenarios with curving roads and intersections.
In [31] the authors partition the free space into non-intersecting polytopes and use the order of region traversal to identify homotopy classes. However, free space partitioning is expensive and only works for static environments. A more common implicit approach is to use multiple trajectory samples as initialization for the gradient-based planner, hoping that one of the solutions is the global optimum [37]. However, this is generally inefficient with random initialization, as many optimization instances will converge to the same local minimum.

A key feature of planning for AVs is that the motion plan may not have a fixed end point. For instance, if we require the AV to progress along the road while avoiding obstacles, a particular goal state is not prescribed to the planner. To account for this, we will introduce the notion of free-end homotopy by using the magnetic-field homotopy introduced in [35].

### _Introduction to Magnetic-Field-Based Homotopy_

The magnetic field approach for homotopy class verification [35] is based on Ampere's law:

\[\oint_{\mathcal{C}}\mathbf{B}\cdot d\mathbf{l}=\mu_{0}I_{\text{enc}},\]

which states that the line integral of the magnetic field around a closed curve is equal to the product of the magnetic constant \(\mu_{0}\) and the enclosed current \(I_{\text{enc}}\). Ampere's law establishes an equivalence condition among all closed curves that enclose the same current, which can also be extended to curves sharing the same starting and ending position. Applying this to homotopy classes in motion planning, the authors in [35] let obstacles carry current and calculate the Ampere circuit integral along the robot's trajectory, which is then used to categorize trajectories into different homotopy classes. In 2D space, all obstacles can be viewed as having genus (number of holes) 0, and the imaginary current can be set perpendicular to the X-Y plane, crossing the center of the obstacle. Furthermore, the path integral of the magnetic field is easy to compute in 2D; specifically, using the Biot-Savart law, the magnitude of the magnetic field at displacement \(\mathbf{r}\) from an infinitely long wire through \(p\) carrying current \(I\) perpendicular to the X-Y plane is given by

\[\mathbf{B}(\mathbf{r})=\frac{\mu_{0}I}{2\pi||\mathbf{r}||},\]

and the direction follows the right-hand rule. It follows that the path integral of the magnetic field along a directed curve that does not intersect \(p\) is simply \(\frac{\mu_{0}I}{2\pi}\Delta\theta\), where \(\Delta\theta\) is the angular distance from the start to the end. Fig. 1 illustrates an example where the obstacle is marked in green and the imaginary current goes through its center \(p\), generating a magnetic field \(\mathbf{B}\), visualized with the dashed lines. The path integral is then proportional to the angular distance from the start to the end point of the curve. Note that the angular distance is directional and can be negative; when the curve circles \(p\) counter-clockwise/clockwise once, the angular distance increases/decreases by \(2\pi\), respectively. The angular distance provides two major benefits: (i) it is easy to compute and enforce as a constraint, and (ii) it can be easily extended to moving obstacles, as discussed next. Let \(\mathbf{x}\) be the trajectory of the ego and \(\mathbf{x}^{\mathrm{o}}\) be the trajectory of an obstacle.
To calculate the angular distance \(\Delta\theta\), discretize the (X,Y) coordinates of the curves \(\mathbf{x}\) and \(\mathbf{x}^{\mathrm{o}}\) into sequences of waypoints \(\{(X_{i},Y_{i})\}_{i=1}^{N}\), \(\{(X_{i}^{\mathrm{o}},Y_{i}^{\mathrm{o}})\}_{i=1}^{N}\); \(\Delta\theta\) is then computed as

\[\Delta\theta(\mathbf{x},\mathbf{x}^{\mathrm{o}}):=\sum_{i=1}^{N-1}\arctan\frac{Y_{i+1}-Y_{i+1}^{\mathrm{o}}}{X_{i+1}-X_{i+1}^{\mathrm{o}}}-\arctan\frac{Y_{i}-Y_{i}^{\mathrm{o}}}{X_{i}-X_{i}^{\mathrm{o}}}. \tag{1}\]

It is clear that (1) applies to moving obstacles as well.

### _Free-end Homotopy_

As mentioned above, the motion planning problem for AVs may not have a fixed end point. Homotopy classes are not well-defined for two curves with different ending points. To resolve this issue, we introduce free-end homotopy, an extension of homotopy, for trajectories that share the same initial point but not necessarily the same end point. The overarching objective is to develop an equivalence class of trajectories, which we call free-end homotopy classes, whose members execute the same relative motion with respect to other agents (e.g., overtake from the left of agent 1 and stay behind agent 2) while being continuously transformable to any other member in the class. Free-end homotopy classes facilitate efficient planning by allowing us to downsample motion plan candidates to only those that belong to different free-end homotopy classes, i.e., ones with different relative motions with respect to obstacles.

Let \(\mathbf{x}\) be the trajectory of the ego and \(\mathbf{x}^{\mathrm{o}}\) be the trajectory of a particular obstacle. We begin by defining the _mode_ \(m:(\mathbf{x},\mathbf{x}^{\mathrm{o}})\mapsto m(\mathbf{x},\mathbf{x}^{\mathrm{o}})\in\mathbb{Z}\) of a trajectory with respect to a particular obstacle using the angular distance \(\Delta\theta\):

\[m(\mathbf{x},\mathbf{x}^{\mathrm{o}}):=\begin{cases}-(k+1),&-(\hat{\theta}+(k+1)\pi)\leq\Delta\theta(\mathbf{x},\mathbf{x}^{\mathrm{o}})<-(\hat{\theta}+k\pi)\\ 0,&-\hat{\theta}\leq\Delta\theta(\mathbf{x},\mathbf{x}^{\mathrm{o}})<\hat{\theta}\\ k+1,&\hat{\theta}+k\pi\leq\Delta\theta(\mathbf{x},\mathbf{x}^{\mathrm{o}})<\hat{\theta}+(k+1)\pi\end{cases} \tag{2}\]

where \(\hat{\theta}\) is a suitably large threshold for differentiating between the three base modes. We refer to these three classes as clockwise (CW), stationary (S), and counter-clockwise (CCW), as illustrated in Fig. 2. In CW mode, the ego vehicle moves clockwise relative to the object; in CCW, the ego vehicle moves counter-clockwise relative to the object; while in S, the ego vehicle remains roughly static relative to the object.

**Remark 2**: _Modes with more refined quantization, e.g., considering \(\Delta\theta\in[k\pi,(k+1)\pi]\), can be chosen. We chose only three categories as they were found to be sufficient to cover the typical driving scenarios._

If there are \(M\) obstacles in the scene, then the _mode vector_ \(h\) for an ego trajectory \(\mathbf{x}\) is defined as the Cartesian product of the modes (2) with respect to each obstacle in the scene, i.e., \(h(\mathbf{x},\{\mathbf{x}_{i}^{\mathrm{o}}\}_{i=1}^{M}):=(m(\mathbf{x},\mathbf{x}_{1}^{\mathrm{o}}),\cdots,m(\mathbf{x},\mathbf{x}_{M}^{\mathrm{o}}))\); Fig. 3 illustrates \(h\) with an example scene with two cars near the ego vehicle and three example trajectories. With this, we are now ready to define free-end homotopy.
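For concreteness, the angular distance of Eq. (1) and the mode of Eq. (2) translate into a few lines of code. The sketch below uses `arctan2` on relative positions with increments unwrapped into \((-\pi,\pi]\), which matches Eq. (1) while avoiding its branch-cut issues; the threshold \(\hat{\theta}=\pi/2\) is an illustrative choice.

```python
import numpy as np

def angular_distance(ego, obs):
    """Winding angle (Eq. 1) of the ego around a (possibly moving) obstacle.
    ego, obs: arrays of shape (N, 2) holding the discretized XY waypoints."""
    rel = ego - obs                               # relative position
    theta = np.arctan2(rel[:, 1], rel[:, 0])      # bearing at each step
    dtheta = np.diff(theta)
    # Unwrap each increment into (-pi, pi], keeping the winding sign.
    dtheta = (dtheta + np.pi) % (2 * np.pi) - np.pi
    return dtheta.sum()

def mode(ego, obs, theta_hat=np.pi / 2):
    """Quantized mode of Eq. (2): negative = CW, 0 = S, positive = CCW."""
    dth = angular_distance(ego, obs)
    if -theta_hat <= dth < theta_hat:
        return 0
    k = int((abs(dth) - theta_hat) // np.pi)
    return (k + 1) if dth > 0 else -(k + 1)

def mode_vector(ego, obstacles):
    """Mode vector h over all obstacles in the scene."""
    return tuple(mode(ego, ob) for ob in obstacles)

# Example: ego sweeps around a static obstacle at the origin (CCW winding).
t_axis = np.linspace(0.0, 1.0, 50)
ego = np.stack([np.cos(np.pi * t_axis), np.sin(np.pi * t_axis)], axis=1)
obs = np.zeros((50, 2))
print(mode(ego, obs))  # 1 (CCW): the relative bearing sweeps through +pi
```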
Fig. 1: Magnetic path integral in 2D

Fig. 2: Three homotopy classes: CW, S, and CCW

**Definition 2** (Free-end Homotopy): _Let \(\mathbf{x}_{1}:\mathbb{R}\rightarrow\mathcal{X}\) and \(\mathbf{x}_{2}:\mathbb{R}\rightarrow\mathcal{X}\) be two continuous trajectories that share the same start point, but do not necessarily share the same end point. Then, a continuous mapping \(f:[0,1]\times\mathbb{R}\rightarrow\mathcal{X}\) is called a free-end homotopy if it satisfies the following criteria:_

* \(f(0,\cdot)=\mathbf{x}_{1}(\cdot)\)_,_
* \(f(1,\cdot)=\mathbf{x}_{2}(\cdot)\)_, and_
* _the mode vector_ \(h_{\lambda}\) _of_ \(f(\lambda,\cdot)\) _is the same for all_ \(\lambda\in[0,1]\)_._

If a free-end homotopy \(f\) exists between \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), then the two trajectories are said to be free-end homotopic. Free-end homotopy is a generalization of the notion of homotopy. We make this clear in the next lemma, which shows that all homotopic trajectories (with the same start and end points) are also free-end homotopic.

**Lemma 1**: _Continuous trajectories with the same start and end points are homotopic if, and only if, they are also free-end homotopic. Furthermore, the homotopy is also a free-end homotopy._

Proof: Provided in the appendix.

In the next lemma we show that the free-end homotopy relation defined above is, in fact, an equivalence relation.

**Lemma 2** (Free-end homotopy is an equivalence relation): _Free-end homotopy, presented in Definition 2, is an equivalence relation._

Proof: Provided in the appendix.

This result ensures that all trajectories that are free-end homotopic can be continuously transformed from one to another while retaining the same mode vector. Hence, we can limit our planning to just one candidate per free-end homotopy class. However, it remains unresolved whether all trajectories with the same mode vector belong to the same free-end homotopy class. Indeed, in the next theorem we show that only one free-end homotopy class corresponds to one mode vector. This ensures that if we find a continuous trajectory with a particular mode vector, we can _continuously_ transform it to any other trajectory with the same mode vector, since they all belong to the same free-end homotopy class. This facilitates faster planning by letting us run a continuous optimizer (as discussed later in Section IV) on only one trajectory per mode vector.

**Theorem 1** (One free-end homotopy class per mode vector): _Continuous trajectories with the same mode vector are free-end homotopic._

Proof: The proof is provided in the appendix.

### _Applying free-end homotopy classes in motion planning_

When initializing a motion planner, a naive choice is to consider all possible free-end homotopy classes; however, the number of free-end homotopy classes grows exponentially with the number of nearby objects, and many of the classes are not realistic. For example, in the situation depicted in Fig. 3, CCW for the blue vehicle is not viable as there is not enough space to pass by its right side. For faraway objects, the free-end homotopy class is most likely S due to a small angular distance within the planning horizon. To identify the promising free-end homotopy classes, we take a sampling approach. Specifically, we use a trajectory sampler to generate \(N\) trajectory samples for the ego vehicle. Then we invoke a trajectory predictor to provide scene-centric trajectory predictions for all \(M\) objects in the scene.
Together there are \(N\times M\) free-end homotopy class candidates that are expressed as mode vectors, as described in Section III-C. Many of these result in repeated mode vectors. Leveraging Theorem 1, we only retain the trajectory with the highest reward among all trajectories sharing the same mode vector as a representative for the corresponding free-end homotopy class. The reward function can be any scalar-valued function that scores the performance of the ego trajectory sample amidst the objects' predictions. We are then left with \(K\) trajectories out of the \(N\times M\) candidates, each with a unique free-end homotopy class for the whole scene, an ego trajectory sample, and predictions for the objects. These \(K\) trajectories are used to initialize the gradient-based motion planner in two ways: (i) the nonlinear planning problem is linearized around the ego and the objects' trajectories to create an efficiently-solvable sequential quadratic program (SQP), and (ii) the free-end homotopy class of the trajectory is enforced as a constraint in the planning problem.

## IV Method

As discussed in the introduction, modern trajectory planners for autonomous vehicles rely on predictions for nearby agents, and ego-conditioning has been shown to improve the planning performance yet is expensive to run. The proposed IJP does not require ego-conditioned predictions, but replaces them with a joint optimization. Specifically, we invoke a prediction module to forecast the non-ego-conditioned future trajectories of the surrounding agents and pass them to the MPC planner. Intuitively, this prior supplies the optimizer with the non-ego agents' intent. The MPC planner then plans for both the ego vehicle and the surrounding agents to minimize the cost function, which we shall discuss in detail later, while enforcing collision avoidance constraints.

In reality, the AV can only control its own motion, so assuming unlimited control over the surrounding agents would obviously be naive. To remedy this, the cost contains two terms: a term that penalizes nearby agents' deviation from the predicted trajectories, and a term that penalizes their acceleration and jerk. These two terms are interpreted as the price for the ego to force nearby agents away from their nominal path. The resulting "planned" trajectories of nearby agents can be viewed as the ego-conditioned prediction that roughly centers around the unconditioned trajectory prediction. Fig. 4 illustrates the joint optimization as ego-conditioned prediction. The dashed line shows the unconditioned prediction for the blue agent, which comes from the trajectory predictor; the joint optimization then chooses to let the ego (red) change lanes and let the blue agent swerve to avoid a collision with the ego, which is viewed as the ego-conditioned prediction, with the deviation from the unconditioned prediction penalized. Since the prediction model is only called once, without ego-conditioning, the inference time decreases significantly. Moreover, the joint optimization result provides a much finer granularity compared to running ego-conditioned prediction on ego trajectory samples. Next, we break down the key components of IJP.

Fig. 3: Homotopy classes for two nearby objects, where trajectory A is categorized as S for both objects; trajectory B is categorized as CW for the blue car and CCW for the brown car; and trajectory C is categorized as S for the blue car and CW for the brown car.
### _Dynamic constraints_

We use a Dubins car model for all vehicles and cyclists in the scene (including the ego vehicle):

\[x=\begin{bmatrix}X\\ Y\\ v\\ \psi\end{bmatrix},\quad u=\begin{bmatrix}\dot{v}\\ \dot{\psi}\end{bmatrix},\quad x^{+}=\begin{bmatrix}X+v\cos(\psi)\Delta t\\ Y+v\sin(\psi)\Delta t\\ v+\dot{v}\Delta t\\ \psi+\dot{\psi}\Delta t\end{bmatrix}, \tag{3}\]

where \(X,Y\) are the longitudinal and lateral coordinates, \(v\) and \(\dot{v}\) are the longitudinal velocity and acceleration, and \(\psi\) and \(\dot{\psi}\) are the heading angle and yaw rate. The pedestrians follow a double integrator model with the following dynamics:

\[x=\begin{bmatrix}X\\ Y\\ v_{x}\\ v_{y}\end{bmatrix},\quad u=\begin{bmatrix}\dot{v}_{x}\\ \dot{v}_{y}\end{bmatrix},\quad x^{+}=\begin{bmatrix}X+v_{x}\Delta t\\ Y+v_{y}\Delta t\\ v_{x}+\dot{v}_{x}\Delta t\\ v_{y}+\dot{v}_{y}\Delta t\end{bmatrix}.\]

These dynamic models are linearized around an initial guess of the \((x,u)\) pair generated by a trajectory sampler, as mentioned in Section III. The initial guess satisfies the nonlinear dynamic equations, and the linearized dynamic model takes the form \(x^{+}=Ax+Bu+C\). Furthermore, we impose dynamic constraints on the states and inputs of the agents. Specifically, for all vehicles,

\[v\in[v^{\min},v^{\max}] \tag{4}\]
\[|v\dot{\psi}|\leq a_{y}^{\max} \tag{5}\]
\[|\dot{\psi}|\leq\frac{\delta^{\max}}{l}|v| \tag{6}\]
\[\dot{v}\in[a_{x}^{\min},a_{x}^{\max}], \tag{7}\]

where \([v^{\min},v^{\max}]\) is the velocity range, \(a_{y}^{\max}\) is the maximum lateral acceleration, \(a_{x}^{\min}\) and \(a_{x}^{\max}\) are the lower and upper bounds for the longitudinal acceleration, \(\delta^{\max}\) is the maximum steering angle, and \(l\) is the distance between the front and rear axles. All pedestrians follow a simple norm bound on velocity and acceleration:

\[||v||\leq v_{\max},\quad||\dot{v}||\leq\dot{v}_{\max}.\]

The dynamic constraints are linearized (especially (6), so that the effect of velocity is accounted for) and the linearized constraints are written as \(G_{x}^{d}x+G_{u}^{d}u\leq g^{d}\).

### _Safety constraints_

Safety constraints mainly consist of two parts: collision avoidance constraints and lane boundary constraints. All vehicles are modeled as rectangles of varying size (including the ego), and the pedestrians are modeled as circles of varying radii. Collision avoidance between the ego (rectangle) and pedestrians (circles) is encoded by checking the three cases where the maximum margin is achieved on the X axis, the Y axis, and the corners of the vehicle, as shown in Fig. 5. For two vehicles, we analytically calculate the 4 polytopic free spaces around one of the vehicles, as shown in Fig. 5, and enforce linear constraints that the other vehicle's 4 corners and center point all lie in one of the free spaces. Then we do the same after reversing the roles of the two vehicles. In addition to the collision avoidance constraints, we also enforce the homotopy class constraint, which is computed as discussed in Section III. In simulation we observed that the MPC QP behaves similarly without the homotopy constraint, and that the initialization/linearization is sufficient to enforce the homotopy class. Lane boundaries are given as polylines (sequences of waypoints with headings); the lane boundary constraints are enforced by projecting the vehicle centers onto the polylines and calculating the distance margins, as shown in Fig. 6. All of the above-mentioned inequality constraints are differentiable w.r.t.
the state of the ego vehicle and the other agents on the road, and they are linearized and enforced as linear constraints in the MPC. To ensure feasibility, we add slack variables to the collision avoidance constraints and lane boundary constraints.

Fig. 4: Joint optimization as ego-conditioned prediction: solid trajectories: solution of the joint optimization; dashed line: unconditioned predicted trajectory of the blue agent.

Fig. 5: Collision checks between vehicles and pedestrians (left), and two vehicles (right)

Fig. 6: Lane boundary constraint

### _Costs and MPC QP setup_

The cost function consists of the following terms:

* Penalty on the ego vehicle's tracking error w.r.t. the reference trajectory
* Penalty on the ego vehicle's acceleration and jerk
* Penalty on nearby agents' deviation from their unconditioned trajectory prediction
* Penalty on nearby agents' acceleration and jerk

Putting all components together, the joint MPC solves the following QP:

\[\underset{\mathbf{u}_{e},\mathbf{u}_{o},\mathbf{x}_{e},\mathbf{x}_{o}}{\text{min}}\;\eta_{e}(\mathcal{J}_{\text{ref}}(\mathbf{x}_{\text{e}},\mathbf{x}_{\text{ref}})+\mathcal{J}_{\text{u}}(\mathbf{u}_{\text{e}}))+\eta_{o}(\mathcal{J}_{\text{dev}}(\mathbf{x}_{o},\mathbf{x}_{\text{pred}})+\mathcal{J}_{\text{u}}(\mathbf{u}_{o})) \tag{8}\]
\[\text{s.t.}\quad x_{e}[0]=x_{e}^{0},\;x_{o}[0]=x_{o}^{0} \tag{9}\]
\[\forall t=0,...,T-1,\;i\in\{e,o_{1},...,o_{n}\},\]
\[x_{i}[t+1]=A_{i}[t]x_{i}[t]+B_{i}[t]u_{i}[t]+C_{i}[t] \tag{10}\]
\[G_{x,i}^{d}[t]x_{i}[t]+G_{u,i}^{d}[t]u_{i}[t]\leq g^{d}[t] \tag{11}\]
\[\forall t=1,...,T,\;G_{e}^{s}[t]x_{e}[t]+G_{o}^{s}[t]x_{o}[t]\leq g^{s}[t], \tag{12}\]

where \(x_{e}\) is the future state of the ego vehicle, \(x_{o_{i}}\) is the future state of agent \(i\), \(A,B,C\) are the matrices corresponding to the dynamic equality constraints, \(G_{x}^{d},G_{u}^{d},g^{d}\) are the matrices corresponding to the input and state bounds, and \(G_{e}^{s},G_{o}^{s},g^{s}\) define the safety constraints, including collision avoidance, lane boundary, and the homotopy constraint. The costs include \(\mathcal{J}_{\text{ref}}\), which prompts the ego vehicle to track the desired trajectory, \(\mathcal{J}_{\text{u}}\), which penalizes acceleration and jerk (both angular and linear), and \(\mathcal{J}_{\text{dev}}\), which penalizes agents' deviation from their predictions. \(\eta_{e}\) and \(\eta_{o}\) determine the distribution of emphasis between the ego vehicle and the agents: a large \(\eta_{e}\) leads to more selfish and intrusive ego behavior, and a small \(\eta_{e}\) leads to more altruistic ego behavior.

### _Interactive Joint Planning_

The IJP planner is summarized in Algorithm 1. The inputs are the reference trajectory for the ego vehicle given by some high-level planner, the scene context \(\mathbf{C}\), lane information \(\mathbf{L}\), and the current states of the ego and surrounding agents. First, IJP calls the trajectory prediction model to generate predictions for the \(M\) surrounding agents from the scene context \(\mathbf{C}\). IJP can work with any prediction model that generates dynamically feasible trajectories for the agents involved. It is preferred that the prediction be scene-centric, i.e., predicting joint trajectories for all agents involved. We use AgentFormer [6] as our default predictor because it is scene-centric and has been shown to work well with the downstream planner in [7].
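To make the structure of (8)-(12) concrete before turning to the full pipeline, below is a toy instance of the joint QP written in CVXPY, with a single ego and a single EC agent moving along one axis under double-integrator dynamics standing in for the linearized models (10), a single linear gap constraint standing in for (12), and acceleration penalties standing in for \(\mathcal{J}_{\text{u}}\). All names and numbers are illustrative stand-ins rather than IJP's actual linearized quantities.

```python
import cvxpy as cp
import numpy as np

T, dt = 10, 0.1
# Double-integrator dynamics x = [position, velocity] standing in for (10).
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])

xe, ue = cp.Variable((2, T + 1)), cp.Variable((1, T))  # ego states/inputs
xo, uo = cp.Variable((2, T + 1)), cp.Variable((1, T))  # one EC agent

pos_ref = np.linspace(0.0, 6.0, T + 1)    # ego reference (J_ref target)
pos_pred = np.linspace(2.0, 5.0, T + 1)   # agent's unconditioned prediction

cons = [xe[:, 0] == np.array([0.0, 6.0]), xo[:, 0] == np.array([2.0, 3.0])]
for t in range(T):
    cons += [xe[:, t + 1] == A @ xe[:, t] + B @ ue[:, t],   # ego dynamics
             xo[:, t + 1] == A @ xo[:, t] + B @ uo[:, t]]   # agent dynamics
# Linearized safety constraint (12): ego keeps a gap behind the agent.
cons += [xo[0, 1:] - xe[0, 1:] >= 0.5]

eta_e, eta_o = 1.0, 10.0
cost = (eta_e * (cp.sum_squares(xe[0, :] - pos_ref) + cp.sum_squares(ue))
        + eta_o * (cp.sum_squares(xo[0, :] - pos_pred) + cp.sum_squares(uo)))
cp.Problem(cp.Minimize(cost), cons).solve()

# xo.value now plays the role of the ego-conditioned prediction: the agent
# deviates from pos_pred only as much as the deviation penalty allows.
print(np.round(xe.value[0], 2), np.round(xo.value[0], 2))
```

Because \(\eta_{o}\) penalizes only deviation and effort, the agent yields just enough to satisfy the gap constraint, which is exactly the role the "planned" agent trajectories play as ego-conditioned predictions.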
```
1:  procedure IJP(\(\mathbf{x}_{\text{ref}},\mathbf{C},\mathbf{L},x_{e}^{0},x_{o}^{0}\))
2:    \(\{\mathbf{x}_{o,j}^{\text{pred}}\}_{j=1}^{M}\leftarrow\textsc{Traj\_pred}(\mathbf{C})\)
3:    \(\{\mathbf{x}_{e,i}^{\text{sample}}\}_{i=1}^{N}\leftarrow\textsc{Ego\_sampling}(x_{e}^{0},\mathbf{L})\)
4:    \(\{(\mathbf{x}_{e,k},\mathbf{x}_{o,k},h_{k})\}_{k=1}^{K}\leftarrow\textsc{Hom\_sel}(\{\mathbf{x}_{e,i}^{\text{sample}}\}_{i=1}^{N},\mathbf{x}_{o}^{\text{pred}})\)
5:    for \(r=1,\ldots,R\) do
6:      for \(k=1,\ldots,K\) do
7:        \(\textsc{QP}_{k}\leftarrow\textsc{Linearize}(\mathbf{x}_{\text{ref}},\mathbf{x}_{e,k},\mathbf{x}_{o,k},h_{k},x_{e}^{0},\mathbf{x}_{o}^{0},\mathbf{L})\)
8:        \(\mathbf{x}_{e,k},\mathbf{x}_{o,k}\leftarrow\textsc{Solve\_QP}(\textsc{QP}_{k})\)
9:      end for
10:   end for
11:   return \(\mathbf{x}_{e}\) associated with the best homotopy class
12: end procedure
```
**Algorithm 1** IJP Ego_sampling takes the ego state and lane information to generate ego trajectory samples with a spline sampler introduced in [7], which are then used to identify promising homotopies with the predicted trajectories of the surrounding agents in Hom_sel. With the homotopies selected, IJP uses automatic differentiation to linearize the costs, constraints, and dynamics to formulate a quadratic program. JAX [38] is used for auto-differentiation, and thanks to its powerful parallelization functionality and Just-In-Time (JIT) compilation, the linearization can be done simultaneously for all homotopy classes. The generated QP is solved with third-party QP solvers such as GUROBI [39] and Forces Pro [40]. In a sequential quadratic programming (SQP) manner, the nonlinear trajectory optimization problem is linearized and solved as a QP for multiple rounds; each round takes the solution from the last round as the updated linearization point. A proximal constraint is also added to limit the difference between solutions in consecutive rounds to stabilize the SQP. **Remark 3**: _When the trajectory prediction module outputs multimodal predictions of the surrounding agents, the criteria for selecting the optimal solution among the candidate homotopy classes should also take into account the likelihood of the prediction modes; however, we observed that the mode probabilities predicted by the prediction module are usually of poor quality, and thus we ignore the mode probability in the final solution selection and simply choose the mode with the lowest cost. We shall investigate how to incorporate prediction likelihood in solution selection in future work._ ## V Simulation setup and results ### _Simulation evaluation setup_ We conduct closed-loop simulation in nuPlan [41] to evaluate the proposed approach. The closed-loop planner consists of three modules: a trajectory predictor that generates the unconditioned trajectory prediction, a route planner that distills lane information and reference trajectory from the lane graph, and IJP that plans the trajectory, as shown in Fig. 8. We use AgentFormer [6] as the trajectory predictor without ego-conditioning, which generates 4 samples of predicted future trajectories lasting for 3 seconds. **Route planner.** The route planner takes the lane graph and the ego state as input, and performs a depth-first search to identify the optimal lane sequence. In the nuPlan simulation, no goal location is provided; instead, the lane segments are labeled as "on-route" or "not on-route".
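A minimal sketch of this depth-first search follows (our illustration; `lane_graph`, `on_route`, and `score` are hypothetical helpers, not the authors' code); the selection criteria it scores against are spelled out in the next paragraph.

```python
# Schematic of the route planner's depth-first lane search (our illustration;
# lane_graph, on_route, and score are hypothetical helpers, not the authors' code).
def best_lane_sequence(lane_graph, start_lane, max_depth):
    best, best_score = None, float("inf")

    def dfs(lane, seq):
        nonlocal best, best_score
        succs = [l for l in lane_graph.successors(lane) if lane_graph.on_route(l)]
        if len(seq) == max_depth or not succs:
            s = score(seq)  # balances distance to ego, plan length, and curvature
            if s < best_score:
                best, best_score = seq, s
            return
        for nxt in succs:
            dfs(nxt, seq + [nxt])

    dfs(start_lane, [start_lane])
    return best
```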
The route planner's search criterion is to find an on-route lane sequence (up to a certain depth) that balances (i) the distance to the ego vehicle, (ii) the length of the lane plan, and (iii) the total curvature of the lane plan. With a lane sequence selected, the reference trajectory is generated by projecting the ego's current position to the lane centerline and interpolating given the desired ego velocity. To keep the QP complexity tractable, IJP includes only a subset of nearby agents in the joint optimization, denoted as EC agents; the rest of the agents are denoted as non-EC agents, and IJP simply encodes collision avoidance constraints with their predicted trajectories. The assignment of EC and non-EC agents is based on their minimum distance to the ego vehicle along their predicted trajectories. When there are fewer agents than the prescribed number, the MPC QP is padded with dummy agents. To avoid frequent JIT compilation, the numbers of EC agents and non-EC agents are fixed so that the MPC QP maintains a fixed problem dimension. When the number of nearby obstacles is larger than the sum of the prescribed numbers of EC and non-EC agents, faraway obstacles are simply ignored. We compare the performance of IJP to two baselines: (i) non-EC MPC: IJP without joint optimization, which only plans the ego behavior and tries to avoid collision with the predicted trajectories of nearby agents; (ii) TPP: a sampling-based planner using ego-conditioned prediction, similar to TPP [7] but without multi-layer policy planning. **Remark 4**: _For fairness of comparison, the non-EC MPC considers a fixed number of non-EC agents, and this number is equal to the sum of the EC agents and non-EC agents considered by IJP. The TPP planner instead considers all detected agents, as the sampling-based algorithm does not require a fixed number of agents._ ### _Simulation result_ Fig. 9 shows an example snapshot from the nuPlan simulation of IJP, where the two plots are the MPC solutions under two homotopy classes. The only difference between the two homotopy classes is the homotopy w.r.t. the circled vehicle: S (static) in the left case and CW (clockwise) in the right case. The blue curve is the solution of the EC agents' trajectories "planned" by IJP. In the right plot, as the ego (red) changes lane, the trailing vehicle changes lane to the right to avoid collision with the ego vehicle, which is indeed similar to an ego-conditioned prediction. Fig. 8: Closed-loop planner structure: the trajectory predictor takes in the lane graph and the agent history to predict the unconditioned prediction for the agents \(\mathbf{x}_{\text{pred}}\); the route planner plans the desired route, generates the reference trajectory \(\mathbf{x}_{\text{ref}}\), and distills the lane information (such as lane boundaries) \(\mathbf{L}\); and finally IJP plans the ego motion. Fig. 7: Overview of IJP: the trajectory predictor takes in the lane graph and the agent history to predict the unconditioned prediction for the agents \(\mathbf{x}_{\text{pred}}\); the route planner plans the desired route, generates the reference trajectory \(\mathbf{x}_{\text{ref}}\), and distills the lane information (such as lane boundaries) \(\mathbf{L}\); the trajectory sampler samples the ego trajectory samples, and together with \(\mathbf{x}_{\text{pred}}\), the homotopy candidates are identified. Finally, IJP plans the ego motion via solving the joint model predictive control problem with SQP.
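The SQP loop mentioned in the caption above, and described in Section IV, can be summarized by the following schematic; it is our own illustration, and `build_qp`, `prox_cost`, and `solve` are hypothetical helpers, not the authors' API.

```python
# Schematic SQP driver for one homotopy class (illustrative only; build_qp,
# prox_cost, and solve are hypothetical helpers, not the authors' code).
def sqp_solve(x_ref, x_init, u_init, rounds=3, prox_weight=1.0):
    x_lin, u_lin = x_init, u_init
    for _ in range(rounds):
        # Re-linearize costs, dynamics, and constraints at the current iterate.
        qp = build_qp(x_ref, x_lin, u_lin)
        # Proximal term limits the change between consecutive solutions,
        # stabilizing the SQP iterations.
        qp.add_cost(prox_weight * prox_cost(x_lin, u_lin))
        x_lin, u_lin = qp.solve()  # handed off to GUROBI / Forces Pro
    return x_lin, u_lin
```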
Fig. 9: Comparison of the solutions under two homotopy classes: the ego vehicle is in red, the EC agents are in blue, and the non-EC agents are in green. As the homotopy class w.r.t. the circled agent changes from S to CW, the ego's behavior changes from lane keeping to lane change, and the "predicted behavior" of the circled vehicle changes accordingly as a result of the joint optimization. Quantitatively, we run closed-loop simulations of 50 scenes from nuPlan's Boston dataset, which includes many interesting interactive scenarios with sophisticated road geometry. We compare key metrics such as collision rate and progress, all collected from the nuPlan simulator under IJP and the baselines, shown in Table I. The simulation statistics show that IJP significantly outperformed the baselines in safety-related metrics such as collision rate and drivable area compliance, and did reasonably well in progress. In fact, upon inspection, we found that the few incidents where the ego was blamed for causing a collision were not correctly assessed. Those few accidents were caused by nearby agents not yielding when making a right turn or lane change. We found no clear mistake made by IJP in the 50 scenes in the simulation. The key parameters of IJP can be found in Table III. It is counter-intuitive that the non-EC MPC results in worse safety performance given that it fully "respects" the prediction. We suspect that the main reason is that when there are multiple agents near the ego vehicle, the prediction makes the motion planning problem infeasible (without slack), and when the prediction is of poor quality, the planner overreacts, causing the performance to deteriorate. Table II shows the computation time of IJP and the baselines. AgentFormer runs on an Nvidia 3090 GPU and the MPC QP runs on the CPU with the Forces Pro QP solver. We separate the build and solve time of the MPC QP because the build process generates all MPC QP instances under different homotopy classes in parallel, while the solve time corresponds to solving one of the QP instances. Compared to the non-EC MPC, IJP takes longer to build and solve as the QP problem is larger. Compared to TPP, while IJP takes longer to solve, it saves more time on the prediction phase as no ego-conditioned prediction is needed. **Remark 5**: _TPP without ego-conditioning would have the same prediction time as IJP, but the final score dropped to 0.64 due to more safety violations._ **Remark 6**: _The computation time of IJP can be further improved in at least two ways: parallelizing the solving process of the MPC QP under multiple homotopy classes, and utilizing the sparsity pattern in the QP. It is promising that IJP can run at a sufficiently high frame rate for real-time planning with these two improvements._ ## VI Conclusion and discussion We presented a new planning method, IJP, that can reason about the impact of the ego's actions on the behavior of other traffic agents by combining gradient-based joint planning for all agents with modern deep learning-based predictors. The key idea behind IJP is viewing joint optimization solutions as ego-conditioned predictions and penalizing deviations from the unconditional predictions to regularize the EC predictions. It should be pointed out that the EC predictions currently lack statistical grounding, i.e., no supervision is added in the prediction model training process to force the result of the subsequent joint optimization to match the ego-conditioned ground truth.
The main missing piece is counterfactual traffic data, which is not available in general. The behavior of IJP largely depends on hyper-parameters such as \(\eta_{e}\) and \(\eta_{o}\), and currently they are hand-tuned. Nonetheless, the closed-loop performance of IJP turned out to be significantly better than the baselines, and we believe the main reasons are the free-end homotopy that diversifies the search space, and the fine-grained solutions achieved by the joint optimization. For future work, we will focus on providing a solid probabilistic grounding for the joint optimization solution viewed as ego-conditioned prediction by differentiating through the optimization and training the prediction-planning modules end-to-end. \begin{table} \begin{tabular}{c|c c c} & Prediction & Build time & Solve time \\ \hline IJP & 0.051s & 0.150s & 0.152s \\ non-EC MPC & 0.051s & 0.045s & 0.007s \\ TPP & 0.690s & - & 0.006s \\ \end{tabular} \end{table} TABLE II: Computation time of IJP and the two baselines \begin{table} \begin{tabular}{c c c c c} Number of & Number of & Number of & Horizon & Time step \\ homotopy classes & EC agents & non-EC agents & & \\ \hline 6 & 6 & 10 & 3s & 0.15s \\ \end{tabular} \end{table} TABLE III: Key parameters of IJP ## Appendix A Proof of Lemma 1 Let \(\mathbf{x}_{1}:[0,T]\rightarrow\mathcal{X}\) and \(\mathbf{x}_{2}:[0,T]\rightarrow\mathcal{X}\) be two trajectories such that \(\mathbf{x}_{1}(0)=\mathbf{x}_{2}(0)\) and \(\mathbf{x}_{1}(T)=\mathbf{x}_{2}(T)\); the time-domain for both trajectories is chosen to be \(T>0\) without loss of generality1. To prove this lemma, we will first show free-end homotopy \(\implies\) homotopy and then homotopy \(\implies\) free-end homotopy for \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\). Footnote 1: The time domain of the trajectories can be assumed to be the same without loss of generality (w.l.o.g.). If the times were different, i.e., \(T\), \(T^{\prime}\) for \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), respectively, we can reparameterize \(\mathbf{x}_{2}\) in time as \(\tilde{\mathbf{x}}_{2}(t)=\mathbf{x}_{2}(tT^{\prime}/T)\). (free-end homotopy \(\implies\) homotopy) This follows directly by noting that Definition 2 is stricter than Definition 1 due to the inclusion of the mode vector criterion. Hence, if \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) are free-end homotopic, they satisfy Definition 2. Then, \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) also satisfy Definition 1, implying that they are homotopic. (homotopy \(\implies\) free-end homotopy) Let \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) be homotopic, then there exists a continuous mapping \(f:[0,1]\times\mathbb{R}\rightarrow\mathcal{X}\) which satisfies the first two criteria of the definition of free-end homotopy in Definition 2. We only need to establish the last property in Definition 2, i.e., mode vectors for all trajectories along the homotopy transformation by \(f\) will be the same. To establish this, let \(\mathbf{x}^{o}:[0,T]\rightarrow\mathcal{X}\) be the trajectory of an arbitrary obstacle in the scene. From [35, Lemma 3] it follows that since the trajectories are homotopic, \(\Delta\theta(\mathbf{x}_{1},\mathbf{x}^{\mathbf{o}})=\Delta\theta(\mathbf{x}_{2},\mathbf{x}^{\mathbf{o}})\). Using this in the definition of the mode \(m\) (2) ensures that \(m(\mathbf{x}_{1},\mathbf{x}^{\mathbf{o}})=m(\mathbf{x}_{2},\mathbf{x}^{\mathbf{o}})\). Finally, we note that the mode is equal for the two trajectories for an arbitrary obstacle; therefore, it is equal for both trajectories for all obstacles in the scene.
Hence the mode _vector_ for both trajectories is the same, completing the proof of this implication, as well as the lemma. ## Appendix B Proof of Lemma 2 Proof.: To show that free-end homotopy is an equivalence relation, we must establish that it satisfies the reflexive, symmetric, and transitive properties. * Reflexive: Let \(\mathbf{x}:\mathbb{R}\rightarrow\mathcal{X}\) have a mode vector \(h\). The map \(f(\lambda,\cdot):=\mathbf{x}(\cdot)\), which is continuous because \(\mathbf{x}(\cdot)\) is continuous, is a free-end homotopy from \(\mathbf{x}\) to \(\mathbf{x}\). * Symmetric: Let \(\mathbf{x}_{1}:\mathbb{R}\rightarrow\mathcal{X}\) and \(\mathbf{x}_{2}:\mathbb{R}\rightarrow\mathcal{X}\) be two continuous trajectories that are free-end homotopic. Let \(f_{1\to 2}(\lambda,\cdot)\) be the free-end homotopy from \(\mathbf{x}_{1}\) to \(\mathbf{x}_{2}\). Then, \(f_{2\to 1}(\lambda,\cdot):=f_{1\to 2}(1-\lambda,\cdot)\) is a free-end homotopy from \(\mathbf{x}_{2}\) to \(\mathbf{x}_{1}\). * Transitive: Let \(f_{1\to 2}(\lambda,\cdot)\) be the free-end homotopy from \(\mathbf{x}_{1}\) to \(\mathbf{x}_{2}\) and let \(f_{2\to 3}(\lambda,\cdot)\) be the free-end homotopy from \(\mathbf{x}_{2}\) to \(\mathbf{x}_{3}\). Then, \[f_{1\to 3}(\lambda,\cdot):=\begin{cases}f_{1\to 2}(2\lambda,\cdot),&\text{if}\quad 0\leq\lambda\leq 0.5,\\ f_{2\to 3}(2\lambda-1,\cdot),&\text{if}\quad 0.5<\lambda\leq 1,\end{cases}\] is a free-end homotopy from \(\mathbf{x}_{1}\) to \(\mathbf{x}_{3}\). This completes the proof of this lemma. ## Appendix C Proof of Theorem 1 To prove Theorem 1, we first establish the following lemma: **Lemma 3**: _For continuous trajectories with the same start and end points, if the mode vector is the same then the trajectories are homotopic._ Proof.: We will prove the contrapositive: if the trajectories are not homotopic, then they cannot have the same mode vector. Let \(\mathbf{x}_{1}:\mathbb{R}\rightarrow\mathcal{X}\) and \(\mathbf{x}_{2}:\mathbb{R}\rightarrow\mathcal{X}\) be two continuous trajectories that share the same start and end point, but they are not homotopic. Then, for some obstacle with trajectory \(\mathbf{x}^{\mathbf{o}}\), we have that \(\Delta\theta(\mathbf{x}_{1},\mathbf{x}^{\mathbf{o}})\neq\Delta\theta(\mathbf{x}_{2},\mathbf{x}^{\mathbf{o}})\). However, since the start and end points are the same, there exists some \(k\in\mathbb{Z}\setminus\{0\}\) for which \(\Delta\theta(\mathbf{x}_{1},\mathbf{x}^{\mathbf{o}})=\Delta\theta(\mathbf{x}_{2},\mathbf{x}^{\mathbf{o}})+2k\pi\), which implies \(|\Delta\theta(\mathbf{x}_{1},\mathbf{x}^{\mathbf{o}})-\Delta\theta(\mathbf{x}_{2},\mathbf{x}^{\mathbf{o}})|\geq 2\pi\). Since the modes differ from each other by at most an angle of \(\pi\) (see (2)), it follows that with a gap of at least \(2\pi\), the modes for the two trajectories must be different. This completes the proof. Proof of Theorem 1.: Let \(\mathbf{x}_{1}:\mathbb{R}\rightarrow\mathcal{X}\) and \(\mathbf{x}_{2}:\mathbb{R}\rightarrow\mathcal{X}\) be two arbitrary continuous trajectories with the same mode vector. As we are working with finite-time trajectories, without loss of generality, let the domain of the trajectories be \([0,T]\) where \(T>0\). The mode constraint for each obstacle enforces a spatial constraint on the end point of the trajectory.
For an arbitrary obstacle, if the mode is \(0\), the end point must lie within the convex cone that sweeps an angle of \(2\hat{\theta}\leq\pi\) given by the two rays emanating from the obstacle center, while if the mode is a non-zero integer, then the end point must lie within a convex half-space. To satisfy the mode vector, the end point must, therefore, lie within the intersection \(E\) of all these spatial constraint sets; since each of these sets is convex, the intersection set is also convex. As both the trajectories have the same mode vector, their end points \(\mathbf{x}_{1}(T)\) and \(\mathbf{x}_{2}(T)\) lie within \(E\). Due to the convexity of \(E\), there exists a straight line path \(p:[T,2T]\rightarrow\mathcal{X}\) defined as \(p(t):=(\mathbf{x}_{2}(T)-\mathbf{x}_{1}(T))(t-T)/T+\mathbf{x}_{1}(T)\) for which \(p(T)=\mathbf{x}_{1}(T)\), \(p(2T)=\mathbf{x}_{2}(T)\), and \(p(t)\in E\) for all \(t\in[T,2T]\). Now we construct a new trajectory \[\mathbf{\hat{x}}_{1}(t):=\begin{cases}\mathbf{x}_{1}(2t),&\text{if}\quad 0\leq t\leq T/2,\\ p(2t),&\text{if}\quad T/2<t\leq T.\end{cases}\] Let \(f_{1}:[0,1]\times[0,T]\rightarrow\mathcal{X}\) be a free-end homotopy candidate from \(\mathbf{x}_{1}\) to \(\mathbf{\hat{x}}_{1}\) as follows: \[f_{1}(\lambda,t):=\mathbf{\hat{x}}_{1}(t(1+\lambda)/2),\] which, as \(\lambda\) goes from \(0\) to \(1\), moves the end point of the trajectory from \(\mathbf{\hat{x}}_{1}(T/2)=\mathbf{x}_{1}(T)\) to \(\mathbf{\hat{x}}_{1}(T)=\mathbf{x}_{2}(T)\). Clearly, \(f_{1}\) is continuous and satisfies the first two criteria in Definition 2. Since the end point of \(f_{1}(\lambda,\cdot)\) lies within \(E\) for all \(\lambda\), it follows that the mode vectors for the trajectories for any \(\lambda\) are also the same. Hence, \(f_{1}\) satisfies Definition 2. We observe that \(\mathbf{\hat{x}}_{1}\) and \(\mathbf{x}_{2}\) have the same mode vector and share the same end points. Using Lemma 3, there exists a homotopy \(f_{2}:[0,1]\times[0,T]\rightarrow\mathcal{X}\) between them. Furthermore, we know from Lemma 1 that \(f_{2}\) is also a free-end homotopy. Finally, since \(\mathbf{x}_{1}\) is free-end homotopic to \(\mathbf{\hat{x}}_{1}\), which is free-end homotopic to \(\mathbf{x}_{2}\), by the transitive property of free-end homotopy (Lemma 2), \(\mathbf{x}_{1}\) is free-end homotopic to \(\mathbf{x}_{2}\), completing the proof of this theorem.
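As a numerical companion to the proofs above, the following sketch (ours) estimates the winding angle \(\Delta\theta\) between sampled ego and obstacle trajectories and classifies the resulting mode; the exact thresholds come from definition (2), which is not reproduced here, so `theta_hat` and the sign convention below are assumptions for illustration.

```python
# Sketch (ours) of estimating the winding angle and the mode from sampled
# trajectories; theta_hat and the CW/CCW sign convention are assumptions.
import numpy as np

def winding_angle(ego_xy, obs_xy):
    """ego_xy, obs_xy: (T, 2) arrays sampled at matching time steps."""
    rel = ego_xy - obs_xy                                # relative position
    theta = np.unwrap(np.arctan2(rel[:, 1], rel[:, 0]))  # continuous angle
    return theta[-1] - theta[0]                          # total swept angle

def mode(ego_xy, obs_xy, theta_hat=np.pi / 2):
    dtheta = winding_angle(ego_xy, obs_xy)
    if abs(dtheta) < theta_hat:
        return "S"                                       # static mode
    return "CW" if dtheta < 0 else "CCW"
```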
2301.11302
Minimax estimation of discontinuous optimal transport maps: The semi-discrete case
We consider the problem of estimating the optimal transport map between two probability distributions, $P$ and $Q$ in $\mathbb R^d$, on the basis of i.i.d. samples. All existing statistical analyses of this problem require the assumption that the transport map is Lipschitz, a strong requirement that, in particular, excludes any examples where the transport map is discontinuous. As a first step towards developing estimation procedures for discontinuous maps, we consider the important special case where the data distribution $Q$ is a discrete measure supported on a finite number of points in $\mathbb R^d$. We study a computationally efficient estimator initially proposed by Pooladian and Niles-Weed (2021), based on entropic optimal transport, and show in the semi-discrete setting that it converges at the minimax-optimal rate $n^{-1/2}$, independent of dimension. Other standard map estimation techniques both lack finite-sample guarantees in this setting and provably suffer from the curse of dimensionality. We confirm these results in numerical experiments, and provide experiments for other settings, not covered by our theory, which indicate that the entropic estimator is a promising methodology for other discontinuous transport map estimation problems.
Aram-Alexandre Pooladian, Vincent Divol, Jonathan Niles-Weed
2023-01-26T18:41:38Z
http://arxiv.org/abs/2301.11302v2
# Minimax estimation of discontinuous optimal transport maps: The semi-discrete case ###### Abstract We consider the problem of estimating the optimal transport map between two probability distributions, \(P\) and \(Q\) in \(\mathbb{R}^{d}\), on the basis of i.i.d. samples. All existing statistical analyses of this problem require the assumption that the transport map is Lipschitz, a strong requirement that, in particular, excludes any examples where the transport map is discontinuous. As a first step towards developing estimation procedures for discontinuous maps, we consider the important special case where the data distribution \(Q\) is a discrete measure supported on a finite number of points in \(\mathbb{R}^{d}\). We study a computationally efficient estimator initially proposed by [13], based on entropic optimal transport, and show in the semi-discrete setting that it converges at the minimax-optimal rate \(n^{-1/2}\), independent of dimension. Other standard map estimation techniques both lack finite-sample guarantees in this setting and provably suffer from the curse of dimensionality. We confirm these results in numerical experiments, and provide experiments for other settings, not covered by our theory, which indicate that the entropic estimator is a promising methodology for other discontinuous transport map estimation problems. *Pooladian and Divol contributed equally to this work. ## 1 Introduction The theory of optimal transport (OT) defines a natural geometry on the space of probability measures [12, 13] and has become ubiquitous in modern data-driven tasks. In this area, _optimal transport maps_ are a central object of study: suppose \(P\) and \(Q\) are two probability distributions with finite second moments, with \(P\) having a density with respect to the Lebesgue measure on \(\mathbb{R}^{d}\). Then, Brenier's theorem (see Section 2.1) states that there exists a convex function \(\varphi_{0}\) whose gradient defines a unique _optimal transport map_ between \(P\) and \(Q\). This map is optimal in the sense that it minimizes the following objective function: \[\nabla\varphi_{0}\coloneqq\operatorname*{argmin}_{T\in\mathcal{T}(P,Q)}\int \tfrac{1}{2}\|x-T(x)\|^{2}\,\mathrm{d}P(x)\,, \tag{1}\] where \(\mathcal{T}(P,Q)\coloneqq\{T:\mathbb{R}^{d}\to\mathbb{R}^{d}\mid X\sim P,\ T(X) \sim Q\}\) is the set of transport maps between \(P\) and \(Q\). The optimal value of the objective function in Equation (1) is called the (squared) 2-Wasserstein distance, written explicitly as \[\mathrm{S}_{0}(P,Q)=\int\tfrac{1}{2}\|x-\nabla\varphi_{0}(x)\|^{2}\,\mathrm{d }P(x)\,,\] though a more general formulation is available (see Section 2.1). Computing or approximating \(\mathrm{S}_{0}(P,Q)\) as well as \(\nabla\varphi_{0}\) has found use in several academic communities, such as economics [1, 1, 1, 2], computational biology [1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], and computer vision [1, 1, 12], among many others. Practitioners seldom have access to \(P\) or \(Q\), but instead have access to i.i.d. samples \(X_{1},\ldots,X_{n}\sim P\) and \(Y_{1},\ldots,Y_{n}\sim Q\). On the basis of these samples, practitioners face both computational and statistical challenges when estimating \(\nabla\varphi_{0}\). From a theoretical perspective, the statistical task of estimating optimal transport maps has attracted much interest in the last few years [1, 1, 1, 1, 1, 1, 10, 11, 12]. 
The first finite-sample analysis of this problem was performed by [1], who proposed an estimator for \(\nabla\varphi_{0}\) under the assumption that \(\varphi_{0}\) is \(s+1\)-times continuously differentiable, for \(s>1\). They showed that a wavelet-based estimator \(\hat{\varphi}_{\mathrm{W}}\) satisfies \[\mathbb{E}\|\nabla\hat{\varphi}_{\mathrm{W}}-\nabla\varphi_{0}\|_{L^{2}(P)}^{2}\lesssim n^{-\frac{2s}{2s+d-2}}\log^{2}(n)\,,\] and that this rate is minimax optimal up to logarithmic factors. Their analysis requires that \(P\) and \(Q\) have bounded densities with compact support \(\Omega\subseteq\mathbb{R}^{d}\), and that \(\varphi_{0}\) be both strongly convex and smooth. Implementing the estimator \(\hat{\varphi}_{\mathrm{W}}\) is computationally challenging even in moderate dimensions, and is practically infeasible for \(d>3\). Follow-up work has proposed alternative estimators which improve upon \(\hat{\varphi}_{\mathrm{W}}\) either in computational efficiency or in the generality in which they apply. Though these subsequent works go significantly beyond the setting considered by [1], none has eliminated the crucial assumption that \(\varphi_{0}\) is smooth, i.e., that the transport map \(\nabla\varphi_{0}\) is Lipschitz. We highlight two estimators proposed in this line of work that are particularly practical. [1] study the 1-Nearest Neighbor estimator \(\hat{T}_{\mathrm{1NN}}\). This estimator is obtained by solving the empirical optimal transport problem between the samples, which is then extended to a function defined on \(\mathbb{R}^{d}\) using a projection scheme; see Section 4 for more details. Given \(n\) samples from the source and target measures in \(\mathbb{R}^{d}\), \(\hat{T}_{\mathrm{1NN}}\) has a runtime of \(\mathcal{O}(n^{3})\) via the Hungarian Algorithm [13, Chapter 3], and, for \(d\geq 5\), achieves the rate \[\mathbb{E}\|\hat{T}_{\mathrm{1NN}}-\nabla\varphi_{0}\|_{L^{2}(P)}^{2}\lesssim n^{-\frac{2}{d}} \tag{2}\] whenever the Brenier potential \(\varphi_{0}\) is smooth and strongly convex, and under mild regularity conditions on \(P\). In another work, [1] conducted a statistical analysis of an estimator originally proposed by [11] based on entropic optimal transport. The efficiency of Sinkhorn's algorithm for large-scale problems [12, 13] makes this estimator attractive from a computational perspective, and [21] also give statistical guarantees, though these fall short of being minimax-optimal. Despite this progress, none of the aforementioned results can be applied in situations where \(\nabla\varphi_{0}\) is not Lipschitz. And in practice, even requiring the _continuity_ of the transport map can be far too stringent. It is indeed too much to hope for that an underlying data distribution (e.g. over the space of images) has one single connected component; this is supported by recent work that stipulates that the underlying data distribution is the union of _disjoint_ manifolds of varying intrinsic dimension [1]. In such a setting, the transport map \(\nabla\varphi_{0}\) will not be continuous, demonstrating the need to consider the problem of the statistical estimation of _discontinuous_ transport maps in order to get closer to real-world situations. As a first step, we choose to focus on the case where the target distribution \(Q=\sum_{j=1}^{J}q_{j}\delta_{y_{j}}\) is discrete while the source measure \(P\) has full support, often called the _semi-discrete_ setting in the optimal transport literature.
In this setting, the optimal transport map \(\nabla\varphi_{0}\) is constant over regions known as Laguerre cells (each cell corresponding to a different atom of the discrete measure), while displaying discontinuities on their boundaries (see Section 2.1.1 for more details). Figure 1 provides such an example. Figure 1: An illustration of a semi-discrete optimal transport map. The support of \(P\), the whole rectangle, is partitioned into regions, each of which is transported to one of the atoms of the discrete target measure \(Q\). The resulting map is discontinuous at the boundaries of each cell. Semi-discrete optimal transport therefore provides a natural class of discontinuous transport maps. We focus on this setting for two reasons. First, it has garnered a lot of attention in recent years, in both computational and theoretical circles [see, e.g., 11, 12, 13], due in particular to its connection with the quantization problem [10]. Second, the semi-discrete setting is intriguing from a statistical perspective: existing results show that statistical estimation problems involving semi-discrete optimal transport can escape the curse of dimensionality [11, 1, 12, 13]. For example, [14, Theorem 3.2] show that if \(P_{n}\) and \(Q_{n}\) are empirical measures consisting of i.i.d. samples from \(P\) and \(Q\), then the semi-discrete assumption implies \[\mathbb{E}|\mathrm{S}_{0}(P,Q)-\mathrm{S}_{0}(P_{n},Q_{n})|\lesssim n^{-1/2}\,.\] These results offer the tantalizing possibility that semi-discrete transport maps can be estimated at the rate \(n^{-1/2}\), in sharp contrast to the dimension-dependent rates obtained in bounds such as (2). However, the optimal rates of estimation for semi-discrete transport maps are not known, and no estimators with finite-sample convergence guarantees exist. #### Main Contributions We show that the computationally efficient estimator \(\hat{T}_{\varepsilon}\) based on entropically regularized optimal transport, originally studied in [18, 19], provably estimates discontinuous semi-discrete optimal transport maps at the optimal rate. More precisely, our contributions are the following: 1. For \(Q\) discrete and \(P\) with full support on a compact, convex set, we show that \(\hat{T}_{\varepsilon}\) achieves the following _dimension-independent_ convergence rate to the optimal transport map (see Theorem 3.1) \[\mathbb{E}\|\hat{T}_{\varepsilon}-\nabla\varphi_{0}\|_{L^{2}(P)}^{2}\lesssim n^{-1/2}\,,\] (3) when the regularization parameter \(\varepsilon\asymp n^{-1/2}\). We further show (Proposition 4.1) that this rate is minimax optimal. 2. As a by-product of our analysis, we give new _parametric_ rates of convergence to the entropic Brenier map \(T_{\varepsilon}\), a result which improves exponentially on prior work in the dependence on \(\varepsilon\) (see Theorem 3.5 and Remark 3.6). 3. Our proof technique requires several new results, including a novel stability bound for the entropic Brenier maps (Proposition 3.7), and a new stability result for the entropic dual Brenier potentials in the semi-discrete case (Proposition 3.9). 4. We show that, unlike \(\hat{T}_{\varepsilon}\), the 1-Nearest-Neighbor estimator is provably suboptimal in the semi-discrete setting (see Proposition 4.2) by exhibiting a discrete measure \(Q\) such that the risk suffers from the curse of dimensionality: \[\mathbb{E}\|\hat{T}_{1\text{NN}}-\nabla\varphi_{0}\|_{L^{2}(P)}^{2}\gtrsim n^{-1/d}\,.\] 5. In Section 4, we verify our theoretical findings on synthetic experiments.
We also show by simulation that the entropic estimator appears to perform well even outside the semi-discrete setting, suggesting it as a promising choice for estimating other types of discontinuous maps. ## 2 Background on optimal transport ### 2.1 Optimal transport We define \(\mathcal{P}(\Omega)\) to be the space of probability measures whose support lies in a compact subset \(\Omega\subseteq\mathbb{R}^{d}\). If a probability measure \(P\) has a density with respect to the Lebesgue measure on \(\mathbb{R}^{d}\) with support \(\Omega\subseteq\mathbb{R}^{d}\), then we write \(P\in\mathcal{P}_{\mathrm{ac}}(\Omega)\). For two probability measures \(P,Q\in\mathcal{P}(\Omega)\), we define the _(squared) \(2\)-Wasserstein distance_ to be [10] \[\mathrm{S}_{0}(P,Q):=\min_{\pi\in\Gamma(P,Q)}\iint\tfrac{1}{2}\|x-y\|^{2}\,\mathrm{d}\pi(x,y)\,, \tag{4}\] where \(\pi\in\Gamma(P,Q)\subseteq\mathcal{P}(\Omega\times\Omega)\) such that for any event \(A\), \[\pi(A\times\Omega)=P(A)\,,\quad\pi(\Omega\times A)=Q(A)\,.\] We call \(\Gamma(P,Q)\) the set of _couplings_ between \(P\) and \(Q\). In this work, we focus on the squared-Euclidean cost, but Equation (4) is well-defined for convex, lower-semicontinuous costs; see [11, 12] for more information on optimal transport under general costs. Equation (4) is a convex optimization problem on the space of joint measures, and a minimizer, denoted \(\pi_{0}\), always exists; we call \(\pi_{0}\) an _optimal plan_ from \(P\) to \(Q\). Moreover, Equation (4) possesses the following dual formulation, \[\mathrm{S}_{0}(P,Q)=\tfrac{1}{2}M_{2}(P)+\tfrac{1}{2}M_{2}(Q)-\inf_{(\varphi,\psi)\in\Phi}\int\varphi\,\mathrm{d}P+\int\psi\,\mathrm{d}Q \tag{5}\] where \(M_{2}(P):=\int\|x\|^{2}\,\mathrm{d}P(x)\) (similarly for \(M_{2}(Q)\)) and the functions \((\varphi,\psi)\in\Phi\subseteq L_{1}(P)\times L_{1}(Q)\) satisfy \[\langle x,y\rangle\leq\varphi(x)+\psi(y)\text{ for all }x,y\in\Omega\,.\] As with the primal formulation, the infimum in Equation (5) is attained at functions \((\varphi_{0},\psi_{0})\). These minimizers are called _(optimal) Brenier potentials_. In particular, at optimality, we have that these Brenier potentials are convex conjugates of one another, i.e. the Legendre transform of one of the potentials gives the other: \[\varphi_{0}^{*}(y):=\sup_{x}\{\langle x,y\rangle-\varphi_{0}(x)\}=\psi_{0}(y)\,, \tag{6}\] and vice-versa. Apart from these two formulations of optimal transport under the squared-Euclidean cost, there exists a third, known as the Monge problem: \[T_{0}:=\operatorname*{argmin}_{T\in\mathcal{T}(P,Q)}\int\tfrac{1}{2}\|x-T(x)\|^{2}\,\mathrm{d}P(x)\,, \tag{7}\] where \(\mathcal{T}(P,Q)\) is the set of admissible transport maps, i.e. for \(X\sim P\), \(T(X)\sim Q\). This optimization problem is non-convex in \(T\), and a solution is not always guaranteed to exist for arbitrary \(P\) and \(Q\). The following theorem unifies these three formulations of optimal transport under the squared-Euclidean cost: **Theorem 2.1** (Brenier's theorem; Bre91).: _Let \(P\in\mathcal{P}_{ac}(\Omega)\) and let \(Q\in\mathcal{P}(\Omega)\), then_ 1. _the solution to Equation (7) exists and is of the form_ \(T_{0}=\nabla\varphi_{0}\)_, where_ \(\varphi_{0}\) _solves Equation (5)._ 2.
\(\pi_{0}\) _is also uniquely defined as_ \[\mathrm{d}\pi_{0}(x,y)=\,\mathrm{d}P(x)\delta_{\{\nabla\varphi_{0}(x)\}}(y)\,.\] When we want to place emphasis on the underlying measures, we will write \(\varphi_{0}=\varphi_{0}^{P\to Q}\), \(\psi_{0}=\psi_{0}^{P\to Q}\) and \(T_{0}=T_{0}^{P\to Q}\). #### 2.1.1 OT in the semi-discrete case In optimal transport, the semi-discrete setting refers to the case where \(P\) has a density with respect to the Lebesgue measure on \(\mathbb{R}^{d}\), and \(Q\) is a discrete measure supported on finitely many points. The following proposition characterizes the optimal transport map in this situation, which exhibits a particular structure compared to the general results in the previous section. Let \([J]=\{1,\ldots,J\}\). **Proposition 2.2** (Aha98).: _If \(P\in\mathcal{P}_{ac}(\Omega)\) and \(Q\) is a discrete measure supported on the points \(y_{1},\ldots,y_{J}\), then the optimal transport map \(\nabla\varphi_{0}\) is given by_ \[\nabla\varphi_{0}(x)\coloneqq\operatorname*{argmax}_{j\in[J]}\{\langle x,y_{j}\rangle-\psi_{0}(y_{j})\}\,, \tag{8}\] _where \(\psi_{0}\) is the dual to \(\varphi_{0}\) in the sense of Equation (6)._ Here, the optimal dual Brenier potential \(\psi_{0}\) can be identified with a _vector_ in \(\mathbb{R}^{J}\), whose dimension is given by the number of atoms, and the optimal Brenier potential is consequently given by \[\varphi_{0}(x)\coloneqq\max_{j\in[J]}\{\langle x,y_{j}\rangle-\psi_{0}(y_{j})\}\,.\] Although \(\varphi_{0}\) is not differentiable, only subdifferentiable, we still use the gradient notation, as \(\nabla\varphi_{0}\) is well-defined \(P\)-almost everywhere. The map \(\nabla\varphi_{0}\) partitions the space into \(J\) convex polytopes \(L_{j}\coloneqq\nabla\varphi_{0}^{-1}(\{y_{j}\})\) called _Laguerre cells_; recall Figure 1. From this definition, it is clear that for a given \(x\in L_{j}\), the optimal transport map satisfies \(\nabla\varphi_{0}(x)=y_{j}\). The difficulty in finding this map lies in determining the cells \(L_{j}\), or equivalently the dual variables \(\psi_{0}(y_{j})\). ### 2.2 Entropic optimal transport Entropic regularization was introduced to both the optimal transport and machine learning communities in the seminal paper by [11], allowing approximate optimal transport distances to be computed at unprecedented speeds. Entropic optimal transport (EOT) is defined as the following regularized version of Equation (4): for \(\varepsilon>0\), \[\mathrm{S}_{\varepsilon}(P,Q):=\min_{\pi\in\Gamma(P,Q)}\iint\tfrac{1}{2}\|x-y\|^{2}\,\mathrm{d}\pi(x,y)+\varepsilon\mathrm{KL}(\pi\|P\otimes Q)\,, \tag{9}\] where \(\operatorname{KL}(\mu\|\nu)=\int\log\frac{\mathrm{d}\mu}{\mathrm{d}\nu}\,\mathrm{d}\mu\) when \(\mu\in\mathcal{P}(\Omega)\) is absolutely continuous with respect to \(\nu\in\mathcal{P}(\Omega)\). This speedup is due to the elegant connection of (9) to Sinkhorn's algorithm; we refer the interested reader to [14, Chapter 4] for more information. The computational tractability of \(\mathrm{S}_{\varepsilon}\) compared to \(\mathrm{S}_{0}\) when dealing with many samples has made it a central object of study in its own right [see, e.g., GCB\({}^{+}\)19, MNW19, CRL\({}^{+}\)20, RS22, GSLNW22].
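As an aside on Proposition 2.2 above: once the dual vector \(\psi_{0}\) is known, evaluating the map (8) is immediate; all of the computational difficulty noted there lies in finding \(\psi_{0}\). A minimal sketch (ours, for illustration):

```python
# Sketch (ours): evaluating the semi-discrete Brenier map (8), assuming the
# dual vector psi has already been computed.
import numpy as np

def semidiscrete_map(x, ys, psi):
    """x: (d,) query point; ys: (J, d) atoms; psi: (J,) dual values psi_0(y_j)."""
    scores = ys @ x - psi            # <x, y_j> - psi_0(y_j) for each j
    return ys[np.argmax(scores)]     # atom of the Laguerre cell containing x
```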
Equation (9) admits the following dual formulation, which is now an unconstrained optimization problem [13, 1] \[\begin{split}\mathrm{S}_{\varepsilon}(P,Q)=\tfrac{1}{2}M_{2}(P)+\tfrac{1}{2}M_{2}(Q)&-\inf_{\varphi,\psi}\bigg{(}\int\varphi\,\mathrm{d}P+\int\psi\,\mathrm{d}Q\\ &+\varepsilon\iint(e^{(\langle x,y\rangle-\varphi(x)-\psi(y))/\varepsilon}-1)\,\mathrm{d}P(x)\,\mathrm{d}Q(y)\bigg{)},\end{split} \tag{10}\] where \((\varphi,\psi)\in L_{1}(P)\times L_{1}(Q)\). When \(P\) and \(Q\) have finite second moments, Equation (9) admits a _unique_ minimizer \(\pi_{\varepsilon}\), and we have the existence of minimizers to Equation (10), which we denote as \((\varphi_{\varepsilon},\psi_{\varepsilon})\). We call \(\pi_{\varepsilon}\) the _entropic optimal plan_ and \((\varphi_{\varepsilon},\psi_{\varepsilon})\) are called _entropic Brenier potentials_. The following optimality relation further relates these primal and dual solutions [11]: \[\mathrm{d}\pi_{\varepsilon}(x,y):=e^{(\langle x,y\rangle-\varphi_{\varepsilon}(x)-\psi_{\varepsilon}(y))/\varepsilon}\,\mathrm{d}P(x)\,\mathrm{d}Q(y)\,.\] As a consequence, the following relationship holds at optimality: \[\mathrm{S}_{\varepsilon}(P,Q)=\tfrac{1}{2}M_{2}(P)+\tfrac{1}{2}M_{2}(Q)-\!\int\varphi_{\varepsilon}\,\mathrm{d}P\;-\!\int\psi_{\varepsilon}\,\mathrm{d}Q\,,\] and, moreover, we can define versions of \(\varphi_{\varepsilon}\) and \(\psi_{\varepsilon}\) such that the following relationships hold [see 16, 17] over all \(x\in\mathbb{R}^{d}\) and \(y\in\mathbb{R}^{d}\), respectively: \[\varphi_{\varepsilon}(x) =\varepsilon\log\int e^{(\langle x,y\rangle-\psi_{\varepsilon}(y))/\varepsilon}\,\mathrm{d}Q(y)\,, \tag{11}\] \[\psi_{\varepsilon}(y) =\varepsilon\log\int e^{(\langle x,y\rangle-\varphi_{\varepsilon}(x))/\varepsilon}\,\mathrm{d}P(x)\,, \tag{12}\] which are smoothed versions of the Legendre transform; see Appendix A for details. In what follows, we always assume that we have selected \(\varphi_{\varepsilon}\) and \(\psi_{\varepsilon}\) so that these identities hold. #### 2.2.1 Entropic Brenier Map If \((X,Y)\sim\pi_{\varepsilon}\), we may define the conditional probability \(\pi_{\varepsilon}^{x}\) of \(Y\) given that \(X=x\), with density \[\frac{\mathrm{d}\pi_{\varepsilon}^{x}}{\mathrm{d}Q}(y)\propto\exp\left((\langle x,y\rangle-\psi_{\varepsilon}(y))/\varepsilon\right)\,. \tag{13}\] The barycentric projection of the optimal entropic coupling \(\pi_{\varepsilon}\), or _entropic Brenier map_, is a central object of study in several works, e.g., [1, 16, 17, 1, 1], defined as \[T_{\varepsilon}(x)=\!\int\!y\,\mathrm{d}\pi_{\varepsilon}^{x}(y)=\nabla\varphi_{\varepsilon}(x)\,, \tag{14}\] where \(\pi_{\varepsilon}^{x}\) is as in Equation (13). Note that this quantity is well defined for all \(x\in\mathbb{R}^{d}\) as long as the source and target measures have compact support; in particular, it applies to both discrete and continuous measures. The second equality follows from Equation (11) and the dominated convergence theorem. As in the unregularized case, we will write \(\varphi_{\varepsilon}=\varphi_{\varepsilon}^{P\to Q}\), \(\psi_{\varepsilon}=\psi_{\varepsilon}^{P\to Q}\) and \(T_{\varepsilon}=T_{\varepsilon}^{P\to Q}\) when we want to emphasize the dependency on the underlying measures. This particular barycentric projection was proposed as a tool for large-scale optimal transport by [13], but analyzed statistically for the first time by [16] as an estimator for the optimal transport map.
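To make the estimator concrete, here is a minimal numpy sketch (ours, not the authors' code) that computes the plug-in entropic Brenier map at the source samples via a basic Sinkhorn loop; log-domain stabilization, needed in practice for small \(\varepsilon\), is omitted for brevity.

```python
# Minimal sketch (ours) of the entropic Brenier map estimator: run Sinkhorn on
# the samples, then take the barycentric projection (14) of the coupling.
import numpy as np

def entropic_map_at_samples(X, Y, eps, iters=2000):
    """X: (n, d) source samples; Y: (m, d) target samples. Returns T_hat(X_i)."""
    n, m = len(X), len(Y)
    C = 0.5 * ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # cost |x - y|^2 / 2
    K = np.exp(-C / eps)
    a, b = np.full(n, 1 / n), np.full(m, 1 / m)               # uniform weights
    v = np.ones(m)
    for _ in range(iters):                                    # Sinkhorn loop
        u = a / (K @ v)
        v = b / (K.T @ u)
    pi = u[:, None] * K * v[None, :]                          # entropic coupling
    return (pi @ Y) / pi.sum(axis=1, keepdims=True)           # barycentric proj.
```

Out-of-sample evaluation at a new point \(x\) would instead use the fitted potential \(\psi_{\varepsilon}\) in (13)–(14); given the theory below, \(\varepsilon\asymp n^{-1/2}\) is the natural scaling in the semi-discrete case.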
We mention some of their results to highlight the differences with our new results for the semi-discrete setting in Section 3. First, they prove the following approximation result for \(T_{\varepsilon}\). **Proposition 2.3** (Pw21, Corollary 1).: _Let \(P,Q\) be compactly supported absolutely continuous measures on a compact set \(\Omega\subseteq\mathbb{R}^{d}\) with densities \(p\) and \(q\), which are bounded away from \(0\) and \(\infty\). Assume that \(\varphi_{0}\) is smooth and strongly convex, and that \(\varphi_{0}^{*}\) is at least \(\mathcal{C}^{3}\). Then,_ \[\|T_{\varepsilon}-\nabla\varphi_{0}\|_{L^{2}(P)}^{2}\lesssim\varepsilon^{2}\,. \tag{15}\] Their main statistical result is the following theorem: **Proposition 2.4** (Pw21, Theorem 3).: _Suppose the same assumptions as in Proposition 2.3, and let \(P_{n}\) and \(Q_{n}\) denote the empirical measures of \(P\) and \(Q\) constructed from i.i.d. samples. Let \(\hat{T}_{\varepsilon}=T_{\varepsilon}^{P_{n}\to Q_{n}}\) denote the entropic Brenier map from \(P_{n}\) to \(Q_{n}\) and let \(T_{0}=\nabla\varphi_{0}\) be the optimal transport map from \(P\) to \(Q\). Then, if \(\varepsilon\asymp n^{-\frac{1}{d^{\prime}+3}}\)_ \[\mathbb{E}\|\hat{T}_{\varepsilon}-T_{0}\|_{L^{2}(P)}^{2}\lesssim n^{-\frac{3}{2(d^{\prime}+3)}}\log(n)\,, \tag{16}\] _where \(d^{\prime}=2\lceil d/2\rceil\)._ Note in particular that the rate of convergence of the entropic estimator critically depends on the ambient dimension \(d\) in the continuous-to-continuous case. #### 2.2.2 Related work Characterizing the convergence of entropic objects (e.g. potentials, cost, plans) to their unregularized counterparts in the \(\varepsilon\to 0\) regime has been a topic of several works in recent years. Convergence of the costs \(\mathrm{S}_{\varepsilon}\) to \(\mathrm{S}_{0}\) with precise rates was investigated in [14, 15, 16]. The works [1, 13, 1, 12] study the convergence of the minimizers \(\pi_{\varepsilon}\) to \(\pi_{0}\) under varying assumptions. Convergence of the potentials in a very general setting was established in [11], though without a rate of convergence in \(\varepsilon\). In the semi-discrete case, this gap was closed in [1], followed closely by [13], which gave non-asymptotic rates. The Sinkhorn Divergence, a non-negative, symmetric version of \(\mathrm{S}_{\varepsilon}\), was introduced in [10], was statistically analysed in [11] and also in [12, 1], and was connected to the entropic Brenier map in [10]. The recent preprint by [14] proved parametric rates of estimation between the empirical entropic Brenier map and its population counterpart, though with an exponentially poor dependence on the regularization parameter (see Remark 3.6). Using covariance inequalities, the entropic Brenier potentials were used to give a new proof of Caffarelli's contraction theorem; see [11]. This approach was recently generalized in [12]. Entropic optimal transport has also come into contact with the area of deep generative modelling through the following works [10, 1], among others. ## 3 Statistical performance of the entropic estimator in the semi-discrete setting Let \(P_{n}\) and \(Q_{n}\) be the empirical measures associated with two \(n\)-samples from \(P\) and \(Q\). We make the following regularity assumptions on \(P\), already introduced by [1]. **(A)**: The measure \(P\) has a compact convex support \(\Omega\subseteq B(0;R)\), with a density \(p\) satisfying \(0<p_{\min}\leq p\leq p_{\max}<\infty\) for positive constants \(p_{\min}\), \(p_{\max}\) and \(R\).
For example, \(P\) can be the uniform distribution over \(\Omega\), or a truncated Gaussian distribution. Furthermore, we will need the following assumption on \(Q\). **(B)**: The discrete probability measure \(Q=\sum_{j=1}^{J}q_{j}\delta_{y_{j}}\) is such that \(q_{j}\geq q_{\min}>0\) and \(y_{j}\in B(0;R)\) for all \(j\in[J]\). The goal of this section is to prove the following theorem: **Theorem 3.1**.: _Let \(P\) satisfy **(A)** and let \(Q\) satisfy **(B)**. Let \(\hat{T}_{\varepsilon}=T_{\varepsilon}^{P_{n}\to Q_{n}}\). Then, for \(\varepsilon\asymp n^{-1/2}\) and \(n\) large enough,_ \[\mathbb{E}\|\hat{T}_{\varepsilon}-T_{0}\|_{L^{2}(P)}^{2}\lesssim n^{-1/2}\,. \tag{17}\] Let \(T_{\varepsilon}=T_{\varepsilon}^{P\to Q}\) denote the entropic Brenier map associated to \(P\) and \(Q\). Our proof relies on the following bias-variance decomposition: \[\mathbb{E}\|\hat{T}_{\varepsilon}-T_{0}\|_{L^{2}(P)}^{2}\lesssim\mathbb{E}\|\hat{T}_{\varepsilon}-T_{\varepsilon}\|_{L^{2}(P)}^{2}+\|T_{\varepsilon}-T_{0}\|_{L^{2}(P)}^{2}.\] Given the next two results (Theorem 3.2 and Theorem 3.5) and the preceding decomposition, the proof of Theorem 3.1 reduces to balancing the two terms in the regularization parameter \(\varepsilon\). **Theorem 3.2**.: _Let \(P\) satisfy **(A)** and let \(Q\) satisfy **(B)**. Then, for \(\varepsilon\) small enough,_ \[\|T_{\varepsilon}-T_{0}\|_{L^{2}(P)}^{2}\lesssim\varepsilon\,. \tag{18}\] The proof of Theorem 3.2 relies on the following qualitative picture: if a point \(x\) belongs to some Laguerre cell \(L_{j}\), and is far away from the boundary of \(L_{j}\), then the entropic optimal plan \(\pi_{\varepsilon}\) will send almost all of its mass towards the point \(y_{j}=T_{0}(x)\), sending an exponentially small amount of mass to the other points \(y_{k}\), \(k\neq j\). Such a picture is correct as long as \(x\) is at distance at least \(\varepsilon\) from the boundary of the Laguerre cell \(L_{j}\), incurring a total error of order \(\varepsilon\). A rigorous proof of Theorem 3.2 can be found in Appendix B. Note that this rate is slower than the rate appearing in Proposition 2.3 in the continuous-to-continuous case. The following example shows that the dependency in \(\varepsilon\) is optimal in Theorem 3.2, indicating that the presence of discontinuities necessarily affects the approximation properties of the entropic Brenier map. _Example 3.3_.: Let \(P\) be a probability measure on \(\mathbb{R}\) having a symmetric bounded density \(p\) continuous at \(0\), and let \(Q=\frac{1}{2}(\delta_{-1}+\delta_{1})\). Following [1, Section 3], one can check that the entropic Brenier map in this setting is the following scaled sigmoidal function \[T_{\varepsilon}(x)=\tanh(2x/\varepsilon)\,,\] whereas the optimal transport map \(T_{0}(x)=\operatorname{sign}(x)\). Then, a computation gives \[\|T_{\varepsilon}-T_{0}\|_{L^{2}(P)}^{2} =2\int_{0}^{\infty}(1-\tanh(2x/\varepsilon))^{2}p(x)\,\mathrm{d}x\] \[=\varepsilon\int_{0}^{\infty}(1-\tanh(u))^{2}p(u\varepsilon/2)\,\mathrm{d}u\] \[=\varepsilon p(0)(\log(4)-1)+o(\varepsilon)\,,\] where in the last step we invoked the dominated convergence theorem, and computed the limiting integral. _Remark 3.4_.: Assumption **(A)** can be relaxed for Theorem 3.2 to hold. More precisely, it can be replaced by Assumptions 2.2 and 2.9 of [1], which hold for unbounded measures such as the normal distribution. Finally, we present the sample-complexity result: **Theorem 3.5**.: _Let \(P\) satisfy **(A)** and let \(Q\) satisfy **(B)**.
Then, for \(0<\varepsilon\leq 1\) such that \(\log(1/\varepsilon)\lesssim n/\log(n)\)_ \[\mathbb{E}\|\hat{T}_{\varepsilon}-T_{\varepsilon}\|_{L^{2}(P)}^{2}\lesssim\varepsilon^{-1}n^{-1}\,. \tag{19}\] _Remark 3.6_.: In [13], the authors show that if \(P\) and \(Q\) are merely compactly supported with \(\operatorname{supp}(P),\operatorname{supp}(Q)\subseteq B(0;R)\), then \[\mathbb{E}\|\hat{T}_{\varepsilon}-T_{\varepsilon}\|_{L^{2}(P)}^{2}\lesssim e^{cR^{2}/\varepsilon}\varepsilon^{-1}n^{-1}\,, \tag{20}\] where \(c>0\) is some absolute positive constant. Thus, under the additional structural assumptions of the semi-discrete formulation, we are able to significantly improve the rate of convergence between the empirical and population entropic Brenier maps. The proof of Theorem 3.5 relies on a novel stability result, reminiscent of [1, Theorem 6], which is of independent interest. We provide the proof in Appendix C. **Proposition 3.7**.: _Let \(\mu,\nu,\mu^{\prime},\nu^{\prime}\) be four probability measures supported in \(B(0;R)\). Then the entropic maps \(T_{\varepsilon}^{\mu\to\nu}\) and \(T_{\varepsilon}^{\mu^{\prime}\to\nu^{\prime}}\) satisfy_ \[\frac{\varepsilon}{8R^{2}}\|T_{\varepsilon}^{\mu\to\nu}-T_{\varepsilon}^{\mu^{\prime}\to\nu^{\prime}}\|_{L^{2}(\mu)}^{2}\leq\int(\varphi_{\varepsilon}^{\mu^{\prime}\to\nu^{\prime}}-\varphi_{\varepsilon}^{\mu\to\nu})\,\mathrm{d}\mu+\int(\psi_{\varepsilon}^{\mu^{\prime}\to\nu^{\prime}}-\psi_{\varepsilon}^{\mu\to\nu})\,\mathrm{d}\nu+\varepsilon\mathrm{KL}(\nu\|\nu^{\prime}).\] _Remark 3.8_.: The right side of the bound in Proposition 3.7 is equal to \[S_{\varepsilon}(\mu,\nu)-S_{\varepsilon}(\mu^{\prime},\nu^{\prime})+\int f_{\varepsilon}^{\mu^{\prime}\to\nu^{\prime}}\,\mathrm{d}(\mu^{\prime}-\mu)+\int g_{\varepsilon}^{\mu^{\prime}\to\nu^{\prime}}\,\mathrm{d}(\nu^{\prime}-\nu)+\varepsilon\mathrm{KL}(\nu\|\nu^{\prime})\,,\] where \(f_{\varepsilon}^{\mu^{\prime}\to\nu^{\prime}}=\frac{1}{2}\|\cdot\|^{2}-\varphi_{\varepsilon}^{\mu^{\prime}\to\nu^{\prime}}\) and \(g_{\varepsilon}^{\mu^{\prime}\to\nu^{\prime}}=\frac{1}{2}\|\cdot\|^{2}-\psi_{\varepsilon}^{\mu^{\prime}\to\nu^{\prime}}\). Proposition 3.7 is therefore the entropic analogue of the stability bounds of [1, Theorem 6] and [13, Lemma 5.1]. Unlike those results, Proposition 3.7 allows both the source and target measure to be modified, and does not require any smoothness assumptions. ### 3.1 Proof sketch of Theorem 3.5 To prove Theorem 3.5, we first consider the _one-sample setting_, where we assume that we only have access to samples \(Y_{1},\ldots,Y_{n}\sim Q\), but we have full access to \(P\). We then consider the one-sample entropic estimator \(T_{\varepsilon}^{P\to Q_{n}}\). We apply Proposition 3.7 with \(\mu=\mu^{\prime}\coloneqq P\), \(\nu\coloneqq Q_{n}\) and \(\nu^{\prime}\coloneqq Q\), yielding (see Corollary C.1 for details) \[\frac{\varepsilon}{8R^{2}}\mathbb{E}\|T_{\varepsilon}^{P\to Q_{n}}-T_{\varepsilon}\|_{L^{2}(P)}^{2}\leq\mathbb{E}\Big{(}\int(\psi_{\varepsilon}-\psi_{\varepsilon}^{P\to Q_{n}})\,\mathrm{d}(Q_{n}-Q)+\varepsilon\mathrm{KL}(Q_{n}\|Q)\Big{)}.\] Let \(\chi^{2}(P\|Q)\) denote the \(\chi^{2}\)-divergence between probability measures.
Young's inequality (see Lemma H.1) and the inequality \(\mathrm{KL}(Q_{n}\|Q)\leq\chi^{2}(Q_{n}\|Q)\) yield the following bound: \[\mathbb{E}\|T_{\varepsilon}^{P\to Q_{n}}-T_{\varepsilon}\|_{L^{2}(P)}^{2}\leq\frac{8R^{2}}{\varepsilon}\Big{(}\frac{\mathbb{E}[\mathrm{Var}_{Q}(\psi_{\varepsilon}^{P\to Q_{n}}-\psi_{\varepsilon})]}{2}+\frac{\mathbb{E}[\chi^{2}(Q_{n}\|Q)]}{2}\Big{)}+8R^{2}\mathbb{E}[\chi^{2}(Q_{n}\|Q)]\,.\] To complete our proof sketch, we use a new stability result on the entropic dual Brenier potentials, tailored to the semi-discrete setting. **Proposition 3.9**.: _Let \(\mu\) be a measure that satisfies **(A)**. Let \(\nu\), \(\nu^{\prime}\) be two discrete probability measures supported on \(\{y_{1},\ldots,y_{J}\}\), with \(\nu^{\prime}\geq\lambda\nu\) for some \(\lambda>0\). Then, for \(0<\varepsilon\leq 1\),_ \[\mathrm{Var}_{\nu}(\psi_{\varepsilon}^{\mu\to\nu^{\prime}}-\psi_{\varepsilon}^{\mu\to\nu})\leq\frac{C}{\lambda^{2}}\chi^{2}(\nu^{\prime}\|\nu), \tag{21}\] _where \(C\) depends on \(R\), \(p_{\min}\) and \(p_{\max}\)._ Moreover, a computation provided in Lemma H.2 shows that \(\mathbb{E}[\chi^{2}(Q_{n}\|Q)]=\frac{J-1}{n}\), which is enough to conclude the proof of the one-sample case; see Appendix E for details. The two-sample setting is tackled using similar reasoning, where we ultimately prove in Appendix F that the risk \(\mathbb{E}\|\hat{T}_{\varepsilon}-T_{\varepsilon}^{P\to Q_{n}}\|_{L^{2}(P)}^{2}\) is upper bounded by \[\frac{8R^{2}}{\varepsilon}\mathbb{E}\int(\varphi_{\varepsilon}^{P\to Q_{n}}-\varphi_{\varepsilon}^{P_{n}\to Q_{n}})\,\mathrm{d}(P_{n}-P)\,.\] Such a quantity can again be related to the estimation of the dual potentials \(\psi_{\varepsilon}^{P\to Q_{n}}\) and \(\psi_{\varepsilon}^{P_{n}\to Q_{n}}\). Using the same reasoning as before, we expect a parametric rate of convergence for this term as well. Merging the two results completes the proof of Theorem 3.5. We refer to Appendix F for full details. ## 4 Comparing against the 1NN estimator ### 4.1 Rate optimality of the entropic Brenier map The upper bound of Theorem 3.5 shows that our estimator achieves the \(n^{-1/2}\) rate. In fact, the following simple proposition tells us that this rate is optimal in the semi-discrete case. **Proposition 4.1**.: _Let \(P\) be the uniform distribution on \([-1/2,1/2]^{d}\) and for any \(J\geq 2\), let \(\mathcal{Q}_{J}\) denote the space of probability measures with at most \(J\) atoms, supported on \([-1/2,1/2]^{d}\). Define the minimax rate of estimation_ \[\mathcal{R}_{n}(\mathcal{Q}_{J})=\inf_{\hat{T}}\sup_{Q\in\mathcal{Q}_{J}}\mathbb{E}_{Q^{\otimes n}}[\|\hat{T}-T_{0}^{P\to Q}\|_{L_{2}(P)}^{2}]\,.\] _Then, it holds that \(\mathcal{R}_{n}(\mathcal{Q}_{J})\geq n^{-1/2}/64\)._ Proof.: Let \(e\) be a vector of the canonical basis of \(\mathbb{R}^{d}\), scaled by \(1/2\). Fix \(0<r<1/2\) and let \(Q_{0}=\frac{1}{2}\delta_{-e}+\frac{1}{2}\delta_{e}\) and \(Q_{1}=(\frac{1}{2}-r)\delta_{-e}+(\frac{1}{2}+r)\delta_{e}\). A computation gives \(\|T_{0}^{P\to Q_{0}}-T_{0}^{P\to Q_{1}}\|_{L_{2}(P)}^{2}=r\). Therefore, by Le Cam's lemma [see, e.g., 15, Chapter 15], \[\mathcal{R}_{n}(\mathcal{Q}_{J})\geq\frac{r}{8}(1-\mathrm{d}_{\mathrm{TV}}(Q_{0}^{\otimes n},Q_{1}^{\otimes n})). \tag{22}\] Let \(\mathrm{d}_{\mathrm{H}^{2}}(Q_{0},Q_{1})\) denote the (squared) Hellinger distance between measures.
We have \(\mathrm{d}_{\mathrm{TV}}(Q_{0}^{\otimes n},Q_{1}^{\otimes n})^{2}\leq\mathrm{d}_{\mathrm{H}^{2}}(Q_{0}^{\otimes n},Q_{1}^{\otimes n})\leq n\,\mathrm{d}_{\mathrm{H}^{2}}(Q_{0},Q_{1})\). Furthermore, a computation gives \[\mathrm{d}_{\mathrm{H}^{2}}(Q_{0},Q_{1}) =\left(\sqrt{\frac{1}{2}-r}-\sqrt{\frac{1}{2}}\right)^{2}+\left(\sqrt{\frac{1}{2}+r}-\sqrt{\frac{1}{2}}\right)^{2}\] \[=2-(\sqrt{1+2r}+\sqrt{1-2r})\] \[\leq 4r^{2}.\] We obtain the conclusion by picking \(r=n^{-1/2}/4\). ### 4.2 The 1NN estimator is provably suboptimal The 1-Nearest-Neighbor estimator, henceforth denoted \(\hat{T}_{\mathrm{1NN}}\), was proposed by [14] as a computational surrogate for estimating optimal transport maps in the low smoothness regime. Written succinctly, their estimator is \(\hat{T}_{\mathrm{1NN}}(x)=\sum_{i=1}^{n}\mathbf{1}_{V_{i}}(x)Y_{\hat{\pi}(i)}\), where \((V_{i})_{i=1}^{n}\) are Voronoi regions, i.e. \[V_{i}\coloneqq\{x\in\mathbb{R}^{d}\ :\ \|x-X_{i}\|\leq\|x-X_{k}\|\,,\forall\ k\neq i\}\,,\] and \(\hat{\pi}\) is the optimal transport plan between the empirical measures \(P_{n}\) and \(Q_{n}\), which amounts to a permutation. Computing the closest \(X_{i}\) to a new sample \(x\) has runtime \(\mathcal{O}(n\log(n))\), though the complexity of this estimator is determined by computing the plan \(\hat{\pi}\), which takes \(\mathcal{O}(n^{3})\) time via, e.g., the Hungarian Algorithm [see 19, Chapter 3]. When \(\varphi_{0}\) is smooth and strongly convex, [14] showed that, for \(d\geq 5\), \[\mathbb{E}\|\hat{T}_{\mathrm{1NN}}-\nabla\varphi_{0}\|_{L^{2}(P)}^{2}\lesssim n^{-2/d}\,.\] In contrast to the rate optimality of the entropic Brenier map, we now show that \(\hat{T}_{\mathrm{1NN}}\) is provably suboptimal in the semi-discrete setting. Not only does it fail to recover the minimax rate obtained by the entropic Brenier map, but its performance in fact degrades in comparison to the smooth case. A proof appears in Appendix G. **Proposition 4.2**.: _There exist a measure \(P\) satisfying **(A)** and a discrete measure \(Q\) satisfying **(B)** such that for \(d\geq 3\)_ \[\mathbb{E}\|\hat{T}_{\mathrm{1NN}}-T_{0}^{P\to Q}\|_{L^{2}(P)}^{2}\gtrsim n^{-1/d}\,.\] ### 4.3 Experiments We briefly verify our theoretical findings on synthetic experiments. To create the following plots, we draw two sets of \(n\) i.i.d. points from \(P\), \((X_{1},\ldots,X_{n})\) and \((X^{\prime}_{1},\ldots,X^{\prime}_{n})\), and create target points \(Y_{i}=T_{0}(X^{\prime}_{i})\), where \(T_{0}\) is known to us in advance in order to generate the data. Our estimators are computed on the data \((X_{1},\ldots,X_{n})\) and \((Y_{1},\ldots,Y_{n})\), and we evaluate the mean-squared error criterion \[\text{MSE}(\hat{T})=\|\hat{T}-T_{0}\|_{L^{2}(P)}^{2}\] of a given map estimator \(\hat{T}\) using Monte Carlo integration, using 50000 newly sampled points from \(P\). We plot the means across 10 repeated trials, accompanied by their standard deviations. #### 4.3.1 Semi-discrete example First consider \(P=\text{Unif}([0,1]^{d})\) and create atoms \(\{y_{1},\ldots,y_{J}\}\) by placing them evenly along the first coordinate: for all \(j\in[J]\), \[(y_{j})[1]=\frac{(j-1/2)}{J}\,,\quad(y_{j})[2]=\cdots=(y_{j})[d]=0.5\,.\] We choose uniform weights \(q_{j}=1/J\) for \(j\in[J]\). In this case, it is easy to see that the optimal transport map \(T_{0}(x)\) is uniquely determined by the first coordinate \(x[1]\) of \(x\). Figure 2 illustrates the rate-optimal performance of the entropic Brenier map, and the provably suboptimal performance of the 1-Nearest-Neighbor estimator.
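For reference, the data-generating process of this example can be reproduced in a few lines (our sketch; the estimator call at the end is left abstract and is not the authors' code):

```python
# Sketch (ours) of the semi-discrete experiment of Section 4.3.1:
# P = Unif([0,1]^d), J atoms spread along the first axis, and T_0 assigns x to
# the atom whose slab contains x[1].
import numpy as np

def make_atoms(J, d):
    ys = 0.5 * np.ones((J, d))
    ys[:, 0] = (np.arange(J) + 0.5) / J      # (y_j)[1] = (j - 1/2) / J
    return ys

def true_map(X, ys):
    J = len(ys)
    idx = np.clip((X[:, 0] * J).astype(int), 0, J - 1)  # slab index of x[1]
    return ys[idx]

rng = np.random.default_rng(0)
d, J, n = 10, 5, 1000
ys = make_atoms(J, d)
X, Xp = rng.random((n, d)), rng.random((n, d))
Y = true_map(Xp, ys)                          # targets Y_i = T_0(X'_i)
X_test = rng.random((50000, d))               # fresh draws for Monte Carlo MSE
# Fit any estimator T_hat on (X, Y) -- e.g. the Sinkhorn sketch of Section 2.2
# with eps ~ n**(-0.5) -- and evaluate:
# mse = np.mean(np.sum((T_hat(X_test) - true_map(X_test, ys)) ** 2, axis=1))
```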
#### 4.3.2 Discontinuous example

We turn our attention to a discontinuous transport map, where for \(x\in\mathbb{R}^{d}\), all the coordinates are fixed except for the first one:

\[T_{0}(x)=(2\,\mathrm{sign}(x[1]),x[2],\ldots,x[d])\,. \tag{23}\]

We choose \(P=\text{Unif}([-1,1]^{d})\) to exhibit a discontinuity in the data. Focusing on \(d=10\), we see in Figure 3 that the entropic map estimator avoids the curse of dimensionality and enjoys a faster convergence rate, with better constants.

## 5 Conclusion

Understanding optimal transport maps in the semi-discrete case is a natural stepping-stone to understanding the case for general discontinuous transport maps. In this work, we propose a tractable, minimax optimal estimator of the Brenier map in the semi-discrete setting, where the rate of estimation is dimension independent. To prove our result, we require several new results and techniques, and, as a by-product of our analysis, give the first parametric rates of estimation of the entropic Brenier map without exponential dependence in the regularization parameter. Our synthetic experiments indicate that the entropic Brenier map might be useful in estimating other variants of discontinuous transport maps, which constitutes an interesting direction for future research.

## Acknowledgements

AAP would like to thank Tudor Manole for fruitful discussions, and gratefully thanks funding sources NSF Award 1922658 and Meta AI Research. JNW thanks the Sloan Research Fellowship and NSF grant DMS-2210583.

## Appendix A Reminders on semi-discrete entropic optimal transport

We recall in this section some known results on entropic optimal transport that will be needed later. Let \(\mu,\nu\in\mathcal{P}(\Omega)\), where \(\Omega\subset B(0;R)\) is a compact set.

**Lemma A.1** (GCB\({}^{+}\)19).: _The entropic potentials \((\varphi_{\varepsilon}^{\mu\to\nu},\psi_{\varepsilon}^{\mu\to\nu})\) have bounded amplitude, in the sense that_

\[\max_{x\in\Omega}\varphi_{\varepsilon}^{\mu\to\nu}-\min_{x\in\Omega}\varphi_{\varepsilon}^{\mu\to\nu}\leq cR \tag{24}\]

_for some absolute constant \(c\), and similarly for \(\psi_{\varepsilon}^{\mu\to\nu}\)._

Assume now that \(\nu=\sum_{j=1}^{J}\nu_{j}\delta_{y_{j}}\) is a discrete measure. In this situation, only the values of the dual potential \(\psi_{\varepsilon}^{\mu\to\nu}\) on the points \(y_{1},\ldots,y_{J}\) are relevant. We therefore consider \(\psi_{\varepsilon}^{\mu\to\nu}\) as a vector in \(\mathbb{R}^{J}\). The potentials \(\varphi_{\varepsilon}^{\mu\to\nu}\) and \(\psi_{\varepsilon}^{\mu\to\nu}\) are dual to one another, in the sense of the \(\varepsilon\)-Legendre transform. Given a finite measure \(\rho\), the \(\varepsilon\)-Legendre transform of a function \(h\) with respect to \(\rho\) is given by

\[\Phi_{\varepsilon}^{\rho}(h)(y)=\varepsilon\log\int e^{(\langle x,y\rangle-h(x))/\varepsilon}\,\mathrm{d}\rho(x). \tag{25}\]

Relations (11) and (12) express that \(\varphi_{\varepsilon}^{\mu\to\nu}=\Phi_{\varepsilon}^{\nu}(\psi_{\varepsilon}^{\mu\to\nu})\) and vice-versa. In the semi-discrete setting, it is also convenient to introduce the \(\varepsilon\)-Legendre transform with respect to the counting measure \(\sigma\) on \(\{y_{1},\ldots,y_{J}\}\). For a vector \(\psi\in\mathbb{R}^{J}\), we have

\[\Phi_{\varepsilon}(\psi)(x)\coloneqq\Phi_{\varepsilon}^{\sigma}(\psi)(x)=\varepsilon\log\sum_{j=1}^{J}e^{(\langle x,y_{j}\rangle-\psi(y_{j}))/\varepsilon}.
\tag{26}\]

The \(\Phi_{\varepsilon}\) transform and the \(\Phi_{\varepsilon}^{\nu}\) transform are linked through the relation

\[\Phi_{\varepsilon}^{\nu}(\psi)=\Phi_{\varepsilon}(\tilde{\psi})\qquad\text{where}\qquad\tilde{\psi}(y_{j})=\psi(y_{j})-\varepsilon\log\nu_{j}, \tag{27}\]

and we call \(\tilde{\psi}\) a _shifted_ potential. With this notation, the optimality condition on the potentials can be rephrased. Let

\[F_{\varepsilon}^{\mu\to\nu}:\psi\in\mathbb{R}^{J}\mapsto\int\Phi_{\varepsilon}(\psi)\,\mathrm{d}\mu+\int\psi\,\mathrm{d}\nu\,. \tag{28}\]

Then, the function \(F_{\varepsilon}^{\mu\to\nu}\) is minimized at \(\tilde{\psi}_{\varepsilon}^{\mu\to\nu}\). For \(\psi\in\mathbb{R}^{J}\) and \(x\in\mathbb{R}^{d}\), we introduce the probability measure supported on \(\{y_{1},\ldots,y_{J}\}\) given by

\[\forall i\in[J],\quad\pi_{\varepsilon}^{x}[\psi](y_{i})=\frac{e^{(\langle x,y_{i}\rangle-\psi(y_{i}))/\varepsilon}}{\sum_{j=1}^{J}e^{(\langle x,y_{j}\rangle-\psi(y_{j}))/\varepsilon}}=e^{(\langle x,y_{i}\rangle-\Phi_{\varepsilon}(\psi)(x)-\psi(y_{i}))/\varepsilon}. \tag{29}\]

A computation gives \(\nabla F_{\varepsilon}^{\mu\to\nu}(\psi)=\int\pi_{\varepsilon}^{x}[\psi]\,\mathrm{d}\mu(x)-\nu\), so that at optimality, we have

\[\int\pi_{\varepsilon}^{x}[\tilde{\psi}_{\varepsilon}^{\mu\to\nu}]\,\mathrm{d}\mu(x)=\nu. \tag{30}\]

In this case, \(\pi_{\varepsilon}^{x}=\pi_{\varepsilon}^{x}[\tilde{\psi}_{\varepsilon}^{\mu\to\nu}]\) is the conditional distribution of the second marginal of \(\pi_{\varepsilon}\) given that the first is equal to \(x\), as in Section 2.2.1. More generally, for any potential \(\psi\), the first order condition implies that \(\psi\) is equal to \(\tilde{\psi}_{\varepsilon}^{\mu\to\nu_{\psi}}\), the optimal dual potential between \(\mu\) and \(\nu_{\psi}=\int\pi_{\varepsilon}^{x}[\psi]\,\mathrm{d}\mu(x)\).

## Appendix B Bound on the approximation error

Proof of Theorem 3.2.: Let \(i,j\in[J]\). We define the \(j\)th slack at \(x\in L_{i}\) by

\[\frac{1}{2}\Delta_{ij}(x)=-\langle x,y_{j}\rangle+\varphi_{0}(x)+\psi_{0}(y_{j}). \tag{31}\]

As \(\varphi_{0}\) is the Legendre transform of \(\psi_{0}\), we have \(\Delta_{ij}(x)\geq 0\). If the cells \(L_{i}\) and \(L_{j}\) have a nonempty intersection, the set \(H_{ij}(t)=\{x\in L_{i}:\ \Delta_{ij}(x)=t\}\) represents the trace on \(L_{i}\) of the hyperplane spanned by the boundary between \(L_{i}\) and \(L_{j}\), shifted by \(t\). It is stated in [1] that for every nonnegative measurable function \(f:\mathbb{R}\to\mathbb{R}\),

\[\int_{L_{i}}f(\Delta_{ij}(x))p(x)\,\mathrm{d}x=\frac{1}{2\|y_{i}-y_{j}\|}\int_{0}^{\infty}f(t)h_{ij}(t)\,\mathrm{d}t, \tag{32}\]

where \(h_{ij}(t)=\int_{H_{ij}(t)}p(x)\,\mathrm{d}\mathcal{H}_{d-1}(x)\) and \(\mathcal{H}_{d-1}\) is the \((d-1)\)-dimensional Hausdorff measure. In particular, \(w_{ij}=h_{ij}(0)\) is the (weighted) surface of the boundary between the \(i\)th and \(j\)th Laguerre cells (should it exist). Given \(x\in L_{i}\), let \(s(x)=\min_{j\neq i}\frac{1}{2}\Delta_{ij}(x)\). When the point \(x\) lies sufficiently deep inside its Laguerre cell, the conditional probability \(\pi_{\varepsilon}^{x}\) becomes extremely concentrated around the point \(y_{i}\), as the next lemma shows. Note that \(\pi_{0}^{x}=\delta_{y_{i}}\) when \(x\in L_{i}\).

**Lemma B.1**.: _Let \(x\in L_{i}\).
For \(\varepsilon\) small enough, it holds that for every \(j\in[J]\), \(|\pi_{\varepsilon}^{x}(y_{j})-\pi_{0}^{x}(y_{j})|\leq ce^{-s(x)/\varepsilon}\), where \(c\) depends on \(J\), the distances \(\|y_{i}-y_{j}\|\) and on the quantities \(w_{ij}\)._

Such a result was already stated in [1, Corollary 2.2], although it required the source measure \(P\) to have a Hölder continuous density; only assumption **(A)** is needed here.

Proof.: According to [1, Proposition 4.6], for \(\varepsilon\) small enough,

\[\varepsilon^{-1}\|\tilde{\psi}_{\varepsilon}-\psi_{0}\|_{\infty}\leq C, \tag{33}\]

where \(\tilde{\psi}_{\varepsilon}\) is the shifted version of \(\psi_{\varepsilon}\) (see (27)) and \(C\) depends on the distances \(\|y_{i}-y_{j}\|\) and on the \(w_{ij}\)s. Following [1, Proof of Corollary 2.2] and (29), we have for \(j\neq i\)

\[|\pi_{\varepsilon}^{x}(y_{j})-\pi_{0}^{x}(y_{j})|=\pi_{\varepsilon}^{x}(y_{j})=\frac{e^{(\langle x,y_{j}\rangle-\tilde{\psi}_{\varepsilon}(y_{j}))/\varepsilon}}{\sum_{j^{\prime}=1}^{J}e^{(\langle x,y_{j^{\prime}}\rangle-\tilde{\psi}_{\varepsilon}(y_{j^{\prime}}))/\varepsilon}}\leq e^{2C}\frac{e^{(\langle x,y_{j}\rangle-\psi_{0}(y_{j}))/\varepsilon}}{\sum_{j^{\prime}=1}^{J}e^{(\langle x,y_{j^{\prime}}\rangle-\psi_{0}(y_{j^{\prime}}))/\varepsilon}}\leq e^{2C}e^{-s(x)/\varepsilon}.\]

A similar computation yields that \(|\pi_{\varepsilon}^{x}(y_{i})-\pi_{0}^{x}(y_{i})|=|\pi_{\varepsilon}^{x}(y_{i})-1|\leq Je^{2C}e^{-s(x)/\varepsilon}\).

We can bound for any \(x\in L_{i}\),

\[\|T_{\varepsilon}(x)-T_{0}(x)\|=\|\sum_{j=1}^{J}y_{j}(\pi_{\varepsilon}^{x}(y_{j})-\pi_{0}^{x}(y_{j}))\|\leq c\sum_{j=1}^{J}\|y_{j}\|e^{-s(x)/\varepsilon}. \tag{34}\]

Therefore, letting \(C^{\prime}\) denote a constant, which may depend on \(J\), whose value may change from line to line, we obtain

\[\|T_{\varepsilon}-T_{0}\|_{L^{2}(P)}^{2} =\sum_{i=1}^{J}\int_{L_{i}}\|T_{\varepsilon}(x)-T_{0}(x)\|^{2}\,\mathrm{d}P(x)\leq C^{\prime}\sum_{i=1}^{J}\int_{L_{i}}\sum_{j=1}^{J}e^{-2s(x)/\varepsilon}\,\mathrm{d}P(x) \tag{35}\]
\[\leq C^{\prime}\sum_{i\neq j}\int_{L_{i}}e^{-\Delta_{ij}(x)/\varepsilon}\,\mathrm{d}P(x)\leq C^{\prime}\sum_{i\neq j}\frac{1}{2\|y_{i}-y_{j}\|}\int_{0}^{\infty}e^{-t/\varepsilon}h_{ij}(t)\,\mathrm{d}t\,, \tag{36}\]

where in the second inequality we used the definition of \(s(x)\). Assumption **(A)** ensures that the functions \(h_{ij}\) are bounded, which implies that the right-hand side in (36) is of order \(\varepsilon\).

## Appendix C Stability of entropic transport plans

Proof of Proposition 3.7.: Note that we may assume without loss of generality that \(\nu\ll\nu^{\prime}\) and that \(\mathrm{KL}(\nu\|\nu^{\prime})<\infty\), for otherwise the bound is vacuous. For notational convenience, we omit the dependence on \(\varepsilon\) in the subscripts. Write \(\pi^{\mu,\nu}=\gamma^{\mu,\nu}(x,y)\,\mathrm{d}\mu(x)\,\mathrm{d}\nu(y)\) for the entropic optimal plan between \(\mu\) and \(\nu\), where

\[\gamma^{\mu,\nu}=\exp\left(\frac{1}{\varepsilon}(\langle x,y\rangle-\varphi^{\mu\to\nu}(x)-\psi^{\mu\to\nu}(y))\right)\,,\]

and analogously define

\[\gamma^{\mu^{\prime},\nu^{\prime}}=\exp\left(\frac{1}{\varepsilon}(\langle x,y\rangle-\varphi^{\mu^{\prime}\to\nu^{\prime}}(x)-\psi^{\mu^{\prime}\to\nu^{\prime}}(y))\right)\,.\]

Consider the measure \(\gamma^{\mu^{\prime},\nu^{\prime}}(x,y)\,\mathrm{d}\mu(x)\,\mathrm{d}\nu^{\prime}(y)\).
The first-order optimality condition for \((\varphi^{\mu^{\prime}\to\nu^{\prime}},\psi^{\mu^{\prime}\to\nu^{\prime}})\) implies that

\[\int\gamma^{\mu^{\prime},\nu^{\prime}}(x,y)\,\mathrm{d}\nu^{\prime}(y)=1\quad\forall x\in\Omega\,, \tag{37}\]

so that \(\gamma^{\mu^{\prime},\nu^{\prime}}(x,y)\,\mathrm{d}\nu^{\prime}(y)\) is a probability measure. Let us write \(\mathrm{d}\pi^{x}(y)=\gamma^{\mu,\nu}(x,y)\,\mathrm{d}\nu(y)\) and \(\mathrm{d}\rho^{x}(y)=\gamma^{\mu^{\prime},\nu^{\prime}}(x,y)\,\mathrm{d}\nu^{\prime}(y)\). We make the following observations: first, \(T^{\mu\to\nu}(x)=\int y\,\mathrm{d}\pi^{x}(y)\) and \(T^{\mu^{\prime}\to\nu^{\prime}}(x)=\int y\,\mathrm{d}\rho^{x}(y)\). Second, the support of \(\rho^{x}\) lies inside \(B(0;R)\); since any \(1\)-Lipschitz function \(f\) on \(B(0;R)\) satisfies \(\sup_{x}f(x)-\inf_{x}f(x)\leq 2R\), Hoeffding's lemma [see 2, Lemma 2.2] implies that if \(f\) is \(1\)-Lipschitz and \(\int f\,\mathrm{d}\rho^{x}=0\), then

\[\int e^{tf}\,\mathrm{d}\rho^{x}\leq e^{2R^{2}t^{2}}\quad\forall t\in\mathbb{R}\,.\]

This implies [3, Theorem 3.1] that

\[W_{1}(\pi^{x},\rho^{x})^{2}\leq 8R^{2}\mathrm{KL}(\pi^{x}\|\rho^{x})\,. \tag{38}\]

Third, Jensen's inequality implies that for any coupling \(\gamma\) between \(\pi^{x}\) and \(\rho^{x}\),

\[\int\|y-y^{\prime}\|\,\mathrm{d}\gamma(y,y^{\prime})\geq\left\|\int(y-y^{\prime})\,\mathrm{d}\gamma(y,y^{\prime})\right\|=\left\|T^{\mu\to\nu}(x)-T^{\mu^{\prime}\to\nu^{\prime}}(x)\right\|, \tag{39}\]

so that in particular, \(\|T^{\mu\to\nu}(x)-T^{\mu^{\prime}\to\nu^{\prime}}(x)\|\leq W_{1}(\pi^{x},\rho^{x})\). Combining these facts, we obtain

\[\frac{1}{8R^{2}}\|T^{\mu\to\nu}(x)-T^{\mu^{\prime}\to\nu^{\prime}}(x)\|^{2}\leq\mathrm{KL}(\pi^{x}\|\rho^{x})=\int\log\left(\frac{\gamma^{\mu,\nu}}{\gamma^{\mu^{\prime},\nu^{\prime}}}(x,y)\frac{\mathrm{d}\nu}{\mathrm{d}\nu^{\prime}}(y)\right)\gamma^{\mu,\nu}(x,y)\,\mathrm{d}\nu(y)\,. \tag{40}\]

Integrating both sides of this inequality with respect to \(\mu\) yields

\[\frac{1}{8R^{2}}\|T^{\mu\to\nu}-T^{\mu^{\prime}\to\nu^{\prime}}\|_{L^{2}(\mu)}^{2}\leq\int\log\left(\frac{\gamma^{\mu,\nu}}{\gamma^{\mu^{\prime},\nu^{\prime}}}(x,y)\frac{\mathrm{d}\nu}{\mathrm{d}\nu^{\prime}}(y)\right)\,\mathrm{d}\pi^{\mu,\nu}(x,y)\,. \tag{41}\]

Expanding the definition of \(\gamma^{\mu,\nu}\) and \(\gamma^{\mu^{\prime},\nu^{\prime}}\) and using that

\[\int\log\frac{\mathrm{d}\nu}{\mathrm{d}\nu^{\prime}}(y)\,\mathrm{d}\pi^{\mu,\nu}(x,y)=\int\log\frac{\mathrm{d}\nu}{\mathrm{d}\nu^{\prime}}(y)\,\mathrm{d}\nu(y)=\mathrm{KL}(\nu\|\nu^{\prime})\]

yields the claim.

We now record two corollaries of this bound, which apply when either the source or the target measures of the entropic maps agree.

**Corollary C.1**.: _For any \(\mu,\nu,\nu^{\prime}\) supported in \(B(0;R)\),_

\[\frac{1}{8R^{2}}\|T_{\varepsilon}^{\mu\to\nu}-T_{\varepsilon}^{\mu\to\nu^{\prime}}\|_{L^{2}(\mu)}^{2}\leq\varepsilon^{-1}\int(\psi_{\varepsilon}^{\mu\to\nu^{\prime}}-\psi_{\varepsilon}^{\mu\to\nu})\,\mathrm{d}(\nu-\nu^{\prime})+\mathrm{KL}(\nu\|\nu^{\prime}). \tag{42}\]

Proof.: We apply Proposition 3.7 with \(\mu=\mu^{\prime}\), which yields (once again omitting the dependency on \(\varepsilon\))

\[\frac{1}{8R^{2}}\|T_{\varepsilon}^{\mu\to\nu}-T_{\varepsilon}^{\mu\to\nu^{\prime}}\|_{L^{2}(\mu)}^{2}\leq\varepsilon^{-1}\left(\int(\varphi^{\mu\to\nu^{\prime}}-\varphi^{\mu\to\nu})\,\mathrm{d}\mu+\int(\psi^{\mu\to\nu^{\prime}}-\psi^{\mu\to\nu})\,\mathrm{d}\nu\right)+\mathrm{KL}(\nu\|\nu^{\prime})\,.
\tag{43}\]

By definition, \((\varphi^{\mu\to\nu^{\prime}},\psi^{\mu\to\nu^{\prime}})\) minimizes the expression

\[\int\varphi\,\mathrm{d}\mu+\int\psi\,\mathrm{d}\nu^{\prime}+\varepsilon\iint e^{(\langle x,y\rangle-\varphi(x)-\psi(y))/\varepsilon}\,\mathrm{d}\mu(x)\,\mathrm{d}\nu^{\prime}(y)-\varepsilon\,,\]

so, recalling that \(\iint e^{(\langle x,y\rangle-\varphi^{\mu\to\nu^{\prime}}(x)-\psi^{\mu\to\nu^{\prime}}(y))/\varepsilon}\,\mathrm{d}\mu(x)\,\mathrm{d}\nu^{\prime}(y)=1\), we have in particular

\[\int\varphi^{\mu\to\nu^{\prime}}\,\mathrm{d}\mu+\int\psi^{\mu\to\nu^{\prime}}\,\mathrm{d}\nu^{\prime} \leq\int\varphi^{\mu\to\nu}\,\mathrm{d}\mu+\int\psi^{\mu\to\nu}\,\mathrm{d}\nu^{\prime}+\varepsilon\iint e^{(\langle x,y\rangle-\varphi^{\mu\to\nu}(x)-\psi^{\mu\to\nu}(y))/\varepsilon}\,\mathrm{d}\mu(x)\,\mathrm{d}\nu^{\prime}(y)-\varepsilon\]
\[=\int\varphi^{\mu\to\nu}\,\mathrm{d}\mu+\int\psi^{\mu\to\nu}\,\mathrm{d}\nu^{\prime}\,,\]

where we have used that the first-order optimality condition for \((\varphi^{\mu\to\nu},\psi^{\mu\to\nu})\) implies that \(\iint e^{(\langle x,y\rangle-\varphi^{\mu\to\nu}(x)-\psi^{\mu\to\nu}(y))/\varepsilon}\,\mathrm{d}\mu(x)\,\mathrm{d}\nu^{\prime}(y)=1\) as well (see (11)). This implies

\[\int(\varphi^{\mu\to\nu^{\prime}}-\varphi^{\mu\to\nu})\,\mathrm{d}\mu\leq-\int(\psi^{\mu\to\nu^{\prime}}-\psi^{\mu\to\nu})\,\mathrm{d}\nu^{\prime}\,. \tag{44}\]

Applying this inequality to (43) yields

\[\frac{1}{8R^{2}}\|T_{\varepsilon}^{\mu\to\nu}-T_{\varepsilon}^{\mu\to\nu^{\prime}}\|_{L^{2}(\mu)}^{2}\leq\varepsilon^{-1}\int(\psi^{\mu\to\nu^{\prime}}-\psi^{\mu\to\nu})\,\mathrm{d}(\nu-\nu^{\prime})+\mathrm{KL}(\nu\|\nu^{\prime}).\qed\]

**Corollary C.2**.: _For any \(\mu,\mu^{\prime},\nu\) supported in \(B(0;R)\),_

\[\frac{1}{8R^{2}}\|T^{\mu\to\nu}_{\varepsilon}-T^{\mu^{\prime}\to\nu}_{\varepsilon}\|^{2}_{L^{2}(\mu)}\leq\varepsilon^{-1}\int(\varphi^{\mu^{\prime}\to\nu}_{\varepsilon}-\varphi^{\mu\to\nu}_{\varepsilon})\,\mathrm{d}(\mu-\mu^{\prime})\,. \tag{45}\]

Proof.: We apply Proposition 3.7 with \(\nu=\nu^{\prime}\), yielding (dropping the dependency on \(\varepsilon\))

\[\frac{1}{8R^{2}}\|T^{\mu\to\nu}-T^{\mu^{\prime}\to\nu}\|^{2}_{L^{2}(\mu)}\leq\varepsilon^{-1}\left(\int(\varphi^{\mu^{\prime}\to\nu}-\varphi^{\mu\to\nu})\,\mathrm{d}\mu+\int(\psi^{\mu^{\prime}\to\nu}-\psi^{\mu\to\nu})\,\mathrm{d}\nu\right)\,. \tag{46}\]

An argument analogous to the one used in the proof of Corollary C.1 gives the inequality

\[\int\varphi^{\mu^{\prime}\to\nu}\,\mathrm{d}\mu^{\prime}+\int\psi^{\mu^{\prime}\to\nu}\,\mathrm{d}\nu\leq\int\varphi^{\mu\to\nu}\,\mathrm{d}\mu^{\prime}+\int\psi^{\mu\to\nu}\,\mathrm{d}\nu\,, \tag{47}\]

or, equivalently,

\[\int(\psi^{\mu^{\prime}\to\nu}-\psi^{\mu\to\nu})\,\mathrm{d}\nu\leq-\int(\varphi^{\mu^{\prime}\to\nu}-\varphi^{\mu\to\nu})\,\mathrm{d}\mu^{\prime}\,, \tag{48}\]

and combining this inequality with (46) proves the claim.

## Appendix D Strong convexity of the entropic semi-dual problem

**Proposition D.1** (Strong convexity of \(F^{\mu\to\nu}_{\varepsilon}\)).: _Let \(\nu=\sum_{j=1}^{J}\nu_{j}\delta_{y_{j}}\) be a measure supported on \(\{y_{1},\ldots,y_{J}\}\subseteq B(0;R)\) and let \(\mu\) be supported on a compact convex set \(\Omega\subseteq B(0;R)\) with a density \(p\) satisfying \(p_{\min}\leq p\leq p_{\max}\) for some \(p_{\max}\geq p_{\min}>0\). For \(\psi\in\mathbb{R}^{J}\), define \(\nu_{\psi}=\int\pi^{x}_{\varepsilon}(\psi)\,\mathrm{d}\mu(x)\) and assume that \(\nu_{\psi}\geq\lambda\nu\) for some \(0<\lambda\leq 1\).
Then, we have for \(\varepsilon>0\)_

\[F^{\mu\to\nu}_{\varepsilon}(\psi)-\min_{\psi}F^{\mu\to\nu}_{\varepsilon}\geq C\lambda\cdot\mathrm{Var}_{\nu}(\psi-\tilde{\psi}^{\mu\to\nu}_{\varepsilon}), \tag{49}\]

_where \(C=\left(e^{2R^{2}}\frac{p_{\max}}{p_{\min}}+\varepsilon\right)^{-1}\frac{p_{\min}}{p_{\max}}\)._

Proof.: As \(\mu\) and \(\varepsilon\) are fixed, we will simply write \(\psi_{\nu}\) instead of \(\psi^{\mu\to\nu}_{\varepsilon}\), and write similarly \(F_{\nu}=F^{\mu\to\nu}_{\varepsilon}\). Recall the definition (27) of the shifted potential \(\tilde{\psi}_{\nu}(y_{j})=\psi_{\nu}(y_{j})-\varepsilon\log\nu_{j}\). According to [13, Theorem 3.2], the functional \(F_{\nu}\) is minimized at the vector \(\tilde{\psi}_{\nu}\), with

\[\forall v\in\mathbb{R}^{J},\quad\mathrm{Var}_{\nu}(v)\leq\left(e^{2R^{2}}\frac{p_{\max}}{p_{\min}}+\varepsilon\right)v^{\top}\nabla^{2}F_{\nu}(\tilde{\psi}_{\nu})v. \tag{50}\]

For \(t\in[0,1]\), let \(\psi_{t}=\tilde{\psi}_{\nu}+t(\psi-\tilde{\psi}_{\nu})\) and let \(\nu_{t}=\int\pi^{x}_{\varepsilon}(\psi_{t})\,\mathrm{d}\mu(x)\). The potential \(\psi_{t}\) is the (shifted) entropic Brenier potential between \(\mu\) and \(\nu_{t}\), so that it minimizes the functional \(F_{\nu_{t}}\) (see Appendix A). Also, note that \(\nabla^{2}F_{\nu}\) does not depend on \(\nu\), so that

\[v^{\top}\nabla^{2}F_{\nu}(\psi_{t})v=v^{\top}\nabla^{2}F_{\nu_{t}}(\psi_{t})v\geq\left(e^{2R^{2}}\frac{p_{\max}}{p_{\min}}+\varepsilon\right)^{-1}\mathrm{Var}_{\nu_{t}}(v). \tag{51}\]

Let \(v=\psi-\tilde{\psi}_{\nu}\). A Taylor expansion of \(F_{\nu}\) gives

\[F_{\nu}(\psi)-F_{\nu}(\tilde{\psi}_{\nu})=\int_{0}^{1}v^{\top}\nabla^{2}F_{\nu}(\psi_{t})v\,\mathrm{d}t\geq\left(e^{2R^{2}}\frac{p_{\max}}{p_{\min}}+\varepsilon\right)^{-1}\int_{0}^{1}\mathrm{Var}_{\nu_{t}}(v)\,\mathrm{d}t. \tag{52}\]

**Lemma D.2**.: _Write \(\nu_{t}=\sum_{j=1}^{J}\nu_{t,j}\delta_{y_{j}}\). Then, for all \(t\in[0,1]\) and \(j\in[J]\), we have \(\nu_{t,j}\geq\frac{p_{\min}}{p_{\max}}\nu_{0,j}^{1-t}\nu_{1,j}^{t}\)._

This lemma is enough to conclude the proof. Indeed, \(\nu_{1}=\nu_{\psi}\geq\lambda\nu\), so that it implies that \(\mathrm{Var}_{\nu_{t}}(v)\geq\frac{p_{\min}}{p_{\max}}\lambda\mathrm{Var}_{\nu}(v)\).

Proof of Lemma D.2.: According to [10, Proof of Proposition 4.1],

\[\Phi_{\varepsilon}(\psi_{t})((1-t)x+ty)\leq(1-t)\Phi_{\varepsilon}(\tilde{\psi}_{\nu})(x)+t\Phi_{\varepsilon}(\psi)(y). \tag{53}\]

Therefore, if we let \(h_{t}(x)=e^{(\langle x,y_{j}\rangle-\psi_{t}(y_{j})-\Phi_{\varepsilon}(\psi_{t})(x))/\varepsilon}\), then we have \(h_{t}((1-t)x+ty)\geq h_{0}(x)^{1-t}h_{1}(y)^{t}\). By the Prékopa–Leindler inequality,

\[\nu_{t,j}=\int h_{t}(x)\,\mathrm{d}\mu(x) \geq p_{\min}\int_{\mathcal{X}}h_{t}(x)\,\mathrm{d}x\]
\[\geq p_{\min}\left(\int_{\mathcal{X}}h_{0}(x)\,\mathrm{d}x\right)^{1-t}\left(\int_{\mathcal{X}}h_{1}(x)\,\mathrm{d}x\right)^{t}\]
\[\geq\frac{p_{\min}}{p_{\max}}\nu_{0,j}^{1-t}\nu_{1,j}^{t}.\]

Proof of Proposition 3.9.: As in the previous proof, we drop the \(\varepsilon\) and \(\mu\) dependency in our notation. Write \(\nu_{k}=\sum_{j=1}^{J}\nu_{k,j}\delta_{y_{j}}\) for \(k=0,1\), and define as before the shifted potentials \(\tilde{\psi}_{\nu_{k}}(y_{j})=\psi_{\nu_{k}}(y_{j})-\varepsilon\log\nu_{k,j}\). Let \(\theta>0\) be a parameter to be fixed later.
According to Proposition D.1 and Lemma H.1, and using the inequality \(F_{\nu_{1}}(\tilde{\psi}_{\nu_{1}})\leq F_{\nu_{1}}(\tilde{\psi}_{\nu_{0}})\), we have

\[C\lambda\mathrm{Var}_{\nu_{0}}(\tilde{\psi}_{\nu_{1}}-\tilde{\psi}_{\nu_{0}}) \leq F_{\nu_{0}}(\tilde{\psi}_{\nu_{1}})-F_{\nu_{0}}(\tilde{\psi}_{\nu_{0}})\]
\[\leq F_{\nu_{0}}(\tilde{\psi}_{\nu_{1}})-F_{\nu_{1}}(\tilde{\psi}_{\nu_{1}})+F_{\nu_{1}}(\tilde{\psi}_{\nu_{0}})-F_{\nu_{0}}(\tilde{\psi}_{\nu_{0}})\]
\[=\int(\tilde{\psi}_{\nu_{1}}-\tilde{\psi}_{\nu_{0}})(\,\mathrm{d}\nu_{0}-\,\mathrm{d}\nu_{1})\]
\[\leq\frac{\theta}{2}\mathrm{Var}_{\nu_{0}}(\tilde{\psi}_{\nu_{1}}-\tilde{\psi}_{\nu_{0}})+\frac{1}{2\theta}\chi^{2}(\nu_{1}\|\nu_{0}).\]

We pick \(\theta=C\lambda\) to conclude that

\[\mathrm{Var}_{\nu_{0}}(\tilde{\psi}_{\nu_{1}}-\tilde{\psi}_{\nu_{0}})\leq\frac{1}{(C\lambda)^{2}}\chi^{2}(\nu_{1}\|\nu_{0}). \tag{54}\]

Therefore, writing \(\psi_{k}=\psi_{\nu_{k}}\) and \(\tilde{\psi}_{k}=\tilde{\psi}_{\nu_{k}}\), and using \(0<\varepsilon\leq 1\) together with the inequality \(|\log(a/b)|\leq|a-b|/\min\{a,b\}\) for \(a,b>0\),

\[\mathrm{Var}_{\nu_{0}}(\psi_{1}-\psi_{0}) \leq 2\mathrm{Var}_{\nu_{0}}(\tilde{\psi}_{1}-\tilde{\psi}_{0})+2\sum_{j=1}^{J}\nu_{0,j}\left(\log\left(\frac{\nu_{1,j}}{\nu_{0,j}}\right)\right)^{2}\]
\[\leq\frac{2}{(C\lambda)^{2}}\chi^{2}(\nu_{1}\|\nu_{0})+2\sum_{j=1}^{J}\nu_{0,j}\left(\frac{\nu_{1,j}-\nu_{0,j}}{\min\{\nu_{0,j},\nu_{1,j}\}}\right)^{2}\]
\[\leq\frac{2}{(C\lambda)^{2}}\chi^{2}(\nu_{1}\|\nu_{0})+\frac{2}{\lambda^{2}}\sum_{j=1}^{J}\frac{1}{\nu_{0,j}}(\nu_{1,j}-\nu_{0,j})^{2}\leq\left(\frac{2}{(C\lambda)^{2}}+\frac{2}{\lambda^{2}}\right)\chi^{2}(\nu_{1}\|\nu_{0}).\qed\]

## Appendix E Control of the fluctuations in the one-sample case

**Lemma E.1** (Sample complexity in the one-sample case).: _Assume that \(P\) satisfies **(A)** and that \(Q\) satisfies **(B)**. Then, it holds that \(\mathbb{E}\|T_{\varepsilon}^{P\to Q_{n}}-T_{\varepsilon}\|_{L^{2}(P)}^{2}\lesssim\varepsilon^{-1}n^{-1}\)._

Proof.: To ease notation, we write \(T_{\varepsilon,n}=T_{\varepsilon}^{P\to Q_{n}}\) and \(\psi_{\varepsilon,n}=\psi_{\varepsilon}^{P\to Q_{n}}\). As explained in Section 3, the stability result Proposition 3.7 implies that

\[\mathbb{E}\|T_{\varepsilon,n}-T_{\varepsilon}\|_{L^{2}(P)}^{2}\leq\frac{8R^{2}}{\varepsilon}\Big{(}\frac{\mathbb{E}[\operatorname{Var}_{Q}(\psi_{\varepsilon,n}-\psi_{\varepsilon})]}{2}+\frac{\mathbb{E}[\chi^{2}(Q_{n}\|Q)]}{2}\Big{)}+8R^{2}\mathbb{E}[\chi^{2}(Q_{n}\|Q)]\,. \tag{55}\]

Write \(Q=\sum_{j=1}^{J}q_{j}\delta_{y_{j}}\) and \(Q_{n}=\sum_{j=1}^{J}\hat{q}_{j}\delta_{y_{j}}\), and introduce the event \(E=\{\forall j\in[J],\ \hat{q}_{j}\geq q_{j}/2\}\). If \(E\) is satisfied, we have \(Q_{n}\geq Q/2\), so that Proposition 3.9 yields

\[\operatorname{Var}_{Q}(\psi_{\varepsilon,n}-\psi_{\varepsilon})\leq C\chi^{2}(Q_{n}\|Q). \tag{56}\]

If \(E\) is not satisfied, we use the fact that the entropic potentials have bounded amplitude (see Lemma A.1) to obtain that

\[\operatorname{Var}_{Q}(\psi_{\varepsilon,n}-\psi_{\varepsilon})\leq C^{\prime}. \tag{57}\]

**Lemma E.2**.: _Let \(E\) be the event that \(Q_{n}\geq Q/2\). Then \(\mathbb{P}(E^{c})\leq Je^{-cq_{\min}n}\) for some \(c>0\)._

Proof.: By [25, Exercise 2.3.2], we have \(\mathbb{P}(E^{c})\leq\sum_{j=1}^{J}\mathbb{P}(\hat{q}_{j}<q_{j}/2)\leq Je^{-cq_{\min}n}\) for some \(c>0\).

We obtain

\[\mathbb{E}\|T_{\varepsilon,n}-T_{\varepsilon}\|_{L^{2}(P)}^{2}\lesssim\frac{R^{2}}{\varepsilon}\mathbb{E}[\chi^{2}(Q_{n}\|Q)]+\frac{R^{2}}{\varepsilon}Je^{-cq_{\min}n}\lesssim\varepsilon^{-1}n^{-1} \tag{58}\]

by Lemma H.2.
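The two probabilistic ingredients of this proof are easy to check by simulation. The following snippet is a hypothetical sanity check (not from the paper): it estimates \(\mathbb{E}[\chi^{2}(Q_{n}\|Q)]\), which Lemma H.2 says equals \((J-1)/n\) exactly, and the probability of the bad event \(E^{c}\) from Lemma E.2, which should vanish exponentially in \(n\).

```python
import numpy as np

rng = np.random.default_rng(1)

J, trials = 5, 20000
q = np.full(J, 1.0 / J)                          # Q with uniform weights
for n in (50, 100, 200, 400):
    counts = rng.multinomial(n, q, size=trials)  # trials x J atom counts
    q_hat = counts / n                           # empirical weights of Q_n
    chi2 = ((q_hat - q) ** 2 / q).sum(axis=1)    # chi^2(Q_n || Q)
    bad = (q_hat < q / 2).any(axis=1)            # event E^c of Lemma E.2
    print(n, round(chi2.mean(), 5), (J - 1) / n, bad.mean())
```

The empirical mean of \(\chi^{2}(Q_{n}\|Q)\) matches \((J-1)/n\) up to Monte Carlo error, while the frequency of \(E^{c}\) decays much faster than \(n^{-1}\), consistent with the \(Je^{-cq_{\min}n}\) bound.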
## Appendix F Control of the fluctuations in the two-sample case

The goal of this section is to prove Theorem 3.5. We will actually prove a more general result, and show that _for any discrete measure_ \(\nu=\sum_{j=1}^{J}\nu_{j}\delta_{y_{j}}\) supported on \(\{y_{1},\dots,y_{J}\}\) with \(\nu_{j}\geq\nu_{\min}>0\) for all \(j\in[J]\), we have for \(\log(1/\varepsilon)\lesssim n/\log(n)\),

\[\mathbb{E}\|T_{\varepsilon}^{P_{n}\to\nu}-T_{\varepsilon}^{P\to\nu}\|_{L_{2}(P)}^{2}\lesssim\varepsilon^{-1}n^{-1}. \tag{59}\]

Theorem 3.5 follows from (59) by conditioning on \(Q_{n}\). Let \(E\) be the event that \(Q_{n}\geq Q/2\). Then, by Lemma E.2,

\[\mathbb{E}\|\hat{T}_{\varepsilon}-T_{\varepsilon}^{P\to Q_{n}}\|_{L_{2}(P)}^{2} \leq\mathbb{E}\left[\mathbb{E}[\|\hat{T}_{\varepsilon}-T_{\varepsilon}^{P\to Q_{n}}\|_{L_{2}(P)}^{2}|Q_{n}]\mathds{1}\{E\}\right]+R^{2}\mathbb{P}(E^{c})\]
\[\leq C\varepsilon^{-1}n^{-1}+R^{2}Je^{-cq_{\min}n}\lesssim\varepsilon^{-1}n^{-1}.\]

We obtain Theorem 3.5 by combining this bound with Lemma E.1. To prove (59), we first use Corollary C.2, which yields

\[\begin{split}\mathbb{E}\|T_{\varepsilon}^{P_{n}\to\nu}-T_{\varepsilon}^{P\to\nu}\|_{L_{2}(P)}^{2}&\leq 8R^{2}\varepsilon^{-1}\mathbb{E}\int(\varphi_{\varepsilon}^{P_{n}\to\nu}-\varphi_{\varepsilon}^{P\to\nu})\,\mathrm{d}(P-P_{n})\\ &=8R^{2}\varepsilon^{-1}\mathbb{E}\int(\Phi_{\varepsilon}(\tilde{\psi}_{\varepsilon}^{P_{n}\to\nu})-\Phi_{\varepsilon}(\tilde{\psi}_{\varepsilon}^{P\to\nu}))\,\mathrm{d}(P-P_{n}),\end{split} \tag{60}\]

where we recall that for a potential \(\psi\), the shifted potential \(\tilde{\psi}\) is given by \(\tilde{\psi}_{j}=\psi_{j}-\varepsilon\log\nu_{j}\). The remainder of the proof consists in bounding this integral by using localization arguments and standard bounds on suprema of empirical processes. Our first goal is to show that the potential \(\psi_{\varepsilon}^{P_{n}\to\nu}\) is close to the potential \(\psi_{\varepsilon}^{P\to\nu}\) in the \(\infty\)-norm. It will be convenient to work with the "\(L_{\infty}\)-variance"

\[\mathrm{Var}_{\infty}(\psi)=\inf_{c\in\mathbb{R}}\max_{j\in[J]}|\psi(y_{j})-c|^{2}=\left(\frac{\max\psi-\min\psi}{2}\right)^{2}. \tag{61}\]

As the measure \(\nu\) is lower bounded, it holds that

\[\mathrm{Var}_{\nu}(\psi)\geq\nu_{\min}\mathrm{Var}_{\infty}(\psi). \tag{62}\]

**Lemma F.1** (Supremum of \(\varepsilon\)-Legendre transforms).: _Let \(\psi_{0}\) be a fixed potential and let \(\tau>0\). Then, for all \(j\in[J]\),_

\[\mathbb{E}\left[\sup_{\mathrm{Var}_{\infty}(\psi-\psi_{0})\leq\tau^{2}}\left|\int(\pi_{\varepsilon}^{x}(\psi)_{j}-\pi_{\varepsilon}^{x}(\psi_{0})_{j})\,\mathrm{d}(P-P_{n})(x)\right|\right]\leq C\sqrt{\frac{J\max\{\log(\tau/\varepsilon),1\}}{n}} \tag{63}\]

\[\mathbb{E}\left[\sup_{\mathrm{Var}_{\infty}(\psi-\psi_{0})\leq\tau^{2}}\left|\int(\Phi_{\varepsilon}(\psi)-\Phi_{\varepsilon}(\psi_{0}))(x)\,\mathrm{d}(P-P_{n})(x)\right|\right]\leq C\tau\sqrt{\frac{J}{n}} \tag{64}\]

_for some absolute constant \(C\)._

Proof.: Let us prove the first inequality. The functional \(\pi_{\varepsilon}^{x}\) is invariant by translation: \(\pi_{\varepsilon}^{x}(\psi+c)=\pi_{\varepsilon}^{x}(\psi)\) for all \(c\in\mathbb{R}\).
This implies that

\[\sup_{\mathrm{Var}_{\infty}(\psi-\psi_{0})\leq\tau^{2}}\left|\int(\pi_{\varepsilon}^{x}(\psi)_{j}-\pi_{\varepsilon}^{x}(\psi_{0})_{j})\,\mathrm{d}(P-P_{n})(x)\right|\]
\[=\sup_{\|\psi-\psi_{0}\|_{\infty}\leq\tau}\left|\int(\pi_{\varepsilon}^{x}(\psi)_{j}-\pi_{\varepsilon}^{x}(\psi_{0})_{j})\,\mathrm{d}(P-P_{n})(x)\right|.\]

For a metric space \((A,d)\) and \(u>0\), we let \(N(u,A,d)\) be the covering number of \(A\) at scale \(u\), that is the smallest number of balls of radius \(u\) needed to cover \(A\). Let \(B\) be the \(L_{\infty}\)-ball of radius \(\tau\) in \(\mathbb{R}^{J}\), centered at \(\psi_{0}\), and let \(\|\cdot\|_{\infty}\) denote the \(\infty\)-norm. For \(0<u\leq\tau\), we have \(\log N(u,B,\|\cdot\|_{\infty})\leq J\log(\tau/u)\). As the function \(\psi\mapsto\pi_{\varepsilon}^{x}(\psi)_{j}\) is \(\varepsilon^{-1}\)-Lipschitz continuous for every \(x\in\mathbb{R}^{d}\), we have for \(0<u\leq\tau/\varepsilon\),

\[\log N(u,\{x\mapsto\pi_{\varepsilon}^{x}(\psi)_{j}:\ \psi\in B\},\|\cdot\|_{\infty})\leq J\log(\tau/(u\varepsilon))\,.\]

Remarking furthermore that \(0\leq\pi_{\varepsilon}^{x}(\psi)_{j}\leq 1\) (so that the class of functions \(\{x\mapsto\pi_{\varepsilon}^{x}(\psi)_{j}:\psi\in B\}\) admits the constant function \(1\) as an envelope function), we obtain the following control using Lemma H.3:

\[\mathbb{E}\left[\sup_{\|\psi-\psi_{0}\|_{\infty}\leq\tau}\left|\int(\pi_{\varepsilon}^{x}(\psi)_{j}-\pi_{\varepsilon}^{x}(\psi_{0})_{j})(\,\mathrm{d}P-\,\mathrm{d}P_{n})(x)\right|\right]\]
\[\quad\leq\frac{c_{0}}{\sqrt{n}}\int_{0}^{c_{1}}\!\!\sqrt{\log 2N(u,\{x\mapsto\pi_{\varepsilon}^{x}(\psi)_{j}:\ \psi\in B\},\|\cdot\|_{\infty})}\,\mathrm{d}u\]
\[\quad\leq\sqrt{\frac{c_{2}J\max\{\log(\tau/\varepsilon),1\}}{n}},\]

where \(c_{0}\), \(c_{1}\) and \(c_{2}\) are absolute constants, and the last line follows by distinguishing the cases \(c_{1}<\tau/\varepsilon\) and \(c_{1}\geq\tau/\varepsilon\). The second inequality follows from the same argument, using that the function \(\psi\mapsto\Phi_{\varepsilon}(\psi)\) is \(1\)-Lipschitz continuous. Indeed, the functional \(\Phi_{\varepsilon}\) satisfies \(\Phi_{\varepsilon}(\psi+c)=\Phi_{\varepsilon}(\psi)+c\) for all \(c\in\mathbb{R}\). Then the set \(\{\psi:\ \mathrm{Var}_{\infty}(\psi-\psi_{0})\leq\tau^{2}\}\) is equal to the set \(\{\psi+c:\ \psi\in B,\ c\in\mathbb{R}\}\). As \(\int c\,\mathrm{d}(P-P_{n})=0\), we can therefore once again restrict the supremum to vectors \(\psi\in B\). Furthermore, an envelope function of the class \(\{\Phi_{\varepsilon}(\psi)-\Phi_{\varepsilon}(\psi_{0}):\ \psi\in B\}\) is the constant function equal to \(\tau\). Therefore, by Lemma H.3, we obtain

\[\mathbb{E}\left[\sup_{\|\psi-\psi_{0}\|_{\infty}\leq\tau}\left|\int(\Phi_{\varepsilon}(\psi)-\Phi_{\varepsilon}(\psi_{0}))(\,\mathrm{d}P-\,\mathrm{d}P_{n})\right|\right]\]
\[\quad\leq\frac{c_{0}}{\sqrt{n}}\int_{0}^{c_{1}\tau}\sqrt{\log 2N(u,\{\Phi_{\varepsilon}(\psi):\ \psi\in B\},\|\cdot\|_{\infty})}\,\mathrm{d}u\]
\[\quad\leq c_{3}\tau\sqrt{\frac{J}{n}}.\qed\]

**Proposition F.2**.: _Assume that \(P\) satisfies **(A)** and let \(\nu=\sum_{j=1}^{J}\nu_{j}\delta_{y_{j}}\) be a measure supported on \(\{y_{1},\ldots,y_{J}\}\subset B(0;R)\), with \(\nu_{j}\geq\nu_{\min}\) for all \(j\in[J]\). Then, for all \(0<\varepsilon\leq 1\) with \(\log(1/\varepsilon)\lesssim n/\log(n)\), it holds that_

\[\mathbb{E}\mathrm{Var}_{\infty}(\tilde{\psi}_{\varepsilon}^{P_{n}\to\nu}-\tilde{\psi}_{\varepsilon}^{P\to\nu})\lesssim n^{-1}.
\tag{65}\]

Proof.: To alleviate notation, we will write \(\psi_{n}=\psi_{\varepsilon}^{P_{n}\to\nu}\) and \(\psi_{0}=\psi_{\varepsilon}^{P\to\nu}\). Similarly, we write \(F_{n}=F_{\varepsilon}^{P_{n}\to\nu}\) and \(F_{0}=F_{\varepsilon}^{P\to\nu}\). Let \(\nu_{n}=\int\pi_{\varepsilon}^{x}(\tilde{\psi}_{n})\,\mathrm{d}P(x)\). Under the event \(E=\{\nu_{n}\geq\nu/2\}\), we have according to Proposition D.1 and the fact that \(\tilde{\psi}_{n}\) minimizes \(F_{n}\),

\[\begin{split} C\nu_{\min}\mathrm{Var}_{\infty}(\tilde{\psi}_{n}-\tilde{\psi}_{0})&\leq C\mathrm{Var}_{\nu}(\tilde{\psi}_{n}-\tilde{\psi}_{0})\leq F_{0}(\tilde{\psi}_{n})-F_{0}(\tilde{\psi}_{0})\\ &\leq F_{0}(\tilde{\psi}_{n})-F_{n}(\tilde{\psi}_{n})+F_{n}(\tilde{\psi}_{0})-F_{0}(\tilde{\psi}_{0})\\ &=\int(\Phi_{\varepsilon}(\tilde{\psi}_{n})-\Phi_{\varepsilon}(\tilde{\psi}_{0}))\,\mathrm{d}(P-P_{n})\end{split} \tag{66}\]

Let us bound \(\mathbb{P}(E^{c})\). As \(\tilde{\psi}_{n}\) is the minimum of \(F_{n}\), we have \(\nu_{j}=\int\pi_{\varepsilon}^{x}(\tilde{\psi}_{n})_{j}\,\mathrm{d}P_{n}(x)\) (see Appendix A). Therefore, we may write \(\nu_{n,j}=\int\pi_{\varepsilon}^{x}(\tilde{\psi}_{n})_{j}\,\mathrm{d}P_{n}(x)+\int\pi_{\varepsilon}^{x}(\tilde{\psi}_{n})_{j}\,\mathrm{d}(P-P_{n})(x)=\)
\(\nu_{j}+Z_{j}\), where

\[Z_{j}=\int\pi_{\varepsilon}^{x}(\tilde{\psi}_{n})_{j}\,\mathrm{d}(P-P_{n})(x)=\int(\pi_{\varepsilon}^{x}(\tilde{\psi}_{n})_{j}-\pi_{\varepsilon}^{x}(\tilde{\psi}_{0})_{j})\,\mathrm{d}(P-P_{n})(x).\]

Note that \(\mathrm{Var}_{\infty}(\tilde{\psi}_{n}-\tilde{\psi}_{0})\lesssim R^{2}\) (see Lemma A.1), so that by Lemma F.1 and Lemma H.3,

\[\mathbb{P}(E^{c})\leq\sum_{j=1}^{J}\mathbb{P}(|Z_{j}|>\nu_{j}/2)\leq J\exp\left(-\frac{c\sqrt{n}\,\nu_{\min}}{\sqrt{J\log(1/\varepsilon)}+\log n}\right)\lesssim n^{-1}, \tag{67}\]

under the condition \(\log(1/\varepsilon)\lesssim n/\log(n)\). For \(k\geq 0\), let \(a_{k}=2^{k}/\sqrt{n}\) and fix some \(p>2\). Let

\[B_{a}=\sup_{\mathrm{Var}_{\infty}(\psi-\tilde{\psi}_{0})\leq a^{2}}\left|\int(\Phi_{\varepsilon}(\psi)-\Phi_{\varepsilon}(\tilde{\psi}_{0}))\,\mathrm{d}(P-P_{n})\right|\,.\]

Assume that \(E\) is satisfied and that \(\mathrm{Var}_{\infty}(\tilde{\psi}_{0}-\tilde{\psi}_{n})\in[a^{2},b^{2}]\). Then, according to (66), it holds that \(B_{b}\geq ca^{2}\). Using Markov's inequality, Lemma F.1 and Lemma H.3, we bound

\[\begin{split}\mathbb{E}\mathrm{Var}_{\infty}(\tilde{\psi}_{n}-\tilde{\psi}_{0})&\leq a_{0}^{2}+\sum_{k\geq 0}\mathbb{P}(\mathrm{Var}_{\infty}(\tilde{\psi}_{n}-\tilde{\psi}_{0})\in[a_{k}^{2},a_{k+1}^{2}]\text{ and }E)a_{k+1}^{2}+C\mathbb{P}(E^{c})\\ &\lesssim n^{-1}+\sum_{k\geq 0}\mathbb{P}\left(B_{a_{k+1}}\geq ca_{k}^{2}\right)a_{k+1}^{2}\lesssim n^{-1}+\sum_{k\geq 0}\frac{\mathbb{E}[B_{a_{k+1}}^{p}]}{a_{k}^{2p}}a_{k+1}^{2}\\ &\lesssim n^{-1}+\sum_{k\geq 0}\frac{(2^{k}/n)^{p}}{(4^{k}/n)^{p}}\frac{4^{k+1}}{n}\lesssim n^{-1}+\sum_{k\geq 0}\frac{2^{2k-pk}}{n}\lesssim n^{-1}.\qed\end{split}\]

**Proposition F.3**.: _Under the same assumptions as Proposition F.2, it holds that_

\[\mathbb{E}\|T_{\varepsilon}^{P_{n}\to\nu}-T_{\varepsilon}^{P\to\nu}\|_{L_{2}(P)}^{2}\lesssim\varepsilon^{-1}n^{-1}. \tag{68}\]

Proof.: Let \(Z=\mathrm{Var}_{\infty}(\tilde{\psi}_{n}-\tilde{\psi}_{0})\). Let once again \(a_{k}=2^{k}/\sqrt{n}\) for \(k\geq 1\), with \(a_{0}=0\). Fix some \(p>2\), with \(q=\frac{p}{p-1}\). For \(a>0\), let \(D_{a}=\sup_{\mathrm{Var}_{\infty}(\psi-\tilde{\psi}_{0})\leq a^{2}}\left|\int(\Phi_{\varepsilon}(\psi)-\Phi_{\varepsilon}(\tilde{\psi}_{0}))\,\mathrm{d}(P-P_{n})\right|\). By Hölder's inequality and Markov's inequality, we obtain

\[\mathbb{E}\int(\Phi_{\varepsilon}(\tilde{\psi}_{n})-\Phi_{\varepsilon}(\tilde{\psi}_{0}))\,\mathrm{d}(P-P_{n})\]
\[\leq\sum_{k\geq 0}\mathbb{E}\left[\mathds{1}\{Z\in[a_{k}^{2},a_{k+1}^{2}]\}\sup_{\mathrm{Var}_{\infty}(\psi-\tilde{\psi}_{0})\leq a_{k+1}^{2}}\int(\Phi_{\varepsilon}(\psi)-\Phi_{\varepsilon}(\tilde{\psi}_{0}))\,\mathrm{d}(P-P_{n})\right]\]
\[\qquad\leq\mathbb{E}[D_{a_{1}}]+\sum_{k\geq 1}\left(\mathbb{P}(Z\geq a_{k}^{2})\right)^{1/q}\mathbb{E}\left[D_{a_{k+1}}^{p}\right]^{1/p}\]
\[\lesssim n^{-1}+\sum_{k\geq 1}\left(\frac{\mathbb{E}[Z]}{a_{k}^{2}}\right)^{1/q}\frac{2^{k}}{n}\lesssim n^{-1}+\sum_{k\geq 1}\frac{2^{k(1-2/q)}}{n}\lesssim n^{-1},\]

where we use Proposition F.2, Lemma F.1 and Lemma H.3 in the last line. Equation (60) then gives the conclusion.

## Appendix G A lower bound for the performance of the 1NN estimator

In this section, we prove Proposition 4.2. We let \(P\) be the Lebesgue measure on \(\Omega=[0,1]^{d}\), and let \(y_{0}=(0,1/2,\ldots,1/2)\) and \(y_{1}=(1,1/2,\ldots,1/2)\). We denote by \(P_{n}\) an empirical measure consisting of i.i.d. samples from \(P\).
As in Appendix F, we work in the general setting of a generic discrete target measure \(\nu\), which may either be fixed or may be a random measure independent of \(P_{n}\). We let \(\nu=\sum_{j=0,1}\nu_{j}\delta_{y_{j}}\) for \(\nu_{0},\nu_{1}\geq\frac{1}{4}\); this latter condition will hold with overwhelming probability if \(\nu\) is an empirical measure \(Q_{n}\) corresponding to \(n\) i.i.d. samples from \(Q=\frac{1}{2}\delta_{y_{0}}+\frac{1}{2}\delta_{y_{1}}\). Following [1], we define the one-nearest neighbor estimator \(\hat{T}_{\rm 1NN}\) in this general context by

\[\hat{T}_{\rm 1NN}(x)=\sum_{i=1}^{n}\sum_{j=0,1}\mathbf{1}_{V_{i}}(x)\,(n\hat{\pi}(X_{i},y_{j}))\,y_{j}\,,\]

where \(\hat{\pi}\) is the empirical optimal coupling between \(P_{n}\) and \(\nu\). We first examine the structure of the Brenier map \(T_{0}=\nabla\varphi_{0}\). The considerations in Section 2.1.1 imply that

\[T_{0}(x)=\begin{cases}y_{0}&\langle e_{1},x\rangle\leq\nu_{0}\\ y_{1}&\langle e_{1},x\rangle>\nu_{0}\,,\end{cases}\]

where \(e_{1}\) is the first elementary basis vector. The potential \(\varphi_{0}\) is not differentiable on the separating hyperplane \(\langle e_{1},x\rangle=\nu_{0}\), which has measure \(0\) under \(P\), but we may arbitrarily assign points on this hyperplane to \(y_{0}\). Similar arguments imply that the empirical transport plan \(\hat{\pi}\) between \(P_{n}\) and \(\nu\) has the following property: there exists a (random) threshold \(\tau\in(0,1)\) such that

\[\hat{\pi}(x,y_{0})=\begin{cases}1&\langle e_{1},x\rangle<\tau\\ 0&\langle e_{1},x\rangle>\tau\,.\end{cases}\]

The set \(\langle e_{1},x\rangle=\tau\) may not have measure \(0\) under \(P_{n}\), and \(\hat{\pi}(x,y_{0})\) may take values strictly between \(0\) and \(1\) on this set. The following lemma shows that \(\tau\) is close to \(\nu_{0}\) with high probability.

**Lemma G.1**.: _For any \(t\geq 0\),_

\[\mathbb{P}\left\{\tau\geq\nu_{0}+t\right\}\leq e^{-2nt^{2}}\,.\]

Proof.: If \(\tau\geq\nu_{0}+t\), this implies that \(P_{n}(\{x:\langle e_{1},x\rangle<\nu_{0}+t\})\leq\nu_{0}\). On the other hand, \(nP_{n}(\{x:\langle e_{1},x\rangle<\nu_{0}+t\})\) is a Bin\((n,\nu_{0}+t)\) random variable. The result then follows from Hoeffding's inequality [1, Theorem 2.8].

Let us write \(H\) for the halfspace \(\{x:\langle e_{1},x\rangle\leq\nu_{0}\}\), and \(\hat{H}\) for the halfspace \(\{x:\langle e_{1},x\rangle\leq\tau\}\). Let \(x\) be any point in \(\Omega\) such that \(x\in H\). We are interested in the event that there exists an element \(X_{i}\in\{X_{1},\ldots,X_{n}\}\) such that a) \(x\in V_{i}\) and b) \(X_{i}\in\hat{H}^{c}\). Call this event \(\mathcal{E}(x)\). On this event, \(\hat{T}_{\rm 1NN}(x)=y_{1}\) and \(T_{0}(x)=y_{0}\), so \(\|\hat{T}_{\rm 1NN}(x)-T_{0}(x)\|^{2}=1\). We therefore obtain

\[\mathbb{E}\|\hat{T}_{\mathrm{1NN}}-T_{0}\|_{L^{2}(P)}^{2} =\mathbb{E}\int\|\hat{T}_{\mathrm{1NN}}(x)-T_{0}(x)\|^{2}\,\mathrm{d}P(x)\]
\[\geq\mathbb{E}\int_{H}\|\hat{T}_{\mathrm{1NN}}(x)-T_{0}(x)\|^{2}\mathds{1}\{\mathcal{E}(x)\}\,\mathrm{d}P(x)\]
\[\gtrsim\mathbb{E}\int_{H}\mathds{1}\{\mathcal{E}(x)\}\,\mathrm{d}P(x)\]
\[=\int_{H}\mathbb{P}\left\{\mathcal{E}(x)\right\}\,\mathrm{d}P(x)\,,\]

where the final equality follows from the Fubini–Tonelli theorem. We now lower bound the probability of \(\mathcal{E}(x)\). Let us write \(\mathcal{A}_{t}\) for the event that \(\tau<\nu_{0}+t\), for \(t>0\) to be specified, and write \(H_{t}\) for the halfspace \(\{x:\langle e_{1},x\rangle\leq\nu_{0}+t\}\).
Given any \(x\in H\), write \(\Delta=d(x,H_{t}^{c})\), and let \(B\) be a ball of radius \(2\Delta\) around \(x\), intersected with \(\Omega\). Denote by \(\mathcal{F}(x)\) the event that there are no samples in \(V=B\cap H_{t}\) but there is at least one point in \(B\cap H_{t}^{c}\). Then \(\mathcal{F}(x)\cap\mathcal{A}_{t}\subseteq\mathcal{E}(x)\), since on \(\mathcal{F}(x)\) the nearest neighbor to \(x\) must be a sample in \(H_{t}^{c}\), and on \(\mathcal{A}_{t}\) we have \(H_{t}^{c}\subseteq\hat{H}^{c}\).

**Lemma G.2**.:

\[\mathbb{P}\left\{\mathcal{F}(x)\cap\mathcal{A}_{t}\right\}\geq(1-\operatorname{vol}(V))^{n}-(1-\operatorname{vol}(B))^{n}-e^{-2nt^{2}}\,.\]

Proof.: We first compute \(\mathbb{P}\left\{\mathcal{F}(x)\right\}\). The probability that there are no samples in \(V\) is \((1-\operatorname{vol}(V))^{n}\), and this event may be written as the disjoint union of \(\mathcal{F}(x)\) and the event that \(B\) contains no samples at all. The latter event has probability \((1-\operatorname{vol}(B))^{n}\). Therefore

\[(1-\operatorname{vol}(V))^{n}=\mathbb{P}\left\{\mathcal{F}(x)\right\}+(1-\operatorname{vol}(B))^{n}\,.\]

Since \(\mathbb{P}\left\{\mathcal{A}_{t}^{c}\right\}\leq e^{-2nt^{2}}\), the claim follows.

We need the following lemma.

**Lemma G.3**.: _Assume that \(\Delta>0\) and that \(d(x,\partial\Omega)\geq 2\Delta\). There exist positive constants \(c_{d,0}<1\) and \(c_{d,1}\) such that_

\[\operatorname{vol}(V)\leq c_{d,0}\operatorname{vol}(B) \tag{69}\]

_and_

\[\operatorname{vol}(B)\geq c_{d,1}\Delta^{d} \tag{70}\]

Proof.: This is immediate from a scaling argument: since \(d(x,\partial\Omega)\geq 2\Delta\), the set \(B\) is a Euclidean ball of radius \(2\Delta\), and the set \(V\) is a Euclidean ball of radius \(2\Delta\) minus a spherical dome cut off by a hyperplane at distance \(\Delta\) from the center. When \(\Delta=1\), it is clear that the claimed inequalities hold, and the general case is obtained by dilation.

We assume in what follows that \(d(x,\partial\Omega)\geq 2\Delta\). The inequalities \((1+x)^{n}\geq 1+nx\) and \(e^{x}\leq 1+x+x^{2}\), valid for all \(x\in[-1,0]\) and \(n\geq 1\), imply that for any \(\delta>0\) there exists a constant \(c_{d,\delta}>0\) such that if \(\Delta\leq c_{d,\delta}n^{-1/d}\), then we will have

\[(1-\operatorname{vol}(V))^{n}\geq 1-nc_{d,0}\operatorname{vol}(B) \tag{71}\]
\[(1-\operatorname{vol}(B))^{n}\leq e^{-n\operatorname{vol}(B)}\leq 1-(1-\delta)n\operatorname{vol}(B) \tag{72}\]

Choosing \(\delta\) sufficiently small, we obtain the existence of a small \(c_{d,3}>0\) such that if \(\Delta\leq c_{d,3}n^{-1/d}\), then

\[(1-\operatorname{vol}(V))^{n}-(1-\operatorname{vol}(B))^{n}\geq C_{d}n\Delta^{d}\,.\]

Define \(\Delta_{n}=c_{d,3}n^{-1/d}\). Putting it all together, consider the set

\[S=\{x\in H\cap\Omega:\Delta_{n}/2\leq d(x,H_{t}^{c})\leq\Delta_{n},d(x,\partial\Omega)\geq 2\Delta_{n}\}\,.\]

The above considerations imply that \(\mathbb{P}\left\{\mathcal{E}(x)\right\}\geq C_{d}n(\Delta_{n}/2)^{d}-e^{-2nt^{2}}\geq C_{d}^{\prime}-e^{-2nt^{2}}\) for all \(x\in S\). Choosing \(t\) to be a sufficiently large constant multiple of \(n^{-1/2}\), we obtain

\[\int_{H}\mathbb{P}\left\{\mathcal{E}(x)\right\}\,\mathrm{d}P(x)\geq\int_{S}\mathbb{P}\left\{\mathcal{E}(x)\right\}\,\mathrm{d}P(x)\gtrsim_{d}\operatorname{vol}(S)\,.\]

Since \(t\asymp n^{-1/2}\), we will have that \(t\ll\Delta_{n}\) for \(n\) sufficiently large (as \(d\geq 3\)).
Therefore, for \(n\) large enough, the set \(S\) contains the set

\[S^{\prime}=\{x\in\Omega:\nu_{0}-\Delta_{n}+t\leq\langle e_{1},x\rangle\leq\nu_{0}-\Delta_{n}/2+t,2\Delta_{n}\leq\langle e_{j},x\rangle\leq 1-2\Delta_{n}\quad\forall j=2,\ldots,d\}\,.\]

Since \(\operatorname{vol}(S^{\prime})\gtrsim_{d}\Delta_{n}\gtrsim n^{-1/d}\), the claim follows.

## Appendix H Auxiliary lemmas

**Lemma H.1** (Young's inequality).: _Let \(Q_{0},Q_{1}\) be probability measures with \(Q_{1}\ll Q_{0}\) and let \(f\) be a measurable function. Then, for \(\theta>0\),_

\[\int f(\,\mathrm{d}Q_{0}-\,\mathrm{d}Q_{1})\leq\frac{\theta\mathrm{Var}_{Q_{0}}(f)}{2}+\frac{\chi^{2}(Q_{1}\|Q_{0})}{2\theta}. \tag{73}\]

Proof.: Recall Young's inequality: for \(a,b\in\mathbb{R}\), \(ab\leq\frac{a^{2}}{2}+\frac{b^{2}}{2}\). As the left-hand side is invariant by translation, we may assume without loss of generality that \(\int f\,\mathrm{d}Q_{0}=0\), so that \(\mathrm{Var}_{Q_{0}}(f)=\int f^{2}\,\mathrm{d}Q_{0}\). We write

\[\int f(\,\mathrm{d}Q_{0}-\,\mathrm{d}Q_{1}) =\int(\sqrt{\theta}f)\frac{\left(1-\frac{\,\mathrm{d}Q_{1}}{\,\mathrm{d}Q_{0}}\right)}{\sqrt{\theta}}\,\mathrm{d}Q_{0}\leq\frac{\theta}{2}\int f^{2}\,\mathrm{d}Q_{0}+\frac{1}{2\theta}\int\left(1-\frac{\,\mathrm{d}Q_{1}}{\,\mathrm{d}Q_{0}}\right)^{2}\,\mathrm{d}Q_{0}\]
\[=\frac{\theta\mathrm{Var}_{Q_{0}}(f)}{2}+\frac{\chi^{2}(Q_{1}\|Q_{0})}{2\theta}.\qed\]

**Lemma H.2** (Expectation of empirical \(\chi^{2}\)-divergence).: _Let \(Q=\sum_{j=1}^{J}q_{j}\delta_{y_{j}}\) be a discrete measure supported on \(J\) atoms, and let \(Q_{n}\) denote its empirical measure, consisting of \(n\) i.i.d. samples. Then,_

\[\mathbb{E}[\chi^{2}(Q_{n}\|Q)]=\frac{J-1}{n}\,. \tag{74}\]

Proof.: We can write \(Q_{n}=\sum_{j=1}^{J}\hat{q}_{j}\delta_{y_{j}}\), where \(n\hat{q}_{j}\) is a binomial random variable with parameters \(n\) and \(q_{j}\). We obtain

\[\chi^{2}(Q_{n}\|Q)=\sum_{j=1}^{J}\frac{(\hat{q}_{j}-q_{j})^{2}}{q_{j}}\,.\]

Taking expectations, we obtain

\[\mathbb{E}[\chi^{2}(Q_{n}\|Q)]=\sum_{j=1}^{J}\frac{\operatorname{Var}(\hat{q}_{j})}{q_{j}}=\sum_{j=1}^{J}\frac{q_{j}(1-q_{j})}{nq_{j}}=\frac{J-1}{n}.\]

**Lemma H.3** (Control of suprema of empirical processes).: _Let \(X_{1},\ldots,X_{n}\) be an i.i.d. sample from some probability measure \(P\) on \(\mathbb{R}^{d}\), with \(P_{n}\) the associated empirical measure. Consider \(\mathcal{F}\) a class of functions \(\mathbb{R}^{d}\to\mathbb{R}\) with \(\|f\|_{\infty}\leq A\) for all \(f\in\mathcal{F}\). For \(u>0\), let \(N(u)\) be the \(u\)-covering number of \(\mathcal{F}\), that is, the minimal number of balls of radius \(u\) for the \(\|\cdot\|_{\infty}\)-metric required to cover \(\mathcal{F}\). Then,_

\[\mathbb{E}\left[\sup_{f\in\mathcal{F}}\left|\int f\,\mathrm{d}(P_{n}-P)\right|\right]\leq\frac{C_{0}}{\sqrt{n}}\int_{0}^{C_{1}A}\sqrt{\log 2N(u)}\,\mathrm{d}u=:\frac{I}{\sqrt{n}} \tag{75}\]

_for two positive absolute constants \(C_{0}\) and \(C_{1}\). Furthermore, for all \(t>0\),_

\[\mathbb{P}\left(\sup_{f\in\mathcal{F}}\left|\int f\,\mathrm{d}(P_{n}-P)\right|>t\right)\leq\exp\left(-\frac{C_{2}\sqrt{n}t}{I+A\log n}\right), \tag{76}\]

_for some positive absolute constant \(C_{2}\). Finally, for all \(p\geq 2\),_

\[\mathbb{E}\left[\sup_{f\in\mathcal{F}}\left|\int f\,\mathrm{d}(P_{n}-P)\right|^{p}\right]^{1/p}\leq C_{p}\frac{I+A}{\sqrt{n}}. \tag{77}\]

Proof.: See [23, Theorem 2.14.2 and Theorem 2.14.5].
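To complement the lower-bound argument of Appendix G, the following hypothetical NumPy sketch (not from the paper) simulates the two-atom construction with \(Q=\frac{1}{2}\delta_{y_{0}}+\frac{1}{2}\delta_{y_{1}}\). In this case the empirical optimal assignment simply matches the samples with the smallest first coordinate to \(y_{0}\), so the 1NN estimator can be computed directly, and its mean-squared error can be compared against the predicted \(n^{-1/d}\) scaling; all names and sample sizes below are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
d = 3  # Proposition 4.2 requires d >= 3

def mse_1nn(n, m=20000):
    X = rng.random((n, d))                  # samples from P = Unif([0,1]^d)
    n0 = rng.binomial(n, 0.5)               # number of targets equal to y_0
    # optimal assignment between P_n and Q_n: the n0 samples with smallest
    # first coordinate are matched to y_0, the rest to y_1
    order = np.argsort(X[:, 0])
    to_y1 = np.zeros(n, dtype=bool)
    to_y1[order[n0:]] = True
    Xnew = rng.random((m, d))               # fresh evaluation points from P
    _, nn = cKDTree(X).query(Xnew)          # index of the nearest sample
    that1 = to_y1[nn].astype(float)         # first coordinate of T_1NN(x)
    t01 = (Xnew[:, 0] > 0.5).astype(float)  # first coordinate of T_0(x)
    # y_0 and y_1 differ only in the first coordinate, and ||y_1 - y_0|| = 1
    return ((that1 - t01) ** 2).mean()

for n in (100, 200, 400, 800, 1600):
    print(n, round(mse_1nn(n), 4), round(n ** (-1 / d), 4))
```

Under these assumptions, the printed errors should decay no faster than the \(n^{-1/d}\) column, in line with Proposition 4.2 and in contrast with the parametric \(n^{-1/2}\) rate of the entropic Brenier map.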